
Minimal working example of a TensorFlow Serving client

I am working through the basic TensorFlow Serving example. I followed the MNIST example, except that instead of classification I want to take one numpy array as input and predict another numpy array.

To do this, I first train my neural network:

import tensorflow as tf

# n_input and the n_hidden_* layer sizes are assumed to be defined earlier in the script
x = tf.placeholder("float", [None, n_input], name="input_values")

weights = { 
    'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])), 
    'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 
    'encoder_h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3])), 
    'decoder_h1': tf.Variable(tf.random_normal([n_hidden_3, n_hidden_2])), 
    'decoder_h2': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])), 
    'decoder_h3': tf.Variable(tf.random_normal([n_hidden_1, n_input])), 
} 
biases = { 
    'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])), 
    'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])), 
    'encoder_b3': tf.Variable(tf.random_normal([n_hidden_3])), 
    'decoder_b1': tf.Variable(tf.random_normal([n_hidden_2])), 
    'decoder_b2': tf.Variable(tf.random_normal([n_hidden_1])), 
    'decoder_b3': tf.Variable(tf.random_normal([n_input])), 
} 

# Building the encoder 
def encoder(x): 
    # Encoder hidden layer #1 with tanh activation
    layer_1 = tf.nn.tanh(tf.matmul(x, weights['encoder_h1'])+biases['encoder_b1']) 
    print(layer_1.shape) 
    # Encoder hidden layer #2 with tanh activation
    layer_2 = tf.nn.tanh(tf.matmul(layer_1, weights['encoder_h2'])+biases['encoder_b2']) 
    print(layer_2.shape) 
    # Layer 3 
    layer_3 = tf.nn.tanh(tf.matmul(layer_2, weights['encoder_h3'])+biases['encoder_b3']) 
    print(layer_3.shape) 
    return layer_3 


# Building the decoder 
def decoder(x): 
    # Decoder hidden layer #1 with tanh activation
    layer_1 = tf.nn.tanh(tf.matmul(x, weights['decoder_h1'])+biases['decoder_b1']) 
    print(layer_1.shape) 
    # Decoder hidden layer #2 with tanh activation
    layer_2 = tf.nn.tanh(tf.matmul(layer_1, weights['decoder_h2'])+biases['decoder_b2']) 
    # Layer 3 
    layer_3 = tf.nn.tanh(tf.matmul(layer_2, weights['decoder_h3'])+biases['decoder_b3']) 
    return layer_3 

# Construct model 
encoder_op = encoder(x) 
decoder_op = decoder(encoder_op) 

# Prediction 
y = decoder_op 



# Objective functions 
y_ = tf.placeholder("float", [None,n_input],name="predict") 
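
For completeness, the export script below also refers to an init op, a saver and a model_path that are not shown above. A minimal sketch of those missing pieces might look like the following; the cost function, optimizer and checkpoint path here are assumptions, not the original training code:

cost = tf.reduce_mean(tf.pow(y_ - y, 2))                     # reconstruction error
optimizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)   # any optimizer works here

init = tf.global_variables_initializer()      # the `init` op run in the export script
saver = tf.train.Saver()                      # the `saver` used in the export script
model_path = "/tmp/AE_checkpoint/model.ckpt"  # assumed checkpoint location

with tf.Session() as sess:
    sess.run(init)
    # ... training loop over the numpy arrays goes here ...
    saver.save(sess, model_path)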

Next, it was suggested here that I save my network like so:

import os 
import sys 

from tensorflow.python.saved_model import builder as saved_model_builder 
from tensorflow.python.saved_model import utils 
from tensorflow.python.saved_model import tag_constants, signature_constants 
from tensorflow.python.saved_model.signature_def_utils_impl import  build_signature_def, predict_signature_def 
from tensorflow.contrib.session_bundle import exporter 

with tf.Session() as sess: 
    # Initialize variables 
    sess.run(init) 

    # Restore model weights from the previously saved checkpoint 
    saver.restore(sess, model_path) 
    print("Model restored from file: %s" % model_path) 

    export_path = '/tmp/AE_model/6' 
    print('Exporting trained model to', export_path) 
    builder = tf.saved_model.builder.SavedModelBuilder(export_path) 


    signature = predict_signature_def(inputs={'inputs': x}, 
            outputs={'outputs': y}) 

    builder.add_meta_graph_and_variables(sess=sess, 
             tags=[tag_constants.SERVING], 
             signature_def_map={'predict': signature}) 

    builder.save() 


    print('Done exporting!') 
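
As a sanity check (not part of the original example), the SavedModel can be loaded back before the model server is pointed at it. This sketch assumes the export path used above:

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

# Load the export back into a fresh graph to confirm it was written correctly
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tag_constants.SERVING], '/tmp/AE_model/6')
    # the placeholder exported as the 'inputs' of the 'predict' signature
    print(sess.graph.get_tensor_by_name('input_values:0'))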

Next I followed the instructions to build the model server so I could run it on localhost:9000:

bazel build //tensorflow_serving/model_servers:tensorflow_model_server 

and then I started the server:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_base_path=/tmp/AE_model/ 

The problem

Now I want to write a program so that I can pass Mat vectors from a C++ program in Eclipse (I use that library a lot) to my server, and get some kind of prediction back.

My first thought was to use inception_client.cc as a reference. However, it seems I need Bazel to compile it, because I cannot find prediction_service.grpc.pb.h anywhere :(

So it seems my only other option is to call a Python script, from which I get the following output:

<grpc.beta._client_adaptations._Rendezvous object at 0x7f9bcf8cb850> 
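
For context, a rough sketch of what such a Python client could look like is below. The model name, signature name and timeout are assumptions based on the export above, and the exact stub factory depends on the TF Serving and grpc versions; the _Rendezvous object shown above is usually what gets printed when the future (or error) returned by the beta API is printed directly instead of its result:

import numpy as np
import tensorflow as tf
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'default'            # whatever name the model server assigned
request.model_spec.signature_name = 'predict'  # the key used in signature_def_map above

data = np.random.rand(1, n_input).astype(np.float32)  # placeholder input batch
request.inputs['inputs'].CopyFrom(tf.contrib.util.make_tensor_proto(data, shape=data.shape))

result = stub.Predict(request, 10.0)  # blocking call with a 10 second timeout
print(result.outputs['outputs'])      # the decoded array, as a TensorProto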

Any help with this problem would be appreciated.

Thanks

EDIT:

I reinstalled protobuf and grpc and ran the commands as suggested:

My command was slightly different, and I had to run it outside my serving folder (this is on Ubuntu 14.04):

sudo protoc -I=serving -I serving/tensorflow --grpc_out=. --plugin=protoc-gen-grpc=`which grpc_cpp_plugin` serving/tensorflow_serving/apis/*.proto 

This generated the .grpc.pb.h files; I pulled them into the /apis/ folder and those errors went away. Now I get the error:

/tensorflow/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:1:42: fatal error: unsupported/Eigen/CXX11/Tensor: No such file or directory 

even though this file does exist. Any suggestions are appreciated.

Thanks @subzero!

EDIT 2

I was able to solve the Eigen problem by updating to the newest version of Eigen and building it from source. Next I pointed my include path at /usr/local/include/eigen3/.

After that I ran into problems with the TensorFlow library. I resolved them by generating the libtensorflow_cc.so library, following lababidi's suggestion: https://github.com/tensorflow/tensorflow/issues/2412

I have one last issue. Everything seems to be fine, except that I get the error:

undefined reference to `tensorflow::serving::PredictRequest::~PredictRequest()'

It seems I am missing either a linker flag or a library. Does anyone know what I am missing?

+0

I ran into the same problem as in EDIT 2. Did you ever find a solution? – Matt2048

+0

Hey, no I didn't :( I had to switch to the TensorFlow C++ API –

+0

I gave up and used a custom server and client instead – Matt2048

Answers

1

An example of a custom client and server:

Server code to add to your TensorFlow model:

import grpc 
from concurrent import futures 
import time 

import python_pb2 
import python_pb2_grpc 


class PythonServicer(python_pb2_grpc.PythonServicer): 

    def makePredictions(self, request, context): 
        # Receives the input values for the model as a string and evaluates 
        # them into an array to be passed to tensorflow 
        items = eval(str(request.items)) 

        x_feed = items 

        # "confidences" is the output tensor of my model, and `sess`/`x` come 
        # from the model script this code is appended to; replace them with the 
        # appropriate objects from your model 
        targetEval_out = sess.run(confidences, feed_dict={x: x_feed}) 

        # The model output is put into string format to be passed back to the 
        # client. It has to be reformatted on the other end, but this method 
        # was easier to implement 
        out = str(targetEval_out.tolist()) 

        return python_pb2.value(name=out) 


print("server online") 

# Can be raised to allow a larger amount of data per message, which helps when 
# making large numbers of predictions at once 
MAX_MESSAGE_LENGTH = 4 * 1024 * 1024 

server = grpc.server( 
    futures.ThreadPoolExecutor(max_workers=10), 
    options=[('grpc.max_send_message_length', MAX_MESSAGE_LENGTH), 
             ('grpc.max_receive_message_length', MAX_MESSAGE_LENGTH)]) 
python_pb2_grpc.add_PythonServicer_to_server(PythonServicer(), server) 
server.add_insecure_port('[::]:50051') 
server.start() 

# server.start() does not block, so keep the main thread alive 
try: 
    while True: 
        time.sleep(3600) 
except KeyboardInterrupt: 
    server.stop(0) 

Client C++ code:

#include <iostream> 
#include <memory> 
#include <string> 

#include <grpc/grpc.h> 
#include <grpc++/channel.h> 
#include <grpc++/client_context.h> 
#include <grpc++/create_channel.h> 
#include <grpc++/security/credentials.h> 
#include "python.grpc.pb.h" 

using grpc::Channel; 
using grpc::ClientContext; 
using grpc::Status; 
using python::request; 
using python::value; 
using python::Python; 

using namespace std; 

int main() { 
    // Can be raised to allow a larger amount of data per message, which helps 
    // when making large numbers of predictions at once 
    const int MAX_MESSAGE_LENGTH = 4 * 1024 * 1024; 
    grpc::ChannelArguments channel_args; 
    channel_args.SetMaxReceiveMessageSize(MAX_MESSAGE_LENGTH); 
    channel_args.SetMaxSendMessageSize(MAX_MESSAGE_LENGTH); 

    shared_ptr<Channel> channel = grpc::CreateCustomChannel( 
        "localhost:50051", grpc::InsecureChannelCredentials(), channel_args); 
    unique_ptr<Python::Stub> stub = Python::NewStub(channel); 

    // The input data should be a string that can be parsed into a python array, 
    // for example "[[1.0,2.0,3.0],[4.0,5.0,6.0]]". The server was written to 
    // make multiple predictions at once, hence the multiple data arrays. 
    string dataInputString = "[[1.0,2.0,3.0],[4.0,5.0,6.0]]"; 

    request r; 
    r.set_items(dataInputString); 

    value val; 
    ClientContext context; 

    Status status = stub->makePredictions(&context, r, &val); 
    if (!status.ok()) { 
        cerr << "RPC failed: " << status.error_message() << "\n"; 
        return 1; 
    } 

    cout << val.name() << "\n"; // This prints the returned model prediction 
    return 0; 
} 

The python.proto file:

syntax = "proto3"; 

package python; 

service Python { 
    rpc makePredictions(request) returns (value) {} 
} 

message request { 
    string items = 1; 
} 

message value { 
    string name = 1; 
} 
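
The python_pb2 and python_pb2_grpc modules imported by the server (and the python.grpc.pb.h header included by the C++ client) are generated from this proto file. For the Python side, a sketch using the grpcio-tools package; the C++ stubs come from protoc with the grpc_cpp_plugin, as in the commands earlier in the question:

from grpc_tools import protoc

# Generates python_pb2.py and python_pb2_grpc.py next to python.proto
protoc.main([
    'grpc_tools.protoc',
    '-I.',
    '--python_out=.',
    '--grpc_python_out=.',
    'python.proto',
])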

I am not sure whether these snippets will work on their own, since I just copied the relevant code from my current project. But hopefully this is a good starting point for anyone who needs a TensorFlow client and server.

0

The pb.h file you are looking for is generated by running protoc on this file.

You can follow the instructions described here to generate the header file and use it yourself. In any case, the Bazel build you ran should have generated that file somewhere inside your build directory; you can set up your Eclipse project to use those include paths to compile your C++ client.

+0

Thanks for the quick response. I tried your suggestion and here is where I am at: running protoc -I=serving -I serving/tensorflow --grpc_out=. --plugin=protoc-gen-grpc=grpc_cpp_plugin serving/tensorflow_serving/apis/*.proto I get the error: grpc_cpp_plugin: program not found or is not executable --grpc_out: protoc-gen-grpc: Plugin failed with status code 1. Also, it seems the original bazel build generates the .pb.h files but not the .grpc.pb.h files, so I cannot use those –

+0

You have to point protoc at the gRPC plugin, as in --plugin=protoc-gen-grpc=`which grpc_cpp_plugin`. That assumes it is on your $PATH. – subzero

+0

Hi, thanks for the suggestion. That worked, but now it seems I am stuck in dependency purgatory. When I try to compile I get complaints about Eigen and other libraries. I was able to generate some of them from the .proto files, but I could not get rid of the Eigen complaints. –
