3RNN/Lib/site-packages/tensorflow/include/tsl/protobuf/rpc_options.proto

syntax = "proto3";
package tensorflow;
option go_package = "github.com/google/tsl/tsl/go/protobuf/for_core_protos_go_proto";
// RPC options for distributed runtime.
message RPCOptions {
  // If true, always use RPC to contact the session target.
  //
  // If false (the default option), TensorFlow may use an optimized
  // transport for client-master communication that avoids the RPC
  // stack. This option is primarily used for testing the RPC stack.
  bool use_rpc_for_inprocess_master = 1;

  // The compression algorithm to be used. One of "deflate", "gzip".
  string compression_algorithm = 2;

  // If compression_algorithm is set, the compression level to be used.
  // From 0 (no compression), up to 3.
  int32 compression_level = 3;

  // Setting cache_rpc_response to true enables sender-side caching of
  // responses for RecvTensorAsync and RecvBufAsync, so the receiver can retry
  // requests. This is only necessary when the network fabric is experiencing a
  // significant error rate. Without it a step fails on any network error,
  // while with it long steps (like complex initializations) can complete in
  // the face of some network errors during RecvTensor.
  bool cache_rpc_response = 4;

  // Disables TCP connection sharing when opening a new RPC channel.
  bool disable_session_connection_sharing = 5;

  // Setting num_channels_per_target > 0 allows use of multiple channels to
  // communicate with the same target. This can improve aggregate throughput
  // on high-speed links (e.g. 100G) where a single connection is not enough
  // to saturate the link. Note that a single RPC still travels over a single
  // channel; this only helps when multiple transfers to the same target
  // overlap in time.
  int32 num_channels_per_target = 6;
}
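
These options are not usually populated on this proto directly; they reach the runtime through the rpc_options field of ConfigProto. Below is a minimal usage sketch, assuming the TF1-style tf.compat.v1.Session API and a hypothetical gRPC worker address ("grpc://worker0:2222").

# Minimal sketch: set RPCOptions through ConfigProto.rpc_options (assumptions above).
import tensorflow as tf

config = tf.compat.v1.ConfigProto()
config.rpc_options.compression_algorithm = "gzip"   # one of "deflate", "gzip"
config.rpc_options.compression_level = 2            # from 0 (no compression) up to 3
config.rpc_options.num_channels_per_target = 4      # spread transfers over several channels

# The target address is a placeholder; the options above shape the gRPC traffic
# between this client and the distributed runtime for the session.
with tf.compat.v1.Session("grpc://worker0:2222", config=config) as sess:
    print(sess.list_devices())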