'Get small set of idxs to compute nearest neighbor queries on. This is an expensive look-up on the whole memory that is used to avoid more expensive operations later on. Args: normalized_query: A Tensor of shape [None, key_dim]. Returns: A Tensor of shape [None, choose_k] of indices in memory that are closest to the qu...
def get_hint_pool_idxs(self, normalized_query):
with tf.device(self.nn_device):
  similarities = tf.matmul(tf.stop_gradient(normalized_query), self.mem_keys,
                           transpose_b=True, name='nn_mmul')
(_, hint_pool_idxs) = tf.nn.top_k(
    tf.stop_gradient(similarities), k=self.choose_k, name='nn_topk')
return hint_pool_idxs
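The matmul-plus-`top_k` retrieval above can be sketched without TensorFlow: assuming keys and queries are unit-normalized, cosine similarity reduces to a plain matrix product, and top-k is an argsort. The function name and NumPy substitution are illustrative, not part of the original module:

```python
import numpy as np

def hint_pool_idxs(normalized_query, mem_keys, choose_k):
    """Return indices of the choose_k most similar memory keys per query.

    A NumPy sketch of the matmul + top_k pattern above, assuming both
    inputs are already L2-normalized so cosine similarity is a dot product.
    """
    # [batch, memory_size] similarity matrix; one row per query.
    similarities = normalized_query @ mem_keys.T
    # argsort ascends, so take the last choose_k columns and reverse them
    # to get indices in descending similarity order.
    return np.argsort(similarities, axis=1)[:, -choose_k:][:, ::-1]
```

For example, a query equal to the first of four one-hot keys retrieves index 0 first.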
'Function that creates all the update ops.'
def make_update_op(self, upd_idxs, upd_keys, upd_vals, batch_size, use_recent_idx, intended_output):
mem_age_incr = self.mem_age.assign_add(tf.ones([self.memory_size],
                                               dtype=tf.float32))
with tf.control_dependencies([mem_age_incr]):
  mem_age_upd = tf.scatter_update(
      self.mem_age, upd_idxs, tf.zeros([batch_size], dtype=tf.float32))
  mem_key_upd = tf.scatter_update(self.mem_keys, upd_idxs, upd_keys)
  ...
'Queries memory for nearest neighbor. Args: query_vec: A batch of vectors to query (embedding of input to model). intended_output: The values that would be the correct output of the memory. use_recent_idx: Whether to always insert at least one instance of a correct memory fetch. Returns: A tuple (result, mask, teacher_...
def query(self, query_vec, intended_output, use_recent_idx=True):
batch_size = tf.shape(query_vec)[0]
output_given = (intended_output is not None)
query_vec = tf.matmul(query_vec, self.query_proj)
normalized_query = tf.nn.l2_normalize(query_vec, dim=1)
hint_pool_idxs = self.get_hint_pool_idxs(normalized_query)
if (output_given and use_recent_idx):
  most...
'Gets hashed-to buckets for batch of queries. Args: query: 2-d Tensor of query vectors. Returns: A list of hashed-to buckets for each hash function.'
def get_hash_slots(self, query):
binary_hash = [
    tf.less(tf.matmul(query, self.hash_vecs[i], transpose_b=True), 0)
    for i in xrange(self.num_libraries)]
hash_slot_idxs = [
    tf.reduce_sum(
        (tf.to_int32(binary_hash[i]) *
         tf.constant([[(2 ** i) for i in xrange(self.num_hashes)]],
                     dtype=tf.int32)), 1)
    for i in xrange(self.num_libraries)]
return has...
'Get small set of idxs to compute nearest neighbor queries on. This is an expensive look-up on the whole memory that is used to avoid more expensive operations later on. Args: normalized_query: A Tensor of shape [None, key_dim]. Returns: A Tensor of shape [None, choose_k] of indices in memory that are closest to the qu...
def get_hint_pool_idxs(self, normalized_query):
hash_slot_idxs = self.get_hash_slots(normalized_query)
hint_pool_idxs = [
    tf.maximum(tf.minimum(tf.gather(self.hash_slots[i], idxs),
                          (self.memory_size - 1)), 0)
    for (i, idxs) in enumerate(hash_slot_idxs)]
return tf.concat(axis=1, values=hint_pool_idxs)
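The sign-of-random-projection hashing behind `get_hash_slots` can be sketched in NumPy: each hash library projects the query onto `num_hashes` random vectors, and the resulting sign bits are combined as powers of two into one bucket index per library. Names here are illustrative stand-ins, not the original TF ops:

```python
import numpy as np

def get_hash_slots(query, hash_vecs):
    """Map each query vector to one bucket index per hash library.

    hash_vecs is a list of [num_hashes, dim] projection matrices.
    Each negative projection contributes a power of two, so every
    library yields a bucket in [0, 2**num_hashes). A sketch of the
    LSH scheme used above, not the original implementation.
    """
    slots = []
    for vecs in hash_vecs:
        bits = (query @ vecs.T < 0).astype(np.int32)  # [batch, num_hashes]
        powers = 2 ** np.arange(vecs.shape[0])        # [1, 2, 4, ...]
        slots.append(bits @ powers)                   # [batch] bucket ids
    return slots
```

A query whose second coordinate is negative under an identity projection sets only the second bit, giving bucket 2.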
'Function that creates all the update ops.'
def make_update_op(self, upd_idxs, upd_keys, upd_vals, batch_size, use_recent_idx, intended_output):
base_update_op = super(LSHMemory, self).make_update_op(
    upd_idxs, upd_keys, upd_vals, batch_size, use_recent_idx,
    intended_output)
hash_slot_idxs = self.get_hash_slots(upd_keys)
update_ops = []
with tf.control_dependencies([base_update_op]):
  for (i, slot_idxs) in enumerate(hash_slot_idxs):
    ...
'Generate random pseudo-boolean key and message values.'
def get_message_and_key(self):
batch_size = tf.placeholder_with_default(FLAGS.batch_size, shape=[])
in_m = batch_of_random_bools(batch_size, TEXT_SIZE)
in_k = batch_of_random_bools(batch_size, KEY_SIZE)
return (in_m, in_k)
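A minimal NumPy sketch of `batch_of_random_bools`, assuming the adversarial-crypto convention of representing bits as -1/1 floats so they can feed tanh-activated layers (the -1/1 choice is an assumption about the helper, not spelled out here):

```python
import numpy as np

def batch_of_random_bools(batch_size, bits):
    """Sample a [batch_size, bits] array of pseudo-boolean values in {-1, 1}.

    Illustrative stand-in for the TF helper used above; the -1/1 encoding
    is assumed, matching tanh-friendly bit representations.
    """
    return np.random.choice([-1.0, 1.0], size=(batch_size, bits))
```

With this, a message/key pair is just two independent draws of the right widths.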
'The model for Alice, Bob, and Eve. If key=None, the first FC layer takes only the message as inputs. Otherwise, it uses both the key and the message. Args: collection: The graph keys collection to add new vars to. message: The input message to process. key: The input key (if any) to use.'
def model(self, collection, message, key=None):
if (key is not None):
  combined_message = tf.concat(axis=1, values=[message, key])
else:
  combined_message = message
with tf.contrib.framework.arg_scope(
    [tf.contrib.layers.fully_connected, tf.contrib.layers.conv2d],
    variables_collections=[collection]):
  fc = tf.contrib.layers.fully_conn...
'Initializes the ComponentBuilder from specifications. Args: master: dragnn.MasterBuilder object. component_spec: dragnn.ComponentSpec proto to be built. attr_defaults: Optional dict of component attribute defaults. If not provided or if empty, attributes are not extracted.'
def __init__(self, master, component_spec, attr_defaults=None):
self.master = master
self.num_actions = component_spec.num_actions
self.name = component_spec.name
self.spec = component_spec
self.moving_average = None
self.eligible_for_self_norm = (
    (not self.master.hyperparams.self_norm_components_filter) or
    (self.name in self.master.hyperparams.self_norm_com...
'Makes a NetworkUnitInterface object based on the network_unit spec. Components may override this method to exert control over the network unit construction, such as which network units are supported. Args: network_unit: RegisteredModuleSpec proto defining the network unit. Returns: An implementation of NetworkUnitInte...
def make_network(self, network_unit):
network_type = network_unit.registered_name
with tf.variable_scope(self.name):
  return network_units.NetworkUnitInterface.Create(network_type, self)
'Builds a training graph for this component. Two assumptions are made about the resulting graph: 1. An oracle will be used to unroll the state and compute the cost. 2. The graph will be differentiable when the cost is being minimized. Args: state: MasterState from the \'AdvanceMaster\' op that advances the underlying m...
@abstractmethod
def build_greedy_training(self, state, network_states):
pass
'Builds a beam search based training loop for this component. The default implementation builds a dummy graph and raises a TensorFlow runtime exception to indicate that structured training is not implemented. Args: state: MasterState from the \'AdvanceMaster\' op that advances the underlying master to this component. n...
def build_structured_training(self, state, network_states):
del network_states
with tf.control_dependencies([tf.Assert(False, ['Not implemented.'])]):
  handle = tf.identity(state.handle)
cost = tf.constant(0.0)
(correct, total) = (tf.constant(0), tf.constant(0))
return (handle, cost, correct, total)
'Builds an inference graph for this component. If this graph is being constructed \'during_training\', then it needs to be differentiable even though it doesn\'t return an explicit cost. There may be other cases where the distinction between training and eval is important. The handling of dropout is an example of this....
@abstractmethod
def build_greedy_inference(self, state, network_states, during_training=False):
pass
'Constructs a set of summaries for this component. Returns: List of Summary ops to get parameter norms, progress reports, and so forth for this component.'
def get_summaries(self):
def combine_norm(matrices):
  squares = [tf.reduce_sum(tf.square(m)) for m in matrices if (m is not None)]
  if squares:
    return tf.sqrt(tf.add_n(squares))
  else:
    return tf.constant(0, tf.float32)

summaries = []
summaries.append(tf.summary.scalar(('%s step' % sel...
'Returns either the original or averaged version of a given variable. If the master.read_from_avg flag is set to True, and the ExponentialMovingAverage (EMA) object has been attached, then this will ask the EMA object for the given variable. This is to allow executing inference from the averaged version of parameters. ...
def get_variable(self, var_name=None, var_params=None):
if var_params:
  var_name = var_params.name
else:
  check.NotNone(var_name, 'specify at least one of var_name or var_params')
  var_params = tf.get_variable(var_name)
if (self.moving_average and self.master.read_from_avg):
  logging.info('Retrieving average ...
'Returns ops to advance the per-component step and total counters. Args: total: Total number of actions to increment counters by. Returns: tf.Group op incrementing \'step\' by 1 and \'total\' by total.'
def advance_counters(self, total):
update_total = tf.assign_add(self._total, total, use_locking=True)
update_step = tf.assign_add(self._step, 1, use_locking=True)
return tf.group(update_total, update_step)
'Adds L2 regularization for parameters which have it turned on. Args: cost: float cost before regularization. Returns: Updated cost optionally including regularization.'
def add_regularizer(self, cost):
if (self.network is None):
  return cost
regularized_weights = self.network.get_l2_regularized_weights()
if (not regularized_weights):
  return cost
l2_coeff = self.master.hyperparams.l2_regularization_coefficient
if (l2_coeff == 0.0):
  return cost
tf.logging.info('[%s] Reg...
'Builds a post restore graph for this component. This is a run-once graph that prepares any state necessary for the inference portion of the component. It is generally a no-op. Returns: A no-op state.'
def build_post_restore_hook(self):
logging.info('Building default post restore hook for component: %s',
             self.spec.name)
return tf.no_op(name=('setup_%s' % self.spec.name))
'Returns the value of the component attribute with the |name|.'
def attr(self, name):
return self._attrs[name]
'Builds a training loop for this component. This loop repeatedly evaluates the network and computes the loss, but it does not advance using the predictions of the network. Instead, it advances using the oracle defined in the underlying transition system. The final state will always correspond to the gold annotation. Ar...
def build_greedy_training(self, state, network_states):
logging.info('Building component: %s', self.spec.name)
with tf.control_dependencies([tf.assert_equal(self.training_beam_size, 1)]):
  stride = (state.current_batch_size * self.training_beam_size)
cost = tf.constant(0.0)
correct = tf.constant(0)
total = tf.constant(0)

def cond(handle, ...
'Builds an inference loop for this component. Repeatedly evaluates the network and advances the underlying state according to the predicted scores. Args: state: MasterState from the \'AdvanceMaster\' op that advances the underlying master to this component. network_states: NetworkState object containing component Tenso...
def build_greedy_inference(self, state, network_states, during_training=False):
logging.info('Building component: %s', self.spec.name)
if during_training:
  stride = (state.current_batch_size * self.training_beam_size)
else:
  stride = (state.current_batch_size * self.inference_beam_size)

def cond(handle, *_):
  all_final = dragnn_ops.emit_all_final(handle, ...
'Constructs a single instance of a feed-forward cell. Given an input state and access to the arrays storing activations, this function encapsulates creation of a single network unit. This will *not* create new variables. Args: state: MasterState for the state that will be used to extract features. arrays: List of Tenso...
def _feedforward_unit(self, state, arrays, network_states, stride, during_training):
with tf.variable_scope(self.name, reuse=True):
  fixed_embeddings = []
  for (channel_id, feature_spec) in enumerate(self.spec.fixed_feature):
    fixed_embedding = network_units.fixed_feature_lookup(
        self, state, channel_id, stride)
    if feature_spec.is_constant:
      fixed_...
'Construct a new Composite optimizer. Args: optimizer1: A tf.python.training.optimizer.Optimizer object. optimizer2: A tf.python.training.optimizer.Optimizer object. switch: A tf.bool Tensor, selecting whether to use the first or the second optimizer. use_locking: Bool. If True apply use locks to prevent concurrent upd...
def __init__(self, optimizer1, optimizer2, switch, use_locking=False, name='Composite'):
super(CompositeOptimizer, self).__init__(use_locking, name)
self._optimizer1 = optimizer1
self._optimizer2 = optimizer2
self._switch = switch
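The idea behind `CompositeOptimizer` can be sketched with plain gradient descent: two update rules share one parameter vector and a boolean switch picks which rule fires each step. The class name, learning rates, and the descent rule here are illustrative, not taken from the original optimizer:

```python
import numpy as np

class CompositeUpdate(object):
    """Sketch of the composite-optimizer idea with two SGD learning rates.

    A stand-in for wrapping two tf Optimizer objects: the boolean
    `use_first` plays the role of the `switch` tensor above.
    """

    def __init__(self, lr1, lr2):
        self.lr1, self.lr2 = lr1, lr2

    def apply(self, params, grad, use_first):
        # Select which of the two update rules to apply this step.
        lr = self.lr1 if use_first else self.lr2
        return params - lr * grad
```

Flipping the switch changes only which rule is applied; the shared parameters are updated either way.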
'Initializes the MasterBuilder from specifications. During construction, all components are initialized along with their parameter tf.Variables. Args: master_spec: dragnn.MasterSpec proto. hyperparam_config: dragnn.GridPoint proto specifying hyperparameters. Defaults to empty specification. pool_scope: string identifie...
def __init__(self, master_spec, hyperparam_config=None, pool_scope='shared'):
self.spec = master_spec
self.hyperparams = (spec_pb2.GridPoint()
                    if (hyperparam_config is None) else hyperparam_config)
self.pool_scope = pool_scope
# Read the seed from self.hyperparams, which is always set; reading it from
# hyperparam_config would crash when the config defaults to None.
tf.set_random_seed(self.hyperparams.seed)
self.components = []
self.lookup_component = {}
for component_spec in master_spec.component:
  ...
'Returns a new ComputeSession handle.'
def _get_compute_session(self):
return dragnn_ops.get_session(self.pool_scope, master_spec=self.spec.SerializeToString(), grid_point=self.hyperparams.SerializeToString(), name='GetSession')
'Utility to create ComputeSession management ops. Creates a new ComputeSession handle and provides the following named nodes: ComputeSession/InputBatch -- a placeholder for attaching a string specification for AttachReader. ComputeSession/AttachReader -- the AttachReader op. Args: enable_tracing: bool, whether to enabl...
def _get_session_with_reader(self, enable_tracing):
with tf.name_scope('ComputeSession'):
  input_batch = tf.placeholder(dtype=tf.string, shape=[None],
                               name='InputBatch')
  handle = self._get_compute_session()
  if enable_tracing:
    handle = dragnn_ops.set_tracing(handle, True)
  handle = dragnn_ops.attach_data_reader(handle, input_...
'Ensures ComputeSession is released before outputs are returned. Args: handle: Handle to ComputeSession on which all computation until now has depended. It will be released and assigned to the output \'run\'. inputs: list of nodes we want to pass through without any dependencies. outputs: list of nodes whose access sho...
def _outputs_with_release(self, handle, inputs, outputs):
with tf.control_dependencies(outputs.values()):
  with tf.name_scope('ComputeSession'):
    release_op = dragnn_ops.release_session(handle)
  run_op = tf.group(release_op, name='run')
  for output in outputs:
    with tf.control_dependencies([release_op]):
      outputs[o...
'Builds a training pipeline. Args: handle: Handle tensor for the ComputeSession. compute_gradients: Whether to generate gradients and an optimizer op. When False, build_training will return a \'dry run\' training op, used normally only for oracle tracing. use_moving_average: Whether or not to read from the moving avera...
def build_training(self, handle, compute_gradients=True, use_moving_average=False, advance_counters=True, component_weights=None, unroll_using_oracle=None, max_index=(-1)):
check.IsFalse((compute_gradients and use_moving_average),
              'It is not possible to make gradient updates when reading '
              'from the moving average variables.')
self.read_from_avg = use_moving_average
if (max_index < 0):
  max_index = len(self.components)
elif (no...
'Clips gradients if the hyperparameter `gradient_clip_norm` requires it. Sparse tensors, in the form of IndexedSlices returned for the gradients of embeddings, require special handling. Args: grad: Gradient Tensor, IndexedSlices, or None. Returns: Optionally clipped gradient.'
def _clip_gradients(self, grad):
if ((grad is not None) and (self.hyperparams.gradient_clip_norm > 0)):
  logging.info('Clipping gradient %s', grad)
  if isinstance(grad, tf.IndexedSlices):
    tmp = tf.clip_by_norm(grad.values, self.hyperparams.gradient_clip_norm)
    return tf.IndexedSlices(tmp, grad.indices, gr...
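The norm clipping applied above (`tf.clip_by_norm`) rescales a tensor only when its L2 norm exceeds the threshold; for `IndexedSlices` the same rule is applied to the dense `values` field while indices pass through. A NumPy sketch of the dense case, with an illustrative function name:

```python
import numpy as np

def clip_gradient(values, clip_norm):
    """Scale `values` so its L2 norm is at most clip_norm.

    Mirrors tf.clip_by_norm semantics: gradients already within the
    norm bound are returned unchanged.
    """
    norm = np.linalg.norm(values)
    if norm > clip_norm:
        return values * (clip_norm / norm)
    return values
```

A gradient of norm 5 clipped to 1 keeps its direction but shrinks to unit length; one of norm 0.5 is untouched.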
'Builds a graph that should be executed after the restore op. This graph is intended to be run once, before the inference pipeline is run. Returns: setup_op - An op that, when run, guarantees all setup ops will run.'
def build_post_restore_hook(self):
with tf.control_dependencies(
    [comp.build_post_restore_hook() for comp in self.components]):
  return tf.no_op(name='post_restore_hook_master')
'Builds an inference pipeline. This always uses the whole pipeline. Args: handle: Handle tensor for the ComputeSession. use_moving_average: Whether or not to read from the moving average variables instead of the true parameters. Note: it is not possible to make gradient updates when this is True. Returns: handle: Handl...
def build_inference(self, handle, use_moving_average=False):
self.read_from_avg = use_moving_average
network_states = {}
for comp in self.components:
  network_states[comp.name] = component.NetworkState()
  handle = dragnn_ops.init_component_data(
      handle, beam_size=comp.inference_beam_size, component=comp.name)
  master_state = component.MasterState(...
'Constructs a training pipeline from a TrainTarget proto. This constructs a separately managed pipeline for a given target: it has its own ComputeSession, InputSpec placeholder, etc. The ops are given standardized names to allow access from the C++ API. It passes the values in target_config to build_training() above. F...
def add_training_from_config(self, target_config, prefix='train-', trace_only=False, **kwargs):
logging.info('Creating new training target %s from config: %s',
             target_config.name, str(target_config))
scope_id = (prefix + target_config.name)
with tf.name_scope(scope_id):
  (handle, input_batch) = self._get_session_with_reader(trace_only)
  (handle, outputs) = self.build...
'Adds an annotation pipeline to the graph. This will create the following additional named targets by default, for use in C++ annotation code (as well as regular ComputeSession targets): annotation/ComputeSession/session_id (placeholder for giving unique id) annotation/EmitAnnotations (get annotated data) annotation/Ge...
def add_annotation(self, name_scope='annotation', enable_tracing=False):
with tf.name_scope(name_scope):
  (handle, input_batch) = self._get_session_with_reader(enable_tracing)
  handle = self.build_inference(handle, use_moving_average=True)
  annotations = dragnn_ops.emit_annotations(
      handle, component=self.spec.component[(-1)].name)
  outputs = {'annotations': a...
'Adds the post restore ops.'
def add_post_restore_hook(self, name_scope):
with tf.name_scope(name_scope):
  return self.build_post_restore_hook()
'Adds a Saver for all variables in the graph.'
def add_saver(self):
logging.info('Saving non-quantized variables:\n\t%s',
             '\n\t'.join([x.name for x in tf.global_variables()
                          if ('quantized' not in x.name)]))
self.saver = tf.train.Saver(
    var_list=[x for x in tf.global_variables()
              if ('quantized' not in x.name)],
    write_version=saver_pb2.SaverDef.V1)
'Initializes the ComponentSpec with some defaults for SyntaxNet. Args: name: The name of this Component in the pipeline. builder: The component builder type. backend: The component backend type.'
def __init__(self, name, builder='DynamicComponentBuilder', backend='SyntaxNetComponent'):
self.spec = spec_pb2.ComponentSpec(name=name, backend=self.make_module(backend), component_builder=self.make_module(builder))
'Forwards kwargs to easily create a RegisteredModuleSpec. Note: all kwargs should be string-valued. Args: name: The registered name of the module. **kwargs: Proto fields to be specified in the module. Returns: Newly created RegisteredModuleSpec.'
def make_module(self, name, **kwargs):
return spec_pb2.RegisteredModuleSpec(registered_name=name, parameters=kwargs)
'Returns the default source_layer setting for this ComponentSpec. Usually links are intended for a specific layer in the network unit. For common network units, this returns the hidden layer intended to be read by recurrent and cross-component connections. Returns: String name of default network layer. Raises: ValueErr...
def default_source_layer(self):
for (network, default_layer) in [('FeedForwardNetwork', 'layer_0'),
                                 ('LayerNormBasicLSTMNetwork', 'state_h_0'),
                                 ('LSTMNetwork', 'layer_0'),
                                 ('IdentityNetwork', 'input_embeddings')]:
  if self.spec.network_unit.registered_name.endswith(network):
    return default_layer
raise ValueError(('No def...
'Returns the default source_translator setting for token representations. Most links are token-based: given a target token index, retrieve a learned representation for that token from this component. This depends on the transition system; e.g. we should make sure that left-to-right sequence models reverse the incoming ...
def default_token_translator(self):
transition_spec = self.spec.transition_system
if (transition_spec.registered_name == 'arc-standard'):
  return 'shift-reduce-step'
if (transition_spec.registered_name in ('shift-only', 'tagger')):
  if ('left_to_right' in transition_spec.parameters):
    if (transition_spec.parameters['l...
'Adds a link to source\'s token representations using default settings. Constructs a LinkedFeatureChannel proto and adds it to the spec, using defaults to assign the name, component, translator, and layer of the channel. The user must provide fml and embedding_dim. Args: source: SyntaxComponentBuilder object to pull r...
def add_token_link(self, source=None, source_layer=None, **kwargs):
if (source_layer is None):
  source_layer = source.default_source_layer()
self.spec.linked_feature.add(
    name=source.spec.name,
    source_component=source.spec.name,
    source_layer=source_layer,
    source_translator=source.default_token_translator(),
    **kwargs)
'Adds a recurrent link to this component using default settings. This adds the connection to the previous time step only to the network. It constructs a LinkedFeatureChannel proto and adds it to the spec, using defaults to assign the name, component, translator, and layer of the channel. The user must provide the emb...
def add_rnn_link(self, source_layer=None, **kwargs):
if (source_layer is None):
  source_layer = self.default_source_layer()
self.spec.linked_feature.add(
    name='rnn',
    source_layer=source_layer,
    source_component=self.spec.name,
    source_translator='history',
    fml='constant',
    **kwargs)
'Shorthand to set transition_system using kwargs.'
def set_transition_system(self, *args, **kwargs):
self.spec.transition_system.CopyFrom(self.make_module(*args, **kwargs))
'Shorthand to set network_unit using kwargs.'
def set_network_unit(self, *args, **kwargs):
self.spec.network_unit.CopyFrom(self.make_module(*args, **kwargs))
'Shorthand to add a fixed_feature using kwargs.'
def add_fixed_feature(self, **kwargs):
self.spec.fixed_feature.add(**kwargs)
'Add a link using default naming and layers only.'
def add_link(self, source, source_layer=None, source_translator='identity', name=None, **kwargs):
if (source_layer is None):
  source_layer = source.default_source_layer()
if (name is None):
  name = source.spec.name
self.spec.linked_feature.add(
    source_component=source.spec.name,
    source_layer=source_layer,
    name=name,
    source_translator=source_translator,
    **kwargs)
'Fills in feature sizes and vocabularies using SyntaxNet lexicon. Must be called before the spec is ready to be used to build TensorFlow graphs. Requires a SyntaxNet lexicon built at the resource_path. Using the lexicon, this will call the SyntaxNet custom ops to return the number of features and vocabulary sizes based...
def fill_from_resources(self, resource_path, tf_master=''):
check.IsTrue(self.spec.transition_system.registered_name,
             'Set a transition system before calling fill_from_resources().')
context = lexicon.create_lexicon_context(resource_path)
for (key, value) in self.spec.transition_system.parameters.iteritems():
  context.parameter.add(name=key,...
'Returns attrs based on the |defaults| and one |key|,|value| override.'
def MakeAttrs(self, defaults, key=None, value=None):
spec = spec_pb2.RegisteredModuleSpec()
if (key and value):
  spec.parameters[key] = value
return network_units.get_attrs_with_defaults(spec.parameters, defaults)
'Extracts features and advances a batch using the oracle path. Args: state: MasterState from the \'AdvanceMaster\' op that advances the underlying master to this component. network_states: dictionary of component NetworkState objects Returns: state handle: final state after advancing cost: regularization cost, possibly...
def build_greedy_training(self, state, network_states):
logging.info('Building component: %s', self.spec.name)
stride = (state.current_batch_size * self.training_beam_size)
with tf.variable_scope(self.name, reuse=True):
  (state.handle, fixed_embeddings) = fetch_differentiable_fixed_embeddings(
      self, state, stride)
  linked_embeddings = [fetch_linke...
'Extracts features and advances a batch using the oracle path. NOTE(danielandor) For now this method cannot be called during training. That is to say, unroll_using_oracle for this component must be set to true. This will be fixed by separating train_with_oracle and train_with_inference. Args: state: MasterState from th...
def build_greedy_inference(self, state, network_states, during_training=False):
logging.info('Building component: %s', self.spec.name)
if during_training:
  stride = (state.current_batch_size * self.training_beam_size)
else:
  stride = (state.current_batch_size * self.inference_beam_size)
with tf.variable_scope(self.name, reuse=True):
  if during_training:
    ...
'Initializes the feature ID extractor component. Args: master: dragnn.MasterBuilder object. component_spec: dragnn.ComponentSpec proto to be built.'
def __init__(self, master, component_spec):
super(BulkFeatureIdExtractorComponentBuilder, self).__init__(
    master, component_spec)
check.Eq(len(self.spec.linked_feature), 0, 'Linked features are forbidden')
for feature_spec in self.spec.fixed_feature:
  check.Lt(feature_spec.embedding_dim, 0,
           ('Features must be non-embedded: ...
'See base class.'
def build_greedy_training(self, state, network_states):
state.handle = self._extract_feature_ids(state, network_states, True)
cost = self.add_regularizer(tf.constant(0.0))
(correct, total) = (tf.constant(0), tf.constant(0))
return (state.handle, cost, correct, total)
'See base class.'
def build_greedy_inference(self, state, network_states, during_training=False):
return self._extract_feature_ids(state, network_states, during_training)
'Extracts feature IDs and advances a batch using the oracle path. Args: state: MasterState from the \'AdvanceMaster\' op that advances the underlying master to this component. network_states: Dictionary of component NetworkState objects. during_training: Whether the graph is being constructed during training. Returns: ...
def _extract_feature_ids(self, state, network_states, during_training):
logging.info('Building component: %s', self.spec.name)
if during_training:
  stride = (state.current_batch_size * self.training_beam_size)
else:
  stride = (state.current_batch_size * self.inference_beam_size)
with tf.variable_scope(self.name, reuse=True):
  (state.handle, ids) =...
'Advances a batch using oracle paths, returning the overall CE cost. Args: state: MasterState from the \'AdvanceMaster\' op that advances the underlying master to this component. network_states: dictionary of component NetworkState objects Returns: (state handle, cost, correct, total): TF ops corresponding to the final...
def build_greedy_training(self, state, network_states):
logging.info('Building component: %s', self.spec.name)
if self.spec.fixed_feature:
  raise RuntimeError('Fixed features are not compatible with bulk '
                     'annotation. Use the "bulk-features" component instead.')
linked_embeddings = [fetch_linked_embedding(self, netw...
'Annotates a batch of documents using network scores. Args: state: MasterState from the \'AdvanceMaster\' op that advances the underlying master to this component. network_states: dictionary of component NetworkState objects during_training: whether the graph is being constructed during training Returns: Handle to the ...
def build_greedy_inference(self, state, network_states, during_training=False):
logging.info('Building component: %s', self.spec.name)
if self.spec.fixed_feature:
  raise RuntimeError('Fixed features are not compatible with bulk '
                     'annotation. Use the "bulk-features" component instead.')
linked_embeddings = [fetch_linked_embedding(self, netw...
'Initializes the LSTM base class. Parameters used: hidden_layer_sizes: Comma-delimited number of hidden units for each layer. input_dropout_rate (-1.0): Input dropout rate for each layer. If < 0.0, use the global |dropout_rate| hyperparameter. recurrent_dropout_rate (0.8): Recurrent dropout rate. If < 0.0, use the gl...
def __init__(self, component, additional_attr_defaults=None):
attr_defaults = (additional_attr_defaults or {})
attr_defaults.update({
    'layer_norm': True,
    'input_dropout_rate': (-1.0),
    'recurrent_dropout_rate': 0.8,
    'hidden_layer_sizes': '256'})
self._attrs = dragnn.get_attrs_with_defaults(
    component.spec.network_unit.parameters, defaults=attr_defaults)
self._hidden_...
'Returns the logits for prediction.'
def get_logits(self, network_tensors):
return network_tensors[self.get_layer_index('logits')]
'Creates hidden network layers. Args: component: Parent ComponentBuilderBase object. hidden_layer_sizes: List of requested hidden layer activation sizes. Returns: layers: List of layers created by this network. context_layers: List of context layers created by this network.'
@abc.abstractmethod
def create_hidden_layers(self, component, hidden_layer_sizes):
pass
'Appends layers defined by the base class to the |hidden_layers|.'
def _append_base_layers(self, hidden_layers):
last_layer = hidden_layers[(-1)]
logits = tf.nn.xw_plus_b(last_layer,
                         self._component.get_variable('weights_softmax'),
                         self._component.get_variable('bias_softmax'))
return (hidden_layers + [last_layer, logits])
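The `xw_plus_b` call above is just an affine map of the final hidden activations. A NumPy sketch of `_append_base_layers`, passing the softmax weights explicitly instead of fetching component variables (that substitution is the only liberty taken):

```python
import numpy as np

def append_base_layers(hidden_layers, weights_softmax, bias_softmax):
    """Append the last hidden layer and its logits, as above.

    hidden_layers: list of [batch, dim] activation arrays.
    The logits are last_layer @ W + b, mirroring tf.nn.xw_plus_b.
    """
    last_layer = hidden_layers[-1]
    logits = last_layer @ weights_softmax + bias_softmax
    return hidden_layers + [last_layer, logits]
```

The returned list grows by two entries: the duplicated last layer and the logits.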
'Creates a single LSTM cell, possibly with dropout. Requires that BaseLSTMNetwork.__init__() was called. Args: num_units: Number of hidden units in the cell. during_training: Whether to create a cell for training (vs inference). Returns: A RNNCell of the requested size, possibly with dropout.'
def _create_cell(self, num_units, during_training):
if (not during_training):
  return tf.contrib.rnn.LayerNormBasicLSTMCell(
      num_units, layer_norm=self._attrs['layer_norm'], reuse=True)
cell = tf.contrib.rnn.LayerNormBasicLSTMCell(
    num_units, dropout_keep_prob=self._recurrent_dropout_rate,
    layer_norm=self._attrs['layer_norm'])
cell = tf.contrib.rnn.Drop...
'Creates a list of LSTM cells for training.'
def _create_train_cells(self):
return [self._create_cell(num_units, during_training=True) for num_units in self._hidden_layer_sizes]
'Creates a list of LSTM cells for inference.'
def _create_inference_cells(self):
return [self._create_cell(num_units, during_training=False) for num_units in self._hidden_layer_sizes]
'Captures variables created by a function in |self._params|. Args: function: Function whose variables should be captured. The function should take one argument, its enclosing variable scope.'
def _capture_variables_as_params(self, function):
created_vars = {}

def _custom_getter(getter, *args, **kwargs):
  'Calls the real getter and captures its result in |created_vars|.'
  real_variable = getter(*args, **kwargs)
  created_vars[real_variable.name] = real_variable
  return real_variable

with tf.v...
'Applies a function using previously-captured variables. Args: function: Function to apply using captured variables. The function should take one argument, its enclosing variable scope. Returns: Results of function application.'
def _apply_with_captured_variables(self, function):
def _custom_getter(getter, *args, **kwargs):
  'Retrieves the normal or moving-average variables.'
  return self._component.get_variable(var_params=getter(*args, **kwargs))

with tf.variable_scope(
    'cell', reuse=True, custom_getter=_custom_getter) as scope:
  return function(scope...
'Sets up context and output layers, as well as a final softmax.'
def __init__(self, component):
super(LayerNormBasicLSTMNetwork, self).__init__(component)
self._train_cell = tf.contrib.rnn.MultiRNNCell(self._create_train_cells())
self._inference_cell = tf.contrib.rnn.MultiRNNCell(
    self._create_inference_cells())

def _cell_closure(scope):
  'Applies the LSTM cell to placeholder ...
'See base class.'
def create_hidden_layers(self, component, hidden_layer_sizes):
layers = []
for (index, num_units) in enumerate(hidden_layer_sizes):
  layers.append(dragnn.Layer(component, name=('state_c_%d' % index),
                             dim=num_units))
  layers.append(dragnn.Layer(component, name=('state_h_%d' % index),
                             dim=num_units))
context_layers = list(layers)
return (layers, context...
'See base class.'
def create(self, fixed_embeddings, linked_embeddings, context_tensor_arrays, attention_tensor, during_training, stride=None):
check.Eq(len(context_tensor_arrays), (2 * len(self._hidden_layer_sizes)), 'require two context tensors per hidden layer') length = context_tensor_arrays[0].size() substates = [] for (index, num_units) in enumerate(self._hidden_layer_sizes): state_c = context_tensor_arrays[(2 * ...
'Initializes the bulk bi-LSTM. Parameters used: parallel_iterations (1): Parallelism of the underlying tf.while_loop(). Defaults to 1 thread to encourage deterministic behavior, but can be increased to trade memory for speed. Args: component: parent ComponentBuilderBase object.'
def __init__(self, component):
super(BulkBiLSTMNetwork, self).__init__(component, additional_attr_defaults={'parallel_iterations': 1}) check.In('lengths', self._linked_feature_dims, 'Missing required linked feature') check.Eq(self._linked_feature_dims['lengths'], 1, 'Wrong dimension for "lengths" feature') self._...
'See base class.'
def create_hidden_layers(self, component, hidden_layer_sizes):
dim = 2 * hidden_layer_sizes[-1]
return [dragnn.Layer(component, name='outputs', dim=dim)], []
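The output width follows from concatenating the forward and backward passes, as in:

```python
def output_dim(hidden_layer_sizes):
    # Forward and backward activations are concatenated, so the output
    # is twice the width of the last hidden layer.
    return 2 * hidden_layer_sizes[-1]
```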
'Requires |stride|; otherwise see base class.'
def create(self, fixed_embeddings, linked_embeddings, context_tensor_arrays, attention_tensor, during_training, stride=None):
check.NotNone(stride, 'BulkBiLSTMNetwork requires "stride" and must be called in the bulk feature extractor component.') lengths = dragnn.lookup_named_tensor('lengths', linked_embeddings) lengths_s = tf.squeeze(lengths.tensor, [1]) linked_embeddings = [named_tensor fo...
'Returns stacked and batched initial states for the bi-LSTM.'
def _create_initial_states(self, stride):
initial_states_forward = [] initial_states_backward = [] for index in range(len(self._hidden_layer_sizes)): states_sxd = [] for direction in ['forward', 'backward']: for substate in ['c', 'h']: state_1xd = self._component.get_variable(('initial_state_%s_%s_%d' % (...
'Initializes weights and layers. Args: component: Parent ComponentBuilderBase object.'
def __init__(self, component):
super(BiaffineDigraphNetwork, self).__init__(component) check.Eq(len(self._fixed_feature_dims.items()), 0, 'Expected no fixed features') check.Eq(len(self._linked_feature_dims.items()), 2, 'Expected two linked features') check.In('sources', self._linked_feature_dims, 'Missing requir...
'Requires |stride|; otherwise see base class.'
def create(self, fixed_embeddings, linked_embeddings, context_tensor_arrays, attention_tensor, during_training, stride=None):
check.NotNone(stride, 'BiaffineDigraphNetwork requires "stride" and must be called in the bulk feature extractor component.') del during_training weights_arc = self._component.get_variable('weights_arc') weights_source = self._component.get_variable('weights_source') ...
'Initializes weights and layers. Args: component: Parent ComponentBuilderBase object.'
def __init__(self, component):
super(BiaffineLabelNetwork, self).__init__(component) parameters = component.spec.network_unit.parameters self._num_labels = int(parameters['num_labels']) check.Gt(self._num_labels, 0, 'Expected some labels') check.Eq(len(self._fixed_feature_dims.items()), 0, 'Expected no fixed featur...
'Requires |stride|; otherwise see base class.'
def create(self, fixed_embeddings, linked_embeddings, context_tensor_arrays, attention_tensor, during_training, stride=None):
check.NotNone(stride, 'BiaffineLabelNetwork requires "stride" and must be called in the bulk feature extractor component.') del during_training weights_pair = self._component.get_variable('weights_pair') weights_source = self._component.get_variable('weights_source') ...
'Reads a single batch of sentences.'
def read(self):
if self._session:
  sentences, is_last = self._session.run([self._source, self._is_last])
  if is_last:
    self._session.close()
    self._session = None
else:
  sentences, is_last = [], True
return sentences, is_last
'Reads the entire corpus, and returns in a list.'
def corpus(self):
tf.logging.info('Reading corpus...')
corpus = []
while True:
  sentences, is_last = self.read()
  corpus.extend(sentences)
  if is_last:
    break
tf.logging.info('Read %d sentences.' % len(corpus))
return corpus
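The `read()`/`corpus()` contract is simple: `read()` returns `(sentences, is_last)`, and `corpus()` drains the reader until `is_last` is set. A self-contained sketch with a hypothetical in-memory stand-in for the session-backed reader:

```python
class FakeReader(object):
    """In-memory stand-in: read() returns (batch, is_last)."""

    def __init__(self, batches):
        self._batches = list(batches)

    def read(self):
        if not self._batches:
            # Exhausted readers keep returning an empty final batch.
            return [], True
        batch = self._batches.pop(0)
        return batch, not self._batches

    def corpus(self):
        # Same drain loop as corpus() above.
        corpus = []
        while True:
            sentences, is_last = self.read()
            corpus.extend(sentences)
            if is_last:
                break
        return corpus
```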
'Adds a sentence to the corpus.'
def _add_sentence(self, tags, heads, labels, corpus):
sentence = sentence_pb2.Sentence()
for tag, head, label in zip(tags, heads, labels):
  sentence.token.add(word='x', start=0, end=0, tag=tag, head=head, label=label)
corpus.append(sentence.SerializeToString())
'Assert that an object has zero length. Args: container: Anything that implements the collections.Sized interface. msg: Optional message to report on failure.'
def assertEmpty(self, container, msg=None):
if not isinstance(container, collections.Sized):
  self.fail('Expected a Sized object, got: {!r}'.format(type(container).__name__), msg)
if len(container):
  self.fail('{!r} has length of {}.'.format(container, len(container)), msg)
'Assert that an object has non-zero length. Args: container: Anything that implements the collections.Sized interface. msg: Optional message to report on failure.'
def assertNotEmpty(self, container, msg=None):
if not isinstance(container, collections.Sized):
  self.fail('Expected a Sized object, got: {!r}'.format(type(container).__name__), msg)
if not len(container):
  self.fail('{!r} has length of 0.'.format(container), msg)
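Both assertions share the same two-step check: reject anything without a length, then test the length. A plain-function sketch of that logic outside the unittest machinery (note that `collections.Sized` moved to `collections.abc` in Python 3):

```python
from collections.abc import Sized

def is_empty(container):
    # Mirrors assertEmpty/assertNotEmpty: only Sized objects qualify.
    if not isinstance(container, Sized):
        raise TypeError('Expected a Sized object, got: %r'
                        % type(container).__name__)
    return len(container) == 0
```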
'Tests the default hyperparameter settings.'
def testTraining(self):
self.RunTraining(self.MakeHyperparams())
'Adds code coverage for gradient clipping.'
def testTrainingWithGradientClipping(self):
self.RunTraining(self.MakeHyperparams(gradient_clip_norm=1.25))
'Adds code coverage for ADAM and the use of moving averaging.'
def testTrainingWithAdamAndAveraging(self):
self.RunTraining(self.MakeHyperparams(learning_method='adam', use_moving_average=True))
'Adds code coverage for CompositeOptimizer.'
def testTrainingWithCompositeOptimizer(self):
grid_point = self.MakeHyperparams(learning_method='composite')
grid_point.composite_optimizer_spec.method1.learning_method = 'adam'
grid_point.composite_optimizer_spec.method2.learning_method = 'momentum'
grid_point.composite_optimizer_spec.method2.momentum = 0.9
self.RunTraining(grid_point)
'Checks that ops ending up at root are called in the expected order. To check the order, we find a path along the directed graph formed by the inputs of each op. If op X has a chain of inputs to op Y, then X cannot be executed before Y. There may be multiple paths between any two ops, but the ops along any path are exe...
def checkOpOrder(self, name, endpoint, expected_op_order):
for target in reversed(expected_op_order):
  path = _find_input_path_to_type(endpoint, target)
  self.assertNotEmpty(path)
  logging.info('path[%d] from %s to %s: %s', len(path), name, target,
               [_as_op(x).type for x in path])
  endpoint = path[-1]
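The ordering check relies on one property of dataflow graphs: if there is a chain of input edges from op X back to op Y, then Y must execute before X. A sketch of both the path search and the backwards walk over the expected order; the graph dict and op names are hypothetical:

```python
def find_input_path_to_type(graph, endpoint, target):
    # Depth-first search along input edges; returns a path ending at
    # the first node matching `target`, or [] if none exists.
    stack = [[endpoint]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == target:
            return path
        for inp in graph.get(node, []):
            stack.append(path + [inp])
    return []

def check_op_order(graph, endpoint, expected_op_order):
    # Walk the expected order from last to first, as checkOpOrder does,
    # re-anchoring the search at the end of each found path.
    for target in reversed(expected_op_order):
        path = find_input_path_to_type(graph, endpoint, target)
        assert path, 'no input path to %s' % target
        endpoint = path[-1]

# Each op maps to the list of its inputs.
graph = {
    'ReleaseSession': ['AttachDataReader'],
    'AttachDataReader': ['GetSession'],
}
```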
'Generates a MasterBuilder and TrainTarget based on a simple spec.'
def getBuilderAndTarget(self, test_name, master_spec_path='simple_parser_master_spec.textproto'):
master_spec = self.LoadSpec(master_spec_path) hyperparam_config = spec_pb2.GridPoint() target = spec_pb2.TrainTarget() target.name = ('test-%s-train' % test_name) target.component_weights.extend(([0] * len(master_spec.component))) target.component_weights[(-1)] = 1.0 target.unroll_using_orac...
'Checks that GetSession and ReleaseSession are called in order.'
def testGetSessionReleaseSession(self):
test_name = 'get-session-release-session' with tf.Graph().as_default(): (builder, target) = self.getBuilderAndTarget(test_name) train = builder.add_training_from_config(target) anno = builder.add_annotation(test_name) path = _find_input_path_to_type(train['run'], 'foo') s...
'Checks that train[\'run\'] and \'annotations\' call AttachDataReader.'
def testAttachDataReader(self):
test_name = 'attach-data-reader' with tf.Graph().as_default(): (builder, target) = self.getBuilderAndTarget(test_name) train = builder.add_training_from_config(target) anno = builder.add_annotation(test_name) self.checkOpOrder('train', train['run'], ['GetSession', 'AttachDataRead...
'Checks that \'annotations\' doesn\'t call SetTracing if disabled.'
def testSetTracingFalse(self):
test_name = 'set-tracing-false' with tf.Graph().as_default(): (builder, _) = self.getBuilderAndTarget(test_name) anno = builder.add_annotation(test_name, enable_tracing=False) path = _find_input_path_to_type(anno['annotations'], 'ReleaseSession') self.assertNotEmpty(path) ...
'Checks that \'annotations\' does call SetTracing if enabled.'
def testSetTracingTrue(self):
test_name = 'set-tracing-true' with tf.Graph().as_default(): (builder, _) = self.getBuilderAndTarget(test_name) anno = builder.add_annotation(test_name, enable_tracing=True) self.checkOpOrder('annotations', anno['annotations'], ['GetSession', 'SetTracing', 'AttachDataReader', 'ReleaseSes...
'Creates ops for converting the input to either format. If \'tensor\' is used, then a conversion from [stride * steps, dim] to [steps + 1, stride, dim] is performed for dynamic_tensor reads. If \'array\' is used, then a conversion from [steps + 1, stride, dim] to [stride * steps, dim] is performed for bulk_tensor reads...
def __init__(self, tensor=None, array=None, stride=None, dim=None):
if (tensor is not None): check.IsNone(array, 'Cannot initialize from tensor and array') check.NotNone(stride, 'Stride is required for bulk tensor') check.NotNone(dim, 'Dim is required for bulk tensor') self._bulk_tensor = tensor wi...
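The two layouts named in the docstring can be illustrated with plain lists. This is a hedged sketch of the array-to-bulk direction under one layout assumption: entry 0 of the `[steps + 1, stride, dim]` array holds the initial (pad) states, and the `[stride * steps, dim]` bulk form lists all steps of batch element 0 first, then element 1, and so on:

```python
def array_to_bulk(array):
    # array: [steps + 1][stride][dim] nested lists; entry 0 is the pad.
    steps_plus_1 = len(array)
    stride = len(array[0])
    # Drop the initial state, transpose [steps, stride] -> [stride, steps],
    # then flatten the leading two axes.
    bulk = []
    for b in range(stride):
        for t in range(1, steps_plus_1):
            bulk.append(array[t][b])
    return bulk

array = [
    [[0.0], [0.0]],   # step 0: initial states for a stride of 2
    [[1.0], [2.0]],   # step 1
    [[3.0], [4.0]],   # step 2
]
```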
'Inits NamedTensor with tensor, name and optional dim.'
def __init__(self, tensor, name, dim=None):
self.tensor = tensor
self.name = name
self.dim = dim
'Construct variables to normalize an input of given shape. Arguments: component: ComponentBuilder handle. name: Human readable name to organize the variables. shape: Shape of the layer to be normalized. dtype: Type of the layer to be normalized.'
def __init__(self, component, name, shape, dtype):
self._name = name self._shape = shape self._component = component beta = tf.get_variable(('beta_%s' % name), shape=shape, dtype=dtype, initializer=tf.zeros_initializer()) gamma = tf.get_variable(('gamma_%s' % name), shape=shape, dtype=dtype, initializer=tf.ones_initializer()) self._params = [bet...
'Apply normalization to input. The shape must match the declared shape in the constructor. [This is copied from tf.contrib.rnn.LayerNormBasicLSTMCell.] Args: inputs: Input tensor Returns: Normalized version of input tensor. Raises: ValueError: if inputs has undefined rank.'
def normalize(self, inputs):
inputs_shape = inputs.get_shape() inputs_rank = inputs_shape.ndims if (inputs_rank is None): raise ValueError(('Inputs %s has undefined rank.' % inputs.name)) axis = range(1, inputs_rank) beta = self._component.get_variable(('beta_%s' % self._name)) gamma = self._component.ge...
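Layer normalization itself is a per-example operation: subtract the mean over the feature axis, divide by the standard deviation, then scale by `gamma` and shift by `beta`. A minimal sketch for a single feature row, with `gamma` and `beta` fixed at their initial values of 1 and 0:

```python
import math

def layer_norm(row, gamma=1.0, beta=0.0, epsilon=1e-6):
    # Per-example mean and variance over the feature axis.
    mean = sum(row) / len(row)
    variance = sum((x - mean) ** 2 for x in row) / len(row)
    inv_std = 1.0 / math.sqrt(variance + epsilon)
    return [gamma * (x - mean) * inv_std + beta for x in row]

normalized = layer_norm([1.0, 2.0, 3.0])
```

With the default `gamma`/`beta`, the output has (approximately) zero mean and unit variance regardless of the input's scale.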
'Creates a new tensor array to store this layer\'s activations. Arguments: stride: Possibly dynamic batch * beam size with which to initialize the tensor array Returns: TensorArray object'
def create_array(self, stride):
check.Gt(self.dim, 0, 'Cannot create array when dimension is dynamic') tensor_array = ta.TensorArray(dtype=tf.float32, size=0, dynamic_size=True, clear_after_read=False, infer_shape=False, name=('%s_array' % self.name)) initial_value = tf.zeros([stride, self.dim]) return tensor_array.w...
'Initializes parameters for embedding matrices. The subclass may provide optional lists of initial layers and context layers to allow this base class constructor to use accessors like get_layer_size(), which is required for networks that may be used self-recurrently. Args: component: parent ComponentBuilderBase object....
def __init__(self, component, init_layers=None, init_context_layers=None):
self._component = component self._params = [] self._layers = (init_layers if init_layers else []) self._regularized_weights = [] self._context_layers = (init_context_layers if init_context_layers else []) self._fixed_feature_dims = {} self._linked_feature_dims = {} for (channel_id, spec)...
'Constructs a feed-forward unit based on the features and context tensors. Args: fixed_embeddings: list of NamedTensor objects linked_embeddings: list of NamedTensor objects context_tensor_arrays: optional list of TensorArray objects used for implicit recurrence. attention_tensor: optional Tensor used for attention. du...
@abc.abstractmethod def create(self, fixed_embeddings, linked_embeddings, context_tensor_arrays, attention_tensor, during_training, stride=None):
pass
'Gets the index of the given named layer of the network.'
def get_layer_index(self, layer_name):
return [x.name for x in self.layers].index(layer_name)
'Gets the size of the given named layer of the network. Args: layer_name: string name of layer to look up. Returns: the size of the layer. Raises: KeyError: if the layer_name to look up doesn\'t exist.'
def get_layer_size(self, layer_name):
for layer in self.layers:
  if layer.name == layer_name:
    return layer.dim
raise KeyError('Layer {} not found in component {}'.format(layer_name, self._component.name))
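The two lookups above differ only in what they return: the position of a named layer versus its dimension. A sketch of both, with `Layer` as a hypothetical stand-in for `dragnn.Layer`:

```python
class Layer(object):
    def __init__(self, name, dim):
        self.name = name
        self.dim = dim

layers = [Layer('state_c_0', 64), Layer('outputs', 128)]

def get_layer_index(layers, layer_name):
    # Position of the named layer; raises ValueError if absent.
    return [x.name for x in layers].index(layer_name)

def get_layer_size(layers, layer_name):
    # Dimension of the named layer; raises KeyError if absent.
    for layer in layers:
        if layer.name == layer_name:
            return layer.dim
    raise KeyError('Layer %s not found' % layer_name)
```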
'Pulls out the logits from the tensors produced by this unit. Args: network_tensors: list of tensors as output by create(). Raises: NotImplementedError: by default a \'logits\' tensor need not be implemented.'
def get_logits(self, network_tensors):
raise NotImplementedError()