Please provide a description of the function:
def table_data_client(self):
    if self._table_data_client is None:
        self._table_data_client = _create_gapic_client(bigtable_v2.BigtableClient)(
            self
        )
    return self._table_data_client
[ "Getter for the gRPC stub used for the Table Admin API.\n\n For example:\n\n .. literalinclude:: snippets.py\n :start-after: [START bigtable_table_data_client]\n :end-before: [END bigtable_table_data_client]\n\n :rtype: :class:`.bigtable_v2.BigtableClient`\n :returns: A BigtableClient object.\n " ]
Please provide a description of the function:
def table_admin_client(self):
    if self._table_admin_client is None:
        if not self._admin:
            raise ValueError("Client is not an admin client.")
        self._table_admin_client = _create_gapic_client(
            bigtable_admin_v2.BigtableTableAdminClient
        )(self)
    return self._table_admin_client
[ "Getter for the gRPC stub used for the Table Admin API.\n\n For example:\n\n .. literalinclude:: snippets.py\n :start-after: [START bigtable_table_admin_client]\n :end-before: [END bigtable_table_admin_client]\n\n :rtype: :class:`.bigtable_admin_pb2.BigtableTableAdmin`\n :returns: A BigtableTableAdmin instance.\n :raises: :class:`ValueError <exceptions.ValueError>` if the current\n client is not an admin client or if it has not been\n :meth:`start`-ed.\n " ]
Please provide a description of the function:
def instance_admin_client(self):
    if self._instance_admin_client is None:
        if not self._admin:
            raise ValueError("Client is not an admin client.")
        self._instance_admin_client = _create_gapic_client(
            bigtable_admin_v2.BigtableInstanceAdminClient
        )(self)
    return self._instance_admin_client
[ "Getter for the gRPC stub used for the Table Admin API.\n\n For example:\n\n .. literalinclude:: snippets.py\n :start-after: [START bigtable_instance_admin_client]\n :end-before: [END bigtable_instance_admin_client]\n\n :rtype: :class:`.bigtable_admin_pb2.BigtableInstanceAdmin`\n :returns: A BigtableInstanceAdmin instance.\n :raises: :class:`ValueError <exceptions.ValueError>` if the current\n client is not an admin client or if it has not been\n :meth:`start`-ed.\n " ]
Please provide a description of the function:
def instance(self, instance_id, display_name=None, instance_type=None, labels=None):
    return Instance(
        instance_id,
        self,
        display_name=display_name,
        instance_type=instance_type,
        labels=labels,
    )
[ "Factory to create a instance associated with this client.\n\n For example:\n\n .. literalinclude:: snippets.py\n :start-after: [START bigtable_create_prod_instance]\n :end-before: [END bigtable_create_prod_instance]\n\n :type instance_id: str\n :param instance_id: The ID of the instance.\n\n :type display_name: str\n :param display_name: (Optional) The display name for the instance in\n the Cloud Console UI. (Must be between 4 and 30\n characters.) If this value is not set in the\n constructor, will fall back to the instance ID.\n\n :type instance_type: int\n :param instance_type: (Optional) The type of the instance.\n Possible values are represented\n by the following constants:\n :data:`google.cloud.bigtable.enums.InstanceType.PRODUCTION`.\n :data:`google.cloud.bigtable.enums.InstanceType.DEVELOPMENT`,\n Defaults to\n :data:`google.cloud.bigtable.enums.InstanceType.UNSPECIFIED`.\n\n :type labels: dict\n :param labels: (Optional) Labels are a flexible and lightweight\n mechanism for organizing cloud resources into groups\n that reflect a customer's organizational needs and\n deployment strategies. They can be used to filter\n resources and aggregate metrics. Label keys must be\n between 1 and 63 characters long. Maximum 64 labels can\n be associated with a given resource. Label values must\n be between 0 and 63 characters long. Keys and values\n must both be under 128 bytes.\n\n :rtype: :class:`~google.cloud.bigtable.instance.Instance`\n :returns: an instance owned by this client.\n " ]
Please provide a description of the function:
def list_instances(self):
    resp = self.instance_admin_client.list_instances(self.project_path)
    instances = [Instance.from_pb(instance, self) for instance in resp.instances]
    return instances, resp.failed_locations
[ "List instances owned by the project.\n\n For example:\n\n .. literalinclude:: snippets.py\n :start-after: [START bigtable_list_instances]\n :end-before: [END bigtable_list_instances]\n\n :rtype: tuple\n :returns:\n (instances, failed_locations), where 'instances' is list of\n :class:`google.cloud.bigtable.instance.Instance`, and\n 'failed_locations' is a list of locations which could not\n be resolved.\n " ]
Please provide a description of the function:
def list_clusters(self):
    resp = self.instance_admin_client.list_clusters(
        self.instance_admin_client.instance_path(self.project, "-")
    )
    clusters = []
    instances = {}
    for cluster in resp.clusters:
        match_cluster_name = _CLUSTER_NAME_RE.match(cluster.name)
        instance_id = match_cluster_name.group("instance")
        if instance_id not in instances:
            instances[instance_id] = self.instance(instance_id)
        clusters.append(Cluster.from_pb(cluster, instances[instance_id]))
    return clusters, resp.failed_locations
[ "List the clusters in the project.\n\n For example:\n\n .. literalinclude:: snippets.py\n :start-after: [START bigtable_list_clusters_in_project]\n :end-before: [END bigtable_list_clusters_in_project]\n\n :rtype: tuple\n :returns:\n (clusters, failed_locations), where 'clusters' is list of\n :class:`google.cloud.bigtable.instance.Cluster`, and\n 'failed_locations' is a list of strings representing\n locations which could not be resolved.\n " ]
Please provide a description of the function:
def DeleteMetricDescriptor(self, request, context):
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details("Method not implemented!")
    raise NotImplementedError("Method not implemented!")
[ "Deletes a metric descriptor. Only user-created\n [custom metrics](/monitoring/custom-metrics) can be deleted.\n " ]
Please provide a description of the function:
def list_clusters(
    self,
    project_id,
    zone,
    parent=None,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    # Wrap the transport method to add retry and timeout logic.
    if "list_clusters" not in self._inner_api_calls:
        self._inner_api_calls[
            "list_clusters"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.list_clusters,
            default_retry=self._method_configs["ListClusters"].retry,
            default_timeout=self._method_configs["ListClusters"].timeout,
            client_info=self._client_info,
        )

    request = cluster_service_pb2.ListClustersRequest(
        project_id=project_id, zone=zone, parent=parent
    )
    return self._inner_api_calls["list_clusters"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
[ "\n Lists all clusters owned by a project in either the specified zone or all\n zones.\n\n Example:\n >>> from google.cloud import container_v1\n >>>\n >>> client = container_v1.ClusterManagerClient()\n >>>\n >>> # TODO: Initialize `project_id`:\n >>> project_id = ''\n >>>\n >>> # TODO: Initialize `zone`:\n >>> zone = ''\n >>>\n >>> response = client.list_clusters(project_id, zone)\n\n Args:\n project_id (str): Deprecated. The Google Developers Console `project ID or project\n number <https://support.google.com/cloud/answer/6158840>`__. This field\n has been deprecated and replaced by the parent field.\n zone (str): Deprecated. The name of the Google Compute Engine\n `zone <https://cloud.google.com/compute/docs/zones#available>`__ in\n which the cluster resides, or \"-\" for all zones. This field has been\n deprecated and replaced by the parent field.\n parent (str): The parent (project and location) where the clusters will be listed.\n Specified in the format 'projects/*/locations/*'. Location \"-\" matches\n all zones and all regions.\n retry (Optional[google.api_core.retry.Retry]): A retry object used\n to retry requests. If ``None`` is specified, requests will not\n be retried.\n timeout (Optional[float]): The amount of time, in seconds, to wait\n for the request to complete. Note that if ``retry`` is\n specified, the timeout applies to each individual attempt.\n metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata\n that is provided to the method.\n\n Returns:\n A :class:`~google.cloud.container_v1.types.ListClustersResponse` instance.\n\n Raises:\n google.api_core.exceptions.GoogleAPICallError: If the request\n failed for any reason.\n google.api_core.exceptions.RetryError: If the request failed due\n to a retryable error and retry attempts failed.\n ValueError: If the parameters are invalid.\n " ]
Please provide a description of the function:
def get_cluster(
    self,
    project_id,
    zone,
    cluster_id,
    name=None,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    # Wrap the transport method to add retry and timeout logic.
    if "get_cluster" not in self._inner_api_calls:
        self._inner_api_calls[
            "get_cluster"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.get_cluster,
            default_retry=self._method_configs["GetCluster"].retry,
            default_timeout=self._method_configs["GetCluster"].timeout,
            client_info=self._client_info,
        )

    request = cluster_service_pb2.GetClusterRequest(
        project_id=project_id, zone=zone, cluster_id=cluster_id, name=name
    )
    return self._inner_api_calls["get_cluster"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
[ "\n Gets the details of a specific cluster.\n\n Example:\n >>> from google.cloud import container_v1\n >>>\n >>> client = container_v1.ClusterManagerClient()\n >>>\n >>> # TODO: Initialize `project_id`:\n >>> project_id = ''\n >>>\n >>> # TODO: Initialize `zone`:\n >>> zone = ''\n >>>\n >>> # TODO: Initialize `cluster_id`:\n >>> cluster_id = ''\n >>>\n >>> response = client.get_cluster(project_id, zone, cluster_id)\n\n Args:\n project_id (str): Deprecated. The Google Developers Console `project ID or project\n number <https://support.google.com/cloud/answer/6158840>`__. This field\n has been deprecated and replaced by the name field.\n zone (str): Deprecated. The name of the Google Compute Engine\n `zone <https://cloud.google.com/compute/docs/zones#available>`__ in\n which the cluster resides. This field has been deprecated and replaced\n by the name field.\n cluster_id (str): Deprecated. The name of the cluster to retrieve.\n This field has been deprecated and replaced by the name field.\n name (str): The name (project, location, cluster) of the cluster to retrieve.\n Specified in the format 'projects/*/locations/*/clusters/\\*'.\n retry (Optional[google.api_core.retry.Retry]): A retry object used\n to retry requests. If ``None`` is specified, requests will not\n be retried.\n timeout (Optional[float]): The amount of time, in seconds, to wait\n for the request to complete. Note that if ``retry`` is\n specified, the timeout applies to each individual attempt.\n metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata\n that is provided to the method.\n\n Returns:\n A :class:`~google.cloud.container_v1.types.Cluster` instance.\n\n Raises:\n google.api_core.exceptions.GoogleAPICallError: If the request\n failed for any reason.\n google.api_core.exceptions.RetryError: If the request failed due\n to a retryable error and retry attempts failed.\n ValueError: If the parameters are invalid.\n " ]
Please provide a description of the function:
def set_labels(
    self,
    project_id,
    zone,
    cluster_id,
    resource_labels,
    label_fingerprint,
    name=None,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    # Wrap the transport method to add retry and timeout logic.
    if "set_labels" not in self._inner_api_calls:
        self._inner_api_calls[
            "set_labels"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.set_labels,
            default_retry=self._method_configs["SetLabels"].retry,
            default_timeout=self._method_configs["SetLabels"].timeout,
            client_info=self._client_info,
        )

    request = cluster_service_pb2.SetLabelsRequest(
        project_id=project_id,
        zone=zone,
        cluster_id=cluster_id,
        resource_labels=resource_labels,
        label_fingerprint=label_fingerprint,
        name=name,
    )
    return self._inner_api_calls["set_labels"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
[ "\n Sets labels on a cluster.\n\n Example:\n >>> from google.cloud import container_v1\n >>>\n >>> client = container_v1.ClusterManagerClient()\n >>>\n >>> # TODO: Initialize `project_id`:\n >>> project_id = ''\n >>>\n >>> # TODO: Initialize `zone`:\n >>> zone = ''\n >>>\n >>> # TODO: Initialize `cluster_id`:\n >>> cluster_id = ''\n >>>\n >>> # TODO: Initialize `resource_labels`:\n >>> resource_labels = {}\n >>>\n >>> # TODO: Initialize `label_fingerprint`:\n >>> label_fingerprint = ''\n >>>\n >>> response = client.set_labels(project_id, zone, cluster_id, resource_labels, label_fingerprint)\n\n Args:\n project_id (str): Deprecated. The Google Developers Console `project ID or project\n number <https://developers.google.com/console/help/new/#projectnumber>`__.\n This field has been deprecated and replaced by the name field.\n zone (str): Deprecated. The name of the Google Compute Engine\n `zone <https://cloud.google.com/compute/docs/zones#available>`__ in\n which the cluster resides. This field has been deprecated and replaced\n by the name field.\n cluster_id (str): Deprecated. The name of the cluster.\n This field has been deprecated and replaced by the name field.\n resource_labels (dict[str -> str]): The labels to set for that cluster.\n label_fingerprint (str): The fingerprint of the previous set of labels for this resource,\n used to detect conflicts. The fingerprint is initially generated by\n Kubernetes Engine and changes after every request to modify or update\n labels. You must always provide an up-to-date fingerprint hash when\n updating or changing labels. Make a <code>get()</code> request to the\n resource to get the latest fingerprint.\n name (str): The name (project, location, cluster id) of the cluster to set labels.\n Specified in the format 'projects/*/locations/*/clusters/\\*'.\n retry (Optional[google.api_core.retry.Retry]): A retry object used\n to retry requests. If ``None`` is specified, requests will not\n be retried.\n timeout (Optional[float]): The amount of time, in seconds, to wait\n for the request to complete. Note that if ``retry`` is\n specified, the timeout applies to each individual attempt.\n metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata\n that is provided to the method.\n\n Returns:\n A :class:`~google.cloud.container_v1.types.Operation` instance.\n\n Raises:\n google.api_core.exceptions.GoogleAPICallError: If the request\n failed for any reason.\n google.api_core.exceptions.RetryError: If the request failed due\n to a retryable error and retry attempts failed.\n ValueError: If the parameters are invalid.\n " ]
Please provide a description of the function:
def metric_path(cls, project, metric):
    return google.api_core.path_template.expand(
        "projects/{project}/metrics/{metric}", project=project, metric=metric
    )
[ "Return a fully-qualified metric string." ]
Please provide a description of the function:
def get_log_metric(
    self,
    metric_name,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    # Wrap the transport method to add retry and timeout logic.
    if "get_log_metric" not in self._inner_api_calls:
        self._inner_api_calls[
            "get_log_metric"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.get_log_metric,
            default_retry=self._method_configs["GetLogMetric"].retry,
            default_timeout=self._method_configs["GetLogMetric"].timeout,
            client_info=self._client_info,
        )

    request = logging_metrics_pb2.GetLogMetricRequest(metric_name=metric_name)
    if metadata is None:
        metadata = []
    metadata = list(metadata)
    try:
        routing_header = [("metric_name", metric_name)]
    except AttributeError:
        pass
    else:
        routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
            routing_header
        )
        metadata.append(routing_metadata)

    return self._inner_api_calls["get_log_metric"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
[ "\n Gets a logs-based metric.\n\n Example:\n >>> from google.cloud import logging_v2\n >>>\n >>> client = logging_v2.MetricsServiceV2Client()\n >>>\n >>> metric_name = client.metric_path('[PROJECT]', '[METRIC]')\n >>>\n >>> response = client.get_log_metric(metric_name)\n\n Args:\n metric_name (str): The resource name of the desired metric:\n\n ::\n\n \"projects/[PROJECT_ID]/metrics/[METRIC_ID]\"\n retry (Optional[google.api_core.retry.Retry]): A retry object used\n to retry requests. If ``None`` is specified, requests will not\n be retried.\n timeout (Optional[float]): The amount of time, in seconds, to wait\n for the request to complete. Note that if ``retry`` is\n specified, the timeout applies to each individual attempt.\n metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata\n that is provided to the method.\n\n Returns:\n A :class:`~google.cloud.logging_v2.types.LogMetric` instance.\n\n Raises:\n google.api_core.exceptions.GoogleAPICallError: If the request\n failed for any reason.\n google.api_core.exceptions.RetryError: If the request failed due\n to a retryable error and retry attempts failed.\n ValueError: If the parameters are invalid.\n " ]
Please provide a description of the function:
def metric_descriptor_path(cls, project, metric_descriptor):
    return google.api_core.path_template.expand(
        "projects/{project}/metricDescriptors/{metric_descriptor=**}",
        project=project,
        metric_descriptor=metric_descriptor,
    )
[ "Return a fully-qualified metric_descriptor string." ]
Please provide a description of the function:
def monitored_resource_descriptor_path(cls, project, monitored_resource_descriptor):
    return google.api_core.path_template.expand(
        "projects/{project}/monitoredResourceDescriptors/{monitored_resource_descriptor}",
        project=project,
        monitored_resource_descriptor=monitored_resource_descriptor,
    )
[ "Return a fully-qualified monitored_resource_descriptor string." ]
Please provide a description of the function:
def _to_pb(self):
    kwargs = {}

    if self.start_open is not None:
        kwargs["start_open"] = _make_list_value_pb(self.start_open)

    if self.start_closed is not None:
        kwargs["start_closed"] = _make_list_value_pb(self.start_closed)

    if self.end_open is not None:
        kwargs["end_open"] = _make_list_value_pb(self.end_open)

    if self.end_closed is not None:
        kwargs["end_closed"] = _make_list_value_pb(self.end_closed)

    return KeyRangePB(**kwargs)
[ "Construct a KeyRange protobuf.\n\n :rtype: :class:`~google.cloud.spanner_v1.proto.keys_pb2.KeyRange`\n :returns: protobuf corresponding to this instance.\n " ]
Please provide a description of the function:
def _to_dict(self):
    mapping = {}

    if self.start_open:
        mapping["start_open"] = self.start_open

    if self.start_closed:
        mapping["start_closed"] = self.start_closed

    if self.end_open:
        mapping["end_open"] = self.end_open

    if self.end_closed:
        mapping["end_closed"] = self.end_closed

    return mapping
[ "Return keyrange's state as a dict.\n\n :rtype: dict\n :returns: state of this instance.\n " ]
Please provide a description of the function:
def _to_pb(self):
    if self.all_:
        return KeySetPB(all=True)

    kwargs = {}

    if self.keys:
        kwargs["keys"] = _make_list_value_pbs(self.keys)

    if self.ranges:
        kwargs["ranges"] = [krange._to_pb() for krange in self.ranges]

    return KeySetPB(**kwargs)
[ "Construct a KeySet protobuf.\n\n :rtype: :class:`~google.cloud.spanner_v1.proto.keys_pb2.KeySet`\n :returns: protobuf corresponding to this instance.\n " ]
Please provide a description of the function:
def _to_dict(self):
    if self.all_:
        return {"all": True}

    return {
        "keys": self.keys,
        "ranges": [keyrange._to_dict() for keyrange in self.ranges],
    }
[ "Return keyset's state as a dict.\n\n The result can be used to serialize the instance and reconstitute\n it later using :meth:`_from_dict`.\n\n :rtype: dict\n :returns: state of this instance.\n " ]
Please provide a description of the function:
def _from_dict(cls, mapping):
    if mapping.get("all"):
        return cls(all_=True)

    r_mappings = mapping.get("ranges", ())
    ranges = [KeyRange(**r_mapping) for r_mapping in r_mappings]

    return cls(keys=mapping.get("keys", ()), ranges=ranges)
[ "Create an instance from the corresponding state mapping.\n\n :type mapping: dict\n :param mapping: the instance state.\n " ]
Please provide a description of the function:
def _wrap_unary_errors(callable_):
    _patch_callable_name(callable_)

    @six.wraps(callable_)
    def error_remapped_callable(*args, **kwargs):
        try:
            return callable_(*args, **kwargs)
        except grpc.RpcError as exc:
            six.raise_from(exceptions.from_grpc_error(exc), exc)

    return error_remapped_callable
[ "Map errors for Unary-Unary and Stream-Unary gRPC callables." ]
Please provide a description of the function:
def _wrap_stream_errors(callable_):
    _patch_callable_name(callable_)

    @general_helpers.wraps(callable_)
    def error_remapped_callable(*args, **kwargs):
        try:
            result = callable_(*args, **kwargs)
            return _StreamingResponseIterator(result)
        except grpc.RpcError as exc:
            six.raise_from(exceptions.from_grpc_error(exc), exc)

    return error_remapped_callable
[ "Wrap errors for Unary-Stream and Stream-Stream gRPC callables.\n\n The callables that return iterators require a bit more logic to re-map\n errors when iterating. This wraps both the initial invocation and the\n iterator of the return value to re-map errors.\n " ]
Please provide a description of the function:
def create_channel(
    target, credentials=None, scopes=None, ssl_credentials=None, **kwargs
):
    if credentials is None:
        credentials, _ = google.auth.default(scopes=scopes)
    else:
        credentials = google.auth.credentials.with_scopes_if_required(
            credentials, scopes
        )

    request = google.auth.transport.requests.Request()

    # Create the metadata plugin for inserting the authorization header.
    metadata_plugin = google.auth.transport.grpc.AuthMetadataPlugin(
        credentials, request
    )

    # Create a set of grpc.CallCredentials using the metadata plugin.
    google_auth_credentials = grpc.metadata_call_credentials(metadata_plugin)

    if ssl_credentials is None:
        ssl_credentials = grpc.ssl_channel_credentials()

    # Combine the ssl credentials and the authorization credentials.
    composite_credentials = grpc.composite_channel_credentials(
        ssl_credentials, google_auth_credentials
    )

    if HAS_GRPC_GCP:
        # If the grpc_gcp module is available, use grpc_gcp.secure_channel;
        # otherwise, use grpc.secure_channel to create the gRPC channel.
        return grpc_gcp.secure_channel(target, composite_credentials, **kwargs)
    else:
        return grpc.secure_channel(target, composite_credentials, **kwargs)
[ "Create a secure channel with credentials.\n\n Args:\n target (str): The target service address in the format 'hostname:port'.\n credentials (google.auth.credentials.Credentials): The credentials. If\n not specified, then this function will attempt to ascertain the\n credentials from the environment using :func:`google.auth.default`.\n scopes (Sequence[str]): A optional list of scopes needed for this\n service. These are only used when credentials are not specified and\n are passed to :func:`google.auth.default`.\n ssl_credentials (grpc.ChannelCredentials): Optional SSL channel\n credentials. This can be used to specify different certificates.\n kwargs: Additional key-word args passed to\n :func:`grpc_gcp.secure_channel` or :func:`grpc.secure_channel`.\n\n Returns:\n grpc.Channel: The created channel.\n " ]
Please provide a description of the function:
def next(self):
    try:
        return six.next(self._wrapped)
    except grpc.RpcError as exc:
        six.raise_from(exceptions.from_grpc_error(exc), exc)
[ "Get the next response from the stream.\n\n Returns:\n protobuf.Message: A single response from the stream.\n " ]
Please provide a description of the function:
def wraps(wrapped):
    if isinstance(wrapped, functools.partial):
        return six.wraps(wrapped, assigned=_PARTIAL_VALID_ASSIGNMENTS)
    else:
        return six.wraps(wrapped)
[ "A functools.wraps helper that handles partial objects on Python 2." ]
Please provide a description of the function:
def _determine_default_project(project=None):
    if project is None:
        project = _get_gcd_project()

    if project is None:
        project = _base_default_project(project=project)

    return project
[ "Determine default project explicitly or implicitly as fall-back.\n\n In implicit case, supports four environments. In order of precedence, the\n implicit environments are:\n\n * DATASTORE_DATASET environment variable (for ``gcd`` / emulator testing)\n * GOOGLE_CLOUD_PROJECT environment variable\n * Google App Engine application ID\n * Google Compute Engine project ID (from metadata server)\n\n :type project: str\n :param project: Optional. The project to use as default.\n\n :rtype: str or ``NoneType``\n :returns: Default project if it can be determined.\n " ]
Please provide a description of the function:
def _extended_lookup(
    datastore_api,
    project,
    key_pbs,
    missing=None,
    deferred=None,
    eventual=False,
    transaction_id=None,
):
    if missing is not None and missing != []:
        raise ValueError("missing must be None or an empty list")

    if deferred is not None and deferred != []:
        raise ValueError("deferred must be None or an empty list")

    results = []

    loop_num = 0
    read_options = helpers.get_read_options(eventual, transaction_id)
    while loop_num < _MAX_LOOPS:  # loop against possible deferred.
        loop_num += 1
        lookup_response = datastore_api.lookup(
            project, key_pbs, read_options=read_options
        )

        # Accumulate the new results.
        results.extend(result.entity for result in lookup_response.found)

        if missing is not None:
            missing.extend(result.entity for result in lookup_response.missing)

        if deferred is not None:
            deferred.extend(lookup_response.deferred)
            break

        if len(lookup_response.deferred) == 0:
            break

        # We have deferred keys, and the user didn't ask to know about
        # them, so retry (but only with the deferred ones).
        key_pbs = lookup_response.deferred

    return results
[ "Repeat lookup until all keys found (unless stop requested).\n\n Helper function for :meth:`Client.get_multi`.\n\n :type datastore_api:\n :class:`google.cloud.datastore._http.HTTPDatastoreAPI`\n or :class:`google.cloud.datastore_v1.gapic.DatastoreClient`\n :param datastore_api: The datastore API object used to connect\n to datastore.\n\n :type project: str\n :param project: The project to make the request for.\n\n :type key_pbs: list of :class:`.entity_pb2.Key`\n :param key_pbs: The keys to retrieve from the datastore.\n\n :type missing: list\n :param missing: (Optional) If a list is passed, the key-only entity\n protobufs returned by the backend as \"missing\" will be\n copied into it.\n\n :type deferred: list\n :param deferred: (Optional) If a list is passed, the key protobufs returned\n by the backend as \"deferred\" will be copied into it.\n\n :type eventual: bool\n :param eventual: If False (the default), request ``STRONG`` read\n consistency. If True, request ``EVENTUAL`` read\n consistency.\n\n :type transaction_id: str\n :param transaction_id: If passed, make the request in the scope of\n the given transaction. Incompatible with\n ``eventual==True``.\n\n :rtype: list of :class:`.entity_pb2.Entity`\n :returns: The requested entities.\n :raises: :class:`ValueError` if missing / deferred are not null or\n empty list.\n " ]
Please provide a description of the function:
def _datastore_api(self):
    if self._datastore_api_internal is None:
        if self._use_grpc:
            self._datastore_api_internal = make_datastore_api(self)
        else:
            self._datastore_api_internal = HTTPDatastoreAPI(self)
    return self._datastore_api_internal
[ "Getter for a wrapped API object." ]
Please provide a description of the function:
def get(self, key, missing=None, deferred=None, transaction=None, eventual=False):
    entities = self.get_multi(
        keys=[key],
        missing=missing,
        deferred=deferred,
        transaction=transaction,
        eventual=eventual,
    )
    if entities:
        return entities[0]
[ "Retrieve an entity from a single key (if it exists).\n\n .. note::\n\n This is just a thin wrapper over :meth:`get_multi`.\n The backend API does not make a distinction between a single key or\n multiple keys in a lookup request.\n\n :type key: :class:`google.cloud.datastore.key.Key`\n :param key: The key to be retrieved from the datastore.\n\n :type missing: list\n :param missing: (Optional) If a list is passed, the key-only entities\n returned by the backend as \"missing\" will be copied\n into it.\n\n :type deferred: list\n :param deferred: (Optional) If a list is passed, the keys returned\n by the backend as \"deferred\" will be copied into it.\n\n :type transaction:\n :class:`~google.cloud.datastore.transaction.Transaction`\n :param transaction: (Optional) Transaction to use for read consistency.\n If not passed, uses current transaction, if set.\n\n :type eventual: bool\n :param eventual: (Optional) Defaults to strongly consistent (False).\n Setting True will use eventual consistency, but cannot\n be used inside a transaction or will raise ValueError.\n\n :rtype: :class:`google.cloud.datastore.entity.Entity` or ``NoneType``\n :returns: The requested entity if it exists.\n\n :raises: :class:`ValueError` if eventual is True and in a transaction.\n " ]
Please provide a description of the function:
def get_multi(
    self, keys, missing=None, deferred=None, transaction=None, eventual=False
):
    if not keys:
        return []

    ids = set(key.project for key in keys)
    for current_id in ids:
        if current_id != self.project:
            raise ValueError("Keys do not match project")

    if transaction is None:
        transaction = self.current_transaction

    entity_pbs = _extended_lookup(
        datastore_api=self._datastore_api,
        project=self.project,
        key_pbs=[key.to_protobuf() for key in keys],
        eventual=eventual,
        missing=missing,
        deferred=deferred,
        transaction_id=transaction and transaction.id,
    )

    if missing is not None:
        missing[:] = [
            helpers.entity_from_protobuf(missed_pb) for missed_pb in missing
        ]

    if deferred is not None:
        deferred[:] = [
            helpers.key_from_protobuf(deferred_pb) for deferred_pb in deferred
        ]

    return [helpers.entity_from_protobuf(entity_pb) for entity_pb in entity_pbs]
[ "Retrieve entities, along with their attributes.\n\n :type keys: list of :class:`google.cloud.datastore.key.Key`\n :param keys: The keys to be retrieved from the datastore.\n\n :type missing: list\n :param missing: (Optional) If a list is passed, the key-only entities\n returned by the backend as \"missing\" will be copied\n into it. If the list is not empty, an error will occur.\n\n :type deferred: list\n :param deferred: (Optional) If a list is passed, the keys returned\n by the backend as \"deferred\" will be copied into it.\n If the list is not empty, an error will occur.\n\n :type transaction:\n :class:`~google.cloud.datastore.transaction.Transaction`\n :param transaction: (Optional) Transaction to use for read consistency.\n If not passed, uses current transaction, if set.\n\n :type eventual: bool\n :param eventual: (Optional) Defaults to strongly consistent (False).\n Setting True will use eventual consistency, but cannot\n be used inside a transaction or will raise ValueError.\n\n :rtype: list of :class:`google.cloud.datastore.entity.Entity`\n :returns: The requested entities.\n :raises: :class:`ValueError` if one or more of ``keys`` has a project\n which does not match our project.\n :raises: :class:`ValueError` if eventual is True and in a transaction.\n " ]
Please provide a description of the function:
def put_multi(self, entities):
    if isinstance(entities, Entity):
        raise ValueError("Pass a sequence of entities")

    if not entities:
        return

    current = self.current_batch
    in_batch = current is not None

    if not in_batch:
        current = self.batch()
        current.begin()

    for entity in entities:
        current.put(entity)

    if not in_batch:
        current.commit()
[ "Save entities in the Cloud Datastore.\n\n :type entities: list of :class:`google.cloud.datastore.entity.Entity`\n :param entities: The entities to be saved to the datastore.\n\n :raises: :class:`ValueError` if ``entities`` is a single entity.\n " ]
Please provide a description of the function:
def delete_multi(self, keys):
    if not keys:
        return

    # We allow partial keys to attempt a delete, the backend will fail.
    current = self.current_batch
    in_batch = current is not None

    if not in_batch:
        current = self.batch()
        current.begin()

    for key in keys:
        current.delete(key)

    if not in_batch:
        current.commit()
[ "Delete keys from the Cloud Datastore.\n\n :type keys: list of :class:`google.cloud.datastore.key.Key`\n :param keys: The keys to be deleted from the Datastore.\n " ]
Please provide a description of the function:
def allocate_ids(self, incomplete_key, num_ids):
    if not incomplete_key.is_partial:
        raise ValueError(("Key is not partial.", incomplete_key))

    incomplete_key_pb = incomplete_key.to_protobuf()
    incomplete_key_pbs = [incomplete_key_pb] * num_ids

    response_pb = self._datastore_api.allocate_ids(
        incomplete_key.project, incomplete_key_pbs
    )
    allocated_ids = [
        allocated_key_pb.path[-1].id for allocated_key_pb in response_pb.keys
    ]
    return [
        incomplete_key.completed_key(allocated_id) for allocated_id in allocated_ids
    ]
[ "Allocate a list of IDs from a partial key.\n\n :type incomplete_key: :class:`google.cloud.datastore.key.Key`\n :param incomplete_key: Partial key to use as base for allocated IDs.\n\n :type num_ids: int\n :param num_ids: The number of IDs to allocate.\n\n :rtype: list of :class:`google.cloud.datastore.key.Key`\n :returns: The (complete) keys allocated with ``incomplete_key`` as\n root.\n :raises: :class:`ValueError` if ``incomplete_key`` is not a\n partial key.\n " ]
Please provide a description of the function:
def key(self, *path_args, **kwargs):
    if "project" in kwargs:
        raise TypeError("Cannot pass project")
    kwargs["project"] = self.project
    if "namespace" not in kwargs:
        kwargs["namespace"] = self.namespace
    return Key(*path_args, **kwargs)
[ "Proxy to :class:`google.cloud.datastore.key.Key`.\n\n Passes our ``project``.\n " ]
Please provide a description of the function:
def query(self, **kwargs):
    if "client" in kwargs:
        raise TypeError("Cannot pass client")
    if "project" in kwargs:
        raise TypeError("Cannot pass project")
    kwargs["project"] = self.project
    if "namespace" not in kwargs:
        kwargs["namespace"] = self.namespace
    return Query(self, **kwargs)
[ "Proxy to :class:`google.cloud.datastore.query.Query`.\n\n Passes our ``project``.\n\n Using query to search a datastore:\n\n .. testsetup:: query\n\n import os\n import uuid\n\n from google.cloud import datastore\n\n unique = os.getenv('CIRCLE_BUILD_NUM', str(uuid.uuid4())[0:8])\n client = datastore.Client(namespace='ns{}'.format(unique))\n query = client.query(kind='_Doctest')\n\n def do_something(entity):\n pass\n\n .. doctest:: query\n\n >>> query = client.query(kind='MyKind')\n >>> query.add_filter('property', '=', 'val')\n\n Using the query iterator\n\n .. doctest:: query\n\n >>> query_iter = query.fetch()\n >>> for entity in query_iter:\n ... do_something(entity)\n\n or manually page through results\n\n .. testsetup:: query-page\n\n import os\n import uuid\n\n from google.cloud import datastore\n from tests.system.test_system import Config # system tests\n\n unique = os.getenv('CIRCLE_BUILD_NUM', str(uuid.uuid4())[0:8])\n client = datastore.Client(namespace='ns{}'.format(unique))\n\n key = client.key('_Doctest')\n entity1 = datastore.Entity(key=key)\n entity1['foo'] = 1337\n entity2 = datastore.Entity(key=key)\n entity2['foo'] = 42\n Config.TO_DELETE.extend([entity1, entity2])\n client.put_multi([entity1, entity2])\n\n query = client.query(kind='_Doctest')\n cursor = None\n\n .. doctest:: query-page\n\n >>> query_iter = query.fetch(start_cursor=cursor)\n >>> pages = query_iter.pages\n >>>\n >>> first_page = next(pages)\n >>> first_page_entities = list(first_page)\n >>> query_iter.next_page_token is None\n True\n\n :type kwargs: dict\n :param kwargs: Parameters for initializing and instance of\n :class:`~google.cloud.datastore.query.Query`.\n\n :rtype: :class:`~google.cloud.datastore.query.Query`\n :returns: A query object.\n " ]
Please provide a description of the function:
def mutate(self, row):
    mutation_count = len(row._get_mutations())
    if mutation_count > MAX_MUTATIONS:
        raise MaxMutationsError(
            "The row key {} exceeds the number of mutations {}.".format(
                row.row_key, mutation_count
            )
        )

    if (self.total_mutation_count + mutation_count) >= MAX_MUTATIONS:
        self.flush()

    self.rows.append(row)
    self.total_mutation_count += mutation_count
    self.total_size += row.get_mutations_size()

    if self.total_size >= self.max_row_bytes or len(self.rows) >= self.flush_count:
        self.flush()
[ " Add a row to the batch. If the current batch meets one of the size\n limits, the batch is sent synchronously.\n\n For example:\n\n .. literalinclude:: snippets.py\n :start-after: [START bigtable_batcher_mutate]\n :end-before: [END bigtable_batcher_mutate]\n\n :type row: class\n :param row: class:`~google.cloud.bigtable.row.DirectRow`.\n\n :raises: One of the following:\n * :exc:`~.table._BigtableRetryableError` if any\n row returned a transient error.\n * :exc:`RuntimeError` if the number of responses doesn't\n match the number of rows that were retried\n * :exc:`.batcher.MaxMutationsError` if any row exceeds max\n mutations count.\n " ]
Please provide a description of the function:
def flush(self):
    if len(self.rows) != 0:
        self.table.mutate_rows(self.rows)
        self.total_mutation_count = 0
        self.total_size = 0
        self.rows = []
[ " Sends the current. batch to Cloud Bigtable.\n For example:\n\n .. literalinclude:: snippets.py\n :start-after: [START bigtable_batcher_flush]\n :end-before: [END bigtable_batcher_flush]\n\n " ]
Please provide a description of the function:
def _parse_topic_path(topic_path):
    match = _TOPIC_REF_RE.match(topic_path)
    if match is None:
        raise ValueError(_BAD_TOPIC.format(topic_path))

    return match.group("name"), match.group("project")
[ "Verify that a topic path is in the correct format.\n\n .. _resource manager docs: https://cloud.google.com/resource-manager/\\\n reference/rest/v1beta1/projects#\\\n Project.FIELDS.project_id\n .. _topic spec: https://cloud.google.com/storage/docs/json_api/v1/\\\n notifications/insert#topic\n\n Expected to be of the form:\n\n //pubsub.googleapis.com/projects/{project}/topics/{topic}\n\n where the ``project`` value must be \"6 to 30 lowercase letters, digits,\n or hyphens. It must start with a letter. Trailing hyphens are prohibited.\"\n (see `resource manager docs`_) and ``topic`` must have length at least two,\n must start with a letter and may only contain alphanumeric characters or\n ``-``, ``_``, ``.``, ``~``, ``+`` or ``%`` (i.e characters used for URL\n encoding, see `topic spec`_).\n\n Args:\n topic_path (str): The topic path to be verified.\n\n Returns:\n Tuple[str, str]: The ``project`` and ``topic`` parsed from the\n ``topic_path``.\n\n Raises:\n ValueError: If the topic path is invalid.\n " ]
Please provide a description of the function:
def from_api_repr(cls, resource, bucket):
    topic_path = resource.get("topic")
    if topic_path is None:
        raise ValueError("Resource has no topic")

    name, project = _parse_topic_path(topic_path)
    instance = cls(bucket, name, topic_project=project)
    instance._properties = resource

    return instance
[ "Construct an instance from the JSON repr returned by the server.\n\n See: https://cloud.google.com/storage/docs/json_api/v1/notifications\n\n :type resource: dict\n :param resource: JSON repr of the notification\n\n :type bucket: :class:`google.cloud.storage.bucket.Bucket`\n :param bucket: Bucket to which the notification is bound.\n\n :rtype: :class:`BucketNotification`\n :returns: the new notification instance\n " ]
Please provide a description of the function:
def _set_properties(self, response):
    self._properties.clear()
    self._properties.update(response)
[ "Helper for :meth:`reload`.\n\n :type response: dict\n :param response: resource mapping from server\n " ]
Please provide a description of the function:
def create(self, client=None):
    if self.notification_id is not None:
        raise ValueError(
            "Notification already exists w/ id: {}".format(self.notification_id)
        )

    client = self._require_client(client)

    query_params = {}
    if self.bucket.user_project is not None:
        query_params["userProject"] = self.bucket.user_project

    path = "/b/{}/notificationConfigs".format(self.bucket.name)
    properties = self._properties.copy()
    properties["topic"] = _TOPIC_REF_FMT.format(self.topic_project, self.topic_name)
    self._properties = client._connection.api_request(
        method="POST", path=path, query_params=query_params, data=properties
    )
[ "API wrapper: create the notification.\n\n See:\n https://cloud.google.com/storage/docs/json_api/v1/notifications/insert\n\n If :attr:`user_project` is set on the bucket, bills the API request\n to that project.\n\n :type client: :class:`~google.cloud.storage.client.Client`\n :param client: (Optional) the client to use. If not passed, falls back\n to the ``client`` stored on the notification's bucket.\n " ]
Please provide a description of the function:
def exists(self, client=None):
    if self.notification_id is None:
        raise ValueError("Notification not initialized by server")

    client = self._require_client(client)

    query_params = {}
    if self.bucket.user_project is not None:
        query_params["userProject"] = self.bucket.user_project

    try:
        client._connection.api_request(
            method="GET", path=self.path, query_params=query_params
        )
    except NotFound:
        return False
    else:
        return True
[ "Test whether this notification exists.\n\n See:\n https://cloud.google.com/storage/docs/json_api/v1/notifications/get\n\n If :attr:`user_project` is set on the bucket, bills the API request\n to that project.\n\n :type client: :class:`~google.cloud.storage.client.Client` or\n ``NoneType``\n :param client: Optional. The client to use. If not passed, falls back\n to the ``client`` stored on the current bucket.\n\n :rtype: bool\n :returns: True, if the notification exists, else False.\n :raises ValueError: if the notification has no ID.\n " ]
Please provide a description of the function:
def reload(self, client=None):
    if self.notification_id is None:
        raise ValueError("Notification not initialized by server")

    client = self._require_client(client)

    query_params = {}
    if self.bucket.user_project is not None:
        query_params["userProject"] = self.bucket.user_project

    response = client._connection.api_request(
        method="GET", path=self.path, query_params=query_params
    )
    self._set_properties(response)
[ "Update this notification from the server configuration.\n\n See:\n https://cloud.google.com/storage/docs/json_api/v1/notifications/get\n\n If :attr:`user_project` is set on the bucket, bills the API request\n to that project.\n\n :type client: :class:`~google.cloud.storage.client.Client` or\n ``NoneType``\n :param client: Optional. The client to use. If not passed, falls back\n to the ``client`` stored on the current bucket.\n\n :rtype: bool\n :returns: True, if the notification exists, else False.\n :raises ValueError: if the notification has no ID.\n " ]
Please provide a description of the function:
def delete(self, client=None):
    if self.notification_id is None:
        raise ValueError("Notification not initialized by server")

    client = self._require_client(client)

    query_params = {}
    if self.bucket.user_project is not None:
        query_params["userProject"] = self.bucket.user_project

    client._connection.api_request(
        method="DELETE", path=self.path, query_params=query_params
    )
[ "Delete this notification.\n\n See:\n https://cloud.google.com/storage/docs/json_api/v1/notifications/delete\n\n If :attr:`user_project` is set on the bucket, bills the API request\n to that project.\n\n :type client: :class:`~google.cloud.storage.client.Client` or\n ``NoneType``\n :param client: Optional. The client to use. If not passed, falls back\n to the ``client`` stored on the current bucket.\n\n :raises: :class:`google.api_core.exceptions.NotFound`:\n if the notification does not exist.\n :raises ValueError: if the notification has no ID.\n " ]
Please provide a description of the function:
def create_instance(
    self,
    parent,
    instance_id,
    instance,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    # Wrap the transport method to add retry and timeout logic.
    if "create_instance" not in self._inner_api_calls:
        self._inner_api_calls[
            "create_instance"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.create_instance,
            default_retry=self._method_configs["CreateInstance"].retry,
            default_timeout=self._method_configs["CreateInstance"].timeout,
            client_info=self._client_info,
        )

    request = cloud_redis_pb2.CreateInstanceRequest(
        parent=parent, instance_id=instance_id, instance=instance
    )
    operation = self._inner_api_calls["create_instance"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
    return google.api_core.operation.from_gapic(
        operation,
        self.transport._operations_client,
        cloud_redis_pb2.Instance,
        metadata_type=cloud_redis_pb2.OperationMetadata,
    )
[ "\n Creates a Redis instance based on the specified tier and memory size.\n\n By default, the instance is accessible from the project's `default\n network <https://cloud.google.com/compute/docs/networks-and-firewalls#networks>`__.\n\n The creation is executed asynchronously and callers may check the\n returned operation to track its progress. Once the operation is\n completed the Redis instance will be fully functional. Completed\n longrunning.Operation will contain the new instance object in the\n response field.\n\n The returned operation is automatically deleted after a few hours, so\n there is no need to call DeleteOperation.\n\n Example:\n >>> from google.cloud import redis_v1\n >>> from google.cloud.redis_v1 import enums\n >>>\n >>> client = redis_v1.CloudRedisClient()\n >>>\n >>> parent = client.location_path('[PROJECT]', '[LOCATION]')\n >>> instance_id = 'test_instance'\n >>> tier = enums.Instance.Tier.BASIC\n >>> memory_size_gb = 1\n >>> instance = {'tier': tier, 'memory_size_gb': memory_size_gb}\n >>>\n >>> response = client.create_instance(parent, instance_id, instance)\n >>>\n >>> def callback(operation_future):\n ... # Handle result.\n ... result = operation_future.result()\n >>>\n >>> response.add_done_callback(callback)\n >>>\n >>> # Handle metadata.\n >>> metadata = response.metadata()\n\n Args:\n parent (str): Required. The resource name of the instance location using the form:\n ``projects/{project_id}/locations/{location_id}`` where ``location_id``\n refers to a GCP region\n instance_id (str): Required. The logical name of the Redis instance in the customer project\n with the following restrictions:\n\n - Must contain only lowercase letters, numbers, and hyphens.\n - Must start with a letter.\n - Must be between 1-40 characters.\n - Must end with a number or a letter.\n - Must be unique within the customer project / location\n instance (Union[dict, ~google.cloud.redis_v1.types.Instance]): Required. A Redis [Instance] resource\n\n If a dict is provided, it must be of the same form as the protobuf\n message :class:`~google.cloud.redis_v1.types.Instance`\n retry (Optional[google.api_core.retry.Retry]): A retry object used\n to retry requests. If ``None`` is specified, requests will not\n be retried.\n timeout (Optional[float]): The amount of time, in seconds, to wait\n for the request to complete. Note that if ``retry`` is\n specified, the timeout applies to each individual attempt.\n metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata\n that is provided to the method.\n\n Returns:\n A :class:`~google.cloud.redis_v1.types._OperationFuture` instance.\n\n Raises:\n google.api_core.exceptions.GoogleAPICallError: If the request\n failed for any reason.\n google.api_core.exceptions.RetryError: If the request failed due\n to a retryable error and retry attempts failed.\n ValueError: If the parameters are invalid.\n " ]
Please provide a description of the function:
def import_instance(
    self,
    name,
    input_config,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    # Wrap the transport method to add retry and timeout logic.
    if "import_instance" not in self._inner_api_calls:
        self._inner_api_calls[
            "import_instance"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.import_instance,
            default_retry=self._method_configs["ImportInstance"].retry,
            default_timeout=self._method_configs["ImportInstance"].timeout,
            client_info=self._client_info,
        )

    request = cloud_redis_pb2.ImportInstanceRequest(
        name=name, input_config=input_config
    )
    operation = self._inner_api_calls["import_instance"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
    return google.api_core.operation.from_gapic(
        operation,
        self.transport._operations_client,
        cloud_redis_pb2.Instance,
        metadata_type=cloud_redis_pb2.OperationMetadata,
    )
[ "\n Import a Redis RDB snapshot file from GCS into a Redis instance.\n\n Redis may stop serving during this operation. Instance state will be\n IMPORTING for entire operation. When complete, the instance will contain\n only data from the imported file.\n\n The returned operation is automatically deleted after a few hours, so\n there is no need to call DeleteOperation.\n\n Example:\n >>> from google.cloud import redis_v1\n >>>\n >>> client = redis_v1.CloudRedisClient()\n >>>\n >>> name = client.instance_path('[PROJECT]', '[LOCATION]', '[INSTANCE]')\n >>>\n >>> # TODO: Initialize `input_config`:\n >>> input_config = {}\n >>>\n >>> response = client.import_instance(name, input_config)\n >>>\n >>> def callback(operation_future):\n ... # Handle result.\n ... result = operation_future.result()\n >>>\n >>> response.add_done_callback(callback)\n >>>\n >>> # Handle metadata.\n >>> metadata = response.metadata()\n\n Args:\n name (str): Required. Redis instance resource name using the form:\n ``projects/{project_id}/locations/{location_id}/instances/{instance_id}``\n where ``location_id`` refers to a GCP region\n input_config (Union[dict, ~google.cloud.redis_v1.types.InputConfig]): Required. Specify data to be imported.\n\n If a dict is provided, it must be of the same form as the protobuf\n message :class:`~google.cloud.redis_v1.types.InputConfig`\n retry (Optional[google.api_core.retry.Retry]): A retry object used\n to retry requests. If ``None`` is specified, requests will not\n be retried.\n timeout (Optional[float]): The amount of time, in seconds, to wait\n for the request to complete. Note that if ``retry`` is\n specified, the timeout applies to each individual attempt.\n metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata\n that is provided to the method.\n\n Returns:\n A :class:`~google.cloud.redis_v1.types._OperationFuture` instance.\n\n Raises:\n google.api_core.exceptions.GoogleAPICallError: If the request\n failed for any reason.\n google.api_core.exceptions.RetryError: If the request failed due\n to a retryable error and retry attempts failed.\n ValueError: If the parameters are invalid.\n " ]
Please provide a description of the function:
def notification_channel_path(cls, project, notification_channel):
    return google.api_core.path_template.expand(
        "projects/{project}/notificationChannels/{notification_channel}",
        project=project,
        notification_channel=notification_channel,
    )
[ "Return a fully-qualified notification_channel string." ]
Please provide a description of the function:
def notification_channel_descriptor_path(cls, project, channel_descriptor):
    return google.api_core.path_template.expand(
        "projects/{project}/notificationChannelDescriptors/{channel_descriptor}",
        project=project,
        channel_descriptor=channel_descriptor,
    )
[ "Return a fully-qualified notification_channel_descriptor string." ]
Please provide a description of the function:
def batch(self, client=None):
    client = self._require_client(client)
    return Batch(self, client)
[ "Return a batch to use as a context manager.\n\n :type client: :class:`~google.cloud.logging.client.Client` or\n ``NoneType``\n :param client: the client to use. If not passed, falls back to the\n ``client`` stored on the current topic.\n\n :rtype: :class:`Batch`\n :returns: A batch to use as a context manager.\n " ]
Please provide a description of the function:
def _do_log(self, client, _entry_class, payload=None, **kw):
    client = self._require_client(client)

    # Apply defaults
    kw["log_name"] = kw.pop("log_name", self.full_name)
    kw["labels"] = kw.pop("labels", self.labels)
    kw["resource"] = kw.pop("resource", _GLOBAL_RESOURCE)

    if payload is not None:
        entry = _entry_class(payload=payload, **kw)
    else:
        entry = _entry_class(**kw)

    api_repr = entry.to_api_repr()
    client.logging_api.write_entries([api_repr])
[ "Helper for :meth:`log_empty`, :meth:`log_text`, etc.\n " ]
Please provide a description of the function:
def log_text(self, text, client=None, **kw):
    self._do_log(client, TextEntry, text, **kw)
[ "API call: log a text message via a POST request\n\n See\n https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/write\n\n :type text: str\n :param text: the log message.\n\n :type client: :class:`~google.cloud.logging.client.Client` or\n ``NoneType``\n :param client: the client to use. If not passed, falls back to the\n ``client`` stored on the current logger.\n\n :type kw: dict\n :param kw: (optional) additional keyword arguments for the entry.\n See :class:`~google.cloud.logging.entries.LogEntry`.\n " ]
Please provide a description of the function:
def log_struct(self, info, client=None, **kw):
    self._do_log(client, StructEntry, info, **kw)
[ "API call: log a structured message via a POST request\n\n See\n https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/write\n\n :type info: dict\n :param info: the log entry information\n\n :type client: :class:`~google.cloud.logging.client.Client` or\n ``NoneType``\n :param client: the client to use. If not passed, falls back to the\n ``client`` stored on the current logger.\n\n :type kw: dict\n :param kw: (optional) additional keyword arguments for the entry.\n See :class:`~google.cloud.logging.entries.LogEntry`.\n " ]
Please provide a description of the function:
def log_proto(self, message, client=None, **kw):
    self._do_log(client, ProtobufEntry, message, **kw)
[ "API call: log a protobuf message via a POST request\n\n See\n https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list\n\n :type message: :class:`~google.protobuf.message.Message`\n :param message: The protobuf message to be logged.\n\n :type client: :class:`~google.cloud.logging.client.Client` or\n ``NoneType``\n :param client: the client to use. If not passed, falls back to the\n ``client`` stored on the current logger.\n\n :type kw: dict\n :param kw: (optional) additional keyword arguments for the entry.\n See :class:`~google.cloud.logging.entries.LogEntry`.\n " ]
Please provide a description of the function:
def delete(self, client=None):
    client = self._require_client(client)
    client.logging_api.logger_delete(self.project, self.name)
[ "API call: delete all entries in a logger via a DELETE request\n\n See\n https://cloud.google.com/logging/docs/reference/v2/rest/v2/projects.logs/delete\n\n :type client: :class:`~google.cloud.logging.client.Client` or\n ``NoneType``\n :param client: the client to use. If not passed, falls back to the\n ``client`` stored on the current logger.\n " ]
Please provide a description of the function:
def list_entries(
    self,
    projects=None,
    filter_=None,
    order_by=None,
    page_size=None,
    page_token=None,
):
    log_filter = "logName=%s" % (self.full_name,)
    if filter_ is not None:
        filter_ = "%s AND %s" % (filter_, log_filter)
    else:
        filter_ = log_filter
    return self.client.list_entries(
        projects=projects,
        filter_=filter_,
        order_by=order_by,
        page_size=page_size,
        page_token=page_token,
    )
[ "Return a page of log entries.\n\n See\n https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list\n\n :type projects: list of strings\n :param projects: project IDs to include. If not passed,\n defaults to the project bound to the client.\n\n :type filter_: str\n :param filter_:\n a filter expression. See\n https://cloud.google.com/logging/docs/view/advanced_filters\n\n :type order_by: str\n :param order_by: One of :data:`~google.cloud.logging.ASCENDING`\n or :data:`~google.cloud.logging.DESCENDING`.\n\n :type page_size: int\n :param page_size:\n Optional. The maximum number of entries in each page of results\n from this request. Non-positive values are ignored. Defaults\n to a sensible value set by the API.\n\n :type page_token: str\n :param page_token:\n Optional. If present, return the next batch of entries, using\n the value, which must correspond to the ``nextPageToken`` value\n returned in the previous response. Deprecated: use the ``pages``\n property of the returned iterator instead of manually passing\n the token.\n\n :rtype: :class:`~google.api_core.page_iterator.Iterator`\n :returns: Iterator of log entries accessible to the current logger.\n See :class:`~google.cloud.logging.entries.LogEntry`.\n " ]
Please provide a description of the function:def log_text(self, text, **kw): self.entries.append(TextEntry(payload=text, **kw))
[ "Add a text entry to be logged during :meth:`commit`.\n\n :type text: str\n :param text: the text entry\n\n :type kw: dict\n :param kw: (optional) additional keyword arguments for the entry.\n See :class:`~google.cloud.logging.entries.LogEntry`.\n " ]
Please provide a description of the function:def log_struct(self, info, **kw): self.entries.append(StructEntry(payload=info, **kw))
[ "Add a struct entry to be logged during :meth:`commit`.\n\n :type info: dict\n :param info: the struct entry\n\n :type kw: dict\n :param kw: (optional) additional keyword arguments for the entry.\n See :class:`~google.cloud.logging.entries.LogEntry`.\n " ]
Please provide a description of the function:def log_proto(self, message, **kw): self.entries.append(ProtobufEntry(payload=message, **kw))
[ "Add a protobuf entry to be logged during :meth:`commit`.\n\n :type message: protobuf message\n :param message: the protobuf entry\n\n :type kw: dict\n :param kw: (optional) additional keyword arguments for the entry.\n See :class:`~google.cloud.logging.entries.LogEntry`.\n " ]
Please provide a description of the function:def commit(self, client=None): if client is None: client = self.client kwargs = {"logger_name": self.logger.full_name} if self.resource is not None: kwargs["resource"] = self.resource._to_dict() if self.logger.labels is not None: kwargs["labels"] = self.logger.labels entries = [entry.to_api_repr() for entry in self.entries] client.logging_api.write_entries(entries, **kwargs) del self.entries[:]
[ "Send saved log entries as a single API call.\n\n :type client: :class:`~google.cloud.logging.client.Client` or\n ``NoneType``\n :param client: the client to use. If not passed, falls back to the\n ``client`` stored on the current batch.\n " ]
Please provide a description of the function:def full_name(self): if not self.name: raise ValueError("Missing config name.") return "projects/%s/configs/%s" % (self._client.project, self.name)
[ "Fully-qualified name of this variable.\n\n Example:\n ``projects/my-project/configs/my-config``\n\n :rtype: str\n :returns: The full name based on project and config names.\n\n :raises: :class:`ValueError` if the config is missing a name.\n " ]
Please provide a description of the function:def _set_properties(self, api_response): self._properties.clear() cleaned = api_response.copy() if "name" in cleaned: self.name = config_name_from_full_name(cleaned.pop("name")) self._properties.update(cleaned)
[ "Update properties from resource in body of ``api_response``\n\n :type api_response: dict\n :param api_response: response returned from an API call\n " ]
Please provide a description of the function:def reload(self, client=None): client = self._require_client(client) # We assume the config exists. If it doesn't it will raise a NotFound # exception. resp = client._connection.api_request(method="GET", path=self.path) self._set_properties(api_response=resp)
[ "API call: reload the config via a ``GET`` request.\n\n This method will reload the newest data for the config.\n\n See\n https://cloud.google.com/deployment-manager/runtime-configurator/reference/rest/v1beta1/projects.configs/get\n\n :type client: :class:`google.cloud.runtimeconfig.client.Client`\n :param client:\n (Optional) The client to use. If not passed, falls back to the\n client stored on the current config.\n " ]
Please provide a description of the function:def get_variable(self, variable_name, client=None): client = self._require_client(client) variable = Variable(config=self, name=variable_name) try: variable.reload(client=client) return variable except NotFound: return None
[ "API call: get a variable via a ``GET`` request.\n\n This will return None if the variable doesn't exist::\n\n >>> from google.cloud import runtimeconfig\n >>> client = runtimeconfig.Client()\n >>> config = client.config('my-config')\n >>> print(config.get_variable('variable-name'))\n <Variable: my-config, variable-name>\n >>> print(config.get_variable('does-not-exist'))\n None\n\n :type variable_name: str\n :param variable_name: The name of the variable to retrieve.\n\n :type client: :class:`~google.cloud.runtimeconfig.client.Client`\n :param client:\n (Optional) The client to use. If not passed, falls back to the\n ``client`` stored on the current config.\n\n :rtype: :class:`google.cloud.runtimeconfig.variable.Variable` or None\n :returns: The variable object if it exists, otherwise None.\n " ]
Please provide a description of the function:def list_variables(self, page_size=None, page_token=None, client=None): path = "%s/variables" % (self.path,) client = self._require_client(client) iterator = page_iterator.HTTPIterator( client=client, api_request=client._connection.api_request, path=path, item_to_value=_item_to_variable, items_key="variables", page_token=page_token, max_results=page_size, ) iterator._MAX_RESULTS = "pageSize" iterator.config = self return iterator
[ "API call: list variables for this config.\n\n This only lists variable names, not the values.\n\n See\n https://cloud.google.com/deployment-manager/runtime-configurator/reference/rest/v1beta1/projects.configs.variables/list\n\n :type page_size: int\n :param page_size:\n Optional. The maximum number of variables in each page of results\n from this request. Non-positive values are ignored. Defaults\n to a sensible value set by the API.\n\n :type page_token: str\n :param page_token:\n Optional. If present, return the next batch of variables, using\n the value, which must correspond to the ``nextPageToken`` value\n returned in the previous response. Deprecated: use the ``pages``\n property of the returned iterator instead of manually passing\n the token.\n\n :type client: :class:`~google.cloud.runtimeconfig.client.Client`\n :param client:\n (Optional) The client to use. If not passed, falls back to the\n ``client`` stored on the current config.\n\n :rtype: :class:`~google.api_core.page_iterator.Iterator`\n :returns:\n Iterator of :class:`~google.cloud.runtimeconfig.variable.Variable`\n belonging to this project.\n " ]
Please provide a description of the function:def _bytes_from_json(value, field): if _not_null(value, field): return base64.standard_b64decode(_to_bytes(value))
[ "Base64-decode value" ]
Please provide a description of the function:def _timestamp_query_param_from_json(value, field): if _not_null(value, field): # Canonical formats for timestamps in BigQuery are flexible. See: # g.co/cloud/bigquery/docs/reference/standard-sql/data-types#timestamp-type # The separator between the date and time can be 'T' or ' '. value = value.replace(" ", "T", 1) # The UTC timezone may be formatted as Z or +00:00. value = value.replace("Z", "") value = value.replace("+00:00", "") if "." in value: # YYYY-MM-DDTHH:MM:SS.ffffff return datetime.datetime.strptime(value, _RFC3339_MICROS_NO_ZULU).replace( tzinfo=UTC ) else: # YYYY-MM-DDTHH:MM:SS return datetime.datetime.strptime(value, _RFC3339_NO_FRACTION).replace( tzinfo=UTC ) else: return None
[ "Coerce 'value' to a datetime, if set or not nullable.\n\n Args:\n value (str): The timestamp.\n field (.SchemaField): The field corresponding to the value.\n\n Returns:\n Optional[datetime.datetime]: The parsed datetime object from\n ``value`` if the ``field`` is not null (otherwise it is\n :data:`None`).\n " ]
Please provide a description of the function:def _datetime_from_json(value, field): if _not_null(value, field): if "." in value: # YYYY-MM-DDTHH:MM:SS.ffffff return datetime.datetime.strptime(value, _RFC3339_MICROS_NO_ZULU) else: # YYYY-MM-DDTHH:MM:SS return datetime.datetime.strptime(value, _RFC3339_NO_FRACTION) else: return None
[ "Coerce 'value' to a datetime, if set or not nullable.\n\n Args:\n value (str): The timestamp.\n field (.SchemaField): The field corresponding to the value.\n\n Returns:\n Optional[datetime.datetime]: The parsed datetime object from\n ``value`` if the ``field`` is not null (otherwise it is\n :data:`None`).\n " ]
Please provide a description of the function:def _time_from_json(value, field): if _not_null(value, field): if len(value) == 8: # HH:MM:SS fmt = _TIMEONLY_WO_MICROS elif len(value) == 15: # HH:MM:SS.micros fmt = _TIMEONLY_W_MICROS else: raise ValueError("Unknown time format: {}".format(value)) return datetime.datetime.strptime(value, fmt).time()
[ "Coerce 'value' to a datetime date, if set or not nullable" ]
Please provide a description of the function:def _record_from_json(value, field): if _not_null(value, field): record = {} record_iter = zip(field.fields, value["f"]) for subfield, cell in record_iter: converter = _CELLDATA_FROM_JSON[subfield.field_type] if subfield.mode == "REPEATED": value = [converter(item["v"], subfield) for item in cell["v"]] else: value = converter(cell["v"], subfield) record[subfield.name] = value return record
[ "Coerce 'value' to a mapping, if set or not nullable." ]
Please provide a description of the function:def _row_tuple_from_json(row, schema): row_data = [] for field, cell in zip(schema, row["f"]): converter = _CELLDATA_FROM_JSON[field.field_type] if field.mode == "REPEATED": row_data.append([converter(item["v"], field) for item in cell["v"]]) else: row_data.append(converter(cell["v"], field)) return tuple(row_data)
[ "Convert JSON row data to row with appropriate types.\n\n Note: ``row['f']`` and ``schema`` are presumed to be of the same length.\n\n :type row: dict\n :param row: A JSON response row to be converted.\n\n :type schema: tuple\n :param schema: A tuple of\n :class:`~google.cloud.bigquery.schema.SchemaField`.\n\n :rtype: tuple\n :returns: A tuple of data converted to native types.\n " ]
Please provide a description of the function:def _rows_from_json(values, schema): from google.cloud.bigquery import Row field_to_index = _field_to_index_mapping(schema) return [Row(_row_tuple_from_json(r, schema), field_to_index) for r in values]
[ "Convert JSON row data to rows with appropriate types." ]
Please provide a description of the function:def _decimal_to_json(value): if isinstance(value, decimal.Decimal): value = str(value) return value
[ "Coerce 'value' to a JSON-compatible representation." ]
Please provide a description of the function:def _bytes_to_json(value): if isinstance(value, bytes): value = base64.standard_b64encode(value).decode("ascii") return value
[ "Coerce 'value' to an JSON-compatible representation." ]
Please provide a description of the function:def _timestamp_to_json_parameter(value): if isinstance(value, datetime.datetime): if value.tzinfo not in (None, UTC): # Convert to UTC and remove the time zone info. value = value.replace(tzinfo=None) - value.utcoffset() value = "%s %s+00:00" % (value.date().isoformat(), value.time().isoformat()) return value
[ "Coerce 'value' to an JSON-compatible representation.\n\n This version returns the string representation used in query parameters.\n " ]
Please provide a description of the function:def _timestamp_to_json_row(value): if isinstance(value, datetime.datetime): value = _microseconds_from_datetime(value) * 1e-6 return value
[ "Coerce 'value' to an JSON-compatible representation.\n\n This version returns floating-point seconds value used in row data.\n " ]
Please provide a description of the function:def _datetime_to_json(value): if isinstance(value, datetime.datetime): value = value.strftime(_RFC3339_MICROS_NO_ZULU) return value
[ "Coerce 'value' to an JSON-compatible representation." ]
Please provide a description of the function:def _date_to_json(value): if isinstance(value, datetime.date): value = value.isoformat() return value
[ "Coerce 'value' to an JSON-compatible representation." ]
Please provide a description of the function:def _time_to_json(value): if isinstance(value, datetime.time): value = value.isoformat() return value
[ "Coerce 'value' to an JSON-compatible representation." ]
Please provide a description of the function:def _scalar_field_to_json(field, row_value): converter = _SCALAR_VALUE_TO_JSON_ROW.get(field.field_type) if converter is None: # STRING doesn't need converting return row_value return converter(row_value)
[ "Maps a field and value to a JSON-safe value.\n\n Args:\n field ( \\\n :class:`~google.cloud.bigquery.schema.SchemaField`, \\\n ):\n The SchemaField to use for type conversion and field name.\n row_value (any):\n Value to be converted, based on the field's type.\n\n Returns:\n any:\n A JSON-serializable object.\n " ]
Please provide a description of the function:def _repeated_field_to_json(field, row_value): # Remove the REPEATED, but keep the other fields. This allows us to process # each item as if it were a top-level field. item_field = copy.deepcopy(field) item_field._mode = "NULLABLE" values = [] for item in row_value: values.append(_field_to_json(item_field, item)) return values
[ "Convert a repeated/array field to its JSON representation.\n\n Args:\n field ( \\\n :class:`~google.cloud.bigquery.schema.SchemaField`, \\\n ):\n The SchemaField to use for type conversion and field name. The\n field mode must equal ``REPEATED``.\n row_value (Sequence[any]):\n A sequence of values to convert to JSON-serializable values.\n\n Returns:\n List[any]:\n A list of JSON-serializable objects.\n " ]
Please provide a description of the function:def _record_field_to_json(fields, row_value): record = {} isdict = isinstance(row_value, dict) for subindex, subfield in enumerate(fields): subname = subfield.name if isdict: subvalue = row_value.get(subname) else: subvalue = row_value[subindex] record[subname] = _field_to_json(subfield, subvalue) return record
[ "Convert a record/struct field to its JSON representation.\n\n Args:\n fields ( \\\n Sequence[:class:`~google.cloud.bigquery.schema.SchemaField`], \\\n ):\n The :class:`~google.cloud.bigquery.schema.SchemaField`s of the\n record's subfields to use for type conversion and field names.\n row_value (Union[Tuple[Any], Mapping[str, Any]):\n A tuple or dictionary to convert to JSON-serializable values.\n\n Returns:\n Mapping[str, any]:\n A JSON-serializable dictionary.\n " ]
Please provide a description of the function:def _field_to_json(field, row_value): if row_value is None: return None if field.mode == "REPEATED": return _repeated_field_to_json(field, row_value) if field.field_type == "RECORD": return _record_field_to_json(field.fields, row_value) return _scalar_field_to_json(field, row_value)
[ "Convert a field into JSON-serializable values.\n\n Args:\n field ( \\\n :class:`~google.cloud.bigquery.schema.SchemaField`, \\\n ):\n The SchemaField to use for type conversion and field name.\n\n row_value (Union[ \\\n Sequence[list], \\\n any, \\\n ]):\n Row data to be inserted. If the SchemaField's mode is\n REPEATED, assume this is a list. If not, the type\n is inferred from the SchemaField's field_type.\n\n Returns:\n any:\n A JSON-serializable object.\n " ]
Please provide a description of the function:def _snake_to_camel_case(value): words = value.split("_") return words[0] + "".join(map(str.capitalize, words[1:]))
[ "Convert snake case string to camel case." ]
Please provide a description of the function:def _get_sub_prop(container, keys, default=None): sub_val = container for key in keys: if key not in sub_val: return default sub_val = sub_val[key] return sub_val
[ "Get a nested value from a dictionary.\n\n This method works like ``dict.get(key)``, but for nested values.\n\n Arguments:\n container (dict):\n A dictionary which may contain other dictionaries as values.\n keys (iterable):\n A sequence of keys to attempt to get the value for. Each item in\n the sequence represents a deeper nesting. The first key is for\n the top level. If there is a dictionary there, the second key\n attempts to get the value within that, and so on.\n default (object):\n (Optional) Value to returned if any of the keys are not found.\n Defaults to ``None``.\n\n Examples:\n Get a top-level value (equivalent to ``container.get('key')``).\n\n >>> _get_sub_prop({'key': 'value'}, ['key'])\n 'value'\n\n Get a top-level value, providing a default (equivalent to\n ``container.get('key', default='default')``).\n\n >>> _get_sub_prop({'nothere': 123}, ['key'], default='not found')\n 'not found'\n\n Get a nested value.\n\n >>> _get_sub_prop({'key': {'subkey': 'value'}}, ['key', 'subkey'])\n 'value'\n\n Returns:\n object: The value if present or the default.\n " ]
Please provide a description of the function:def _set_sub_prop(container, keys, value): sub_val = container for key in keys[:-1]: if key not in sub_val: sub_val[key] = {} sub_val = sub_val[key] sub_val[keys[-1]] = value
[ "Set a nested value in a dictionary.\n\n Arguments:\n container (dict):\n A dictionary which may contain other dictionaries as values.\n keys (iterable):\n A sequence of keys to attempt to set the value for. Each item in\n the sequence represents a deeper nesting. The first key is for\n the top level. If there is a dictionary there, the second key\n attempts to get the value within that, and so on.\n value (object): Value to set within the container.\n\n Examples:\n Set a top-level value (equivalent to ``container['key'] = 'value'``).\n\n >>> container = {}\n >>> _set_sub_prop(container, ['key'], 'value')\n >>> container\n {'key': 'value'}\n\n Set a nested value.\n\n >>> container = {}\n >>> _set_sub_prop(container, ['key', 'subkey'], 'value')\n >>> container\n {'key': {'subkey': 'value'}}\n\n Replace a nested value.\n\n >>> container = {'key': {'subkey': 'prev'}}\n >>> _set_sub_prop(container, ['key', 'subkey'], 'new')\n >>> container\n {'key': {'subkey': 'new'}}\n " ]
Please provide a description of the function:def _del_sub_prop(container, keys): sub_val = container for key in keys[:-1]: if key not in sub_val: sub_val[key] = {} sub_val = sub_val[key] if keys[-1] in sub_val: del sub_val[keys[-1]]
[ "Remove a nested key fro a dictionary.\n\n Arguments:\n container (dict):\n A dictionary which may contain other dictionaries as values.\n keys (iterable):\n A sequence of keys to attempt to clear the value for. Each item in\n the sequence represents a deeper nesting. The first key is for\n the top level. If there is a dictionary there, the second key\n attempts to get the value within that, and so on.\n\n Examples:\n Remove a top-level value (equivalent to ``del container['key']``).\n\n >>> container = {'key': 'value'}\n >>> _del_sub_prop(container, ['key'])\n >>> container\n {}\n\n Remove a nested value.\n\n >>> container = {'key': {'subkey': 'value'}}\n >>> _del_sub_prop(container, ['key', 'subkey'])\n >>> container\n {'key': {}}\n " ]
Please provide a description of the function:def _build_resource_from_properties(obj, filter_fields): partial = {} for filter_field in filter_fields: api_field = obj._PROPERTY_TO_API_FIELD.get(filter_field) if api_field is None and filter_field not in obj._properties: raise ValueError("No property %s" % filter_field) elif api_field is not None: partial[api_field] = obj._properties.get(api_field) else: # allows properties that are not defined in the library # and properties that have the same name as API resource key partial[filter_field] = obj._properties[filter_field] return partial
[ "Build a resource based on a ``_properties`` dictionary, filtered by\n ``filter_fields``, which follow the name of the Python object.\n " ]
Please provide a description of the function:def get_model(client, model_id): # [START bigquery_get_model] from google.cloud import bigquery # TODO(developer): Construct a BigQuery client object. # client = bigquery.Client() # TODO(developer): Set model_id to the ID of the model to fetch. # model_id = 'your-project.your_dataset.your_model' model = client.get_model(model_id) full_model_id = "{}.{}.{}".format(model.project, model.dataset_id, model.model_id) friendly_name = model.friendly_name print( "Got model '{}' with friendly_name '{}'.".format(full_model_id, friendly_name) )
[ "Sample ID: go/samples-tracker/1510" ]
Please provide a description of the function:def _check_database_id(database_id): if database_id != u"": msg = _DATABASE_ID_TEMPLATE.format(database_id) raise ValueError(msg)
[ "Make sure a \"Reference\" database ID is empty.\n\n :type database_id: unicode\n :param database_id: The ``database_id`` field from a \"Reference\" protobuf.\n\n :raises: :exc:`ValueError` if the ``database_id`` is not empty.\n " ]
Please provide a description of the function:def _add_id_or_name(flat_path, element_pb, empty_allowed): id_ = element_pb.id name = element_pb.name # NOTE: Below 0 and the empty string are the "null" values for their # respective types, indicating that the value is unset. if id_ == 0: if name == u"": if not empty_allowed: raise ValueError(_EMPTY_ELEMENT) else: flat_path.append(name) else: if name == u"": flat_path.append(id_) else: msg = _BAD_ELEMENT_TEMPLATE.format(id_, name) raise ValueError(msg)
[ "Add the ID or name from an element to a list.\n\n :type flat_path: list\n :param flat_path: List of accumulated path parts.\n\n :type element_pb: :class:`._app_engine_key_pb2.Path.Element`\n :param element_pb: The element containing ID or name.\n\n :type empty_allowed: bool\n :param empty_allowed: Indicates if neither ID or name need be set. If\n :data:`False`, then **exactly** one of them must be.\n\n :raises: :exc:`ValueError` if 0 or 2 of ID/name are set (unless\n ``empty_allowed=True`` and 0 are set).\n " ]
Please provide a description of the function:def _get_flat_path(path_pb): num_elts = len(path_pb.element) last_index = num_elts - 1 result = [] for index, element in enumerate(path_pb.element): result.append(element.type) _add_id_or_name(result, element, index == last_index) return tuple(result)
[ "Convert a legacy \"Path\" protobuf to a flat path.\n\n For example\n\n Element {\n type: \"parent\"\n id: 59\n }\n Element {\n type: \"child\"\n name: \"naem\"\n }\n\n would convert to ``('parent', 59, 'child', 'naem')``.\n\n :type path_pb: :class:`._app_engine_key_pb2.Path`\n :param path_pb: Legacy protobuf \"Path\" object (from a \"Reference\").\n\n :rtype: tuple\n :returns: The path parts from ``path_pb``.\n " ]
Please provide a description of the function:def _to_legacy_path(dict_path): elements = [] for part in dict_path: element_kwargs = {"type": part["kind"]} if "id" in part: element_kwargs["id"] = part["id"] elif "name" in part: element_kwargs["name"] = part["name"] element = _app_engine_key_pb2.Path.Element(**element_kwargs) elements.append(element) return _app_engine_key_pb2.Path(element=elements)
[ "Convert a tuple of ints and strings in a legacy \"Path\".\n\n .. note:\n\n This assumes, but does not verify, that each entry in\n ``dict_path`` is valid (i.e. doesn't have more than one\n key out of \"name\" / \"id\").\n\n :type dict_path: lsit\n :param dict_path: The \"structured\" path for a key, i.e. it\n is a list of dictionaries, each of which has\n \"kind\" and one of \"name\" / \"id\" as keys.\n\n :rtype: :class:`._app_engine_key_pb2.Path`\n :returns: The legacy path corresponding to ``dict_path``.\n " ]
Please provide a description of the function:def _parse_path(path_args): if len(path_args) == 0: raise ValueError("Key path must not be empty.") kind_list = path_args[::2] id_or_name_list = path_args[1::2] # Dummy sentinel value to pad incomplete key to even length path. partial_ending = object() if len(path_args) % 2 == 1: id_or_name_list += (partial_ending,) result = [] for kind, id_or_name in zip(kind_list, id_or_name_list): curr_key_part = {} if isinstance(kind, six.string_types): curr_key_part["kind"] = kind else: raise ValueError(kind, "Kind was not a string.") if isinstance(id_or_name, six.string_types): curr_key_part["name"] = id_or_name elif isinstance(id_or_name, six.integer_types): curr_key_part["id"] = id_or_name elif id_or_name is not partial_ending: raise ValueError(id_or_name, "ID/name was not a string or integer.") result.append(curr_key_part) return result
[ "Parses positional arguments into key path with kinds and IDs.\n\n :type path_args: tuple\n :param path_args: A tuple from positional arguments. Should be\n alternating list of kinds (string) and ID/name\n parts (int or string).\n\n :rtype: :class:`list` of :class:`dict`\n :returns: A list of key parts with kind and ID or name set.\n :raises: :class:`ValueError` if there are no ``path_args``, if one of\n the kinds is not a string or if one of the IDs/names is not\n a string or an integer.\n " ]
Please provide a description of the function:def _combine_args(self): child_path = self._parse_path(self._flat_path) if self._parent is not None: if self._parent.is_partial: raise ValueError("Parent key must be complete.") # We know that _parent.path() will return a copy. child_path = self._parent.path + child_path self._flat_path = self._parent.flat_path + self._flat_path if ( self._namespace is not None and self._namespace != self._parent.namespace ): raise ValueError("Child namespace must agree with parent's.") self._namespace = self._parent.namespace if self._project is not None and self._project != self._parent.project: raise ValueError("Child project must agree with parent's.") self._project = self._parent.project return child_path
[ "Sets protected data by combining raw data set from the constructor.\n\n If a ``_parent`` is set, updates the ``_flat_path`` and sets the\n ``_namespace`` and ``_project`` if not already set.\n\n :rtype: :class:`list` of :class:`dict`\n :returns: A list of key parts with kind and ID or name set.\n :raises: :class:`ValueError` if the parent key is not complete.\n " ]
Please provide a description of the function:def _clone(self): cloned_self = self.__class__( *self.flat_path, project=self.project, namespace=self.namespace ) # If the current parent has already been set, we re-use # the same instance cloned_self._parent = self._parent return cloned_self
[ "Duplicates the Key.\n\n Most attributes are simple types, so don't require copying. Other\n attributes like ``parent`` are long-lived and so we re-use them.\n\n :rtype: :class:`google.cloud.datastore.key.Key`\n :returns: A new ``Key`` instance with the same data as the current one.\n " ]
Please provide a description of the function:def completed_key(self, id_or_name): if not self.is_partial: raise ValueError("Only a partial key can be completed.") if isinstance(id_or_name, six.string_types): id_or_name_key = "name" elif isinstance(id_or_name, six.integer_types): id_or_name_key = "id" else: raise ValueError(id_or_name, "ID/name was not a string or integer.") new_key = self._clone() new_key._path[-1][id_or_name_key] = id_or_name new_key._flat_path += (id_or_name,) return new_key
[ "Creates new key from existing partial key by adding final ID/name.\n\n :type id_or_name: str or integer\n :param id_or_name: ID or name to be added to the key.\n\n :rtype: :class:`google.cloud.datastore.key.Key`\n :returns: A new ``Key`` instance with the same data as the current one\n and an extra ID or name added.\n :raises: :class:`ValueError` if the current key is not partial or if\n ``id_or_name`` is not a string or integer.\n " ]
Please provide a description of the function:def to_protobuf(self): key = _entity_pb2.Key() key.partition_id.project_id = self.project if self.namespace: key.partition_id.namespace_id = self.namespace for item in self.path: element = key.path.add() if "kind" in item: element.kind = item["kind"] if "id" in item: element.id = item["id"] if "name" in item: element.name = item["name"] return key
[ "Return a protobuf corresponding to the key.\n\n :rtype: :class:`.entity_pb2.Key`\n :returns: The protobuf representing the key.\n " ]
Please provide a description of the function:def to_legacy_urlsafe(self, location_prefix=None): if location_prefix is None: project_id = self.project else: project_id = location_prefix + self.project reference = _app_engine_key_pb2.Reference( app=project_id, path=_to_legacy_path(self._path), # Avoid the copy. name_space=self.namespace, ) raw_bytes = reference.SerializeToString() return base64.urlsafe_b64encode(raw_bytes).strip(b"=")
[ "Convert to a base64 encode urlsafe string for App Engine.\n\n This is intended to work with the \"legacy\" representation of a\n datastore \"Key\" used within Google App Engine (a so-called\n \"Reference\"). The returned string can be used as the ``urlsafe``\n argument to ``ndb.Key(urlsafe=...)``. The base64 encoded values\n will have padding removed.\n\n .. note::\n\n The string returned by ``to_legacy_urlsafe`` is equivalent, but\n not identical, to the string returned by ``ndb``. The location\n prefix may need to be specified to obtain identical urlsafe\n keys.\n\n :type location_prefix: str\n :param location_prefix: The location prefix of an App Engine project\n ID. Often this value is 's~', but may also be\n 'e~', or other location prefixes currently\n unknown.\n\n :rtype: bytes\n :returns: A bytestring containing the key encoded as URL-safe base64.\n " ]
Please provide a description of the function:def from_legacy_urlsafe(cls, urlsafe): urlsafe = _to_bytes(urlsafe, encoding="ascii") padding = b"=" * (-len(urlsafe) % 4) urlsafe += padding raw_bytes = base64.urlsafe_b64decode(urlsafe) reference = _app_engine_key_pb2.Reference() reference.ParseFromString(raw_bytes) project = _clean_app(reference.app) namespace = _get_empty(reference.name_space, u"") _check_database_id(reference.database_id) flat_path = _get_flat_path(reference.path) return cls(*flat_path, project=project, namespace=namespace)
[ "Convert urlsafe string to :class:`~google.cloud.datastore.key.Key`.\n\n This is intended to work with the \"legacy\" representation of a\n datastore \"Key\" used within Google App Engine (a so-called\n \"Reference\"). This assumes that ``urlsafe`` was created within an App\n Engine app via something like ``ndb.Key(...).urlsafe()``.\n\n :type urlsafe: bytes or unicode\n :param urlsafe: The base64 encoded (ASCII) string corresponding to a\n datastore \"Key\" / \"Reference\".\n\n :rtype: :class:`~google.cloud.datastore.key.Key`.\n :returns: The key corresponding to ``urlsafe``.\n " ]
Please provide a description of the function:def _make_parent(self): if self.is_partial: parent_args = self.flat_path[:-1] else: parent_args = self.flat_path[:-2] if parent_args: return self.__class__( *parent_args, project=self.project, namespace=self.namespace )
[ "Creates a parent key for the current path.\n\n Extracts all but the last element in the key path and creates a new\n key, while still matching the namespace and the project.\n\n :rtype: :class:`google.cloud.datastore.key.Key` or :class:`NoneType`\n :returns: A new ``Key`` instance, whose path consists of all but the\n last element of current path. If the current key has only\n one path element, returns ``None``.\n " ]
Please provide a description of the function:def parent(self): if self._parent is None: self._parent = self._make_parent() return self._parent
[ "The parent of the current key.\n\n :rtype: :class:`google.cloud.datastore.key.Key` or :class:`NoneType`\n :returns: A new ``Key`` instance, whose path consists of all but the\n last element of current path. If the current key has only\n one path element, returns ``None``.\n " ]