The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.
Error code: `DatasetGenerationCastError`

Message: the data files do not share a single schema. Relative to the columns inferred from the first CSVs, `python_0_raw.csv` introduces 10 new columns (`Parameters`, `DevelopmentNotes`, `Version`, `Todo`, `Exception`, `Warnings`, `Links`, `Dependecies`, `Summary`, `Usage`) and lacks 15 expected ones (`usage`, `Incomplete`, `exception`, `summary`, `Commentedcode`, `todo`, `Ownership`, `Pointer`, `rational`, `License`, `deprecation`, `Autogenerated`, `formatter`, `directive`, `Warning`); most are case or spelling variants of the same labels. This happened while the csv dataset builder was generating data using hf://datasets/poojaruhal/Code-comment-classification/python_0_raw.csv (at revision 3d2bbff4d30d5c41d2cbf5b1d55fbc8d10cfdbaa). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
stratum: int64
class: string
comment: string
Summary: string
Usage: string
Parameters: string
Expand: string
Version: string
DevelopmentNotes: string
Todo: string
Exception: string
Links: string
Noise: string
Warnings: string
Recommendation: string
Dependecies: string
Precondition: string
CodingGuidelines: string
Extension: string
Subclassexplnation: string
Observation: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 2742
to
{'stratum': Value(dtype='int64', id=None), 'class': Value(dtype='string', id=None), 'comment': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None), 'Expand': Value(dtype='string', id=None), 'rational': Value(dtype='string', id=None), 'deprecation': Value(dtype='string', id=None), 'usage': Value(dtype='string', id=None), 'exception': Value(dtype='string', id=None), 'todo': Value(dtype='string', id=None), 'Incomplete': Value(dtype='string', id=None), 'Commentedcode': Value(dtype='string', id=None), 'directive': Value(dtype='string', id=None), 'formatter': Value(dtype='string', id=None), 'License': Value(dtype='float64', id=None), 'Ownership': Value(dtype='string', id=None), 'Pointer': Value(dtype='string', id=None), 'Autogenerated': Value(dtype='string', id=None), 'Noise': Value(dtype='string', id=None), 'Warning': Value(dtype='string', id=None), 'Recommendation': Value(dtype='string', id=None), 'Precondition': Value(dtype='string', id=None), 'CodingGuidelines': Value(dtype='float64', id=None), 'Extension': Value(dtype='float64', id=None), 'Subclassexplnation': Value(dtype='string', id=None), 'Observation': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 10 new columns ({'Parameters', 'DevelopmentNotes', 'Version', 'Todo', 'Exception', 'Warnings', 'Links', 'Dependecies', 'Summary', 'Usage'}) and 15 missing columns ({'usage', 'Incomplete', 'exception', 'summary', 'Commentedcode', 'todo', 'Ownership', 'Pointer', 'rational', 'License', 'deprecation', 'Autogenerated', 'formatter', 'directive', 'Warning'}).
This happened while the csv dataset builder was generating data using
hf://datasets/poojaruhal/Code-comment-classification/python_0_raw.csv (at revision 3d2bbff4d30d5c41d2cbf5b1d55fbc8d10cfdbaa)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
Need help making the dataset viewer work? Review how to configure the dataset viewer, and open a discussion for direct support.
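Two things generally resolve this class of error; here is a minimal sketch, assuming a local clone of the repository and the per-language `*_raw.csv` naming visible in the error (the glob pattern and split name below are illustrative, not taken from the dataset card):

```python
# Sketch only, not the dataset's documented API.
import glob

import pandas as pd
from datasets import load_dataset

# 1. See exactly which headers diverge across the CSVs: the traceback says
#    python_0_raw.csv renames several columns (e.g. "Summary" vs "summary")
#    and adds others (e.g. "Parameters", "DevelopmentNotes").
for path in sorted(glob.glob("*_raw.csv")):
    header = sorted(pd.read_csv(path, nrows=0).columns)
    print(f"{path}: {header}")

# 2. Load one file at a time, so no cast to a shared schema is attempted --
#    the "separate configurations" fix applied ad hoc from the client side.
python_rows = load_dataset(
    "poojaruhal/Code-comment-classification",
    data_files={"train": "python_0_raw.csv"},
)
```

The durable server-side fix is the one the error message links to: declare one configuration per schema in the dataset's README (a `configs:` section with per-config `data_files`), so the viewer never tries to cast all CSVs to a single schema.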
The preview exposes 26 columns; only `stratum` (int64) and the string columns `class`, `comment`, `summary`, `Expand`, and `rational` carry values in the previewed rows. The other 20 columns (`deprecation`, `usage`, `exception`, `todo`, `Incomplete`, `Commentedcode`, `directive`, `formatter`, `License`, `Ownership`, `Pointer`, `Autogenerated`, `Noise`, `Warning`, `Recommendation`, `Precondition`, `CodingGuidelines`, `Extension`, `Subclassexplnation`, `Observation`) are null in every previewed row and are omitted below. `null` marks empty cells; pipe characters inside cells are part of the raw comment text and are escaped as `\|`.

| stratum | class | comment | summary | Expand | rational |
|---|---|---|---|---|---|
| 1 | Abfss.java | * Azure Blob File System implementation of AbstractFileSystem. * This impl delegates to the old FileSystem | Azure Blob File System implementation of AbstractFileSystem. | This impl delegates to the old FileSystem | null |
| 1 | AbstractContractGetFileStatusTest.java | * Test getFileStatus and related listing operations. \| the tree parameters. Kept small to avoid killing object store test\| runs too much.\| * Accept everything. \| the tree parameters. Kept small to avoid killing object store test\| runs too much.\| * Accept nothing. \| the tree parameters. Kept small to avoid killing object store test\| runs too much.\| * Path filter which only expects paths whose final name element * equals the {@code match} field. \| the tree parameters. Kept small to avoid killing object store test\| runs too much.\| * A filesystem filter which exposes the protected method * {@link #listLocatedStatus(Path, PathFilter)}. | Test getFileStatus and related listing operations. | \| the tree parameters. Kept small to avoid killing object store test\| runs too much.\| * Accept everything. \| the tree parameters. Kept small to avoid killing object store test\| runs too much.\| * Accept nothing. \| the tree parameters. Kept small to avoid killing object store test\| runs too much.\| * Path filter which only expects paths whose final name element * equals the {@code match} field. \| the tree parameters. Kept small to avoid killing object store test\| runs too much.\| * A filesystem filter which exposes the protected method * {@link #listLocatedStatus(Path, PathFilter)}. | null |
| 1 | AbstractFSContract.java | * Class representing a filesystem contract that a filesystem * implementation is expected implement. * * Part of this contract class is to allow FS implementations to * provide specific opt outs and limits, so that tests can be * skip unsupported features (e.g. case sensitivity tests), * dangerous operations (e.g. trying to delete the root directory), * and limit filesize and other numeric variables for scale tests | Class representing a filesystem contract that a filesystem * implementation is expected implement. | null | Part of this contract class is to allow FS implementations to * provide specific opt outs and limits, so that tests can be * skip unsupported features (e.g. case sensitivity tests), * dangerous operations (e.g. trying to delete the root directory), * and limit filesize and other numeric variables for scale tests |
| 1 | AbstractS3ACommitterFactory.java | * Dynamically create the output committer based on subclass type and settings. | null | Dynamically create the output committer based on subclass type and settings. | null |
| 1 | AbstractTFLaunchCommandTestHelper.java | * This class is an abstract base class for testing Tensorboard and TensorFlow * launch commands. | This class is an abstract base class for testing Tensorboard and TensorFlow and launch commands. | null | This class is an abstract base class for testing Tensorboard and TensorFlow and launch commands. |
| 1 | ApplicationConstants.java | * This is the API for the applications comprising of constants that YARN sets * up for the applications and the containers. * * TODO: Investigate the semantics and security of each cross-boundary refs. \| * The type of launch for the container. \| * Environment for Applications. * * Some of the environment variables for applications are <em>final</em> * i.e. they cannot be modified by the applications. | * This is the API for the applications comprising of constants that YARN sets * up for the applications and the containers. | The type of launch for the container. \| * Environment for Applications. * * Some of the environment variables for applications are <em>final</em> * i.e. they cannot be modified by the applications. | null |
| 1 | ApplicationStateData.java | * Contains all the state data that needs to be stored persistently * for an Application | Contains all the state data that needs to be stored persistently * for an Application | null | needs to be stored persistently * for an Application |
| 1 | AutoInputFormat.java | * An {@link InputFormat} that tries to deduce the types of the input files * automatically. It can currently handle text and sequence files. | An {@link InputFormat} that tries to deduce the types of the input files * automatically. | It can currently handle text and sequence files. | null |
| 1 | BalancingPolicy.java | * Balancing policy. * Since a datanode may contain multiple block pools, * {@link Pool} implies {@link Node} * but NOT the other way around \| * Cluster is balanced if each node is balanced. \| * Cluster is balanced if each pool in each node is balanced. | * Balancing policy. | Since a datanode may contain multiple block pools, * {@link Pool} implies {@link Node} * but NOT the other way around \| | Cluster is balanced if each node is balanced. \| * Cluster is balanced if each pool in each node is balanced. |
| 1 | BaseRouterWebServicesTest.java | * Base class for all the RouterRMAdminService test cases. It provides utility * methods that can be used by the concrete test case classes. * | * Base class for all the RouterRMAdminService test cases. | null | null |
| 1 | BatchedRequests.java | * A grouping of Scheduling Requests which are sent to the PlacementAlgorithm * to place as a batch. The placement algorithm tends to give more optimal * placements if more requests are batched together. \| PlacementAlgorithmOutput attempt - the number of times the requests in this\| * Iterator Type. | A grouping of Scheduling Requests which are sent to the PlacementAlgorithm * to place as a batch Iterator Type. | The placement algorithm tends to give more optimal * placements if more requests are batched together. | null |
| 1 | BlockPlacementStatusWithNodeGroup.java | * An implementation of @see BlockPlacementStatus for * @see BlockPlacementPolicyWithNodeGroup | An implementation of @see BlockPlacementStatus | null | for * @see BlockPlacementPolicyWithNodeGroup |
| 1 | BlocksMap.java | * This class maintains the map from a block to its metadata. * block's metadata currently includes blockCollection it belongs to and * the datanodes that store the block. | This class maintains the map from a block to its metadata. | block's metadata currently includes blockCollection it belongs to and * the datanodes that store the block. | null |
| 1 | BlockUtils.java | * Utils functions to help block functions. | Utils functions to help block functions | null | Utils functions to help block functions |
| 1 | ByteArrayEncodingState.java | * A utility class that maintains encoding state during an encode call using * byte array inputs. | A utility class that maintains encoding state during an encode call using * byte array inputs. | A utility class that maintains encoding state during an encode call using * byte array inputs. | null |
| 1 | CapacitySchedulerPlanFollower.java | * This class implements a {@link PlanFollower}. This is invoked on a timer, and * it is in charge to publish the state of the {@link Plan}s to the underlying * {@link CapacityScheduler}. This implementation does so, by * adding/removing/resizing leaf queues in the scheduler, thus affecting the * dynamic behavior of the scheduler in a way that is consistent with the * content of the plan. It also updates the plan's view on how much resources * are available in the cluster. * * This implementation of PlanFollower is relatively stateless, and it can * synchronize schedulers and Plans that have arbitrary changes (performing set * differences among existing queues). This makes it resilient to frequency of * synchronization, and RM restart issues (no "catch up" is necessary). | This class implements a {@link PlanFollower}. This is invoked on a timer, and * it is in charge to publish the state of the {@link Plan}s to the underlying * {@link CapacityScheduler}. | This implementation does so, by * adding/removing/resizing leaf queues in the scheduler, thus affecting the * dynamic behavior of the scheduler in a way that is consistent with the * content of the plan. It also updates the plan's view on how much resources * are available in the cluster. | This implementation of PlanFollower is relatively stateless, and it can * synchronize schedulers and Plans that have arbitrary changes (performing set * differences among existing queues). This makes it resilient to frequency of * synchronization, and RM restart issues (no "catch up" is necessary). |
| 1 | Classpath.java | * Command-line utility for getting the full classpath needed to launch a Hadoop * client application. If the hadoop script is called with "classpath" as the * command, then it simply prints the classpath and exits immediately without * launching a JVM. The output likely will include wildcards in the classpath. * If there are arguments passed to the classpath command, then this class gets * called. With the --glob argument, it prints the full classpath with wildcards * expanded. This is useful in situations where wildcard syntax isn't usable. * With the --jar argument, it writes the classpath as a manifest in a jar file. * This is useful in environments with short limitations on the maximum command * line length, where it may not be possible to specify the full classpath in a * command. For example, the maximum command line length on Windows is 8191 * characters. | Command-line utility for getting the full classpath needed to launch a Hadoop * client application. | If the hadoop script is called with "classpath" as the * command, then it simply prints the classpath and exits immediately without * launching a JVM. The output likely will include wildcards in the classpath. * If there are arguments passed to the classpath command, then this class gets * called. With the --glob argument, it prints the full classpath with wildcards * expanded. This is useful in situations where wildcard syntax isn't usable. * With the --jar argument, it writes the classpath as a manifest in a jar file. * This is useful in environments with short limitations on the maximum command * line length, where it may not be possible to specify the full classpath in a * command. For example, the maximum command line length on Windows is 8191 * characters. | null |
| 1 | ComparableVersion.java | Code source of this file:\| http://grepcode.com/file/repo1.maven.org/maven2/\| org.apache.maven/maven-artifact/3.1.1/\| org/apache/maven/artifact/versioning/ComparableVersion.java/\|\| Modifications made on top of the source:\| 1. Changed\| package org.apache.maven.artifact.versioning;\| to\| package org.apache.hadoop.util;\| 2. Removed author tags to clear hadoop author tag warning\| * Generic implementation of version comparison. * * <p>Features: * <ul> * <li>mixing of '<code>-</code>' (dash) and '<code>.</code>' (dot) separators,</li> * <li>transition between characters and digits also constitutes a separator: * <code>1.0alpha1 => [1, 0, alpha, 1]</code></li> * <li>unlimited number of version components,</li> * <li>version components in the text can be digits or strings,</li> * <li>strings are checked for well-known qualifiers and the qualifier ordering is used for version ordering. * Well-known qualifiers (case insensitive) are:<ul> * <li><code>alpha</code> or <code>a</code></li> * <li><code>beta</code> or <code>b</code></li> * <li><code>milestone</code> or <code>m</code></li> * <li><code>rc</code> or <code>cr</code></li> * <li><code>snapshot</code></li> * <li><code>(the empty string)</code> or <code>ga</code> or <code>final</code></li> * <li><code>sp</code></li> * </ul> * Unknown qualifiers are considered after known qualifiers, with lexical order (always case insensitive), * </li> * <li>a dash usually precedes a qualifier, and is always less important than something preceded with a dot.</li> * </ul><p> * * @see <a href="https://cwiki.apache.org/confluence/display/MAVENOLD/Versioning">"Versioning" on Maven Wiki</a> \| * Represents a numeric item in the version item list. \| * Represents a string in the version item list, usually a qualifier. \| * Represents a version list item. This class is used both for the global item list and for sub-lists (which start * with '-(number)' in the version specification). | Generic implementation of version comparison. Represents a numeric item in the version item list. | * Represents a string in the version item list, usually a qualifier. | * Represents a version list item. This class is used both for the global item list and for sub-lists (which start * with '-(number)' in the version specification). |
| 1 | ConfigurationException.java | * Exception to throw in case of a configuration problem. | Exception to throw in case of a configuration problem. | null | null |
| 1 | ContainerFinishData.java | * The class contains the fields that can be determined when * <code>RMContainer</code> finishes, and that need to be stored persistently. | The class contains the fields that can be determined when * <code>RMContainer</code> finishes, and that need to be stored persistently. | null | The class contains the fields that can be determined when * <code>RMContainer</code> finishes, and that need to be stored persistently. |
| 1 | CpuTimeTracker.java | * Utility for sampling and computing CPU usage. | Utility for sampling and computing CPU usage | null | Utility for sampling and computing CPU usage |
| 1 | DBProfile.java | * User visible configs based RocksDB tuning page. Documentation for Options. * <p> * https://github.com/facebook/rocksdb/blob/master/include/rocksdb/options.h * <p> * Most tuning parameters are based on this URL. * <p> * https://github.com/facebook/rocksdb/wiki/Setup-Options-and-Basic-Tuning | * User visible configs based RocksDB tuning page. | null | null |
| 1 | DefaultAnonymizingRumenSerializer.java | * Default Rumen JSON serializer. | Default Rumen JSON serializer. | null | null |
| 1 | DelegatingSSLSocketFactory.java | * A {@link SSLSocketFactory} that can delegate to various SSL implementations. * Specifically, either OpenSSL or JSSE can be used. OpenSSL offers better * performance than JSSE and is made available via the * <a href="https://github.com/wildfly/wildfly-openssl">wildlfy-openssl</a> * library. * * <p> * The factory has several different modes of operation: * <ul> * <li>OpenSSL: Uses the wildly-openssl library to delegate to the * system installed OpenSSL. If the wildfly-openssl integration is not * properly setup, an exception is thrown.</li> * <li>Default: Attempts to use the OpenSSL mode, if it cannot load the * necessary libraries, it falls back to the Default_JSEE mode.</li> * <li>Default_JSSE: Delegates to the JSSE implementation of SSL, but * it disables the GCM cipher when running on Java 8.</li> * <li>Default_JSSE_with_GCM: Delegates to the JSSE implementation of * SSL with no modification to the list of enabled ciphers.</li> * </ul> * </p> \| This should only be modified within the #initializeDefaultFactory\| * Default indicates Ordered, preferred OpenSSL, if failed to load then fall * back to Default_JSSE. * * <p> * Default_JSSE is not truly the the default JSSE implementation because * the GCM cipher is disabled when running on Java 8. However, the name * was not changed in order to preserve backwards compatibility. Instead, * a new mode called Default_JSSE_with_GCM delegates to the default JSSE * implementation with no changes to the list of enabled ciphers. * </p> | * A {@link SSLSocketFactory} that can delegate to various SSL implementations. | Specifically, either OpenSSL or JSSE can be used. OpenSSL offers better * performance than JSSE and is made available via the * <a href="https://github.com/wildfly/wildfly-openssl">wildlfy-openssl</a> * library. | <p> * Default_JSSE is not truly the the default JSSE implementation because * the GCM cipher is disabled when running on Java 8. However, the name * was not changed in order to preserve backwards compatibility. Instead, * a new mode called Default_JSSE_with_GCM delegates to the default JSSE * implementation with no changes to the list of enabled ciphers. * </p> |
| 1 | DelegationTokenIdentifier.java | * A delegation token identifier that is specific to HDFS. | A delegation token identifier that is specific to HDFS. | A delegation token identifier that is specific to HDFS. | null |
| 1 | DeleteApplicationHomeSubClusterRequest.java | * The request to <code>Federation state store</code> to delete the mapping of * home subcluster of a submitted application. | The request to <code>Federation state store</code> to delete the mapping of * home subcluster of a submitted application. | null | null |
| 1 | DFSConfigKeys.java | * This class contains constants for configuration keys and default values * used in hdfs. | This class contains constants for configuration keys and default values used in hdfs. | null | This class contains constants for configuration keys and default values used in hdfs. |
| 1 | DfsServlet.java | * A base class for the servlets in DFS. | A base class for the servlets in DFS | null | null |
| 1 | DiskBalancerCluster.java | * DiskBalancerCluster represents the nodes that we are working against. * <p> * Please Note : * Semantics of inclusionList and exclusionLists. * <p> * If a non-empty inclusionList is specified then the diskBalancer assumes that * the user is only interested in processing that list of nodes. This node list * is checked against the exclusionList and only the nodes in inclusionList but * not in exclusionList is processed. * <p> * if inclusionList is empty, then we assume that all live nodes in the nodes is * to be processed by diskBalancer. In that case diskBalancer will avoid any * nodes specified in the exclusionList but will process all nodes in the * cluster. * <p> * In other words, an empty inclusionList is means all the nodes otherwise * only a given list is processed and ExclusionList is always honored. | DiskBalancerCluster represents the nodes that we are working against. | Please Note : * Semantics of inclusionList and exclusionLists. * <p> * If a non-empty inclusionList is specified then the diskBalancer assumes that * the user is only interested in processing that list of nodes. This node list * is checked against the exclusionList and only the nodes in inclusionList but * not in exclusionList is processed. * <p> * if inclusionList is empty, then we assume that all live nodes in the nodes is * to be processed by diskBalancer. In that case diskBalancer will avoid any * nodes specified in the exclusionList but will process all nodes in the * cluster. * <p> * In other words, an empty inclusionList is means all the nodes otherwise * only a given list is processed and ExclusionList is always honored. | null |
| 1 | DistributedSchedulingAllocateRequest.java | * Object used by the Application Master when distributed scheduling is enabled, * in order to forward the {@link AllocateRequest} for GUARANTEED containers to * the Resource Manager, and to notify the Resource Manager about the allocation * of OPPORTUNISTIC containers through the Distributed Scheduler. | * Object used by the Application Master when distributed scheduling is enabled, | in order to forward the {@link AllocateRequest} for GUARANTEED containers to * the Resource Manager, and to notify the Resource Manager about the allocation * of OPPORTUNISTIC containers through the Distributed Scheduler. | null |
| 1 | DockerKillCommand.java | * Encapsulates the docker kill command and its command line arguments. | Encapsulates the docker kill command and its command line arguments. | null | null |
| 1 | EditLogTailer.java | * EditLogTailer represents a thread which periodically reads from edits * journals and applies the transactions contained within to a given * FSNamesystem. \| * The thread which does the actual work of tailing edits journals and * applying the transactions to the FSNS. \| * Manage the 'active namenode proxy'. This cannot just be the a single proxy since we could * failover across a number of NameNodes, rather than just between an active and a standby. * <p> * We - lazily - get a proxy to one of the configured namenodes and attempt to make the request * against it. If it doesn't succeed, either because the proxy failed to be created or the request * failed, we try the next NN in the list. We try this up to the configuration maximum number of * retries before throwing up our hands. A working proxy is retained across attempts since we * expect the active NameNode to switch rarely. * <p> * This mechanism is <b>very bad</b> for cases where we care about being <i>fast</i>; it just * blindly goes and tries namenodes. | EditLogTailer represents a thread which periodically reads from edits * journals and applies the transactions contained within to a given * FSNamesystem. | * The thread which does the actual work of tailing edits journals and * applying the transactions to the FSNS. | * Manage the 'active namenode proxy'. This cannot just be the a single proxy since we could * failover across a number of NameNodes, rather than just between an active and a standby. * <p> * We - lazily - get a proxy to one of the configured namenodes and attempt to make the request * against it. If it doesn't succeed, either because the proxy failed to be created or the request * failed, we try the next NN in the list. We try this up to the configuration maximum number of * retries before throwing up our hands. A working proxy is retained across attempts since we * expect the active NameNode to switch rarely. * <p> * This mechanism is <b>very bad</b> for cases where we care about being <i>fast</i>; it just * blindly goes and tries namenodes. |
| 1 | ErasureCodingPolicyManager.java | * This manages erasure coding policies predefined and activated in the system. * It loads customized policies and syncs with persisted ones in * NameNode image. * * This class is instantiated by the FSNamesystem. | This manages erasure coding policies predefined and activated in the system | It loads customized policies and syncs with persisted ones in * NameNode image. | null |
| 1 | EventWatcher.java | * Event watcher the (re)send a message after timeout. * <p> * Event watcher will send the tracked payload/event after a timeout period * unless a confirmation from the original event (completion event) is arrived. * * @param <TIMEOUT_PAYLOAD> The type of the events which are tracked. * @param <COMPLETION_PAYLOAD> The type of event which could cancel the * tracking. | * Event watcher the (re)send a message after timeout. | null | Event watcher will send the tracked payload/event after a timeout period * unless a confirmation from the original event (completion event) is arrived. |
| 1 | Expression.java | * Interface describing an expression to be used in the * {@link org.apache.hadoop.fs.shell.find.Find} command. | Interface describing an expression to be used in the * {@link org.apache.hadoop.fs.shell.find.Find} command. | null | to be used in the * {@link org.apache.hadoop.fs.shell.find.Find} command. |
| 1 | FairOrderingPolicy.java | * An OrderingPolicy which orders SchedulableEntities for fairness (see * FairScheduler * FairSharePolicy), generally, processes with lesser usage are lesser. If * sizedBasedWeight is set to true then an application with high demand * may be prioritized ahead of an application with less usage. This * is to offset the tendency to favor small apps, which could result in * starvation for large apps if many small ones enter and leave the queue * continuously (optional, default false) | * An OrderingPolicy which orders SchedulableEntities for fairness (see * FairScheduler * FairSharePolicy), generally, processes with lesser usage are lesser | null | null |
| 1 | FederationPolicyException.java | * Generic policy exception. | Generic policy exception | null | null |
| 1 | FederationProtocolPBTranslator.java | * Helper class for setting/getting data elements in an object backed by a * protobuf implementation. | Helper class for setting/getting data elements in an object backed by a * protobuf implementation. | null | for setting/getting data elements in an object backed by a * protobuf implementation. |
| 1 | FederationStateStoreInvalidInputException.java | * Exception thrown by the {@code FederationMembershipStateStoreInputValidator}, * {@code FederationApplicationHomeSubClusterStoreInputValidator}, * {@code FederationPolicyStoreInputValidator} if the input is invalid. * | Exception thrown by the {@code FederationMembershipStateStoreInputValidator}, | null | null |
| 1 | FileSystemApplicationHistoryStore.java | * File system implementation of {@link ApplicationHistoryStore}. In this * implementation, one application will have just one file in the file system, * which contains all the history data of one application, and its attempts and * containers. {@link #applicationStarted(ApplicationStartData)} is supposed to * be invoked first when writing any history data of one application and it will * open a file, while {@link #applicationFinished(ApplicationFinishData)} is * supposed to be last writing operation and will close the file. | * File system implementation of {@link ApplicationHistoryStore}. | In this * implementation, one application will have just one file in the file system, * which contains all the history data of one application, and its attempts and * containers. {@link #applicationStarted(ApplicationStartData)} is supposed to * be invoked first when writing any history data of one application and it will * open a file, while {@link #applicationFinished(ApplicationFinishData)} is * supposed to be last writing operation and will close the file. | null |
| 1 | FsConstants.java | * FileSystem related constants. | FileSystem related constants | null | null |
| 1 | GetApplicationHomeSubClusterResponsePBImpl.java | * Protocol buffer based implementation of * {@link GetApplicationHomeSubClusterResponse}. | Protocol buffer based implementation of * {@link GetApplicationHomeSubClusterResponse}. | null | null |
| 1 | GetNamespaceInfoResponse.java | * API response for listing HDFS namespaces present in the state store. | API response for listing HDFS namespaces present in the state store. | null | null |
| 1 | GetNodesToAttributesResponse.java | * <p> * The response sent by the <code>ResourceManager</code> to a client requesting * nodes to attributes mapping. * </p> * * @see ApplicationClientProtocol#getNodesToAttributes * (GetNodesToAttributesRequest) | The response sent by the <code>ResourceManager</code> to a client requesting * nodes to attributes mapping. | null | null |
| 1 | GetSafeModeRequestPBImpl.java | * Protobuf implementation of the state store API object * GetSafeModeRequest. | Protobuf implementation of the state store API object * GetSafeModeRequest. | null | null |
| 1 | GetSubClusterPolicyConfigurationRequestPBImpl.java | * Protocol buffer based implementation of * {@link GetSubClusterPolicyConfigurationRequest}. | Protocol buffer based implementation of * {@link GetSubClusterPolicyConfigurationRequest}. | null | null |
| 1 | HadoopIllegalArgumentException.java | * Indicates that a method has been passed illegal or invalid argument. This * exception is thrown instead of IllegalArgumentException to differentiate the * exception thrown in Hadoop implementation from the one thrown in JDK. | Indicates that a method has been passed illegal or invalid argument. | This * exception is thrown instead of IllegalArgumentException to differentiate the * exception thrown in Hadoop implementation from the one thrown in JDK. | null |
| 1 | HashResolver.java | * Order the destinations based on consistent hashing. | Order the destinations based on consistent hashing. | null | null |
| 1 | HttpHeaderConfigurations.java | * Responsible to keep all abfs http headers here. | null | Responsible to keep all abfs http headers here. | null |
| 1 | IDataLoader.java | * an IDataLoader loads data on demand | an IDataLoader loads data on demand | null | null |
| 1 | InconsistentS3ClientFactory.java | * S3 Client factory used for testing with eventual consistency fault injection. * This client is for testing <i>only</i>; it is in the production * {@code hadoop-aws} module to enable integration tests to use this * just by editing the Hadoop configuration used to bring up the client. | S3 Client factory used for testing with eventual consistency fault injection. | null | null |
| 1 | InfoKeyHandler.java | * Executes Info Object. | Executes Info Object | null | null |
| 1 | InvalidContainerRequestException.java | * Thrown when an arguments are combined to construct a * <code>AMRMClient.ContainerRequest</code> in an invalid way. | * Thrown when an arguments are combined to construct a * <code>AMRMClient.ContainerRequest</code> in an invalid way. | null | null |
| 1 | ITestAbfsReadWriteAndSeek.java | * Test read, write and seek. * Uses package-private methods in AbfsConfiguration, which is why it is in * this package. | Test read, write and seek. | null | Uses package-private methods in AbfsConfiguration, which is why it is in * this package. |
| 1 | ITestAzureNativeContractSeek.java | * Contract test. | Contract test | null | null |
| 1 | ITestCommitOperations.java | * Test the low-level binding of the S3A FS to the magic commit mechanism, * and handling of the commit operations. * This is done with an inconsistent client. | * Test the low-level binding of the S3A FS to the magic commit mechanism, * and handling of the commit operations. | null | null |
| 1 | ITestListPerformance.java | * Test list performance. | Test list performance | null | null |
| 1 | ITestS3Select.java | * Test the S3 Select feature with some basic SQL Commands. * Executed if the destination store declares its support for the feature. | Test the S3 Select feature with some basic SQL Commands. | Executed if the destination store declares its support for the feature. | null |
| 1 | ITestS3SelectCLI.java | * Test the S3 Select CLI through some operations against landsat * and files generated from it. | Test the S3 Select CLI through some operations against landsat * and files generated from it. | null | null |
| 1 | KerberosDelegationTokenAuthenticator.java | * The <code>KerberosDelegationTokenAuthenticator</code> provides support for * Kerberos SPNEGO authentication mechanism and support for Hadoop Delegation * Token operations. * <p> * It falls back to the {@link PseudoDelegationTokenAuthenticator} if the HTTP * endpoint does not trigger a SPNEGO authentication | The <code>KerberosDelegationTokenAuthenticator</code> provides support for * Kerberos SPNEGO authentication mechanism and support for Hadoop Delegation * Token operations. | It falls back to the {@link PseudoDelegationTokenAuthenticator} if the HTTP * endpoint does not trigger a SPNEGO authentication | null |
| 1 | LocalizationStatusPBImpl.java | * PB Impl of {@link LocalizationStatus}. | PB Impl of {@link LocalizationStatus}. | null | null |
| 1 | LocatedFileStatus.java | * This class defines a FileStatus that includes a file's block locations. | This class defines a FileStatus that includes a file's block locations | null | null |
| 1 | LoggedTask.java | * A {@link LoggedTask} represents a [hadoop] task that is part of a hadoop job. * It knows about the [pssibly empty] sequence of attempts, its I/O footprint, * and its runtime. * * All of the public methods are simply accessors for the instance variables we * want to write out in the JSON files. * | A {@link LoggedTask} represents a [hadoop] task that is part of a hadoop job. | It knows about the [pssibly empty] sequence of attempts, its I/O footprint, * and its runtime. | null |
| 1 | LogParserUtil.java | * Common utility functions for {@link LogParser}. | Common utility functions for {@link LogParser} | null | null |
| 1 | LogWebService.java | * Support only ATSv2 client only. | null | null | null |
| 1 | LRUCacheHashMap.java | * LRU cache with a configurable maximum cache size and access order. | LRU cache with a configurable maximum cache size and access order. | LRU cache with a configurable maximum cache size and access order. | null |
| 1 | MapContext.java | * The context that is given to the {@link Mapper}. * @param <KEYIN> the key input type to the Mapper * @param <VALUEIN> the value input type to the Mapper * @param <KEYOUT> the key output type from the Mapper * @param <VALUEOUT> the value output type from the Mapper | The context that is given to the {@link Mapper}. | null | null |
| 1 | MetaBlockAlreadyExists.java | * Exception - Meta Block with the same name already exists. | Exception - Meta Block with the same name already exists. | null | null |
| 1 | MetricsCache.java | * A metrics cache for sinks that don't support sparse updates. \| * Cached record | A metrics cache for sinks that don't support sparse updates. Context of the Queues in Scheduler. | Cached record | null |
| 1 | NativeBatchProcessor.java | * used to create channel, transfer data and command between Java and native | null | null | used to create channel, transfer data and command between Java and native |
| 1 | NativeSingleLineParser.java | * This sample parser will parse the sample log and extract the resource * skyline. * <p> The expected log format is: NormalizedJobName NumInstances SubmitTime * StartTime EndTime JobInstanceName memUsage coreUsage | This sample parser will parse the sample log and extract the resource * skyline. | <p> The expected log format is: NormalizedJobName NumInstances SubmitTime * StartTime EndTime JobInstanceName memUsage coreUsage | null |
| 1 | Nfs3Metrics.java | * This class is for maintaining the various NFS gateway activity statistics and * publishing them through the metrics interfaces. | This class is for maintaining the various NFS gateway activity statistics and * publishing them through the metrics interfaces. | null | null |
| 1 | Nfs3Status.java | * Success or error status is reported in NFS3 responses. | * Success or error status is reported in NFS3 responses. | null | null |
| 1 | Node2ObjectsMap.java | * This data structure maintains the list of containers that is on a datanode. * This information is built from the DN container reports. | This data structure maintains the list of containers that is on a datanode. | This information is built from the DN container reports. | null |
| 1 | NodeUpdateType.java | * <p>Taxonomy of the <code>NodeState</code> that a * <code>Node</code> might transition into.</p> * | <p>Taxonomy of the <code>NodeState</code> that a * <code>Node</code> might transition into.</p> | <p>Taxonomy of the <code>NodeState</code> that a * <code>Node</code> might transition into.</p> | null |
| 1 | NullOutputFormat.java | * Consume all outputs and put them in /dev/null. | Consume all outputs and put them in /dev/null. | Consume all outputs and put them in /dev/null. | null |
| 1 | OMNodeDetails.java | * This class stores OM node details. \| * Builder class for OMNodeDetails. | * This class stores OM node details. | * Builder class for OMNodeDetails. | null |
| 1 | OpportunisticContainersStatusPBImpl.java | * Protocol Buffer implementation of OpportunisticContainersStatus. | Protocol Buffer implementation of OpportunisticContainersStatus | null | null |
| 1 | OzoneObj.java | * Class representing an unique ozone object. * \| * Ozone Objects supported for ACL. \| * Ozone Objects supported for ACL. | Class representing an unique ozone object | Ozone Objects supported for ACL. | null |
| 1 | Parser.java | A class for parsing outputs | A class for parsing outputs | null | null |
| 1 | PartialOutputCommitter.java | * Interface for an {@link org.apache.hadoop.mapreduce.OutputCommitter} * implementing partial commit of task output, as during preemption. | * Interface for an {@link org.apache.hadoop.mapreduce.OutputCommitter} * implementing partial commit of task output, as during preemption. | null | null |
| 1 | PartitionResourcesInfo.java | * This class represents queue/user resource usage info for a given partition | This class represents queue/user resource usage info for a given partition | null | This class represents queue/user resource usage info for a given partition |
| 1 | PlanningQuotaException.java | * This exception is thrown if the user quota is exceed while accepting or * updating a reservation. | This exception is thrown if the user quota is exceed while accepting or * updating a reservation. | null | null |
| 1 | ProcessIdFileReader.java | * Helper functionality to read the pid from a file. | Helper functionality to read the pid from a file. | null | null |
stratum: 1 | class: QuasiMonteCarlo.java
comment:
  A map/reduce program that estimates the value of Pi using a quasi-Monte Carlo
  (qMC) method. Arbitrary integrals can be approximated numerically by qMC
  methods. In this example, we use a qMC method to approximate the integral
  $I = \int_S f(x) dx$, where $S=[0,1)^2$ is a unit square, $x=(x_1,x_2)$ is a
  2-dimensional point, and $f$ is a function describing the inscribed circle of
  the square $S$: $f(x)=1$ if $(2x_1-1)^2+(2x_2-1)^2 <= 1$ and $f(x)=0$
  otherwise. It is easy to see that Pi is equal to $4I$, so an approximation of
  Pi is obtained once $I$ is evaluated numerically. There are better methods
  for computing Pi; we emphasize numerical approximation of arbitrary integrals
  in this example. For computing many digits of Pi, consider using bbp.
  The implementation is discussed below.
  Mapper: generate points in a unit square and then count points inside/outside
  of the inscribed circle of the square.
  Reducer: accumulate points inside/outside results from the mappers.
  Let numTotal = numInside + numOutside. The fraction numInside/numTotal is a
  rational approximation of the value (Area of the circle)/(Area of the square)
  = $I$, where the area of the inscribed circle is Pi/4 and the area of the
  unit square is 1. Finally, the estimated value of Pi is 4(numInside/numTotal).
further class comments:
  "2-dimensional Halton sequence {H(i)}, where H(i) is a 2-dimensional point
  and i >= 1 is the index. Halton sequence is used to generate sample points
  for Pi estimation." /
  "Mapper class for Pi estimation. Generate points in a unit square and then
  count points inside/outside of the inscribed circle of the square." /
  "Reducer class for Pi estimation. Accumulate points inside/outside results
  from the mappers."
summary: "A map/reduce program that estimates the value of Pi."
(the remaining non-null cells repeat sentences from the comments above; all other columns: null)

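The javadoc above fully specifies the estimator, so it can be sketched outside
MapReduce. Below is a minimal, single-process Python sketch of that estimator:
a 2-dimensional Halton sequence drives the sampling, and Pi is recovered as
4 * numInside/numTotal. The choice of bases 2 and 3 for the two Halton axes is
an assumption made for illustration; the Hadoop example's internals are not
part of this preview.

    def halton(index, base):
        # Radical-inverse (van der Corput) value of `index` in `base`;
        # one axis of a Halton point. Bases 2 and 3 are assumed below.
        result, f = 0.0, 1.0 / base
        while index > 0:
            result += f * (index % base)
            index //= base
            f /= base
        return result

    def estimate_pi(num_total):
        # Count Halton points falling inside the circle inscribed in
        # [0,1)^2, then return 4 * numInside / numTotal as derived above.
        num_inside = 0
        for i in range(1, num_total + 1):
            x1, x2 = halton(i, 2), halton(i, 3)  # H(i), i >= 1
            if (2 * x1 - 1) ** 2 + (2 * x2 - 1) ** 2 <= 1:  # f(x) == 1
                num_inside += 1
        return 4.0 * num_inside / num_total

    print(estimate_pi(1_000_000))  # converges toward 3.14159...
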
stratum: 1 | class: Query.java
comment: "Check if a record matches a query. The query is usually a partial record. @param <T> Type of the record to query."
summary: "Check if a record matches a query."
(a further cell holds "The query is usually a partial record."; all other columns: null)

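The comment sketches a query-by-example contract: the query is a partial
record, and a record matches when it agrees with every field the query sets. A
minimal Python illustration of that idea follows; it is not Hadoop's Java
interface, whose signature is not shown in this preview.

    def matches(record: dict, query: dict) -> bool:
        # A record matches when every field present in the (partial)
        # query equals the corresponding field of the record.
        return all(record.get(f) == v for f, v in query.items())

    record = {"host": "nn01", "port": 8020, "state": "ACTIVE"}
    print(matches(record, {"state": "ACTIVE"}))   # True: partial record agrees
    print(matches(record, {"state": "STANDBY"}))  # False
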
stratum: 1 | class: QueueName.java
comment: "Represents a queue name."
summary: "Represents a queue name."
(all other columns: null)

stratum: 1 | class: RandomKeyGenerator.java
comments: "Data generator tool to generate as much keys as possible." / "Wrapper to hold ozone keyValidate entry." / "Validates the write done in ozone cluster."
extracted: "Data generator tool to generate as much keys as possible." / "Wrapper to hold ozone keyValidate entry." / "Validates the write done in ozone cluster."
(all other columns: null)

stratum: 1 | class: ReencryptionUpdater.java
comments:
  "Class for finalizing re-encrypt EDEK operations, by updating file xattrs
  with edeks returned from reencryption. The tasks are submitted by
  ReencryptionHandler. It is assumed only 1 Updater will be running, since
  updating file xattrs requires namespace write lock, and performance gain
  from multi-threading is limited." /
  "Class to track re-encryption submissions of a single zone. It contains all
  the submitted futures, and statistics about how far the futures are
  processed." /
  "Class representing the task for one batch of a re-encryption command. It
  also contains statistics about how far this single batch has been executed." /
  "Class that encapsulates re-encryption details of a file. It contains the
  file inode, stores the initial edek of the file, and the new edek after
  re-encryption. Assumptions are the object initialization happens when dir
  lock is held, and inode is valid and is encrypted during initialization.
  Namespace changes may happen during re-encryption, and if inode is changed
  the re-encryption is skipped."
summary: "Class for finalizing re-encrypt EDEK operations"
(the remaining non-null cells re-split the same sentences across category columns; all other columns: null)

stratum: 1 | class: RegistryInternalConstants.java
comment: "Internal constants for the registry. These are the things which aren't visible to users."
summary: "Internal constants for the registry."
(a further cell holds "These are the things which aren't visible to users."; all other columns: null)

stratum: 1 | class: RegistryOperations.java
comment: "Registry Operations"
summary: "Registry Operations"
(all other columns: null)

stratum: 1 | class: ReInitializeContainerRequestPBImpl.java
comment: "CHECKSTYLE:OFF"
(all other columns: null)

stratum: 1 | class: ResourceBlacklistRequest.java
comment: "{@link ResourceBlacklistRequest} encapsulates the list of resource-names which should be added or removed from the <em>blacklist</em> of resources for the application. @see ResourceRequest @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest)"
summary: "{@link ResourceBlacklistRequest} encapsulates the list of resource-names which should be added or removed from the <em>blacklist</em> of resources for the application."
(all other columns: null)

stratum: 1 | class: ResourceRequestsJsonVerifications.java
comments: "Performs value verifications on {@link org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceRequestInfo} objects against the values of {@link ResourceRequest}. With the help of the {@link Builder}, users can also make verifications of the custom resource types and its values." / "Builder class for {@link ResourceRequestsJsonVerifications}."
extracted: "Performs value verifications on {@link org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceRequestInfo} objects against the values of {@link ResourceRequest}." / "Builder class for {@link ResourceRequestsJsonVerifications}." / "With the help of the {@link Builder}, users can also make verifications of the custom resource types and its values."
(all other columns: null)

stratum: 1 | class: RetriableDirectoryCreateCommand.java
comment: "This class extends Retriable command to implement the creation of directories with retries on failure."
summary: "This class extends Retriable command to implement the creation of directories with retries on failure."
(all other columns: null)

stratum: 1 | class: RMAdminRequestInterceptor.java
comment: "Defines the contract to be implemented by the request intercepter classes, that can be used to intercept and inspect messages sent from the client to the resource manager."
summary: "Defines the contract to be implemented by the request intercepter classes"
(all other columns: null)

stratum: 1 | class: RSLegacyRawErasureCoderFactory.java
comment: "A raw coder factory for the legacy raw Reed-Solomon coder in Java."
summary: "A raw coder factory for the legacy raw Reed-Solomon coder in Java."
(all other columns: null)

stratum: 1 | class: SafeModeException.java
comment: "This exception is thrown when the name node is in safe mode. Client cannot modified namespace until the safe mode is off."
(all other columns: null)

stratum: 1 | class: SchedulerQueueManager.java
comment: "Context of the Queues in Scheduler."
summary: "Context of the Queues in Scheduler."
(all other columns: null)

stratum: 1 | class: SequenceFileRecordReader.java
comment: "An {@link RecordReader} for {@link SequenceFile}s."
summary: "An {@link RecordReader} for {@link SequenceFile}s."
(all other columns: null)

End of preview.
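
Since the viewer renders only this preview, one workaround is to read a single
raw CSV from the repository directly with pandas and inspect its own column
set. A minimal sketch, assuming the huggingface_hub client is installed; the
filename below is hypothetical and should be taken from the repository's file
listing.

    import pandas as pd
    from huggingface_hub import hf_hub_download

    # Download one raw CSV from the dataset repo (the filename is an
    # assumption; check the repo's file listing for the exact names).
    path = hf_hub_download(
        repo_id="poojaruhal/Code-comment-classification",
        filename="java_0_raw.csv",
        repo_type="dataset",
    )
    df = pd.read_csv(path)
    print(df.columns.tolist())              # column sets differ across CSVs
    print(df[["class", "comment"]].head())  # columns shown in this preview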