id (string, 4–10 chars) | text (string, up to 2.14M chars) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added (timestamp, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
2046314677
|
[FLINK-31472][Connectors/Base] Update AsyncSink Throttling test to use concurrent mailbox
What is the purpose of the change
This pull request updates AsyncSinkThrottlingTest to use the concurrent TestSinkInitContextAnyThreadMailbox, due to the concurrent nature of the test and to address CI stability issues.
Brief change log
Updated AsyncSinkThrottlingTest to use concurrent TestSinkInitContextAnyThreadMailbox
Verifying this change
This change is a trivial rework of the test suite; the changes were verified by rerunning the tests in debug mode.
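The mailbox distinction driving this change can be sketched with a toy class (plain Java, not Flink code; the names and the 0xFF-free logic here are illustrative assumptions): a mailbox bound to a single owning thread rejects mail enqueued from other threads, while an "any thread" variant accepts it, which is what a throttling test producing requests from worker threads needs.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy illustration (not Flink code) of a single-owner mailbox vs. an
// any-thread mailbox. The strict variant only accepts mail from the
// thread that constructed it; the relaxed variant accepts any caller.
public class MailboxDemo {
    private final Queue<Runnable> mail = new ConcurrentLinkedQueue<>();
    private final Thread owner = Thread.currentThread();
    private final boolean anyThread;

    public MailboxDemo(boolean anyThread) {
        this.anyThread = anyThread;
    }

    /** Returns true if the mail was accepted from the calling thread. */
    public boolean tryPut(Runnable r) {
        if (!anyThread && Thread.currentThread() != owner) {
            return false; // a real mailbox would throw or fail an assertion here
        }
        mail.add(r);
        return true;
    }
}
```

A concurrent test driving the strict variant from worker threads would fail intermittently, which matches the CI instability described above.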
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): no
The public API, i.e., is any changed class annotated with @Public(Evolving): no
The serializers: no
The runtime per-record code paths (performance sensitive): no
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
The S3 file system connector: no
Documentation
Does this pull request introduce a new feature? no
If yes, how is the feature documented? not applicable
@vahmed-hamdy would you mind creating backports ?
|
gharchive/pull-request
| 2023-12-18T10:33:09 |
2025-04-01T04:55:57.940255
|
{
"authors": [
"snuyanzin",
"vahmed-hamdy"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/23946",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
58211574
|
[FLINK-1528][Gelly] Added Local Clustering Coefficient Example
Hi!
I closed https://github.com/apache/flink/pull/400 and created this instead, because the commit history was messed up.
I have the following exception when running LocalClusteringCoefficientExample
Exception in thread "main" org.apache.flink.api.common.functions.InvalidTypesException:
Type of TypeVariable 'K' in 'class flink.graphs.library.LocalClusteringCoefficient
$NeighborhoodEdgesFunction' could not be determined.
This is most likely a type erasure problem. The type extraction currently supports types
with generic variables only in cases where all variables in the return type can be deduced
from the input type(s).
I have no idea how to fix this though. This is @vasia's old reply:
The problem is here:
DataSet<Tuple2<K, HashSet<K>>> neighborhoods = input.reduceOnEdges(new NeighborhoodEdgesFunction<K>(), EdgeDirection.OUT);
and we try to get the return type Tuple2<K, HashSet<K>> like this:
public TypeInformation<T> getProducedType() {
return TypeExtractor.createTypeInfo(EdgesFunction.class, function.getClass(), 2, null, null);
}
Anyone have an idea? Is it because of the nested type parameter in Tuple2<K, HashSet<K>> ?
Also, I tried propagating an additional parameter Q extends HashSet<K>, but it didn't help.
Does anybody know what's wrong here?
Thanks in advance!
Does anyone have an idea about this? Is there a way to pass the HashSet<K> type?
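The limitation hit here can be reproduced with plain reflection (no Flink classes; the `Neighborhood` class below is a made-up stand-in): inspecting a field of type `HashSet<K>` yields the type variable `K` itself, not a concrete class, so no runtime machinery can recover which `K` an instance was created with.

```java
import java.lang.reflect.Field;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.lang.reflect.TypeVariable;
import java.util.HashSet;

// Stand-in for a generic function class whose return type nests a
// type variable, like Tuple2<K, HashSet<K>> in the discussion above.
class Neighborhood<K> {
    HashSet<K> neighbors = new HashSet<>();
}

public class ErasureDemo {
    /** Returns true if the element type of 'neighbors' is still an unresolved type variable. */
    public static boolean elementTypeIsErased() {
        try {
            Field f = Neighborhood.class.getDeclaredField("neighbors");
            ParameterizedType pt = (ParameterizedType) f.getGenericType(); // HashSet<K>
            Type elem = pt.getActualTypeArguments()[0];                    // K
            return elem instanceof TypeVariable;
        } catch (NoSuchFieldException e) {
            return false;
        }
    }
}
```

This is why the extractor can only deduce `K` when it appears directly in the input types: the nested `HashSet<K>` carries no concrete binding at runtime.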
In any case, even if not, I think it doesn't matter in this case.
I believe we shouldn't add this as a library method, as it is a quite naive implementation of local clustering coefficient. I'd prefer if we try to keep the library methods as efficient as possible.
However, I would definitely add this as an example, since it very nicely demonstrates how to use neighborhood methods and joinWithVertices (which are missing from the other examples).
So, I would suggest we change this to an example that uses a sample dataset, with e.g. Long ids and also allows file input. What do you think @balidani?
Hi!
I agree. I will change the example then :)
I made the changes @vasia suggested
Hi @balidani! Thanks for the example :))
I'm a bit confused about the directed / undirected case. I tried testing with both a directed and an undirected input, and both of my tests failed. See here for the test cases I tried.
I suppose the directed case doesn't work because you only consider the out-neighbors, when you should count all of them. And for the undirected case, I think the division by 2 you're making gives a wrong result, because you're not counting edges, just out-neighbors' neighbors.
Hi @vasia! I fixed the algorithm, now it will convert all edges to a pair of edges and call distinct on the edge set. This gives the correct results now. Thanks!
Hi @balidani! I think it's still a bit confusing how this example works.
As far as I understand, you expect a directed graph as input, but then you convert it to an undirected one and compute the clustering coefficient of the undirected graph, right?
We should either document this behavior clearly in the example description or compute the result for the input we expect. I personally prefer the second :-)
Also, take a look in the other examples and write a short usage description in the beginning of the example, including the input format that you expect when args are provided.
Let me know if you have questions! Thanks!
Hey @balidani!
Would you like to finish this up?
It's not really urgent, but it's almost finished and it'd be a pity to abandon :)
Someone else could also take over of course. Just let us know!
Yeah, I should definitely finish this! I'll take a look tonight, sorry about that :)
Any progress on this pull request, or should it be closed?
@balidani,
I think it'd be better if you close this PR. I don't think we'll add another example after #1000 is merged. I can take over and probably reuse some of your code to add a local clustering coefficient library method. Would that be OK? Thanks!
@vasia yes, I'm sorry about not finishing it, but I just did not have the time lately.
Cheers!
That's fine :) Thanks for the fast response!
|
gharchive/pull-request
| 2015-02-19T14:03:01 |
2025-04-01T04:55:57.950379
|
{
"authors": [
"StephanEwen",
"balidani",
"vasia"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/420",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
264656002
|
[FLINK-7810] Switch from custom Flakka to Akka 2.4.x
What is the purpose of the change
Drop support for Scala 2.10 and then update to a newer Akka version. Before, we were forced to stay on our custom Flakka 2.3.x version with back ports because newer Akka does not support Scala 2.10.
wonderful! +1
Thanks for doing this, we need Akka 2.4 badly for improved SSL support.
I think this is good now, +1
|
gharchive/pull-request
| 2017-10-11T16:36:26 |
2025-04-01T04:55:57.952681
|
{
"authors": [
"EronWright",
"StephanEwen",
"aljoscha",
"bowenli86"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/4807",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2376944086
|
[Java] Move StringSerializer#isLatin to StringUtils
Is your feature request related to a problem? Please describe.
Currently, the isLatin(char[]) check is implemented inside org.apache.fury.serializer.StringSerializer, but it is not strongly related to that class: org.apache.fury.serializer.StringSerializer should only be responsible for serialization.
Describe the solution you'd like
isLatin(char[]) should be moved to org.apache.fury.util.StringUtils
Additional context
Maybe we can add a good first issue label to the current task.
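A minimal sketch of what the helper could look like after the proposed move (the 0xFF threshold is an assumption about a Latin-1 check, not copied from Fury's source):

```java
// Hypothetical sketch of StringUtils after the move; checks whether every
// character fits in the Latin-1 range, which lets a serializer pick a
// compact one-byte-per-char encoding.
public final class StringUtils {
    private StringUtils() {}

    public static boolean isLatin(char[] chars) {
        for (char c : chars) {
            if (c > 0xFF) { // outside the Latin-1 range
                return false;
            }
        }
        return true;
    }
}
```

StringSerializer would then call StringUtils.isLatin(...) instead of hosting the logic itself.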
|
gharchive/issue
| 2024-06-27T03:48:48 |
2025-04-01T04:55:57.955240
|
{
"authors": [
"LiangliangSui"
],
"repo": "apache/fury",
"url": "https://github.com/apache/fury/issues/1703",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
296505010
|
GEODE-4403: Remove ProtobufErrorCode
as it's a duplicate of BasicTypes.ErrorCode
Thank you for submitting a contribution to Apache Geode.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
[x] Has your PR been rebased against the latest commit within the target branch (typically develop)?
[x] Is your initial contribution a single, squashed commit?
[x] Does gradlew build run cleanly?
[not really applicable] Have you written or updated unit tests to verify your changes?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and
submit an update to your PR as soon as possible. If you need help, please send an
email to dev@geode.apache.org.
Rebased on develop, will merge if tests pass.
|
gharchive/pull-request
| 2018-02-12T19:59:20 |
2025-04-01T04:55:57.959464
|
{
"authors": [
"galen-pivotal"
],
"repo": "apache/geode",
"url": "https://github.com/apache/geode/pull/1432",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
440867014
|
GEODE-6743: Remove GFJsonObject and GFJsonArray classes
Authored-by: Jens Deppe jdeppe@pivotal.io
Thank you for submitting a contribution to Apache Geode.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
[ ] Has your PR been rebased against the latest commit within the target branch (typically develop)?
[ ] Is your initial contribution a single, squashed commit?
[ ] Does gradlew build run cleanly?
[ ] Have you written or updated unit tests to verify your changes?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
Note:
Please ensure that once the PR is submitted, you check Concourse for build issues and
submit an update to your PR as soon as possible. If you need help, please send an
email to dev@geode.apache.org.
@Petahhh
Failing test is unrelated to the code changes
|
gharchive/pull-request
| 2019-05-06T20:12:59 |
2025-04-01T04:55:57.963983
|
{
"authors": [
"jdeppe-pivotal"
],
"repo": "apache/geode",
"url": "https://github.com/apache/geode/pull/3555",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
905873407
|
GEODE-4826: Use spotlessCheck, not spotlessApply, as input to srcDistTar
Using spotlessApply as an input to srcDistTar allowed build and other
CI tasks to pass while still having style errors on the repo.
Authored-by: Robert Houghton rhoughton@pivotal.io
Thank you for submitting a contribution to Apache Geode.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
[X] Has your PR been rebased against the latest commit within the target branch (typically develop)?
[X] Is your initial contribution a single, squashed commit?
[X] Does gradlew build run cleanly?
[n/a] Have you written or updated unit tests to verify your changes?
[n/a ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
Note:
Please ensure that once the PR is submitted, you check Concourse for build issues and
submit an update to your PR as soon as possible. If you need help, please send an
email to dev@geode.apache.org.
good catch, looks like this problem was only introduced about a week ago
since this is a fix to GEODE-9284 perhaps that would be a more appropriate ticket number than GEODE-4826?
|
gharchive/pull-request
| 2021-05-28T18:39:20 |
2025-04-01T04:55:57.969039
|
{
"authors": [
"onichols-pivotal",
"rhoughton-pivot"
],
"repo": "apache/geode",
"url": "https://github.com/apache/geode/pull/6539",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1467600272
|
Fix the type of default values to match what Schema.parse() does.
When calling field.defaultVal() (or equivalent compat helper methods)
under Avro 1.9.2+, we get back a type that's not consistent with what
Schema.parse() creates internally. This causes two Schemas constructed
in these two different ways (Schema.parse vs Schema.createRecord) to
be considered unequal, even though their toString() representations
are identical.
Fix this situation by calling parseJsonToObject(), which results in
the default value being interpreted similar to Schema.parse().
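The equality pitfall can be shown with a toy stand-in (no Avro dependency; the values below are illustrative, not Avro's actual representations): two default values that print identically can still fail equals() because their runtime types differ, which is enough to make two otherwise identical schemas compare unequal.

```java
// Toy stand-in for the schema-equality mismatch described above: a Long
// and an Integer holding the same number render the same toString() but
// are not equal, so any container comparing them with equals() (as a
// schema comparing default values would) sees two different objects.
public class DefaultValDemo {
    /** True when the two objects render identically. */
    public static boolean sameToString(Object a, Object b) {
        return a.toString().equals(b.toString());
    }
}
```

This mirrors the report: the two Schemas' toString() representations match, yet equals() says they differ.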
Dear Gobblin maintainers,
Please accept this PR. I understand that it will not be reviewed until I have checked off all the steps below!
JIRA
[ ] My PR addresses the following Gobblin JIRA issues and references them in the PR title. For example, "[GOBBLIN-XXX] My Gobblin PR"
https://issues.apache.org/jira/browse/GOBBLIN-XXX
Description
[ ] Here are some details about my PR, including screenshots (if applicable):
Tests
[ ] My PR adds the following unit tests OR does not need testing for this extremely good reason:
Commits
[ ] My commits all reference JIRA issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
Subject is separated from body by a blank line
Subject is limited to 50 characters
Subject does not end with a period
Subject uses the imperative mood ("add", not "adding")
Body wraps at 72 characters
Body explains "what" and "why", not "how"
Codecov Report
Merging #3611 (4240794) into master (c6d6c1b) will decrease coverage by 3.13%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## master #3611 +/- ##
============================================
- Coverage 46.87% 43.74% -3.14%
+ Complexity 10687 2060 -8627
============================================
Files 2125 408 -1717
Lines 83157 17627 -65530
Branches 9266 2154 -7112
============================================
- Hits 38983 7711 -31272
+ Misses 40598 9057 -31541
+ Partials 3576 859 -2717
Impacted Files
Coverage Δ
...c/main/java/org/apache/gobblin/util/AvroUtils.java
56.50% <100.00%> (ø)
...a/org/apache/gobblin/cluster/GobblinHelixTask.java
62.36% <0.00%> (-2.16%)
:arrow_down:
...ion/mapreduce/orc/OrcKeyCompactorOutputFormat.java
...ement/copy/TimeAwareCopyableGlobDatasetFinder.java
...bblin/metrics/event/EntityMissingEventBuilder.java
...n/runtime/job_exec/JobLauncherExecutionDriver.java
...n/yarn/event/GetApplicationReportFailureEvent.java
...t/version/FileStatusTimestampedDatasetVersion.java
...rg/apache/gobblin/source/jdbc/OracleExtractor.java
...in/instrumented/fork/InstrumentedForkOperator.java
... and 1710 more
|
gharchive/pull-request
| 2022-11-29T07:43:10 |
2025-04-01T04:55:57.988165
|
{
"authors": [
"codecov-commenter",
"srramach"
],
"repo": "apache/gobblin",
"url": "https://github.com/apache/gobblin/pull/3611",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2444088815
|
feat: Add @Deprecated annotations and explanations across GravitinoClientBase
I added @Deprecated annotations to relevant methods in the GravitinoClientBase class to improve code clarity and guide developers towards updated methods.
- Deprecated the getVersion() method in favor of serverVersion().
- Provided explanations within the annotations to clarify why the deprecation was applied and what should be used instead.
This change ensures that outdated methods are clearly marked and helps maintain codebase consistency. Closes #3084
As part of an effort to keep the Gravitino Java client codebase clean and up-to-date, deprecated annotations were added to certain methods in the GravitinoClientBase class. The getVersion() method was marked as deprecated since it duplicates the functionality provided by the serverVersion() method. An inline comment was provided to guide developers towards using the recommended method.
This is my first contribution to the project, and I look forward to contributing further. Please review the changes and provide feedback if needed.
What changes were proposed in this pull request?
This pull request adds @Deprecated annotations to specific methods within the GravitinoClientBase class. The getVersion() method was deprecated in favor of serverVersion() to reduce redundancy and steer developers towards the preferred method. Each deprecated method includes a detailed comment explaining why the deprecation was applied and suggesting the appropriate alternative.
Why are the changes needed?
These changes are necessary to maintain a clean and up-to-date codebase. By marking certain methods as deprecated, we help developers avoid using outdated or redundant methods and encourage best practices within the Gravitino Java client. Specifically:
The getVersion() method was deprecated as it duplicates the functionality of serverVersion().
Developers are now guided to use the recommended methods via inline comments.
Fix: #3084
Does this PR introduce any user-facing change?
There are no direct user-facing changes. However, developers using the GravitinoClientBase class will now see deprecated warnings for certain methods, guiding them towards the recommended alternatives.
How was this patch tested?
The patch was tested by running the existing unit tests to ensure that the addition of @Deprecated annotations did not introduce any regressions or break existing functionality. No new tests were required, as this change is purely annotative and does not alter the underlying logic.
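The pattern this PR applies can be sketched as follows (method names come from the PR text; the class body and return values are placeholders, not Gravitino's actual implementation):

```java
// Hypothetical sketch of the deprecation pattern described above: the
// deprecated method delegates to its replacement and the Javadoc tag
// tells callers what to use instead.
public class GravitinoClientBase {
    /**
     * @deprecated duplicates the functionality of {@link #serverVersion()};
     *     use that method instead.
     */
    @Deprecated
    public String getVersion() {
        return serverVersion();
    }

    public String serverVersion() {
        return "unknown"; // placeholder; the real client queries the server
    }
}
```

Delegating the deprecated method to its replacement keeps the two in lockstep until the old method is removed.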
@shaofengshi can you please help to review.
|
gharchive/pull-request
| 2024-08-02T05:32:05 |
2025-04-01T04:55:57.993281
|
{
"authors": [
"jerryshao",
"mrkartik00"
],
"repo": "apache/gravitino",
"url": "https://github.com/apache/gravitino/pull/4338",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1306853466
|
HDFS-16664. Use correct GenerationStamp when invalidating corrupt block replicas
Description of PR
Under certain conditions the Namenode can send the incorrect generationStamp to a datanode when invalidating a corrupt block replica.
the generationStamp sent in the DNA_INVALIDATE is based on the generationStamp of the block sent in the block report
the problem is that the datanode with the corrupt block replica (that receives the DNA_INVALIDATE) is not necessarily the same datanode that sent the block report
this can cause the above exception when the corrupt block replica on the datanode receiving the DNA_INVALIDATE & the block replica on the datanode that sent the block report have different generationStamps
Results in the following datanode exception:
2022-07-16 08:07:52,041 [BP-958471676-X-1657973243350 heartbeating to localhost/127.0.0.1:61365] WARN datanode.DataNode (BPServiceActor.java:processCommand(887)) - Error processing datanode Command
java.io.IOException: Failed to delete 1 (out of 1) replica(s):
0) Failed to delete replica blk_1073741825_1005: GenerationStamp not matched, existing replica is blk_1073741825_1001
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:2139)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:2034)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:735)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:680)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:883)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:678)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:849)
at java.lang.Thread.run(Thread.java:750)
See JIRA for additional details: https://issues.apache.org/jira/browse/HDFS-16664
How was this patch tested?
Validated the fix by leveraging the unit test "TestDecommission#testDeleteCorruptReplicaForUnderReplicatedBlock"
Failed Test - Before this change
> mvn test -Dtest=TestDecommission#testDeleteCorruptReplicaForUnderReplicatedBlock
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestDecommission.testDeleteCorruptReplicaForUnderReplicatedBlock:2035 Node 127.0.0.1:61366 failed to complete decommissioning. numTrackedNodes=1 , numPendingNodes=0 , adminState=Decommission In Progress , nodesWithReplica=[127.0.0.1:61366, 127.0.0.1:61419]
> cat target/surefire-reports/org.apache.hadoop.hdfs.TestDecommission-output.txt | grep 'Expected Replicas:\|XXX\|FINALIZED\|Block now\|Failed to delete'
2022-07-16 08:07:45,891 [Listener at localhost/61378] INFO hdfs.TestDecommission (TestDecommission.java:testDeleteCorruptReplicaForUnderReplicatedBlock(1942)) - Block now has 2 corrupt replicas on [127.0.0.1:61370 , 127.0.0.1:61375] and 1 live replica on 127.0.0.1:61366
2022-07-16 08:07:45,913 [Listener at localhost/61378] INFO hdfs.TestDecommission (TestDecommission.java:testDeleteCorruptReplicaForUnderReplicatedBlock(1974)) - Block now has 2 corrupt replicas on [127.0.0.1:61370 , 127.0.0.1:61375] and 1 decommissioning replica on 127.0.0.1:61366
XXX invalidateBlock dn=127.0.0.1:61415 , blk=1073741825_1001
XXX postponeBlock dn=127.0.0.1:61415 , blk=1073741825_1001
XXX invalidateBlock dn=127.0.0.1:61419 , blk=1073741825_1003
XXX addToInvalidates dn=127.0.0.1:61419 , blk=1073741825_1003
XXX addBlocksToBeInvalidated dn=127.0.0.1:61419 , blk=1073741825_1003
XXX rescanPostponedMisreplicatedBlocks blk=1073741825_1005
XXX DNA_INVALIDATE dn=/127.0.0.1:61419 , blk=1073741825_1003
XXX invalidate(on DN) dn=/127.0.0.1:61419 , invalidBlk=blk_1073741825_1003 , blkByIdAndGenStamp = FinalizedReplica, blk_1073741825_1003, FINALIZED
2022-07-16 08:07:49,084 [BP-958471676-X-1657973243350 heartbeating to localhost/127.0.0.1:61365] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:deleteAsync(226)) - Scheduling blk_1073741825_1003 replica FinalizedReplica, blk_1073741825_1003, FINALIZED
XXX addBlock dn=127.0.0.1:61419 , blk=1073741825_1005 <<< block report is coming from 127.0.0.1:61419 which has genStamp=1005
XXX invalidateCorruptReplicas dn=127.0.0.1:61415 , reported_blk=1073741825_1005 <<< corrupt replica is on 127.0.0.1:61415 which is expecting genStamp=1001
XXX addToInvalidates dn=127.0.0.1:61415 , blk=1073741825_1005
2022-07-16 08:07:49,431 [DatanodeAdminMonitor-0] INFO BlockStateChange (DatanodeAdminManager.java:logBlockReplicationInfo(417)) - Block: blk_1073741825_1005, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, decommissioned replicas: 0, decommissioning replicas: 1, maintenance replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes having this block: 127.0.0.1:61366 127.0.0.1:61419 , Current Datanode: 127.0.0.1:61366, Is current datanode decommissioning: true, Is current datanode entering maintenance: false
XXX addBlocksToBeInvalidated dn=127.0.0.1:61415 , blk=1073741825_1005 <<< Namenode sends wrong genStamp to 127.0.0.1:61415
XXX DNA_INVALIDATE dn=/127.0.0.1:61415 , blk=1073741825_1005
XXX invalidate(on DN) dn=/127.0.0.1:61415 , invalidBlk=blk_1073741825_1005 , blkByIdAndGenStamp = null
XXX invalidate(on DN) dn=/127.0.0.1:61415 , invalidBlk=blk_1073741825_1005 , blkById = FinalizedReplica, blk_1073741825_1001, FINALIZED
2022-07-16 08:07:52,041 [BP-958471676-X-1657973243350 heartbeating to localhost/127.0.0.1:61365] WARN datanode.DataNode (BPServiceActor.java:processCommand(887)) - Error processing datanode Command
java.io.IOException: Failed to delete 1 (out of 1) replica(s):
0) Failed to delete replica blk_1073741825_1005: GenerationStamp not matched, existing replica is blk_1073741825_1001
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:2139)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:2034)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:735)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:680)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:883)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:678)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:849)
at java.lang.Thread.run(Thread.java:750)
2022-07-16 08:07:52,384 [DataXceiver for client at /127.0.0.1:61434 [Receiving block BP-958471676-X-1657973243350:blk_1073741825_1005]] INFO datanode.DataNode (DataXceiver.java:writeBlock(939)) - opWriteBlock BP-958471676-X-1657973243350:blk_1073741825_1005 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block BP-958471676-X-1657973243350:blk_1073741825_1005 already exists in state FINALIZED and thus cannot be created.
2022-07-16 08:07:52,385 [DataXceiver for client at /127.0.0.1:61434 [Receiving block BP-958471676-X-1657973243350:blk_1073741825_1005]] INFO datanode.DataNode (DataXceiver.java:run(307)) - 127.0.0.1:61415:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:61434 dst: /127.0.0.1:61415; org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block BP-958471676-X-1657973243350:blk_1073741825_1005 already exists in state FINALIZED and thus cannot be created.
2022-07-16 08:07:54,422 [DatanodeAdminMonitor-0] INFO BlockStateChange (DatanodeAdminManager.java:logBlockReplicationInfo(417)) - Block: blk_1073741825_1005, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, decommissioned replicas: 0, decommissioning replicas: 1, maintenance replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes having this block: 127.0.0.1:61366 127.0.0.1:61419 , Current Datanode: 127.0.0.1:61366, Is current datanode decommissioning: true, Is current datanode entering maintenance: false
...
2022-07-16 08:08:24,426 [DatanodeAdminMonitor-0] INFO BlockStateChange (DatanodeAdminManager.java:logBlockReplicationInfo(417)) - Block: blk_1073741825_1005, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, decommissioned replicas: 0, decommissioning replicas: 1, maintenance replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes having this block: 127.0.0.1:61366 127.0.0.1:61419 , Current Datanode: 127.0.0.1:61366, Is current datanode decommissioning: true, Is current datanode entering maintenance: false
Note the inline comments above which illustrate the bug
Successful Test - After this change
> mvn test -Dtest=TestDecommission#testDeleteCorruptReplicaForUnderReplicatedBlock
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
Logs:
> cat target/surefire-reports/org.apache.hadoop.hdfs.TestDecommission-output.txt | grep 'Expected Replicas:\|XXX\|FINALIZED\|Block now\|Failed to delete'
2022-07-16 07:54:30,648 [Listener at localhost/60376] INFO hdfs.TestDecommission (TestDecommission.java:testDeleteCorruptReplicaForUnderReplicatedBlock(1942)) - Block now has 2 corrupt replicas on [127.0.0.1:60364 , 127.0.0.1:60368] and 1 live replica on 127.0.0.1:60373
2022-07-16 07:54:30,669 [Listener at localhost/60376] INFO hdfs.TestDecommission (TestDecommission.java:testDeleteCorruptReplicaForUnderReplicatedBlock(1974)) - Block now has 2 corrupt replicas on [127.0.0.1:60364 , 127.0.0.1:60368] and 1 decommissioning replica on 127.0.0.1:60373
XXX invalidateBlock dn=127.0.0.1:60423 , blk=1073741825_1001
XXX postponeBlock dn=127.0.0.1:60423 , blk=1073741825_1001
XXX invalidateBlock dn=127.0.0.1:60427 , blk=1073741825_1003
XXX addToInvalidates dn=127.0.0.1:60427 , blk=1073741825_1003
XXX addBlocksToBeInvalidated dn=127.0.0.1:60427 , blk=1073741825_1003
XXX rescanPostponedMisreplicatedBlocks blk=1073741825_1005
XXX DNA_INVALIDATE dn=/127.0.0.1:60427 , blk=1073741825_1003
XXX invalidate(on DN) dn=/127.0.0.1:60427 , invalidBlk=blk_1073741825_1003 , blkByIdAndGenStamp = FinalizedReplica, blk_1073741825_1003, FINALIZED
2022-07-16 07:54:32,831 [BP-1469857843-X-1657972447604 heartbeating to localhost/127.0.0.1:60363] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:deleteAsync(226)) - Scheduling blk_1073741825_1003 replica FinalizedReplica, blk_1073741825_1003, FINALIZED
2022-07-16 07:54:33,772 [DatanodeAdminMonitor-0] INFO BlockStateChange (DatanodeAdminManager.java:logBlockReplicationInfo(417)) - Block: blk_1073741825_1005, Expected Replicas: 2, live replicas: 0, corrupt replicas: 1, decommissioned replicas: 0, decommissioning replicas: 1, maintenance replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes having this block: 127.0.0.1:60373 127.0.0.1:60423 , Current Datanode: 127.0.0.1:60373, Is current datanode decommissioning: true, Is current datanode entering maintenance: false
XXX addBlock dn=127.0.0.1:60427 , blk=1073741825_1005
XXX invalidateCorruptReplicas dn=127.0.0.1:60423 , reported_blk=1073741825_1005
XXX getCorruptReplicaGenerationStamp dn=127.0.0.1:60423 , genStamp=1001
XXX addToInvalidates dn=127.0.0.1:60423 , blk=1073741825_1001
XXX addBlocksToBeInvalidated dn=127.0.0.1:60423 , blk=1073741825_1001
XXX DNA_INVALIDATE dn=/127.0.0.1:60423 , blk=1073741825_1001
XXX invalidate(on DN) dn=/127.0.0.1:60423 , invalidBlk=blk_1073741825_1001 , blkByIdAndGenStamp = FinalizedReplica, blk_1073741825_1001, FINALIZED
2022-07-16 07:54:35,796 [BP-1469857843-X-1657972447604 heartbeating to localhost/127.0.0.1:60363] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:deleteAsync(226)) - Scheduling blk_1073741825_1001 replica FinalizedReplica, blk_1073741825_1001, FINALIZED
XXX addBlock dn=127.0.0.1:60423 , blk=1073741825_1005
2022-07-16 07:54:40,768 [Listener at localhost/60430] INFO hdfs.TestDecommission (TestDecommission.java:testDeleteCorruptReplicaForUnderReplicatedBlock(2050)) - Block now has 2 live replicas on [127.0.0.1:60423 , 127.0.0.1:60427] and 1 decommissioned replica on 127.0.0.1:60373
Using "getCorruptReplicaGenerationStamp" allows the Namenode to get the correct generationStamp for the corrupt block replica
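The idea behind the fix can be sketched as a per-datanode lookup (class and method names here are illustrative, not copied from the Hadoop patch): track the generation stamp of each corrupt replica per datanode, so the invalidate command sent to a datanode uses that datanode's own stamp rather than the reporting datanode's.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (hypothetical names) of tracking corrupt-replica
// generation stamps per datanode, so DNA_INVALIDATE can carry the stamp
// of the replica actually stored on the target datanode.
public class CorruptReplicaTracker {
    // blockId -> (datanode -> generation stamp of its corrupt replica)
    private final Map<Long, Map<String, Long>> corrupt = new HashMap<>();

    public void addCorruptReplica(long blockId, String datanode, long genStamp) {
        corrupt.computeIfAbsent(blockId, k -> new HashMap<>()).put(datanode, genStamp);
    }

    /** Returns the stamp recorded for this datanode's replica, or -1 if unknown. */
    public long getCorruptReplicaGenerationStamp(long blockId, String datanode) {
        return corrupt.getOrDefault(blockId, Map.of()).getOrDefault(datanode, -1L);
    }
}
```

In the failing log above, the invalidate for 127.0.0.1:61415 carried genStamp 1005 from the reporting node; with a per-datanode lookup it would carry the 1001 stamp recorded for that node's replica.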
For code changes:
[X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
[n/a] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
[n/a] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[n/a] If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?
:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 40m 30s | | trunk passed |
| +1 :green_heart: | compile | 1m 41s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 1m 32s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 19s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 42s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 20s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 45s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 47s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 13s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 25s | | the patch passed |
| +1 :green_heart: | compile | 1m 28s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 1m 28s | | the patch passed |
| +1 :green_heart: | compile | 1m 20s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 20s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 1s | /results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 88 unchanged - 1 fixed = 91 total (was 89) |
| +1 :green_heart: | mvnsite | 1m 26s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 33s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| -1 :x: | spotbugs | 3m 35s | /new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html | hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 26m 9s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 372m 9s | /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. |
| | | 490m 23s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Should org.apache.hadoop.hdfs.server.blockmanagement.CorruptReplicasMap$CorruptBlockReplica be a static inner class? At CorruptReplicasMap.java:inner class? At CorruptReplicasMap.java:[lines 61-64] |
| Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
| | hadoop.hdfs.server.namenode.ha.TestObserverNode |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4568 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 653bab42d979 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / e66897ec0f5866ba5f378834e7aec96555b1c286 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/1/testReport/ |
| Max. process+thread count | 2700 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
```
[ERROR] Tests run: 6803, Failures: 0, Errors: 2, Skipped: 24, Flakes: 8
[INFO]
[ERROR] There are test failures
[ERROR] Errors:
[ERROR] org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckCorruptWhenOneReplicaIsCorrupt(org.apache.hadoop.hdfs.server.namenode.TestFsck)
[ERROR]   Run 1: TestFsck.testFsckCorruptWhenOneReplicaIsCorrupt » Remote java.lang.NullPointer...
[ERROR]   Run 2: TestFsck.testFsckCorruptWhenOneReplicaIsCorrupt » Remote java.lang.NullPointer...
[ERROR]   Run 3: TestFsck.testFsckCorruptWhenOneReplicaIsCorrupt » Remote java.lang.NullPointer...
[INFO]
[ERROR] org.apache.hadoop.hdfs.server.namenode.ha.TestObserverNode.testMkdirsRaceWithObserverRead(org.apache.hadoop.hdfs.server.namenode.ha.TestObserverNode)
[ERROR]   Run 1: TestObserverNode.testMkdirsRaceWithObserverRead:557 Client #3 lastSeenStateId=-9223372036854775808 activStateId=37
null
[ERROR]   Run 2: TestObserverNode.testMkdirsRaceWithObserverRead:511 » Connect Call From 653bab...
[ERROR]   Run 3: TestObserverNode.cleanUp:111 » Connect Call From 653bab42d979/172.17.0.2 to lo...
[ERROR]   Run 4: TestObserverNode.testMkdirsRaceWithObserverRead:511 » Connect Call From 653bab...
[ERROR]   Run 5: TestObserverNode.cleanUp:111 » Connect Call From 653bab42d979/172.17.0.2 to lo...
```
TestObserverNode#testMkdirsRaceWithObserverRead is a flaky test: https://issues.apache.org/jira/browse/HDFS-15646
The other test failure, TestFsck#testFsckCorruptWhenOneReplicaIsCorrupt, is relevant:
```
[ERROR] Tests run: 35, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 281.876 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestFsck
[ERROR] testFsckCorruptWhenOneReplicaIsCorrupt(org.apache.hadoop.hdfs.server.namenode.TestFsck)  Time elapsed: 7.446 s  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.blockmanagement.CorruptReplicasMap.removeFromCorruptReplicasMap(CorruptReplicasMap.java:145)
    at org.apache.hadoop.hdfs.server.blockmanagement.CorruptReplicasMap.removeFromCorruptReplicasMap(CorruptReplicasMap.java:134)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeStoredBlock(BlockManager.java:4255)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeStaleReplicas(BlockManager.java:4262)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.updateLastBlock(BlockManager.java:4772)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6004)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:5966)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:1009)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:1195)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:620)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:588)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1192)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1193)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1116)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1919)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3131)
```
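The trace shows `removeFromCorruptReplicasMap` dereferencing state for a block that is no longer tracked as corrupt. A minimal sketch of the defensive pattern such a fix would use (hypothetical names and structure, not the actual patch):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the null-guard pattern: check that the block is
// still tracked before dereferencing its per-datanode replica map, so a
// removal for an untracked block becomes a no-op instead of an NPE.
class SafeCorruptReplicaRemoval {
    private final Map<Long, Map<String, Long>> corruptReplicas = new HashMap<>();

    void addCorruptReplica(long blockId, String datanode, long genStamp) {
        corruptReplicas.computeIfAbsent(blockId, k -> new HashMap<>())
                       .put(datanode, genStamp);
    }

    boolean removeFromCorruptReplicasMap(long blockId, String datanode) {
        Map<String, Long> replicas = corruptReplicas.get(blockId);
        if (replicas == null) {
            return false; // block not tracked as corrupt: nothing to remove
        }
        boolean removed = replicas.remove(datanode) != null;
        if (replicas.isEmpty()) {
            corruptReplicas.remove(blockId); // drop the now-empty inner map
        }
        return removed;
    }
}
```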
:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 56s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 40m 9s | | trunk passed |
| +1 :green_heart: | compile | 1m 40s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 1m 30s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 19s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 42s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 19s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 49s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 0s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 23s | | the patch passed |
| +1 :green_heart: | compile | 1m 28s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 1m 28s | | the patch passed |
| +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 19s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 1s | /results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 88 unchanged - 1 fixed = 91 total (was 89) |
| +1 :green_heart: | mvnsite | 1m 26s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 33s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 34s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 51s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 362m 11s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | 479m 0s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4568 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 233b4644866c 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 9884cf088d26412ee438b22dd977469f9377c63b |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/2/testReport/ |
| Max. process+thread count | 1862 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
Yetus is failing because no new unit test was added:

> The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
Note that this change was unit tested via the existing test `TestDecommission#testDeleteCorruptReplicaForUnderReplicatedBlock`; the only caveat is that this unit test behaves differently when backporting HDFS-16064 to older Hadoop versions, which is what caused this bug to be discovered in the first place.

See the section "Why does unit test failure not reproduce in Hadoop trunk?" in HDFS-16664 for additional details on the difference in behavior. In summary:

- on trunk the corrupt block replicas are invalidated immediately when the first block report is sent
- on branch-3.2.1 the corrupt block replica invalidation is getting postponed for some reason

I need additional time to investigate this difference in behavior; if possible I will try to make the unit test behavior more consistent. I can also add some unit testing to the new code in CorruptReplicasMap.
I have confirmed that the change in unit test behavior that uncovered this bug is related to a change in this condition, made as part of https://issues.apache.org/jira/browse/HDFS-15200. In trunk the block invalidation does not get postponed because `dfs.namenode.corrupt.block.delete.immediately.enabled` defaults to true.
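For reference, the postponed-invalidation behavior can be reproduced by flipping that flag (illustrative hdfs-site.xml fragment; on trunk the property defaults to true):

```xml
<property>
  <!-- HDFS-15200: when true (trunk default), corrupt block replicas are
       invalidated immediately; false restores the older postponing behavior. -->
  <name>dfs.namenode.corrupt.block.delete.immediately.enabled</name>
  <value>false</value>
</property>
```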
I have added some additional test coverage for this change
"testDeleteCorruptReplicaForUnderReplicatedBlock" test without postponing (trunk behavior):

```shell
mvn test -Dtest=org.apache.hadoop.hdfs.TestDecommission#testDeleteCorruptReplicaForUnderReplicatedBlock
cat target/surefire-reports/org.apache.hadoop.hdfs.TestDecommission-output.txt | grep XXX
```

```
XXX invalidateBlock? dn=127.0.0.1:63741 , blk=1073741825_1001 , replicasOnStaleNodes=1
XXX addToInvalidates
XXX invalidateBlock? dn=127.0.0.1:63745 , blk=1073741825_1003 , replicasOnStaleNodes=0
XXX addToInvalidates
```

"testDeleteCorruptReplicaForUnderReplicatedBlock" test with postponing (branch-3.2.1 behavior):

```shell
mvn test -Dtest=org.apache.hadoop.hdfs.TestDecommission#testDeleteCorruptReplicaForUnderReplicatedBlockWithInvalidationPostponed
cat target/surefire-reports/org.apache.hadoop.hdfs.TestDecommission-output.txt | grep XXX
```

```
XXX invalidateBlock? dn=127.0.0.1:63927 , blk=1073741825_1001 , replicasOnStaleNodes=1
XXX postponeBlock
XXX invalidateBlock? dn=127.0.0.1:63931 , blk=1073741825_1003 , replicasOnStaleNodes=0
XXX addToInvalidates
XXX invalidateBlock? dn=127.0.0.1:63927 , blk=1073741825_1001 , replicasOnStaleNodes=0
XXX addToInvalidates
```

CorruptReplicasMap test:

```shell
mvn test -Dtest=org.apache.hadoop.hdfs.server.blockmanagement.TestCorruptReplicaInfo#testGetCorruptReplicaGenerationStamp
```
:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 19m 47s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 40m 59s | | trunk passed |
| +1 :green_heart: | compile | 1m 40s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 1m 32s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 20s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 44s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 21s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 42s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 52s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 23s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 24s | | the patch passed |
| +1 :green_heart: | compile | 1m 28s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 1m 28s | | the patch passed |
| +1 :green_heart: | compile | 1m 22s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 22s | | the patch passed |
| -1 :x: | blanks | 0m 0s | /blanks-eol.txt | The patch has 5 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 1m 1s | /results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 109 unchanged - 2 fixed = 114 total (was 111) |
| +1 :green_heart: | mvnsite | 1m 27s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 34s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 34s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 40s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 383m 49s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. |
| | | 522m 22s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4568 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 2219b5d96d1c 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 637b3b23cff287cb67e688661a1354dd0579eb35 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/3/testReport/ |
| Max. process+thread count | 2056 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 44s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 38m 28s | | trunk passed |
| +1 :green_heart: | compile | 1m 44s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 1m 38s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 27s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 49s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 25s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 48s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 55s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 26s | | the patch passed |
| +1 :green_heart: | compile | 1m 27s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 1m 27s | | the patch passed |
| +1 :green_heart: | compile | 1m 22s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 22s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 1s | /results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 108 unchanged - 2 fixed = 110 total (was 110) |
| +1 :green_heart: | mvnsite | 1m 28s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 23s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 16s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 252m 58s | /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 21s | | The patch does not generate ASF License warnings. |
| | | 366m 13s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestDFSStorageStateRecovery |
| | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4568 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 3c44b7f1763e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 60433bffc3a7fbb2153a78782d257534f8c7e34f |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/4/testReport/ |
| Max. process+thread count | 3121 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
I am ignoring the 2 checkstyle violations for the following reasons:

1. `./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java:285: /**: First sentence should end with a period. [JavadocStyle]`
   - This is an existing comment: https://github.com/apache/hadoop/blob/60433bffc3a7fbb2153a78782d257534f8c7e34f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java#L285
   - Since I am not modifying this comment (i.e. these lines of code) in any way, I think it's better that I don't touch it.
2. `./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java:1932: public void testDeleteCorruptReplicaForUnderReplicatedBlockInternal() throws Exception {:3: Method length is 233 lines (max allowed is 150). [MethodLength]`
   - `testDeleteCorruptReplicaForUnderReplicatedBlockInternal` is the existing method `testDeleteCorruptReplicaForUnderReplicatedBlock`, renamed. Since the method was already merged I don't think it's necessary to reduce the number of lines in it.
Unit test failures:

```
[ERROR] Errors:
[ERROR] org.apache.hadoop.hdfs.TestDFSStorageStateRecovery.testDNStorageStates(org.apache.hadoop.hdfs.TestDFSStorageStateRecovery)
[ERROR]   Run 1: TestDFSStorageStateRecovery.testDNStorageStates:399 » OutOfMemory unable to cr...
[ERROR]   Run 2: TestDFSStorageStateRecovery.setUp:449 » OutOfMemory unable to create new nativ...
[ERROR]   Run 3: TestDFSStorageStateRecovery.setUp:449 » OutOfMemory unable to create new nativ...
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testCountNodes(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testDecommission2NodeWithBusyNode(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testDecommissionTwoNodes(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testDecommissionWithBusyNode(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testDecommissionWithFailedReplicating(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:127->TestDecommissionWithStriped.writeConfigFile:594 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testDecommissionWithMissingBlock(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testDecommissionWithURBlockForSameBlockGroup(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testFileChecksumAfterDecommission(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testFileFullBlockGroup(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.teardown:169 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testFileMultipleBlockGroups(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testFileSmallerThanOneCell(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:127->TestDecommissionWithStriped.writeConfigFile:594 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testFileSmallerThanOneStripe(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.teardown:169 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor.testRecoveryWithDecommission(org.apache.hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor)
[ERROR]   Run 1: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 2: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[ERROR]   Run 3: TestDecommissionWithStripedBackoffMonitor>TestDecommissionWithStriped.setup:151 » OutOfMemory
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestReconstructStripedFile.testErasureCodingWorkerXmitsWeight(org.apache.hadoop.hdfs.TestReconstructStripedFile)
[ERROR]   Run 1: TestReconstructStripedFile.testErasureCodingWorkerXmitsWeight:549->testErasureCodingWorkerXmitsWeight:568->writeFile:318 » OutOfMemory
[ERROR]   Run 2: TestReconstructStripedFile.tearDown:171 » OutOfMemory unable to create new nat...
[ERROR]   Run 3: TestReconstructStripedFile.testErasureCodingWorkerXmitsWeight:545->testErasureCodingWorkerXmitsWeight:559 » OutOfMemory
[ERROR]   Run 4: TestReconstructStripedFile.tearDown:171 » OutOfMemory unable to create new nat...
[ERROR]   Run 5: TestReconstructStripedFile.setup:155 » OutOfMemory unable to create new native...
[INFO]
[ERROR] org.apache.hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy.testRecoverOneDataBlock2(org.apache.hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy)
[ERROR]   Run 1: TestReconstructStripedFileWithRandomECPolicy>TestReconstructStripedFile.setup:155 » OutOfMemory
[ERROR]   Run 2: TestReconstructStripedFileWithRandomECPolicy>TestReconstructStripedFile.setup:155 » OutOfMemory
[ERROR]   Run 3: TestReconstructStripedFileWithRandomECPolicy>TestReconstructStripedFile.setup:155 » OutOfMemory
```
All the unit test failures are caused by `java.lang.OutOfMemoryError: unable to create new native thread`. I strongly suspect this is not related to my change and is instead related to the runtime environment of the tests; the unit tests will likely pass if re-run.
:confetti_ball: +1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 44s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 38m 46s | | trunk passed |
| +1 :green_heart: | compile | 1m 43s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 1m 38s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 20s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 47s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 46s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 42s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 18s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 24s | | the patch passed |
| +1 :green_heart: | compile | 1m 26s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 1m 26s | | the patch passed |
| +1 :green_heart: | compile | 1m 20s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 20s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 1s | /results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 108 unchanged - 2 fixed = 110 total (was 110) |
| +1 :green_heart: | mvnsite | 1m 28s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 23s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 43s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 243m 41s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 14s | | The patch does not generate ASF License warnings. |
| | | 355m 29s | | |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4568 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux dec11872578f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 99d99144e2bae0344e3ab0484a1e5609f051acea |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/5/testReport/ |
| Max. process+thread count | 3606 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4568/5/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
Hey Xiaohui, appreciate you taking the time to review this :)
Just wanted to clarify a few things:
> This is a completely unnecessary change. It was mentioned in your last JIRA [HDFS-16064](https://issues.apache.org/jira/browse/HDFS-16064) by [yanbin.zhang](https://issues.apache.org/jira/secure/ViewProfile.jspa?name=it_singer) that you found the wrong root cause in that JIRA, and you didn't coordinate with or reply to them?
I did reply to yanbin.zhang in the JIRA; you can see my comment just under theirs. I would also add:
a) the root cause is correct as per the testing that was conducted
b) I have seen this issue (i.e. HDFS-16064) impact production Hadoop clusters
> This change seems unnecessary. If it were a real production bug, someone in the community would have experienced it. It is not as though you were reading the code one day and identified a bug like this. Block management is very critical, and unnecessary changes always lead to more critical bugs...
I understand that this issue was not clearly reported by the community beforehand. That being said, the behavior is reproducible (as per the testing details in this PR/JIRA), and it causes block invalidation requests (i.e. DNA_INVALIDATE) sent to datanodes to fail due to an incorrect GenerationStamp. I would argue that sending an incorrect GenerationStamp in a DNA_INVALIDATE request is a bug, because it causes the request to fail when it could have succeeded had the correct GenerationStamp been used.
I would ask that you articulate more clearly why this change is unnecessary. Do you believe that sending an incorrect GenerationStamp in a DNA_INVALIDATE request, causing the request to fail, is not a bug? Or do you believe that this reproducible behavior should not be addressed simply because it has not been frequently reported as production-impacting by the community?
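To make the failure mode concrete, here is a simplified model of the datanode-side check behind "GenerationStamp not matched". The class and method names are hypothetical, not the actual `FsDatasetImpl` API; the point is only that a delete request succeeds when its stamp equals that of the replica actually on disk.

```java
/** Simplified model (hypothetical names) of the datanode-side invalidation check. */
public class GenStampCheck {
    public static final class Replica {
        public final long blockId;
        public final long genStamp;
        public Replica(long blockId, long genStamp) {
            this.blockId = blockId;
            this.genStamp = genStamp;
        }
    }

    /** Mirrors the rule behind "GenerationStamp not matched": the stamp in the
     *  DNA_INVALIDATE request must equal the stamp of the stored replica. */
    public static boolean canInvalidate(Replica stored, long requestedGenStamp) {
        return stored.genStamp == requestedGenStamp;
    }

    public static void main(String[] args) {
        // DN1/DN2 still hold the stale pre-append replica with stamp 1001.
        Replica staleReplica = new Replica(1073741825L, 1001L);
        System.out.println(canInvalidate(staleReplica, 1004L)); // NN sends current stamp -> false, delete fails
        System.out.println(canInvalidate(staleReplica, 1001L)); // stamp of the stored replica -> true, delete succeeds
    }
}
```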
@XiaohuiSun1
Do you have any follow-up comments on this PR? Just wanted to give you a chance to reply before I reach out to a Hadoop committer for review.
Concrete Reproduce Steps
Create a Hadoop cluster with:
single Namenode (i.e. non-HA)
5 datanodes (DN1, DN2, DN3, DN4, DN5)
dfs.namenode.corrupt.block.delete.immediately.enabled = false
dfs.replication = 3
Create the block with 3 replicas
echo "hello" > /tmp/test;
export HADOOP_USER_NAME=hdfs;
hdfs dfs -put /tmp/test /tmp/test;
hdfs dfs -ls /tmp/test;
Determine the block locations of the 3 replicas
> hdfs fsck /tmp/test -files -blocks -locations;
...
0. BP-452161995-NN-1662558403599:blk_1073741825_1001 len=6 Live_repl=3 [DatanodeInfoWithStorage[DN1:9866,DS-XXX,DISK], DatanodeInfoWithStorage[DN2:9866,DS-XXX,DISK], DatanodeInfoWithStorage[DN3:9866,DS-XXX,DISK]]
...
Stop DN1 & DN2
sudo systemctl disable hadoop-hdfs-datanode.service;
sudo systemctl stop hadoop-hdfs-datanode.service;
Append the block which will cause it to be written to 2 new block locations
> hdfs dfs -appendToFile /tmp/test /tmp/test;
2022-09-07 13:49:58,779 INFO hdfs.DataStreamer: Exception in createBlockOutputStream blk_1073741825_1001
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
at org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(DataStreamer.java:253)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1725)
at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1507)
at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:720)
2022-09-07 13:49:58,783 WARN hdfs.DataStreamer: Error Recovery for BP-452161995-NN-1662558403599:blk_1073741825_1001 in pipeline [DatanodeInfoWithStorage[DN1:9866,DS-XXX,DISK], DatanodeInfoWithStorage[DN2:9866,DS-XXX,DISK], DatanodeInfoWithStorage[DN3:9866,DS-XXX,DISK]]: datanode 0(DatanodeInfoWithStorage[DN1:9866,DS-XXX,DISK]) is bad.
2022-09-07 13:49:58,808 WARN hdfs.DFSClient: Error transferring data from DatanodeInfoWithStorage[DN2:9866,DS-XXX,DISK] to DatanodeInfoWithStorage[DN4:9866,DS-XXX,DISK]: Connection refused
2022-09-07 13:49:58,996 INFO hdfs.DataStreamer: Exception in createBlockOutputStream blk_1073741825_1001
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
at org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(DataStreamer.java:253)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1725)
at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1507)
at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:720)
2022-09-07 13:49:58,996 WARN hdfs.DataStreamer: Error Recovery for BP-452161995-NN-1662558403599:blk_1073741825_1001 in pipeline [DatanodeInfoWithStorage[DN2:9866,DS-XXX,DISK], DatanodeInfoWithStorage[DN3:9866,DS-XXX,DISK], DatanodeInfoWithStorage[DN5:9866,DS-XXX,DISK]]: datanode 0(DatanodeInfoWithStorage[DN2:9866,DS-XXX,DISK]) is bad.
> hdfs dfs -cat /tmp/test;
hello
hello
Determine the new block locations of the 3 replicas
> hdfs fsck /tmp/test -files -blocks -locations;
...
0. BP-452161995-NN-1662558403599:blk_1073741825_1004 len=12 Live_repl=3 [DatanodeInfoWithStorage[DN3:9866,DS-XXX,DISK], DatanodeInfoWithStorage[DN4:9866,DS-XXX,DISK], DatanodeInfoWithStorage[DN5:9866,DS-XXX,DISK]]
...
Restart the Namenode so that the block replicas are marked as "stale"
enable Namenode BlockManager DEBUG logging by setting Log4J configuration "log4j.logger.BlockStateChange=DEBUG"
sudo systemctl restart hadoop-hdfs-namenode.service;
Restart DN1 & DN2
sudo systemctl start hadoop-hdfs-datanode.service;
Check the Namenode logs to confirm the block invalidation is postponed
2022-09-07 13:50:58,194 DEBUG BlockStateChange (Block report processor): BLOCK* invalidateBlocks: postponing invalidation of blk_1073741825_1001(stored=blk_1073741825_1004) on DN2:9866 because 1 replica(s) are located on nodes with potentially out-of-date block reports
2022-09-07 13:51:06,780 DEBUG BlockStateChange (Block report processor): BLOCK* invalidateBlocks: postponing invalidation of blk_1073741825_1001(stored=blk_1073741825_1004) on DN1:9866 because 1 replica(s) are located on nodes with potentially out-of-date block reports
Restart any of DN3, DN4, or DN5 so that they send a block report
note that the block must be sufficiently replicated for this code path to occur
sudo systemctl start hadoop-hdfs-datanode.service;
Check the Namenode logs to validate the invalidation requests were sent to DN1 & DN2
2022-09-07 13:52:07,729 INFO org.apache.hadoop.hdfs.StateChange (IPC Server handler 26 on default port 8020): BLOCK* registerDatanode: from DatanodeRegistration(DN4:9866, datanodeUuid=2792b414-8c97-4a36-bb3c-1bda67ea9f28, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-4227a61e-3071-4e90-a8bd-0ca0ded28a9f;nsid=1332362960;c=1662558403599) storage 2792b414-8c97-4a36-bb3c-1bda67ea9f28
2022-09-07 13:52:07,729 INFO org.apache.hadoop.net.NetworkTopology (IPC Server handler 26 on default port 8020): Removing a node: /default-rack/DN4:9866
2022-09-07 13:52:07,730 INFO org.apache.hadoop.net.NetworkTopology (IPC Server handler 26 on default port 8020): Adding a new node: /default-rack/DN4:9866
2022-09-07 13:52:07,792 DEBUG BlockStateChange (IPC Server handler 22 on default port 8020): *BLOCK* NameNode.blockReport: from DatanodeRegistration(DN4:9866, datanodeUuid=2792b414-8c97-4a36-bb3c-1bda67ea9f28, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-4227a61e-3071-4e90-a8bd-0ca0ded28a9f;nsid=1332362960;c=1662558403599), reports.length=2
2022-09-07 13:52:07,793 INFO BlockStateChange (Block report processor): BLOCK* processReport 0x5b965acbde378e45: Processing first storage report for DS-XXX from datanode 2792b414-8c97-4a36-bb3c-1bda67ea9f28
2022-09-07 13:52:07,793 DEBUG BlockStateChange (Block report processor): BLOCK* addStoredBlock: Redundant addStoredBlock request received for blk_1073741825_1004 on node DN4:9866 size 12
2022-09-07 13:52:07,793 DEBUG BlockStateChange (Block report processor): BLOCK* invalidateBlock: blk_1073741825_1004(stored=blk_1073741825_1004) on DN1:9866
2022-09-07 13:52:07,793 DEBUG BlockStateChange (Block report processor): BLOCK* InvalidateBlocks: add blk_1073741825_1004 to DN1:9866
2022-09-07 13:52:07,793 DEBUG BlockStateChange (Block report processor): BLOCK* removeStoredBlock: blk_1073741825_1004 from DN1:9866
2022-09-07 13:52:07,795 DEBUG BlockStateChange (Block report processor): BLOCK* invalidateBlocks: blk_1073741825_1004(stored=blk_1073741825_1004) on DN1:9866 listed for deletion.
2022-09-07 13:52:07,795 DEBUG BlockStateChange (Block report processor): BLOCK* invalidateBlock: blk_1073741825_1004(stored=blk_1073741825_1004) on DN2:9866
2022-09-07 13:52:07,795 DEBUG BlockStateChange (Block report processor): BLOCK* InvalidateBlocks: add blk_1073741825_1004 to DN2:9866
2022-09-07 13:52:07,795 DEBUG BlockStateChange (Block report processor): BLOCK* removeStoredBlock: blk_1073741825_1004 from DN2:9866
2022-09-07 13:52:07,795 DEBUG BlockStateChange (Block report processor): BLOCK* invalidateBlocks: blk_1073741825_1004(stored=blk_1073741825_1004) on DN2:9866 listed for deletion.
2022-09-07 13:52:07,795 INFO BlockStateChange (Block report processor): BLOCK* processReport 0x5b965acbde378e45: from storage DS-XXX node DatanodeRegistration(DN4:9866, datanodeUuid=2792b414-8c97-4a36-bb3c-1bda67ea9f28, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-4227a61e-3071-4e90-a8bd-0ca0ded28a9f;nsid=1332362960;c=1662558403599), blocks: 1, hasStaleStorage: false, processing time: 2 msecs, invalidatedBlocks: 0
2022-09-07 13:52:07,795 INFO BlockStateChange (Block report processor): BLOCK* processReport 0x5b965acbde378e45: Processing first storage report for DS-617e1346-8e62-40f0-a35a-5999c3fb2f64 from datanode 2792b414-8c97-4a36-bb3c-1bda67ea9f28
2022-09-07 13:52:07,795 INFO BlockStateChange (Block report processor): BLOCK* processReport 0x5b965acbde378e45: from storage DS-617e1346-8e62-40f0-a35a-5999c3fb2f64 node DatanodeRegistration(DN4:9866, datanodeUuid=2792b414-8c97-4a36-bb3c-1bda67ea9f28, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-4227a61e-3071-4e90-a8bd-0ca0ded28a9f;nsid=1332362960;c=1662558403599), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2022-09-07 13:52:09,128 DEBUG BlockStateChange (RedundancyMonitor): BLOCK* neededReconstruction = 0 pendingReconstruction = 0
2022-09-07 13:52:09,128 DEBUG BlockStateChange (RedundancyMonitor): BLOCK* BlockManager: ask DN2:9866 to delete [blk_1073741825_1004]
2022-09-07 13:52:09,128 DEBUG BlockStateChange (RedundancyMonitor): BLOCK* BlockManager: ask DN1:9866 to delete [blk_1073741825_1004]
Check the datanode logs (for DN1 & DN2) to validate the "GenerationStamp not matched" exception occurs
2022-09-07 13:50:58,206 INFO org.apache.hadoop.hdfs.server.datanode.DataNode (BP-452161995-NN-1662558403599 heartbeating to NN/NN:8020): Successfully sent block report 0x65d28e4f71df62a6, containing 2 storage report(s), of which we sent 2. The reports had 1 total blocks and used 1 RPC(s). This took 3 msec to generate and 19 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2022-09-07 13:50:58,206 INFO org.apache.hadoop.hdfs.server.datanode.DataNode (BP-452161995-NN-1662558403599 heartbeating to NN/NN:8020): Got finalize command for block pool BP-452161995-NN-1662558403599
2022-09-07 13:52:10,159 WARN org.apache.hadoop.hdfs.server.datanode.DataNode (BP-452161995-NN-1662558403599 heartbeating to NN/NN:8020): Error processing datanode Command
java.io.IOException: Failed to delete 1 (out of 1) replica(s):
0) Failed to delete replica blk_1073741825_1004: GenerationStamp not matched, existing replica is blk_1073741825_1001
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:2135)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:2034)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:734)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:680)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:883)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:678)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:849)
at java.lang.Thread.run(Thread.java:750)
2022-09-07 13:51:06,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode (BP-452161995-NN-1662558403599 heartbeating to NN/NN:8020): Successfully sent block report 0x43c9d07e94b8c90b, containing 2 storage report(s), of which we sent 2. The reports had 1 total blocks and used 1 RPC(s). This took 4 msec to generate and 26 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2022-09-07 13:51:06,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode (BP-452161995-NN-1662558403599 heartbeating to NN/NN:8020): Got finalize command for block pool BP-452161995-NN-1662558403599
2022-09-07 13:52:09,738 WARN org.apache.hadoop.hdfs.server.datanode.DataNode (BP-452161995-NN-1662558403599 heartbeating to NN/NN:8020): Error processing datanode Command
java.io.IOException: Failed to delete 1 (out of 1) replica(s):
0) Failed to delete replica blk_1073741825_1004: GenerationStamp not matched, existing replica is blk_1073741825_1001
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:2135)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:2034)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:734)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:680)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:883)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:678)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:849)
at java.lang.Thread.run(Thread.java:750)
More abstractly the conditions to reproduce the issue are:
block is sufficiently replicated (because this code path needs to be invoked)
block has at least 1 corrupt replica which has been sent to the Namenode in a Block Report but which was not invalidated because of postponing invalidation logic
a datanode sends a block report which contains a different GenerationStamp than the corrupt replica. This should generally be any datanode, except for the case where another datanode has a corrupt replica with the same Generation Stamp.
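The three conditions above can be condensed into a single predicate. This is an illustrative sketch only — these names are not the actual `BlockManager` API — but it captures when the buggy invalidation path is reached.

```java
/** Illustrative predicate (not the actual BlockManager API) condensing the
 *  reproduction conditions listed above. */
public class ReproConditions {
    public static boolean hitsBuggyPath(boolean sufficientlyReplicated,
                                        boolean corruptReplicaPostponed,
                                        long reportedGenStamp,
                                        long corruptReplicaGenStamp) {
        // 1) block is sufficiently replicated, so this code path is invoked
        // 2) a corrupt replica's invalidation was postponed (stale block reports)
        // 3) a later block report carries a different generation stamp
        return sufficientlyReplicated
                && corruptReplicaPostponed
                && reportedGenStamp != corruptReplicaGenStamp;
    }
}
```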
Source: gharchive/pull-request — created 2022-07-16T16:50:33
Authors: KevinWikant, hadoop-yetus
Repo: apache/hadoop — https://github.com/apache/hadoop/pull/4568
License: Apache-2.0 (permissive, via github-api)
YARN-11470. FederationStateStoreFacade Cache Support Guava Cache.
Description of PR
JIRA: YARN-11470. FederationStateStoreFacade Cache Support Guava Cache.
How was this patch tested?
For code changes:
[ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
[ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?
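For readers unfamiliar with the cache semantics this PR targets: a Guava Cache (presumably built in the facade via `CacheBuilder.newBuilder().maximumSize(...).expireAfterWrite(...)`) gives size-bounded, expire-after-write caching. The stdlib stand-in below only illustrates those semantics — the class names are hypothetical and this is not the PR's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.LongSupplier;

/** Stdlib illustration (hypothetical names) of the expire-after-write,
 *  size-bounded semantics a Guava Cache would give the facade. */
public class TinyExpiringCache<K, V> {
    private final long ttlMillis;
    private final int maxSize;
    private final LongSupplier clock; // injectable for tests
    private final Map<K, Long> writeTimes = new LinkedHashMap<>();
    private final Map<K, V> values;

    public TinyExpiringCache(long ttlMillis, int maxSize, LongSupplier clock) {
        this.ttlMillis = ttlMillis;
        this.maxSize = maxSize;
        this.clock = clock;
        // Insertion order means the oldest write is evicted first when over capacity.
        this.values = new LinkedHashMap<K, V>(16, 0.75f, false) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                boolean evict = size() > TinyExpiringCache.this.maxSize;
                if (evict) writeTimes.remove(eldest.getKey());
                return evict;
            }
        };
    }

    public void put(K key, V value) {
        values.put(key, value);
        writeTimes.put(key, clock.getAsLong());
    }

    /** Returns null when the entry is absent or older than the TTL. */
    public V getIfPresent(K key) {
        Long written = writeTimes.get(key);
        if (written == null) return null;
        if (clock.getAsLong() - written > ttlMillis) {
            values.remove(key);
            writeTimes.remove(key);
            return null;
        }
        return values.get(key);
    }

    public static void main(String[] args) {
        long[] now = { 0L };
        TinyExpiringCache<String, String> cache = new TinyExpiringCache<>(1000L, 2, () -> now[0]);
        cache.put("subClusterId", "membership");
        System.out.println(cache.getIfPresent("subClusterId")); // entry is fresh
        now[0] = 2000L; // advance past the TTL
        System.out.println(cache.getIfPresent("subClusterId")); // expired -> null
    }
}
```

Guava's real cache adds loading, statistics, and concurrency on top of these basics, which is presumably why the facade prefers it over a hand-rolled map.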
:broken_heart: -1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 42m 13s | | trunk passed |
| +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 0m 28s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 42s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 43s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 1m 36s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 1s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 0m 36s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 15s | /results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
| +1 :green_heart: | mvnsite | 0m 35s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 1m 29s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 33s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 3m 5s | | hadoop-yarn-server-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | 105m 12s | | |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5609 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux ade3a104ba21 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 00dbec8db6ac91668f7c87f42fc40ee749f08545 |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/1/testReport/ |
| Max. process+thread count | 535 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
:broken_heart: -1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 16m 45s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 28m 46s | | trunk passed |
| +1 :green_heart: | compile | 10m 31s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 8m 56s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 1m 47s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 44s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 34s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 19s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 3m 48s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 7s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 10s | | the patch passed |
| +1 :green_heart: | compile | 9m 51s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 9m 51s | | the patch passed |
| +1 :green_heart: | compile | 8m 51s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 8m 51s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 38s | /results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | hadoop-yarn-project/hadoop-yarn: The patch generated 17 new + 164 unchanged - 0 fixed = 181 total (was 164) |
| +1 :green_heart: | mvnsite | 1m 34s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 13s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 3m 50s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 38s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 1m 4s | /patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt | hadoop-yarn-api in the patch passed. |
| +1 :green_heart: | unit | 3m 17s | | hadoop-yarn-server-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. |
| | | 162m 15s | | |

| Reason | Tests |
|:------:|:------|
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5609 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 9a41388d4ef7 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 62587d1ee9c08a0aa933c239dc0043b4758b000b |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/2/testReport/ |
| Max. process+thread count | 530 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: hadoop-yarn-project/hadoop-yarn |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
:confetti_ball: +1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 50s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 16m 30s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 28m 36s | | trunk passed |
| +1 :green_heart: | compile | 10m 29s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 8m 56s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 1m 48s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 35s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 26s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 2m 11s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 5m 33s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 15s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 48s | | the patch passed |
| +1 :green_heart: | compile | 9m 51s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 9m 51s | | the patch passed |
| +1 :green_heart: | compile | 8m 48s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 8m 48s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 38s | /results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | hadoop-yarn-project/hadoop-yarn: The patch generated 3 new + 164 unchanged - 0 fixed = 167 total (was 164) |
| +1 :green_heart: | mvnsite | 2m 25s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 8s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 59s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 5m 47s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 21s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 1m 5s | | hadoop-yarn-api in the patch passed. |
| +1 :green_heart: | unit | 5m 21s | | hadoop-yarn-common in the patch passed. |
| +1 :green_heart: | unit | 3m 17s | | hadoop-yarn-server-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. |
| | | 176m 26s | | |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5609 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 718cf032d12f 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 9609a097c6bcebd234f525586d6958359cb583bd |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/3/testReport/ |
| Max. process+thread count | 564 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: hadoop-yarn-project/hadoop-yarn |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
@goiri Can you help review this pr? Thank you very much!
:confetti_ball: +1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 16m 23s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 28m 46s | | trunk passed |
| +1 :green_heart: | compile | 10m 27s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 8m 49s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 1m 48s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 36s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 27s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 2m 11s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 5m 36s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 56s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 48s | | the patch passed |
| +1 :green_heart: | compile | 9m 49s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 9m 49s | | the patch passed |
| +1 :green_heart: | compile | 8m 47s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 8m 47s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 38s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 25s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 9s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 2m 2s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 5m 46s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 16s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 1m 4s | | hadoop-yarn-api in the patch passed. |
+1 :green_heart:
unit
5m 21s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
3m 17s
hadoop-yarn-server-common in the patch passed.
+1 :green_heart:
asflicense
0m 49s
The patch does not generate ASF License warnings.
176m 0s
Subsystem
Report/Notes
Docker
ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/4/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/5609
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux 84ac90c09175 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / ee634de982706077dbee05a8b6cbe33f87e04e6d
Default Java
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/4/testReport/
Max. process+thread count
531 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/4/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 48s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 1s
codespell was not available.
+0 :ok:
detsecrets
0m 1s
detect-secrets was not available.
+0 :ok:
xmllint
0m 1s
xmllint was not available.
+0 :ok:
markdownlint
0m 1s
markdownlint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
16m 32s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
28m 37s
trunk passed
+1 :green_heart:
compile
10m 30s
trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
compile
9m 0s
trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 :green_heart:
checkstyle
1m 44s
trunk passed
+1 :green_heart:
mvnsite
3m 8s
trunk passed
+1 :green_heart:
javadoc
2m 54s
trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
2m 37s
trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+0 :ok:
spotbugs
0m 28s
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file (spotbugsXml.xml)
+1 :green_heart:
shadedclient
19m 37s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 24s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
2m 0s
the patch passed
+1 :green_heart:
compile
9m 52s
the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javac
9m 52s
the patch passed
+1 :green_heart:
compile
8m 56s
the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 :green_heart:
javac
8m 56s
the patch passed
-1 :x:
blanks
0m 0s
/blanks-eol.txt
The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 :green_heart:
checkstyle
1m 39s
the patch passed
+1 :green_heart:
mvnsite
2m 54s
the patch passed
+1 :green_heart:
javadoc
2m 31s
the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
2m 24s
the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+0 :ok:
spotbugs
0m 25s
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs
+1 :green_heart:
shadedclient
23m 13s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
1m 5s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
5m 22s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
3m 19s
hadoop-yarn-server-common in the patch passed.
+1 :green_heart:
unit
0m 24s
hadoop-yarn-site in the patch passed.
+1 :green_heart:
asflicense
0m 48s
The patch does not generate ASF License warnings.
177m 44s
Subsystem
Report/Notes
Docker
ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/5/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/5609
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint
uname
Linux 366ae9eb4f90 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 8425562c65900b67063a252c3fa6578e719ebac2
Default Java
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/5/testReport/
Max. process+thread count
530 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/5/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 39s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+0 :ok:
xmllint
0m 0s
xmllint was not available.
+0 :ok:
markdownlint
0m 0s
markdownlint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
16m 51s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
26m 57s
trunk passed
+1 :green_heart:
compile
9m 56s
trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
compile
8m 59s
trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 :green_heart:
checkstyle
1m 48s
trunk passed
+1 :green_heart:
mvnsite
3m 44s
trunk passed
+1 :green_heart:
javadoc
3m 27s
trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
3m 14s
trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+0 :ok:
spotbugs
0m 38s
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file (spotbugsXml.xml)
+1 :green_heart:
shadedclient
20m 29s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 28s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
2m 7s
the patch passed
+1 :green_heart:
compile
9m 6s
the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javac
9m 6s
the patch passed
+1 :green_heart:
compile
8m 31s
the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 :green_heart:
javac
8m 31s
the patch passed
-1 :x:
blanks
0m 0s
/blanks-eol.txt
The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 :green_heart:
checkstyle
1m 38s
the patch passed
+1 :green_heart:
mvnsite
3m 21s
the patch passed
+1 :green_heart:
javadoc
3m 2s
the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
2m 52s
the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+0 :ok:
spotbugs
0m 34s
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs
+1 :green_heart:
shadedclient
20m 4s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
1m 14s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
5m 41s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
3m 28s
hadoop-yarn-server-common in the patch passed.
+1 :green_heart:
unit
0m 33s
hadoop-yarn-site in the patch passed.
+1 :green_heart:
asflicense
0m 57s
The patch does not generate ASF License warnings.
177m 41s
Subsystem
Report/Notes
Docker
ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/6/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/5609
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint
uname
Linux 2301004e136d 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 0039781b66cb1db6fb748635a4d0c03d4dc8bfe4
Default Java
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/6/testReport/
Max. process+thread count
724 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/6/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 34s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 1s
codespell was not available.
+0 :ok:
detsecrets
0m 1s
detect-secrets was not available.
+0 :ok:
xmllint
0m 1s
xmllint was not available.
+0 :ok:
markdownlint
0m 1s
markdownlint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
17m 23s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
25m 53s
trunk passed
+1 :green_heart:
compile
9m 37s
trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
compile
8m 35s
trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 :green_heart:
checkstyle
1m 50s
trunk passed
+1 :green_heart:
mvnsite
3m 40s
trunk passed
+1 :green_heart:
javadoc
3m 27s
trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
3m 15s
trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+0 :ok:
spotbugs
0m 39s
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file (spotbugsXml.xml)
+1 :green_heart:
shadedclient
20m 30s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 28s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
2m 5s
the patch passed
+1 :green_heart:
compile
9m 6s
the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javac
9m 6s
the patch passed
+1 :green_heart:
compile
8m 25s
the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 :green_heart:
javac
8m 25s
the patch passed
-1 :x:
blanks
0m 0s
/blanks-eol.txt
The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 :green_heart:
checkstyle
1m 39s
the patch passed
+1 :green_heart:
mvnsite
3m 21s
the patch passed
+1 :green_heart:
javadoc
3m 1s
the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
2m 56s
the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+0 :ok:
spotbugs
0m 34s
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs
+1 :green_heart:
shadedclient
20m 31s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
1m 13s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
5m 41s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
3m 28s
hadoop-yarn-server-common in the patch passed.
+1 :green_heart:
unit
0m 33s
hadoop-yarn-site in the patch passed.
+1 :green_heart:
asflicense
0m 58s
The patch does not generate ASF License warnings.
176m 29s
Subsystem
Report/Notes
Docker
ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/7/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/5609
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint
uname
Linux af592c427c30 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 6b75349eec655cd9ebf0093a11083df6a176ce83
Default Java
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/7/testReport/
Max. process+thread count
562 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/7/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
1m 20s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 1s
codespell was not available.
+0 :ok:
detsecrets
0m 1s
detect-secrets was not available.
+0 :ok:
xmllint
0m 1s
xmllint was not available.
+0 :ok:
markdownlint
0m 1s
markdownlint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
16m 9s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
29m 12s
trunk passed
+1 :green_heart:
compile
10m 26s
trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
compile
8m 50s
trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 :green_heart:
checkstyle
1m 45s
trunk passed
+1 :green_heart:
mvnsite
3m 6s
trunk passed
+1 :green_heart:
javadoc
2m 54s
trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
2m 39s
trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+0 :ok:
spotbugs
0m 28s
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file (spotbugsXml.xml)
+1 :green_heart:
shadedclient
22m 53s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 24s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
2m 1s
the patch passed
+1 :green_heart:
compile
9m 53s
the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javac
9m 53s
the patch passed
+1 :green_heart:
compile
9m 50s
the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 :green_heart:
javac
9m 50s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 39s
the patch passed
+1 :green_heart:
mvnsite
2m 52s
the patch passed
+1 :green_heart:
javadoc
2m 32s
the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
2m 25s
the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+0 :ok:
spotbugs
0m 25s
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs
+1 :green_heart:
shadedclient
23m 13s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
1m 5s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
5m 24s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
3m 17s
hadoop-yarn-server-common in the patch passed.
+1 :green_heart:
unit
0m 25s
hadoop-yarn-site in the patch passed.
+1 :green_heart:
asflicense
0m 49s
The patch does not generate ASF License warnings.
183m 13s
Subsystem
Report/Notes
Docker
ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/8/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/5609
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint
uname
Linux 0f364fe6bb13 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / e3852b143ec68707ab86ffbe32917dc6f84f8d7c
Default Java
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/8/testReport/
Max. process+thread count
530 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/8/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 49s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 1s
codespell was not available.
+0 :ok:
detsecrets
0m 1s
detect-secrets was not available.
+0 :ok:
xmllint
0m 1s
xmllint was not available.
+0 :ok:
markdownlint
0m 1s
markdownlint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
15m 53s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
22m 25s
trunk passed
+1 :green_heart:
compile
7m 36s
trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
compile
6m 42s
trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 :green_heart:
checkstyle
1m 48s
trunk passed
+1 :green_heart:
mvnsite
2m 56s
trunk passed
+1 :green_heart:
javadoc
2m 54s
trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
2m 37s
trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+0 :ok:
spotbugs
0m 29s
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file (spotbugsXml.xml)
+1 :green_heart:
shadedclient
23m 23s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 23s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 49s
the patch passed
+1 :green_heart:
compile
6m 51s
the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javac
6m 51s
the patch passed
+1 :green_heart:
compile
6m 40s
the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 :green_heart:
javac
6m 40s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 38s
the patch passed
+1 :green_heart:
mvnsite
2m 40s
the patch passed
+1 :green_heart:
javadoc
2m 31s
the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
2m 25s
the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+0 :ok:
spotbugs
0m 25s
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs
+1 :green_heart:
shadedclient
23m 17s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
1m 2s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
5m 16s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
3m 15s
hadoop-yarn-server-common in the patch passed.
+1 :green_heart:
unit
0m 24s
hadoop-yarn-site in the patch passed.
+1 :green_heart:
asflicense
0m 48s
The patch does not generate ASF License warnings.
163m 19s
Subsystem
Report/Notes
Docker
ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/9/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/5609
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint
uname
Linux c0c746febe26 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / e9352f7493ce7119ed687f6509bfacae55977285
Default Java
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/9/testReport/
Max. process+thread count
530 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/9/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
@goiri Thank you very much for helping to review the code!
| gharchive/pull-request | 2023-04-30T11:57:09 | 2025-04-01T04:55:58.572494 | {"authors": ["hadoop-yetus", "slfan1989"], "repo": "apache/hadoop", "url": "https://github.com/apache/hadoop/pull/5609", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api"} |
1899476261
|
HDFS-17196. Overflow during getDatanodeReadTimeout
Description of PR
The datanode read timeout is computed as READ_TIMEOUT_EXTENSION * numNodes + dfs.client.socket-timeout. When dfs.client.socket-timeout, numNodes, or READ_TIMEOUT_EXTENSION is large, the int arithmetic overflows and the timeout wraps to a negative value.
To reproduce:
set dfs.client.socket-timeout to 2147483646
run mvn surefire:test -Dtest=org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode#testBlocksRemovedWhileInSafeModeEditsArriveFirst
This PR fixes the issue by checking that the computed read timeout is at least 0.
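The overflow and the guard can be sketched in a few lines. Everything below is illustrative: the constant value (5 seconds per extra node) and the method names are assumptions standing in for HdfsConstants.READ_TIMEOUT_EXTENSION and the client configuration, and the clamp shown is one way to "check the calculation is at least 0", not the exact patch.

```java
public class ReadTimeoutSketch {
    // Assumed per-node extension in milliseconds (stand-in for
    // HdfsConstants.READ_TIMEOUT_EXTENSION).
    static final int READ_TIMEOUT_EXTENSION = 5 * 1000;

    // Naive int arithmetic: wraps negative when socketTimeout is near
    // Integer.MAX_VALUE (e.g. dfs.client.socket-timeout = 2147483646).
    static int naiveReadTimeout(int numNodes, int socketTimeout) {
        return READ_TIMEOUT_EXTENSION * numNodes + socketTimeout;
    }

    // Sketch of the fix: if the sum overflowed (went negative), clamp it
    // so callers never see a negative timeout.
    static int safeReadTimeout(int numNodes, int socketTimeout) {
        int t = READ_TIMEOUT_EXTENSION * numNodes + socketTimeout;
        return t >= 0 ? t : Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        int socketTimeout = 2147483646; // value from the reproduction steps
        System.out.println(naiveReadTimeout(1, socketTimeout)); // negative: overflowed
        System.out.println(safeReadTimeout(1, socketTimeout));  // clamped, non-negative
    }
}
```

With dfs.client.socket-timeout near Integer.MAX_VALUE, the naive sum wraps negative, which downstream socket code treats as an invalid timeout; the guard keeps the result non-negative.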
How was this patch tested?
Unit test
For code changes:
[x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
[ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?
:broken_heart: -1 overall

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 58s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 47m 54s | | trunk passed |
| +1 :green_heart: | compile | 1m 0s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 0m 50s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 0m 30s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 56s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 46s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 2m 42s | | trunk passed |
| +1 :green_heart: | shadedclient | 39m 47s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 48s | | the patch passed |
| +1 :green_heart: | compile | 0m 53s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 0m 53s | | the patch passed |
| +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 0m 43s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 20s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 48s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 34s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 33s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 2m 44s | | the patch passed |
| +1 :green_heart: | shadedclient | 40m 4s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 20s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | 147m 51s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6091/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6091 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 86b9e3c4ce95 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 853cabcfbe4fd4241e12b16bd38232856595e1d9 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6091/1/testReport/ |
| Max. process+thread count | 626 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6091/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
:broken_heart: -1 overall

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 26s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 39m 37s | | trunk passed |
| +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 0m 27s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 43s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 52s | | trunk passed |
| +1 :green_heart: | shadedclient | 28m 35s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 34s | | the patch passed |
| +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 36s | | the patch passed |
| +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 0m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 16s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 35s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 48s | | the patch passed |
| +1 :green_heart: | shadedclient | 28m 19s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 0s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. |
| | | 112m 6s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6091/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6091 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 46d1a6d0c550 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 853cabcfbe4fd4241e12b16bd38232856595e1d9 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6091/1/testReport/
Max. process+thread count
630 (vs. ulimit of 5500)
modules
C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6091/1/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
@teamconfx any plans to update?
|
gharchive/pull-request
| 2023-09-16T15:38:19 |
2025-04-01T04:55:58.640438
|
{
"authors": [
"ayushtkn",
"hadoop-yetus",
"teamconfx"
],
"repo": "apache/hadoop",
"url": "https://github.com/apache/hadoop/pull/6091",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2152335943
|
HBASE-28395 TableNotFoundException when executing 'hbase hbck'
Details see: HBASE-28395
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
0m 0s
Docker mode activated.
-1 :x:
docker
0m 7s
Docker failed to build run-specific yetus/hbase:tp-27351}.
Subsystem
Report/Notes
GITHUB PR
https://github.com/apache/hbase/pull/5706
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/1/console
versions
git=2.25.1
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
0m 40s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+1 :green_heart:
hbaseanti
0m 0s
Patch does not have any anti-patterns.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
_ master Compile Tests _
+1 :green_heart:
mvninstall
3m 5s
master passed
+1 :green_heart:
compile
2m 31s
master passed
+1 :green_heart:
checkstyle
0m 39s
master passed
+1 :green_heart:
spotless
0m 45s
branch has no errors when running spotless:check.
+1 :green_heart:
spotbugs
1m 39s
master passed
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
2m 48s
the patch passed
+1 :green_heart:
compile
2m 27s
the patch passed
-0 :warning:
javac
2m 27s
hbase-server generated 1 new + 194 unchanged - 1 fixed = 195 total (was 195)
+1 :green_heart:
checkstyle
0m 37s
the patch passed
+1 :green_heart:
whitespace
0m 0s
The patch has no whitespace issues.
+1 :green_heart:
hadoopcheck
4m 58s
Patch does not cause any errors with Hadoop 3.3.6.
+1 :green_heart:
spotless
0m 43s
patch has no errors when running spotless:check.
+1 :green_heart:
spotbugs
1m 40s
the patch passed
_ Other Tests _
+1 :green_heart:
asflicense
0m 13s
The patch does not generate ASF License warnings.
28m 59s
Subsystem
Report/Notes
Docker
ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/1/artifact/yetus-general-check/output/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/5706
Optional Tests
dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile
uname
Linux de2408a3735a 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/hbase-personality.sh
git revision
master / 63e3a43041
Default Java
Eclipse Adoptium-11.0.17+8
javac
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/1/artifact/yetus-general-check/output/diff-compile-javac-hbase-server.txt
Max. process+thread count
81 (vs. ulimit of 30000)
modules
C: hbase-server U: hbase-server
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/1/console
versions
git=2.34.1 maven=3.8.6 spotbugs=4.7.3
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
0m 29s
Docker mode activated.
-0 :warning:
yetus
0m 2s
Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ master Compile Tests _
+1 :green_heart:
mvninstall
3m 11s
master passed
+1 :green_heart:
compile
0m 45s
master passed
+1 :green_heart:
shadedjars
5m 32s
branch has no errors when building our shaded downstream artifacts.
+1 :green_heart:
javadoc
0m 25s
master passed
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
2m 49s
the patch passed
+1 :green_heart:
compile
0m 45s
the patch passed
+1 :green_heart:
javac
0m 45s
the patch passed
+1 :green_heart:
shadedjars
5m 34s
patch has no errors when building our shaded downstream artifacts.
+1 :green_heart:
javadoc
0m 23s
the patch passed
_ Other Tests _
+1 :green_heart:
unit
220m 44s
hbase-server in the patch passed.
244m 32s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/5706
Optional Tests
javac javadoc unit shadedjars compile
uname
Linux 14814df892a0 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/hbase-personality.sh
git revision
master / 63e3a43041
Default Java
Eclipse Adoptium-11.0.17+8
Test Results
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/1/testReport/
Max. process+thread count
5268 (vs. ulimit of 30000)
modules
C: hbase-server U: hbase-server
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/1/console
versions
git=2.34.1 maven=3.8.6
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
1m 59s
Docker mode activated.
-0 :warning:
yetus
0m 3s
Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ master Compile Tests _
+1 :green_heart:
mvninstall
3m 4s
master passed
+1 :green_heart:
compile
0m 46s
master passed
+1 :green_heart:
shadedjars
5m 36s
branch has no errors when building our shaded downstream artifacts.
+1 :green_heart:
javadoc
0m 24s
master passed
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
2m 49s
the patch passed
+1 :green_heart:
compile
0m 45s
the patch passed
+1 :green_heart:
javac
0m 45s
the patch passed
+1 :green_heart:
shadedjars
5m 35s
patch has no errors when building our shaded downstream artifacts.
+1 :green_heart:
javadoc
0m 22s
the patch passed
_ Other Tests _
+1 :green_heart:
unit
220m 46s
hbase-server in the patch passed.
246m 6s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/5706
Optional Tests
javac javadoc unit shadedjars compile
uname
Linux bec30cb3d9fb 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/hbase-personality.sh
git revision
master / c4a02f7fcd
Default Java
Eclipse Adoptium-11.0.17+8
Test Results
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/2/testReport/
Max. process+thread count
5144 (vs. ulimit of 30000)
modules
C: hbase-server U: hbase-server
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/2/console
versions
git=2.34.1 maven=3.8.6
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
0m 24s
Docker mode activated.
-0 :warning:
yetus
0m 3s
Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ master Compile Tests _
+1 :green_heart:
mvninstall
2m 42s
master passed
+1 :green_heart:
compile
0m 38s
master passed
+1 :green_heart:
shadedjars
5m 34s
branch has no errors when building our shaded downstream artifacts.
+1 :green_heart:
javadoc
0m 23s
master passed
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
2m 24s
the patch passed
+1 :green_heart:
compile
0m 38s
the patch passed
+1 :green_heart:
javac
0m 38s
the patch passed
+1 :green_heart:
shadedjars
5m 59s
patch has no errors when building our shaded downstream artifacts.
+1 :green_heart:
javadoc
0m 29s
the patch passed
_ Other Tests _
+1 :green_heart:
unit
233m 53s
hbase-server in the patch passed.
257m 0s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/5706
Optional Tests
javac javadoc unit shadedjars compile
uname
Linux 755af1cc886f 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/hbase-personality.sh
git revision
master / c4a02f7fcd
Default Java
Temurin-1.8.0_352-b08
Test Results
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/2/testReport/
Max. process+thread count
4865 (vs. ulimit of 30000)
modules
C: hbase-server U: hbase-server
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5706/2/console
versions
git=2.34.1 maven=3.8.6
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
|
gharchive/pull-request
| 2024-02-24T14:17:15 |
2025-04-01T04:55:58.715494
|
{
"authors": [
"Apache-HBase",
"guluo2016"
],
"repo": "apache/hbase",
"url": "https://github.com/apache/hbase/pull/5706",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
689189709
|
HIVE-22622: Hive allows to create a struct with duplicate attribute names
What changes were proposed in this pull request?
Add a check for duplicated struct attribute identifiers and throw a SemanticException with a customized error message when one is found.
Why are the changes needed?
Creating a table with a struct-typed column that has duplicate attribute identifiers, and inserting records into it, is currently allowed, but when later querying the table we cannot distinguish between the struct attributes that share the same identifier.
In some cases (depending on the table's serde format) the query may even fail. See the jira for details.
Does this PR introduce any user-facing change?
Introduce new error code and message. Example:
FAILED: SemanticException [Error 10423]: Attribute "id" specified more than once in structured type.
How was this patch tested?
Create new negative test:
mvn test -Dtest.output.overwrite -DskipSparkTests -Dtest=TestNegativeCliDriver -Dqfile=struct_attribute_uniqueness.q,struct_attribute_uniqueness_2.q -pl itests/qtest -Pitests
Reproduce query failure
CREATE TABLE person
(
`id` int,
`address` struct<number:int,street:string,number:int>
)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
INSERT INTO person
VALUES (1, named_struct('number', 61, 'street', 'Terrasse', 'number', 62));
INSERT INTO person
VALUES (2, named_struct('number', 51, 'street', 'Terrasse', 'number', 52));
SELECT address.number FROM person;
Adding test cases for the various underlying formats doesn't make sense in this case, because the duplicate check is performed during the semantic analysis phase of the CREATE TABLE statement and the error message would be the same regardless. If a duplicate is found, the code flow never reaches the point where the table is actually created.
Added a test case for nested struct.
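The check described above (performed at semantic-analysis time, recursing into nested structs) can be sketched as follows. This is an illustrative Python sketch, not the actual Hive implementation; the case-insensitive comparison is an assumption based on Hive's identifier rules, and the `(name, type)` pair representation is invented for the example.

```python
# Illustrative sketch (not Hive code): detect a duplicate attribute name in a
# struct type, including nested structs. Fields are modeled as (name, type)
# pairs, where a nested struct type is itself a list of such pairs.
def find_duplicate_attribute(fields):
    seen = set()
    for name, ftype in fields:
        key = name.lower()  # assumed case-insensitive, like Hive identifiers
        if key in seen:
            return name
        seen.add(key)
        if isinstance(ftype, list):  # nested struct: recurse into its fields
            dup = find_duplicate_attribute(ftype)
            if dup is not None:
                return dup
    return None

# struct<number:int,street:string,number:int> from the repro above
address = [("number", "int"), ("street", "string"), ("number", "int")]
print(find_duplicate_attribute(address))  # -> number

nested = [("id", "int"), ("addr", [("city", "string"), ("City", "string")])]
print(find_duplicate_attribute(nested))   # -> City
```

In the real patch the analogous check would raise the new SemanticException (Error 10423) instead of returning the offending name.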
|
gharchive/pull-request
| 2020-08-31T13:02:12 |
2025-04-01T04:55:58.719826
|
{
"authors": [
"kasakrisz"
],
"repo": "apache/hive",
"url": "https://github.com/apache/hive/pull/1446",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
456570979
|
HIVE-21737: Bump Apache Avro to 1.9.0
Apache Avro 1.9.0 brings a lot of new features:
Deprecate Joda-Time in favor of Java8 JSR310 and setting it as default
Remove support for Hadoop 1.x
Move from Jackson 1.x to 2.9
Add ZStandard Codec
Many dependency updates to fix CVEs
Remove Jackson classes from public API
Apache Avro is built by default with Java 8
Apache Avro is compiled and tested with Java 11 to guarantee compatibility
Apache Avro MapReduce is compiled and tested with Hadoop 3
Apache Avro is now leaner, multiple dependencies were removed: guava, paranamer, commons-codec, and commons-logging
and many, many more!
Rebased onto master
|
gharchive/pull-request
| 2019-06-15T20:00:26 |
2025-04-01T04:55:58.723083
|
{
"authors": [
"Fokko"
],
"repo": "apache/hive",
"url": "https://github.com/apache/hive/pull/674",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
785958518
|
[SUPPORT] deltacommit for client 172.16.116.102 already exists
Environment Description
Hudi version :
0.6.0
Spark version :
spark-2.4.4-bin-hadoop2.7
Hive version :
hive-2.3.4
Hadoop version :
hadoop2.7.3
Storage (HDFS/S3/GCS..) :
hdfs
Running on Docker? (yes/no) :
no
1. When I write data to Hudi, I get the following error:
Logical Plan:
RepartitionByExpression [dbName#23, tblName#24], 6
+- Project [row#21.dbName AS dbName#23, row#21.tblName AS tblName#24, row#21.opr AS opr#25, row#21.datalakeLogicalDeletion AS datalakeLogicalDeletion#26, row#21.etlTime AS etlTime#27L, row#21.jsonData AS jsonData#28]
+- Project [jsontostructs(StructField(dbName,StringType,true), StructField(tblName,StringType,true), StructField(opr,StringType,true), StructField(datalakeLogicalDeletion,IntegerType,true), StructField(etlTime,LongType,true), StructField(jsonData,StringType,true), cast(value#8 as string), Some(PRC)) AS row#21]
+- StreamingExecutionRelation KafkaV2[Subscribe[datalake_advertise]], [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13]
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:297)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)
Caused by: org.apache.hudi.exception.HoodieIOException: Failed to create file /user/datalake/hudi/hbase/f_mid_business_card/.hoodie/20210114202219.deltacommit
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.createImmutableFileInPath(HoodieActiveTimeline.java:449)
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.transitionState(HoodieActiveTimeline.java:333)
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.transitionState(HoodieActiveTimeline.java:308)
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.saveAsComplete(HoodieActiveTimeline.java:143)
at org.apache.hudi.client.AbstractHoodieWriteClient.commitStats(AbstractHoodieWriteClient.java:124)
at org.apache.hudi.client.AbstractHoodieWriteClient.commit(AbstractHoodieWriteClient.java:99)
at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:397)
at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:125)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
2. My hoodieWriteConfig:
hoodie.filesystem.view.incr.timeline.sync.enable -> false,
hoodie.bulkinsert.sort.mode -> GLOBAL_SORT,
hoodie.avro.schema.externalTransformation -> false,
hoodie.bootstrap.parallelism -> 1500,
hoodie.delete.shuffle.parallelism -> 1500,
hoodie.simple.index.use.caching -> true,
hoodie.bloom.index.filter.type -> DYNAMIC_V0,
hoodie.filesystem.view.remote.port -> 26754,
hoodie.datasource.write.operation -> upsert,
hoodie.cleaner.parallelism -> 200,
hoodie.global.simple.index.parallelism -> 100,
hoodie.bootstrap.mode.selector.regex -> .*,
hoodie.parquet.page.size -> 1048576,
hoodie.datasource.write.table.type -> MERGE_ON_READ,
hoodie.datasource.hive_sync.table -> f_mid_business_card,
hoodie.compaction.daybased.target.partitions -> 10,
hoodie.metrics.reporter.class -> ,
hoodie.parquet.block.size -> 125829120,
hoodie.cleaner.delete.bootstrap.base.file -> false,
hoodie.consistency.check.max_interval_ms -> 300000,
hoodie.insert.shuffle.parallelism -> 100,
hoodie.upsert.shuffle.parallelism -> 100,
hoodie.bulkinsert.shuffle.parallelism -> 1500,
hoodie.write.commit.callback.on -> false,
hoodie.cleaner.fileversions.retained -> 3,
hoodie.datasource.hive_sync.partition_extractor_class -> org.apache.hudi.hive.NonPartitionedExtractor,
hoodie.parquet.compression.codec -> gzip,
hoodie.datasource.write.hive_style_partitioning -> true,
hoodie.copyonwrite.insert.split.size -> 500000,
hoodie.optimistic.consistency.guard.sleep_time_ms -> 500,
hoodie.datasource.hive_sync.use_jdbc -> true,
hoodie.metrics.reporter.type -> GRAPHITE,
hoodie.bootstrap.index.class -> org.apache.hudi.common.bootstrap.index.HFileBootstrapIndex,
hoodie.filesystem.remote.backup.view.enable -> true,
hoodie.logfile.to.parquet.compression.ratio -> 0.35,
hoodie.filesystem.view.spillable.mem -> 104857600,
hoodie.write.status.storage.level -> MEMORY_AND_DISK_SER,
hoodie.write.commit.callback.http.timeout.seconds -> 3,
hoodie.copyonwrite.insert.auto.split -> true,
hoodie.logfile.data.block.max.size -> 268435456,
hoodie.index.type -> BLOOM,
hoodie.keep.min.commits -> 6,
hoodie.memory.spillable.map.path -> /tmp/,
hoodie.filesystem.view.rocksdb.base.path -> /tmp/hoodie_timeline_rocksdb,
hoodie.compact.inline -> false,
hoodie.clean.async -> true,
hoodie.record.size.estimation.threshold -> 1.0,
hoodie.metrics.graphite.host -> localhost,
hoodie.simple.index.update.partition.path -> false,
hoodie.bloom.index.filter.dynamic.max.entries -> 100000,
hoodie.compaction.reverse.log.read -> false,
hoodie.metrics.jmx.port -> 9889,
hoodie.writestatus.class -> org.apache.hudi.client.WriteStatus,
hoodie.datasource.hive_sync.enable -> true,
hoodie.finalize.write.parallelism -> 1500,
hoodie.rollback.parallelism -> 100,
hoodie.index.bloom.num_entries -> 60000,
hoodie.memory.merge.max.size -> 131072,
hoodie.bootstrap.mode.selector.regex.mode -> METADATA_ONLY,
hoodie.rollback.using.markers -> false,
hoodie.copyonwrite.record.size.estimate -> 1024,
hoodie.bloom.index.input.storage.level -> MEMORY_AND_DISK_SER,
hoodie.simple.index.parallelism -> 50,
hoodie.consistency.check.enabled -> false,
hoodie.bloom.index.use.caching -> true,
hoodie.metrics.on -> false,
hoodie.memory.compaction.max.size -> 1048576,
hoodie.parquet.small.file.limit -> 104857600,
hoodie.combine.before.insert -> false,
hoodie.cleaner.commits.retained -> 2,
hoodie.embed.timeline.server -> true,
hoodie.bootstrap.mode.selector -> org.apache.hudi.client.bootstrap.selector.MetadataOnlyBootstrapModeSelector,
hoodie.filesystem.view.secondary.type -> MEMORY,
_.hoodie.allow.multi.write.on.same.instant -> false,
hoodie.datasource.write.partitionpath.field -> ,
_hoodie.optimistic.consistency.guard.enable -> true,
hoodie.datasource.hive_sync.database -> hbase,
hoodie.bloom.index.update.partition.path -> true,
hoodie.fail.on.timeline.archiving -> true,
hoodie.markers.delete.parallelism -> 100,
hoodie.filesystem.view.type -> MEMORY,
hoodie.parquet.max.file.size -> 125829120,
hoodie.datasource.write.keygenerator.class -> org.apache.hudi.keygen.NonpartitionedKeyGenerator,
hoodie.bootstrap.partitionpath.translator.class -> org.apache.hudi.client.bootstrap.translator.IdentityBootstrapPartitionPathTranslator,
hoodie.bloom.index.prune.by.ranges -> true,
hoodie.base.path -> /user/datalake/hudi/hbase/f_mid_business_card,
hoodie.index.class -> ,
hoodie.clean.automatic -> true,
hoodie.filesystem.view.remote.host -> localhost,
hoodie.compaction.lazy.block.read -> false,
hoodie.memory.writestatus.failure.fraction -> 0.1,
hoodie.metrics.graphite.port -> 4756,
hoodie.cleaner.policy -> KEEP_LATEST_COMMITS,
hoodie.logfile.max.size -> 1073741824,
hoodie.filesystem.view.spillable.compaction.mem.fraction -> 0.01,
hoodie.datasource.write.recordkey.field -> datalake_rowkey,
hoodie.avro.schema.validate -> false,
hoodie.simple.index.input.storage.level -> MEMORY_AND_DISK_SER,
hoodie.timeline.layout.version -> 1,
hoodie.consistency.check.max_checks -> 7,
hoodie.consistency.check.initial_interval_ms -> 2000,
hoodie.keep.max.commits -> 8,
hoodie.compact.inline.max.delta.commits -> 5,
hoodie.parquet.compression.ratio -> 0.1,
hoodie.memory.dfs.buffer.max.size -> 16777216,
hoodie.auto.commit -> true,
hoodie.write.commit.callback.http.api.key -> hudi_write_commit_http_callback,
hoodie.assume.date.partitioning -> false,
hoodie.filesystem.view.spillable.dir -> /tmp/view_map/,
hoodie.compaction.strategy -> org.apache.hudi.table.action.compact.strategy.LogFileSizeBasedCompactionStrategy,
hoodie.combine.before.upsert -> true,
hoodie.bloom.index.keys.per.bucket -> 10000000,
hoodie.write.commit.callback.class -> org.apache.hudi.callback.impl.HoodieWriteCommitHttpCallback,
hoodie.bloom.index.parallelism -> 0,
hoodie.cleaner.incremental.mode -> true,
hoodie.commits.archival.batch -> 5,
hoodie.datasource.hive_sync.partition_fields -> ,
hoodie.compaction.target.io -> 512000,
hoodie.table.name -> f_mid_business_card,
hoodie.bloom.index.bucketized.checking -> true,
hoodie.compaction.payload.class -> org.apache.hudi.common.model.OverwriteWithLatestAvroPayload,
hoodie.combine.before.delete -> true,
hoodie.datasource.write.precombine.field -> ts,
hoodie.filesystem.view.spillable.bootstrap.base.file.mem.fraction -> 0.05,
hoodie.metrics.jmx.host -> localhost,
hoodie.index.bloom.fpp -> 0.000000001,
hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://172.16.116.102:10000,
hoodie.bloom.index.use.treebased.filter -> true
To Reproduce
Key code is
val hudiDF = buildHudiDF(tableConfig, partitionDF, dbNameSource, tableNameSource)
val hoodieWriteConfigMap = buildHudiWriteConfig(tableConfig)
hudiDF.write.format("hudi")
.options(hoodieWriteConfigMap)
.mode(SaveMode.Append)
.save(hoodieWriteConfigMap.getOrElse(BASE_PATH_PROP, "/tmp"))
private def buildHudiDF(tableConfig: TableConfig, sourceDataFrame: DataFrame, dbNameKafka: String, tblNameKafka: String): DataFrame = {
val dataLakeColumns = tableConfig.getDataLakeColumns.asScala.toArray
val commonSchema = new StructType(dataLakeColumns.map(column => StructField(column.getColumnName, StringType)))
val castArray = dataLakeColumns.map(column => col(column.getColumnName).cast(column.getColumnType).toString())
var targetDataFrame = sourceDataFrame
.select(from_json(col("jsonData"), commonSchema).alias("row"), col("datalakeLogicalDeletion"))
.select(col("row.*"), col("datalakeLogicalDeletion"))
.selectExpr(castArray :+ (col("datalakeLogicalDeletion").alias("datalake_logical_deletion").toString()): _*)
.filter(col("dbName").isNull.or(col("dbName").equalTo(dbNameKafka)).and(col("tblName").equalTo(tblNameKafka)))
val precombineField = tableConfig.getPrecombineFieldOptKey
if (StringUtils.isEmpty(precombineField) || !commonSchema.contains(StructField(precombineField, StringType))) {
targetDataFrame = targetDataFrame
.withColumn("ts", current_timestamp())
}
val complexRecordKey = tableConfig.getRecordkeyFieldOptKey
if (StringUtils.isNotEmpty(complexRecordKey)) {
val keyArray = complexRecordKey.split(StringUtils.COMMA_SYMBOL)
var conditionChain = col(keyArray(0)).isNotNull.and(col(keyArray(0)).notEqual(StringUtils.EMPTY))
// Column.or returns a new Column, so the result must be reassigned
keyArray.tail.foreach(key => conditionChain = conditionChain.or(col(key).isNotNull.and(col(key).notEqual(StringUtils.EMPTY))))
targetDataFrame = targetDataFrame.filter(conditionChain)
}
targetDataFrame
}
private def buildHudiWriteConfig(tableConfig: TableConfig): mutable.Map[String, String] = {
val options = new mutable.HashMap[String, String]
val dbName = tableConfig.getDbName
val tblName = tableConfig.getTableName
val partition_field = tableConfig.getPartitionpathFieldOptKey
options += (TABLE_TYPE_OPT_KEY -> tableConfig.getTableTypeOptKey)
options += (OPERATION_OPT_KEY -> tableConfig.getOperationOptKey)
options += (RECORDKEY_FIELD_OPT_KEY -> tableConfig.getRecordkeyFieldOptKey)
options += (PRECOMBINE_FIELD_OPT_KEY -> tableConfig.getPrecombineFieldOptKey)
options += (HIVE_SYNC_ENABLED_OPT_KEY -> "true")
options += (INDEX_TYPE_PROP -> tableConfig.getIndexTypeProp)
options += (BLOOM_INDEX_FILTER_TYPE -> BloomFilterTypeCode.DYNAMIC_V0.name())
options += (KEYGENERATOR_CLASS_OPT_KEY -> tableConfig.getKeygeneratorClassOptKey)
options += (HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY -> tableConfig.getHivePartitionExtractorClassOptKey)
options += (HIVE_STYLE_PARTITIONING_OPT_KEY -> "true")
options += (PARTITIONPATH_FIELD_OPT_KEY -> partition_field)
options += (HIVE_PARTITION_FIELDS_OPT_KEY -> partition_field.split(",").map(pair => pair.split(":")(0)).mkString(","))
options += (HIVE_DATABASE_OPT_KEY -> dbName)
options += (HIVE_TABLE_OPT_KEY -> tblName)
options += (HIVE_USE_JDBC_OPT_KEY -> tableConfig.getHiveUseJdbcOptKey)
options += (HIVE_URL_OPT_KEY -> tableConfig.getHiveUrlOptKey)
val hoodieCompactionConfig = HoodieCompactionConfig.newBuilder()
.retainCommits(2)
.withCommitsArchivalBatchSize(5)
.archiveCommitsWith(6, 8)
.withAsyncClean(true)
.build()
val hoodieIndexConfig = HoodieIndexConfig.newBuilder()
.withBloomIndexUpdatePartitionPath(true)
.build()
val hoodieMemoryConfig = HoodieMemoryConfig.newBuilder()
.withMaxMemoryMaxSize(128 * 1024, 1024 * 1024)
.build()
val hoodieWriteConfig = HoodieWriteConfig.newBuilder()
.withParallelism(tableConfig.getShuffleParallelism.toInt, tableConfig.getShuffleParallelism.toInt)
.withIndexConfig(hoodieIndexConfig)
.withCompactionConfig(hoodieCompactionConfig)
.withMemoryConfig(hoodieMemoryConfig)
.withProps(options.asJava)
.forTable(tblName)
.withPath(StringUtils.concat(tableConfig.getBasePath, File.separator, tableConfig.getDbName, File.separator, tableConfig.getTableName))
.build()
logger.info(s"hoodieWriteConfig -> {${hoodieWriteConfig.getProps.asScala.toString()}}")
hoodieWriteConfig.getProps.asScala
}
However, when the hoodieWriteConfig is as below, it works fine — but then the log files grow too big (2 GB+) and there are too many small data files (under 1 MB), which causes an OOM error every hour.
hoodie.datasource.hive_sync.table->f_mid_order_details,
hoodie.bloom.index.update.partition.path->true,
hoodie.bloom.index.filter.type->DYNAMIC_V0,
hoodie.datasource.write.keygenerator.class->org.apache.hudi.keygen.SimpleKeyGenerator,
hoodie.datasource.hive_sync.database->hudi,
hoodie.datasource.write.table.type->MERGE_ON_READ,
hoodie.datasource.write.partitionpath.field->payment_date,
hoodie.datasource.hive_sync.partition_fields->payment_date,
hoodie.datasource.hive_sync.partition_extractor_class->org.apache.hudi.hive.MultiPartKeysValueExtractor,
hoodie.datasource.write.recordkey.field->key,
hoodie.datasource.hive_sync.enable->true,
hoodie.upsert.shuffle.parallelism->100,
hoodie.index.type->GLOBAL_BLOOM,
hoodie.datasource.hive_sync.jdbcurl->jdbc:hive2://172.16.117.73:10000,
hoodie.compact.inline->true,
hoodie.datasource.write.precombine.field->ts,
hoodie.table.name->f_mid_order_details,
hoodie.datasource.write.hive_style_partitioning->true,
hoodie.datasource.write.operation->upsert
Can you provide the full dump of the logs and .hoodie/ folder ?
The logs have been cleared, but I still have some screenshots.
When I set hoodie.auto.commit = false, the error is gone.
But how can I limit the log file size? My log file is very big (3 GB+), and the log file version is always 1.
When I change hoodie.cleaner.policy = KEEP_LATEST_FILE_VERSIONS and hoodie.cleaner.fileversions.retained = 1, the old data files do get cleaned, but how can I clean the old log files? (Setting hoodie.cleaner.policy = KEEP_LATEST_COMMITS and hoodie.cleaner.commits.retained = 1 has no effect on them.)
now my config is
hoodie.filesystem.view.incr.timeline.sync.enable -> false,
hoodie.bulkinsert.sort.mode -> GLOBAL_SORT,
hoodie.avro.schema.externalTransformation -> false,
hoodie.bootstrap.parallelism -> 1500,
hoodie.delete.shuffle.parallelism -> 1500,
hoodie.simple.index.use.caching -> true,
hoodie.bloom.index.filter.type -> DYNAMIC_V0,
hoodie.filesystem.view.remote.port -> 26754,
hoodie.datasource.write.operation -> upsert,
hoodie.cleaner.parallelism -> 200,
hoodie.global.simple.index.parallelism -> 100,
hoodie.bootstrap.mode.selector.regex -> .*,
hoodie.parquet.page.size -> 1048576,
hoodie.datasource.write.table.type -> MERGE_ON_READ,
hoodie.datasource.hive_sync.table -> f_mid_business_card,
hoodie.compaction.daybased.target.partitions -> 10,
hoodie.metrics.reporter.class -> ,
hoodie.parquet.block.size -> 125829120,
hoodie.cleaner.delete.bootstrap.base.file -> false,
hoodie.consistency.check.max_interval_ms -> 300000,
hoodie.insert.shuffle.parallelism -> 100,
hoodie.upsert.shuffle.parallelism -> 100,
hoodie.bulkinsert.shuffle.parallelism -> 1500,
hoodie.write.commit.callback.on -> false,
hoodie.cleaner.fileversions.retained -> 1,
hoodie.datasource.hive_sync.partition_extractor_class -> org.apache.hudi.hive.NonPartitionedExtractor,
hoodie.parquet.compression.codec -> gzip,
hoodie.datasource.write.hive_style_partitioning -> true,
hoodie.copyonwrite.insert.split.size -> 500000,
hoodie.optimistic.consistency.guard.sleep_time_ms -> 500,
hoodie.datasource.hive_sync.use_jdbc -> true,
hoodie.metrics.reporter.type -> GRAPHITE,
hoodie.bootstrap.index.class -> org.apache.hudi.common.bootstrap.index.HFileBootstrapIndex,
hoodie.filesystem.remote.backup.view.enable -> true,
hoodie.logfile.to.parquet.compression.ratio -> 0.35,
hoodie.filesystem.view.spillable.mem -> 104857600,
hoodie.write.status.storage.level -> MEMORY_AND_DISK_SER,
hoodie.write.commit.callback.http.timeout.seconds -> 3,
hoodie.copyonwrite.insert.auto.split -> true,
hoodie.logfile.data.block.max.size -> 268435456,
hoodie.index.type -> BLOOM,
hoodie.keep.min.commits -> 6,
hoodie.memory.spillable.map.path -> /tmp/,
hoodie.filesystem.view.rocksdb.base.path -> /tmp/hoodie_timeline_rocksdb,
hoodie.compact.inline -> false,
hoodie.clean.async -> true,
hoodie.record.size.estimation.threshold -> 1.0,
hoodie.metrics.graphite.host -> localhost,
hoodie.simple.index.update.partition.path -> false,
hoodie.bloom.index.filter.dynamic.max.entries -> 100000,
hoodie.compaction.reverse.log.read -> false,
hoodie.metrics.jmx.port -> 9889,
hoodie.writestatus.class -> org.apache.hudi.client.WriteStatus,
hoodie.datasource.hive_sync.enable -> true,
hoodie.finalize.write.parallelism -> 1500,
hoodie.rollback.parallelism -> 100,
hoodie.index.bloom.num_entries -> 60000,
hoodie.memory.merge.max.size -> 134217728,
hoodie.bootstrap.mode.selector.regex.mode -> METADATA_ONLY,
hoodie.rollback.using.markers -> false,
hoodie.copyonwrite.record.size.estimate -> 1024,
hoodie.bloom.index.input.storage.level -> MEMORY_AND_DISK_SER,
hoodie.simple.index.parallelism -> 50,
hoodie.consistency.check.enabled -> false,
hoodie.bloom.index.use.caching -> true,
hoodie.metrics.on -> false,
hoodie.memory.compaction.max.size -> 1073741824,
hoodie.parquet.small.file.limit -> 104857600,
hoodie.combine.before.insert -> false,
hoodie.cleaner.commits.retained -> 1,
hoodie.embed.timeline.server -> true,
hoodie.bootstrap.mode.selector -> org.apache.hudi.client.bootstrap.selector.MetadataOnlyBootstrapModeSelector,
hoodie.filesystem.view.secondary.type -> MEMORY,
_.hoodie.allow.multi.write.on.same.instant -> false,
hoodie.datasource.write.partitionpath.field -> ,
_hoodie.optimistic.consistency.guard.enable -> true,
hoodie.datasource.hive_sync.database -> hbase,
hoodie.bloom.index.update.partition.path -> true,
hoodie.fail.on.timeline.archiving -> true,
hoodie.markers.delete.parallelism -> 100,
hoodie.filesystem.view.type -> MEMORY,
hoodie.parquet.max.file.size -> 125829120,
hoodie.datasource.write.keygenerator.class -> org.apache.hudi.keygen.NonpartitionedKeyGenerator,
hoodie.bootstrap.partitionpath.translator.class -> org.apache.hudi.client.bootstrap.translator.IdentityBootstrapPartitionPathTranslator,
hoodie.bloom.index.prune.by.ranges -> true,
hoodie.base.path -> /user/datalake/hudi/hbase/f_mid_business_card,
hoodie.index.class -> ,
hoodie.clean.automatic -> true,
hoodie.filesystem.view.remote.host -> localhost,
hoodie.compaction.lazy.block.read -> false,
hoodie.memory.writestatus.failure.fraction -> 0.1,
hoodie.metrics.graphite.port -> 4756,
hoodie.cleaner.policy -> KEEP_LATEST_FILE_VERSIONS,
hoodie.logfile.max.size -> 1073741824,
hoodie.filesystem.view.spillable.compaction.mem.fraction -> 0.01,
hoodie.datasource.write.recordkey.field -> datalake_rowkey,
hoodie.avro.schema.validate -> false,
hoodie.simple.index.input.storage.level -> MEMORY_AND_DISK_SER,
hoodie.timeline.layout.version -> 1,
hoodie.consistency.check.max_checks -> 7,
hoodie.consistency.check.initial_interval_ms -> 2000,
hoodie.keep.max.commits -> 8,
hoodie.compact.inline.max.delta.commits -> 5,
hoodie.parquet.compression.ratio -> 0.1,
hoodie.memory.dfs.buffer.max.size -> 16777216,
hoodie.auto.commit -> false,
hoodie.write.commit.callback.http.api.key -> hudi_write_commit_http_callback,
hoodie.assume.date.partitioning -> false,
hoodie.filesystem.view.spillable.dir -> /tmp/view_map/,
hoodie.compaction.strategy -> org.apache.hudi.table.action.compact.strategy.LogFileSizeBasedCompactionStrategy,
hoodie.combine.before.upsert -> true,
hoodie.bloom.index.keys.per.bucket -> 10000000,
hoodie.write.commit.callback.class -> org.apache.hudi.callback.impl.HoodieWriteCommitHttpCallback,
hoodie.bloom.index.parallelism -> 0,
hoodie.cleaner.incremental.mode -> true,
hoodie.commits.archival.batch -> 5,
hoodie.datasource.hive_sync.partition_fields -> ,
hoodie.compaction.target.io -> 512000,
hoodie.table.name -> f_mid_business_card,
hoodie.bloom.index.bucketized.checking -> true,
hoodie.compaction.payload.class -> org.apache.hudi.common.model.OverwriteWithLatestAvroPayload,
hoodie.combine.before.delete -> true,
hoodie.datasource.write.precombine.field -> ts,
hoodie.filesystem.view.spillable.bootstrap.base.file.mem.fraction -> 0.05,
hoodie.metrics.jmx.host -> localhost,
hoodie.index.bloom.fpp -> 0.000000001,
hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://172.16.116.102:10000,
hoodie.bloom.index.use.treebased.filter -> true
@peng-xin : Can you attach the contents of hoodie.properties file here. This is most likely due to setting timeline layout version wrongly.
Can you start from a clean slate (new base path) and not pass "hoodie.timeline.layout.version" in configs and try and let us know ?
The old hoodie.properties is here:
I dropped the old data and removed the config hoodie.timeline.layout.version, but it happened again.
Now the config is:
hoodie.filesystem.view.incr.timeline.sync.enable -> false,
hoodie.bulkinsert.sort.mode -> GLOBAL_SORT,
hoodie.bootstrap.parallelism -> 1500,
hoodie.avro.schema.externalTransformation -> false,
hoodie.delete.shuffle.parallelism -> 1500,
hoodie.simple.index.use.caching -> true,
hoodie.bloom.index.filter.type -> DYNAMIC_V0,
hoodie.filesystem.view.remote.port -> 26754,
hoodie.datasource.write.operation -> upsert,
hoodie.cleaner.parallelism -> 200,
hoodie.global.simple.index.parallelism -> 100,
hoodie.bootstrap.mode.selector.regex -> .*,
hoodie.parquet.page.size -> 1048576,
hoodie.datasource.write.table.type -> MERGE_ON_READ,
hoodie.datasource.hive_sync.table -> f_mid_business_card,
hoodie.compaction.daybased.target.partitions -> 10,
hoodie.metrics.reporter.class -> ,
hoodie.parquet.block.size -> 125829120,
hoodie.cleaner.delete.bootstrap.base.file -> false,
hoodie.consistency.check.max_interval_ms -> 300000,
hoodie.insert.shuffle.parallelism -> 100,
hoodie.upsert.shuffle.parallelism -> 100,
hoodie.bulkinsert.shuffle.parallelism -> 1500,
hoodie.write.commit.callback.on -> false,
hoodie.cleaner.fileversions.retained -> 1,
hoodie.datasource.hive_sync.partition_extractor_class -> org.apache.hudi.hive.NonPartitionedExtractor,
hoodie.parquet.compression.codec -> gzip,
hoodie.datasource.write.hive_style_partitioning -> true,
hoodie.copyonwrite.insert.split.size -> 500000,
hoodie.optimistic.consistency.guard.sleep_time_ms -> 500,
hoodie.datasource.hive_sync.use_jdbc -> true,
hoodie.metrics.reporter.type -> GRAPHITE,
hoodie.bootstrap.index.class -> org.apache.hudi.common.bootstrap.index.HFileBootstrapIndex,
hoodie.logfile.to.parquet.compression.ratio -> 0.35,
hoodie.filesystem.remote.backup.view.enable -> true,
hoodie.filesystem.view.spillable.mem -> 104857600,
hoodie.write.status.storage.level -> MEMORY_AND_DISK_SER,
hoodie.write.commit.callback.http.timeout.seconds -> 3,
hoodie.copyonwrite.insert.auto.split -> true,
hoodie.logfile.data.block.max.size -> 268435456,
hoodie.index.type -> BLOOM,
hoodie.keep.min.commits -> 6,
hoodie.memory.spillable.map.path -> /tmp/,
hoodie.filesystem.view.rocksdb.base.path -> /tmp/hoodie_timeline_rocksdb,
hoodie.compact.inline -> false,
hoodie.clean.async -> true,
hoodie.record.size.estimation.threshold -> 1.0,
hoodie.simple.index.update.partition.path -> false,
hoodie.bloom.index.filter.dynamic.max.entries -> 100000,
hoodie.metrics.graphite.host -> localhost,
hoodie.compaction.reverse.log.read -> false,
hoodie.metrics.jmx.port -> 9889,
hoodie.datasource.hive_sync.enable -> true,
hoodie.writestatus.class -> org.apache.hudi.client.WriteStatus,
hoodie.finalize.write.parallelism -> 1500,
hoodie.rollback.parallelism -> 100,
hoodie.index.bloom.num_entries -> 60000,
hoodie.memory.merge.max.size -> 134217728,
hoodie.bootstrap.mode.selector.regex.mode -> METADATA_ONLY,
hoodie.rollback.using.markers -> false,
hoodie.copyonwrite.record.size.estimate -> 1024,
hoodie.bloom.index.input.storage.level -> MEMORY_AND_DISK_SER,
hoodie.simple.index.parallelism -> 50,
hoodie.consistency.check.enabled -> false,
hoodie.bloom.index.use.caching -> true,
hoodie.memory.compaction.max.size -> 1073741824,
hoodie.metrics.on -> false,
hoodie.parquet.small.file.limit -> 104857600,
hoodie.combine.before.insert -> false,
hoodie.cleaner.commits.retained -> 1,
hoodie.embed.timeline.server -> true,
hoodie.bootstrap.mode.selector -> org.apache.hudi.client.bootstrap.selector.MetadataOnlyBootstrapModeSelector,
hoodie.datasource.write.partitionpath.field -> ,
_.hoodie.allow.multi.write.on.same.instant -> false,
hoodie.filesystem.view.secondary.type -> MEMORY,
_hoodie.optimistic.consistency.guard.enable -> true,
hoodie.datasource.hive_sync.database -> hbase,
hoodie.bloom.index.update.partition.path -> true,
hoodie.fail.on.timeline.archiving -> true,
hoodie.markers.delete.parallelism -> 100,
hoodie.datasource.write.keygenerator.class -> org.apache.hudi.keygen.NonpartitionedKeyGenerator,
hoodie.parquet.max.file.size -> 125829120,
hoodie.filesystem.view.type -> MEMORY,
hoodie.bootstrap.partitionpath.translator.class -> org.apache.hudi.client.bootstrap.translator.IdentityBootstrapPartitionPathTranslator,
hoodie.bloom.index.prune.by.ranges -> true,
hoodie.base.path -> /user/datalake/hudi/hbase/f_mid_business_card,
hoodie.clean.automatic -> true,
hoodie.index.class -> ,
hoodie.compaction.lazy.block.read -> false,
hoodie.filesystem.view.remote.host -> localhost,
hoodie.memory.writestatus.failure.fraction -> 0.1,
hoodie.metrics.graphite.port -> 4756,
hoodie.cleaner.policy -> KEEP_LATEST_FILE_VERSIONS,
hoodie.logfile.max.size -> 1073741824,
hoodie.filesystem.view.spillable.compaction.mem.fraction -> 0.01,
hoodie.datasource.write.recordkey.field -> datalake_rowkey,
hoodie.simple.index.input.storage.level -> MEMORY_AND_DISK_SER,
hoodie.avro.schema.validate -> false,
hoodie.consistency.check.max_checks -> 7,
hoodie.keep.max.commits -> 8,
hoodie.consistency.check.initial_interval_ms -> 2000,
hoodie.compact.inline.max.delta.commits -> 5,
hoodie.parquet.compression.ratio -> 0.1,
hoodie.memory.dfs.buffer.max.size -> 16777216,
hoodie.auto.commit -> false,
hoodie.write.commit.callback.http.api.key -> hudi_write_commit_http_callback,
hoodie.assume.date.partitioning -> false,
hoodie.filesystem.view.spillable.dir -> /tmp/view_map/,
hoodie.compaction.strategy -> org.apache.hudi.table.action.compact.strategy.LogFileSizeBasedCompactionStrategy,
hoodie.bloom.index.keys.per.bucket -> 10000000,
hoodie.combine.before.upsert -> true,
hoodie.cleaner.incremental.mode -> true,
hoodie.bloom.index.parallelism -> 0,
hoodie.write.commit.callback.class -> org.apache.hudi.callback.impl.HoodieWriteCommitHttpCallback,
hoodie.commits.archival.batch -> 5,
hoodie.compaction.target.io -> 512000,
hoodie.datasource.hive_sync.partition_fields -> ,
hoodie.table.name -> f_mid_business_card,
hoodie.bloom.index.bucketized.checking -> true,
hoodie.compaction.payload.class -> org.apache.hudi.common.model.OverwriteWithLatestAvroPayload,
hoodie.datasource.write.precombine.field -> ts,
hoodie.combine.before.delete -> true,
hoodie.filesystem.view.spillable.bootstrap.base.file.mem.fraction -> 0.05,
hoodie.metrics.jmx.host -> localhost,
hoodie.index.bloom.fpp -> 0.000000001,
hoodie.bloom.index.use.treebased.filter -> true,
hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://172.16.116.102:10000
@peng-xin : Can you enable hoodie.compact.inline -> true and hoodie.auto.commit -> true. The log files are growing because they need to be compacted and if you set the first config, it will periodically run compactions. Cleaner will eventually remove old log files and parquet files after that.
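For reference, a minimal sketch of the compaction-related settings being suggested here (the delta-commit and retention values below are illustrative assumptions, not taken from this issue):

```properties
# Run compaction synchronously after every N delta commits (MOR tables)
hoodie.compact.inline=true
hoodie.compact.inline.max.delta.commits=5

# Let the cleaner reclaim old base and log files once compaction has run
hoodie.clean.automatic=true
hoodie.cleaner.policy=KEEP_LATEST_COMMITS
hoodie.cleaner.commits.retained=10
```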
Thank you so much.
When I set hoodie.compact.inline -> true, the log size is limited.
But hoodie.auto.commit -> true causes the same error.
@peng-xin Are you able to proceed with hoodie.compact.inline -> true and hoodie.auto.commit -> false ?
I guess you might have to fix the max file size. I see you have currently set it to a very high value. Was that intentional?
hoodie.parquet.max.file.size -> 125829120,
hoodie.logfile.max.size -> 1073741824
Can you try setting the values as follows:
parquet max file size: 120Mb
@vinothchandar : any recommendation for log file max size?
@peng-xin : few quick questions as we triage the issue.
Were you running an older version of Hudi and encountered this after an upgrade? In other words, were you able to run successfully on the older Hudi version, and with 0.7.0 there is a bug?
Is this affecting your production? Trying to gauge the severity.
Or are you trying out a POC, and this is your first time trying Hudi?
If I set "hoodie.compact.inline -> true", does this mean compaction runs inline rather than async?
Nope. Inline means sync; if not, it's async.
But I need async; if I set "hoodie.compact.inline -> true", that is not async.
@root18039532923 : Please look at https://hudi.apache.org/blog/async-compaction-deployment-model/ for running async compactions
INLINE_COMPACT_NUM_DELTA_COMMITS_PROP is a sync-compaction configuration; why does async compaction need it as well?
I know that the default value of ASYNC_COMPACT_ENABLE_OPT_KEY is true.
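To clarify the two modes, here is a sketch of the relevant keys (key names are from Hudi configs of this era — verify against your version; note that the delta-commits setting may also control how often async compaction gets scheduled, which could be why it appears in async setups):

```properties
# Inline (synchronous) compaction: runs as part of the write,
# triggered every N delta commits
hoodie.compact.inline=true
hoodie.compact.inline.max.delta.commits=5

# Async compaction (e.g. streaming / DeltaStreamer continuous mode):
# keep inline compaction off and let the async service run compactions
hoodie.compact.inline=false
hoodie.datasource.compaction.async.enable=true
```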
@peng-xin Since we haven't heard from you in a while and this issue has not been reported by anyone else, I'm assuming this to be a transient issue with some of your settings. Let me know if you need further help.
@root18039532923 Please feel free to re-open if you are still confused about how to use async compaction.
|
gharchive/issue
| 2021-01-14T12:43:02 |
2025-04-01T04:55:58.865837
|
{
"authors": [
"bvaradar",
"n3nash",
"nsivabalan",
"peng-xin",
"root18039532923"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/issues/2448",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1252431339
|
NoClassDefFoundError: org/apache/parquet/schema/LogicalTypeAnnotation$LogicalTypeAnnotationVisitor Caused by: ClassNotFoundException: org.apache.parquet.schema.LogicalTypeAnnotation$LogicalTypeAnnotationVisitor
Hi Team,
We are getting the below issue in a Hudi job while executing through Databricks.
ERROR details:-
NoClassDefFoundError: org/apache/parquet/schema/LogicalTypeAnnotation$LogicalTypeAnnotationVisitor Caused by: ClassNotFoundException: org.apache.parquet.schema.LogicalTypeAnnotation$LogicalTypeAnnotationVisitor
We are running the Hudi code through Databricks.
Environment Description
Hudi version : 0.11.0
Spark version : 3.1.2
Scala Version :2.12
Storage (HDFS/S3/GCS..) : AZURE Blob Storage
We are using an Avro source to read the Kafka data.
Databricks runtime version: 9.1 LTS
With the same configuration the job succeeded in the non-prod environment, but we get this error in the prod environment.
Please find the attached details and log file, and let me know the resolution steps.
Full log -
log4j-active (43).txt
Can you share the complete spark-submit command with args? We need to see what jars are used.
With same configurations job succeeded in non prod environment getting error in prod environment
So this is related to a discrepancy between your environments. Are you able to check from your side what jars are not present in your prod env?
Hi @xushiyan ,
I verified the jars in both non-prod and prod; they are the same in both environments.
Please find the attached screenshot of Databrick job details.
Parameters that we used for the job :-
["--table-type","COPY_ON_WRITE","--source-ordering-field","CDC_TS","--source-limit","1000","--source-class","com.optum.df.hudi.sources.DFAvroKafkaSource","--target-base-path","/mnt/ulp/dataassets-lake/metrics/","--target-table","metrics","--schemaprovider-class","org.apache.hudi.utilities.schema.SchemaRegistryProvider","--props","/mnt/ulp/artifacts/properties/metrics.properties"]
The spark-avro package you used is a mismatch; it's for Spark 3.2.1 but your main Spark is 3.1. Can you refer to the 0.11 release note migration guide? spark-avro is not required. You also don't need the spark3-bundle, as you already use the utilities-bundle.
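For reference, a DeltaStreamer launch on Hudi 0.11 / Spark 3.1 typically needs only the utilities bundle as the application jar. A sketch (jar path and the subset of args shown are illustrative assumptions based on this issue, not a verified command):

```sh
spark-submit \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
  hudi-utilities-bundle_2.12-0.11.0.jar \
  --table-type COPY_ON_WRITE \
  --source-ordering-field CDC_TS \
  --target-base-path /mnt/ulp/dataassets-lake/metrics/ \
  --target-table metrics \
  --props /mnt/ulp/artifacts/properties/metrics.properties
```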
HI @xushiyan ,
I have made the changes you suggested: the spark3-bundle and spark-avro dependencies are removed. I submitted the job and it's now giving another error.
ERROR details:-
HoodieException: Commit 20220530133502327 failed and rolled-back !
at org.apache.hudi.utilities.deltastreamer.DeltaSync.writeToSink(DeltaSync.java:649)
Please find the attached screen shot and log file .
log4j-active (44).txt
The new exception is application-level error, specific to your data and scenario, not related to dependency. Please examine your executor logs and data for details. Closing this as the original problem about the dependency was resolved.
|
gharchive/issue
| 2022-05-30T09:10:11 |
2025-04-01T04:55:58.877687
|
{
"authors": [
"nleena123",
"xushiyan"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/issues/5714",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1329432621
|
[SUPPORT] Spark multi writer failed ! ! !
Describe the problem you faced
When I use config 1, exception A occurred in one of the two Spark jobs writing the same table;
when I use config 2, exception B occurred in one of the two Spark jobs writing the same table.
There was no clustering config.
To Reproduce
config 1:
hoodie.write.concurrency.mode=optimistic_concurrency_control
hoodie.cleaner.policy.failed.writes=LAZY
hoodie.write.lock.provider=org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider
hoodie.write.lock.zookeeper.url=10.1.2.12
hoodie.write.lock.zookeeper.port=2181
hoodie.write.lock.zookeeper.lock_key=ss_bucket_lock
hoodie.write.lock.zookeeper.base_path=/ss_bucket_lock
config 2 :
hoodie.write.concurrency.mode=optimistic_concurrency_control
hoodie.cleaner.policy.failed.writes=LAZY
hoodie.write.lock.provider=org.apache.hudi.hive.HiveMetastoreBasedLockProvider
hoodie.write.lock.hivemetastore.database=default
hoodie.write.lock.hivemetastore.table=occ_lock_ss_bucket_dsj
hoodie.write.lock.hivemetastore.uris=thrift://host-10-4-6-15:9083
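For reference, here is a minimal sketch of how the OCC options from config 1 might be assembled before being passed to a Hudi write (for example via `.options(**zk_opts)` on a DataFrame writer). The option keys and values are the ones shown above; everything else is a placeholder.

```python
# Sketch: assemble the multi-writer (OCC) options from config 1 into a dict.
# Both writers must set optimistic concurrency mode and LAZY failed-write
# cleaning, or one writer will eagerly roll back the other's inflight commits.
def occ_options(lock_provider, lock_opts):
    opts = {
        "hoodie.write.concurrency.mode": "optimistic_concurrency_control",
        "hoodie.cleaner.policy.failed.writes": "LAZY",
        "hoodie.write.lock.provider": lock_provider,
    }
    opts.update(lock_opts)
    return opts

zk_opts = occ_options(
    "org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider",
    {
        "hoodie.write.lock.zookeeper.url": "10.1.2.12",
        "hoodie.write.lock.zookeeper.port": "2181",
        "hoodie.write.lock.zookeeper.lock_key": "ss_bucket_lock",
        "hoodie.write.lock.zookeeper.base_path": "/ss_bucket_lock",
    },
)
```

Note that both streaming jobs must point at the same lock key and base path; otherwise each job locks independently and OCC never sees the other writer.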
Expected behavior
Two Spark jobs should be able to write the same Hudi table at the same time without failure!
Environment Description
Hudi version :
Hudi-0.11.1
Spark version :
spark-3.1.x
Hive version :
None
Hadoop version :
Hadoop-3.3.0
Storage (HDFS/S3/GCS..) :
Hdfs
Running on Docker? (yes/no) :
No
**Stacktrace, Exception A**
22/08/05 15:01:15 INFO CommitUtils: Creating metadata for INSERT numWriteStats:1numReplaceFileIds:0
22/08/05 15:01:15 INFO TransactionManager: Transaction starting for Option{val=[==>20220805150110559__commit__INFLIGHT]} with latest completed transaction instant Option{val=[20220805150050505__commit__COMPLETED]}
22/08/05 15:01:15 INFO ZookeeperBasedLockProvider: ACQUIRING lock atZkBasePath = /ss_bucket_lock, lock key = ss_bucket_lock
22/08/05 15:01:18 INFO ZookeeperBasedLockProvider: ACQUIRED lock atZkBasePath = /ss_bucket_lock, lock key = ss_bucket_lock
22/08/05 15:01:18 INFO TransactionManager: Transaction started for Option{val=[==>20220805150110559__commit__INFLIGHT]} with latest completed transaction instant Option{val=[20220805150050505__commit__COMPLETED]}
22/08/05 15:01:18 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from /tmp/hudi/ss_bucket_dsj
22/08/05 15:01:18 INFO HoodieTableConfig: Loading table properties from /tmp/hudi/ss_bucket_dsj/.hoodie/hoodie.properties
22/08/05 15:01:18 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from /tmp/hudi/ss_bucket_dsj
22/08/05 15:01:18 INFO HoodieTableMetaClient: Loading Active commit timeline for /tmp/hudi/ss_bucket_dsj
22/08/05 15:01:18 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20220805150110559__commit__INFLIGHT]}
22/08/05 15:01:18 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY
22/08/05 15:01:18 INFO FileSystemViewManager: Creating in-memory based Table View
22/08/05 15:01:18 INFO HoodieBucketIndex: use bucket index, numBuckets=1
22/08/05 15:01:18 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20220805150110559__commit__INFLIGHT]}
22/08/05 15:01:18 INFO SimpleConcurrentFileWritesConflictResolutionStrategy: Found conflicting writes between first operation = {actionType=commit, instantTime=20220805150110559, actionState=INFLIGHT'}, second operation = {actionType=commit, instantTime=20220805150100333, actionState=COMPLETED'} , intersecting file ids [00000000-4447-4513-8068-326e01720c62-0]
22/08/05 15:01:18 INFO TransactionUtils: Conflict encountered between current instant = {actionType=commit, instantTime=20220805150110559, actionState=INFLIGHT'} and instant = {actionType=commit, instantTime=20220805150100333, actionState=COMPLETED'}, attempting to resolve it...
22/08/05 15:01:18 INFO TransactionManager: Transaction ending with transaction owner Option{val=[==>20220805150110559__commit__INFLIGHT]}
22/08/05 15:01:18 INFO ZookeeperBasedLockProvider: RELEASING lock atZkBasePath = /ss_bucket_lock, lock key = ss_bucket_lock
22/08/05 15:01:18 INFO ZookeeperBasedLockProvider: RELEASED lock atZkBasePath = /ss_bucket_lock, lock key = ss_bucket_lock
22/08/05 15:01:18 INFO TransactionManager: Transaction ended with transaction owner Option{val=[==>20220805150110559__commit__INFLIGHT]}
22/08/05 15:01:18 INFO MapPartitionsRDD: Removing RDD 167 from persistence list
22/08/05 15:01:18 INFO MapPartitionsRDD: Removing RDD 159 from persistence list
22/08/05 15:01:18 INFO BlockManager: Removing RDD 167
22/08/05 15:01:18 ERROR HoodieStreamingSink: Micro batch id=32 threw following exception:
org.apache.hudi.exception.HoodieWriteConflictException: java.util.ConcurrentModificationException: Cannot resolve conflicts for overlapping writes
at org.apache.hudi.client.transaction.SimpleConcurrentFileWritesConflictResolutionStrategy.resolveConflict(SimpleConcurrentFileWritesConflictResolutionStrategy.java:102)
at org.apache.hudi.client.utils.TransactionUtils.lambda$resolveWriteConflictIfAny$0(TransactionUtils.java:85)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at org.apache.hudi.client.utils.TransactionUtils.resolveWriteConflictIfAny(TransactionUtils.java:79)
at org.apache.hudi.client.SparkRDDWriteClient.preCommit(SparkRDDWriteClient.java:473)
at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:233)
at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:122)
at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:651)
at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:315)
at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$2(HoodieStreamingSink.scala:91)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$1(HoodieStreamingSink.scala:90)
at org.apache.hudi.HoodieStreamingSink.retry(HoodieStreamingSink.scala:166)
at org.apache.hudi.HoodieStreamingSink.addBatch(HoodieStreamingSink.scala:89)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$16(MicroBatchExecution.scala:586)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$15(MicroBatchExecution.scala:584)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:584)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:226)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:194)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:188)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:333)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:244)
Caused by: java.util.ConcurrentModificationException: Cannot resolve conflicts for overlapping writes
... 38 more
22/08/05 15:01:18 INFO HoodieStreamingSink: Ignore the exception and move on streaming as per hoodie.datasource.write.streaming.ignore.failed.batch configuration
22/08/05 15:01:18 INFO HoodieStreamingSink: Micro batch id=32 succeeded
22/08/05 15:01:18 INFO BlockManager: Removing RDD 159
22/08/05 15:01:18 INFO CheckpointFileManager: Writing atomically to hdfs://host-10-4-6-18:8020/tmp/hudi/ckp1/commits/32 using temp file hdfs://host-10-4-6-18:8020/tmp/hudi/ckp1/commits/.32.a829b364-c1b9-4b4b-8fae-f7d866ed76e5.tmp
22/08/05 15:01:18 INFO CheckpointFileManager: Renamed temp file hdfs://host-10-4-6-18:8020/tmp/hudi/ckp1/commits/.32.a829b364-c1b9-4b4b-8fae-f7d866ed76e5.tmp to hdfs://host-10-4-6-18:8020/tmp/hudi/ckp1/commits/32
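The key line in the log above is the `SimpleConcurrentFileWritesConflictResolutionStrategy` message: both commits wrote the same file ID (`00000000-4447-4513-8068-326e01720c62-0`). With a bucket index and `numBuckets=1`, every write lands in that single file group, so any two overlapping commits will always intersect and OCC will always abort one of them. A simplified model of that check (not the actual Hudi implementation):

```python
# Simplified model of OCC conflict detection: compare the file IDs written by
# the inflight commit against those written by commits that completed after it
# started; any intersection means the inflight commit must be rolled back.
def has_write_conflict(inflight_file_ids, completed_file_ids):
    return bool(set(inflight_file_ids) & set(completed_file_ids))

# With a bucket index and a single bucket, both writers always target the
# same file group, so a conflict between overlapping commits is guaranteed:
bucket_file_id = "00000000-4447-4513-8068-326e01720c62-0"
conflict = has_write_conflict([bucket_file_id], [bucket_file_id])
```

So exception A looks like the expected OCC outcome for this table layout rather than a locking bug; keeping the two writers on disjoint file groups (for example, more buckets with non-overlapping keys) is what would let both commits succeed.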
**Stacktrace for Exception B**
22/08/05 14:35:08 INFO TransactionManager: Transaction starting for Option{val=[==>20220805143450492__commit__INFLIGHT]} with latest completed transaction instant Option{val=[20220805143430140__commit__COMPLETED]}
22/08/05 14:35:08 INFO HiveMetastoreBasedLockProvider: ACQUIRING lock at database default and table occ_lock_ss_bucket_dsj
22/08/05 14:35:08 INFO LockManager: Retrying to acquire lock...
22/08/05 14:35:18 INFO HiveMetastoreBasedLockProvider: ACQUIRING lock at database default and table occ_lock_ss_bucket_dsj
22/08/05 14:35:18 INFO MapPartitionsRDD: Removing RDD 6 from persistence list
22/08/05 14:35:18 INFO BlockManager: Removing RDD 6
22/08/05 14:35:18 INFO MapPartitionsRDD: Removing RDD 14 from persistence list
22/08/05 14:35:18 INFO BlockManager: Removing RDD 14
22/08/05 14:35:18 ERROR HoodieStreamingSink: Micro batch id=133 threw following exception:
java.lang.IllegalArgumentException: ALREADY_ACQUIRED
at org.apache.hudi.common.util.ValidationUtils.checkArgument(ValidationUtils.java:40)
at org.apache.hudi.hive.HiveMetastoreBasedLockProvider.acquireLock(HiveMetastoreBasedLockProvider.java:136)
at org.apache.hudi.hive.HiveMetastoreBasedLockProvider.tryLock(HiveMetastoreBasedLockProvider.java:112)
at org.apache.hudi.client.transaction.lock.LockManager.lock(LockManager.java:67)
at org.apache.hudi.client.transaction.TransactionManager.beginTransaction(TransactionManager.java:53)
at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:230)
at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:122)
at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:651)
at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:315)
at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$2(HoodieStreamingSink.scala:91)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$1(HoodieStreamingSink.scala:90)
at org.apache.hudi.HoodieStreamingSink.retry(HoodieStreamingSink.scala:166)
at org.apache.hudi.HoodieStreamingSink.addBatch(HoodieStreamingSink.scala:89)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$16(MicroBatchExecution.scala:586)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$15(MicroBatchExecution.scala:584)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:584)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:226)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:194)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:188)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:333)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:244)
22/08/05 14:35:18 INFO HoodieStreamingSink: Ignore the exception and move on streaming as per hoodie.datasource.write.streaming.ignore.failed.batch configuration
22/08/05 14:35:18 INFO HoodieStreamingSink: Micro batch id=133 succeeded
22/08/05 14:35:18 ERROR MicroBatchExecution: Query [id = d0ebeccf-4600-4aba-b097-420cd8356448, runId = 233f22f7-66d9-47a8-9093-0381f157428a] terminated with error
java.lang.AssertionError: assertion failed: Concurrent update to the commit log. Multiple streaming jobs detected for 133
at scala.Predef$.assert(Predef.scala:223)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$17(MicroBatchExecution.scala:602)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.withProgressLocked(MicroBatchExecution.scala:613)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:598)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:226)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:194)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:188)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:333)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:244)
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: assertion failed: Concurrent update to the commit log. Multiple streaming jobs detected for 133
=== Streaming Query ===
Identifier: [id = d0ebeccf-4600-4aba-b097-420cd8356448, runId = 233f22f7-66d9-47a8-9093-0381f157428a]
Current Committed Offsets: {KafkaV2[Subscribe[user_hive4]]: {"user_hive4":{"2":49415,"1":47279,"0":49423}}}
Current Available Offsets: {KafkaV2[Subscribe[user_hive4]]: {"user_hive4":{"2":49752,"1":47604,"0":49760}}}
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
Project [id#23, cast(score#24 as double) AS score#31, sex#25, to_utc_timestamp(cast(ts#26 as timestamp), Asia/Shanghai) AS ts#32]
+- SubqueryAlias s
+- Project [split(kafka_value#21, ,, -1)[0] AS id#23, split(kafka_value#21, ,, -1)[1] AS score#24, split(kafka_value#21, ,, -1)[2] AS sex#25, split(kafka_value#21, ,, -1)[3] AS ts#26]
+- SubqueryAlias t
+- Project [cast(value#8 as string) AS kafka_value#21]
+- StreamingDataSourceV2Relation [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13], org.apache.spark.sql.kafka010.KafkaSourceProvider$KafkaScan@6a155037, KafkaV2[Subscribe[user_hive4]]
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:354)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:244)
Caused by: java.lang.AssertionError: assertion failed: Concurrent update to the commit log. Multiple streaming jobs detected for 133
at scala.Predef$.assert(Predef.scala:223)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$17(MicroBatchExecution.scala:602)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.withProgressLocked(MicroBatchExecution.scala:613)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:598)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:226)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:194)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:188)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:333)
... 1 more
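Exception B is thrown by `ValidationUtils.checkArgument` inside `HiveMetastoreBasedLockProvider.acquireLock`: on the retry ten seconds later, the provider finds a lock already registered (for example its own lock from the previous attempt that was never released) and fails with `ALREADY_ACQUIRED` instead of treating the lock as held. A simplified model of the difference (not the Hudi code):

```python
# Simplified model of the ALREADY_ACQUIRED failure: a provider that raises
# when it sees an existing lock entry, versus one that tolerates re-acquiring
# a lock it already holds. Not the actual Hudi implementation.
class AlreadyAcquiredError(RuntimeError):
    pass

class StrictLockProvider:
    """Raises if try_lock() is called while the lock is already held."""
    def __init__(self):
        self.held = False

    def try_lock(self):
        if self.held:
            # Mirrors ValidationUtils.checkArgument(...) -> "ALREADY_ACQUIRED"
            raise AlreadyAcquiredError("ALREADY_ACQUIRED")
        self.held = True
        return True

class TolerantLockProvider(StrictLockProvider):
    """Treats an already-held lock as a successful acquisition."""
    def try_lock(self):
        if self.held:
            return True
        self.held = True
        return True
```

In the log, `LockManager` retried after ten seconds and the second attempt hit the strict path, which killed the micro-batch.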
I think the second issue was fixed in the 0.12 branch, can you try it?
I also met the second issue, I will create a PR to fix it
I also met the second issue, this should already be fixed in master branch , you can try it out
@fengjian428 Thank you for your reply, I'll try the master branch.
The second issue was indeed fixed, thank you!
@eric9204 For the first failure, can you share the full write configs? Was there any pending/inflight compaction and clustering? Can you share the .hoodie directory under the base path of the table? I think the lock provider is running fine but conflict resolution failed. The timeline will help us build the sequence of events and see why the resolution failed.
2022-08-08 13:33 /tmp/hudi/ss_bucket_dsj/.hoodie/.aux
2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/.heartbeat
2022-08-08 13:33 /tmp/hudi/ss_bucket_dsj/.hoodie/.schema
2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/.temp
1.5 K 2022-08-08 13:42 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134200577.rollback
0 2022-08-08 13:42 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134200577.rollback.inflight
1.3 K 2022-08-08 13:42 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134200577.rollback.requested
1.5 K 2022-08-08 13:42 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134211686.rollback
0 2022-08-08 13:42 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134211686.rollback.inflight
1.3 K 2022-08-08 13:42 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134211686.rollback.requested
1.5 K 2022-08-08 13:42 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134216453.rollback
0 2022-08-08 13:42 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134216453.rollback.inflight
1.3 K 2022-08-08 13:42 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134216453.rollback.requested
1.5 K 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134916968.rollback
0 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134916968.rollback.inflight
1.3 K 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134916968.rollback.requested
1.5 K 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134917661.rollback
0 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134917661.rollback.inflight
1.3 K 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134917661.rollback.requested
1.5 K 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134918933.rollback
0 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134918933.rollback.inflight
1.3 K 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134918933.rollback.requested
1.5 K 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134919571.rollback
0 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134919571.rollback.inflight
1.3 K 2022-08-08 13:49 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134919571.rollback.requested
1.5 K 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134920158.rollback
0 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134920158.rollback.inflight
1.3 K 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134920158.rollback.requested
1.5 K 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134921135.rollback
0 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134921135.rollback.inflight
1.3 K 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808134921135.rollback.requested
1.5 K 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135002565.rollback
0 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135002565.rollback.inflight
1.3 K 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135002565.rollback.requested
1.5 K 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135003386.rollback
0 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135003386.rollback.inflight
1.3 K 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135003386.rollback.requested
1.5 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135013265.rollback
0 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135013265.rollback.inflight
1.3 K 2022-08-08 13:50 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135013265.rollback.requested
1.5 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135022179.rollback
0 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135022179.rollback.inflight
1.3 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135022179.rollback.requested
1.5 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135033233.rollback
0 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135033233.rollback.inflight
1.3 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135033233.rollback.requested
1.5 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135036166.rollback
0 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135036166.rollback.inflight
1.3 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135036166.rollback.requested
1.5 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135037732.rollback
0 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135037732.rollback.inflight
1.3 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135037732.rollback.requested
1.5 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135043606.rollback
0 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135043606.rollback.inflight
1.3 K 2022-08-08 13:51 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135043606.rollback.requested
1.5 K 2022-08-08 13:52 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135212947.rollback
0 2022-08-08 13:52 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135212947.rollback.inflight
1.3 K 2022-08-08 13:52 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135212947.rollback.requested
1.5 K 2022-08-08 13:52 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135213720.rollback
0 2022-08-08 13:52 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135213720.rollback.inflight
1.3 K 2022-08-08 13:52 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135213720.rollback.requested
1.5 K 2022-08-08 13:52 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135215256.rollback
0 2022-08-08 13:52 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135215256.rollback.inflight
1.3 K 2022-08-08 13:52 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135215256.rollback.requested
1.5 K 2022-08-08 13:54 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135330116.rollback
0 2022-08-08 13:54 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135330116.rollback.inflight
1.3 K 2022-08-08 13:54 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135330116.rollback.requested
1.5 K 2022-08-08 13:54 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135406717.rollback
0 2022-08-08 13:54 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135406717.rollback.inflight
1.3 K 2022-08-08 13:54 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135406717.rollback.requested
2.5 K 2022-08-08 13:55 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135436042.commit
0 2022-08-08 13:55 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135436042.commit.requested
3.0 K 2022-08-08 13:55 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135436042.inflight
1.5 K 2022-08-08 13:55 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135447952.clean
1.6 K 2022-08-08 13:55 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135447952.clean.inflight
1.6 K 2022-08-08 13:55 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135447952.clean.requested
2.5 K 2022-08-08 13:55 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135459430.commit
0 2022-08-08 13:55 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135459430.commit.requested
3.0 K 2022-08-08 13:55 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135459430.inflight
1.5 K 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135509582.clean
1.6 K 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135509582.clean.inflight
1.6 K 2022-08-08 13:55 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135509582.clean.requested
2.5 K 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135540714.commit
0 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135540714.commit.requested
3.0 K 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135540714.inflight
1.5 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135552939.clean
1.6 K 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135552939.clean.inflight
1.6 K 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135552939.clean.requested
1.5 K 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135552965.rollback
0 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135552965.rollback.inflight
1.3 K 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135552965.rollback.requested
1.5 K 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135624549.rollback
0 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135624549.rollback.inflight
1.3 K 2022-08-08 13:56 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135624549.rollback.requested
2.5 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135634238.commit
0 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135634238.commit.requested
3.0 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135634238.inflight
1.5 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135645734.clean
1.6 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135645734.clean.inflight
1.6 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135645734.clean.requested
1.5 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135645758.rollback
0 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135645758.rollback.inflight
1.3 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135645758.rollback.requested
0 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135645810.commit.requested
3.0 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135645810.inflight
1.5 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135646329.rollback
0 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135646329.rollback.inflight
1.3 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135646329.rollback.requested
2.5 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135659559.commit
0 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135659559.commit.requested
3.0 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135659559.inflight
1.5 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135711854.clean
1.6 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135711854.clean.inflight
1.6 K 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135711854.clean.requested
2.5 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135719322.commit
0 2022-08-08 13:57 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135719322.commit.requested
3.0 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135719322.inflight
1.5 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135730874.clean
1.6 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135730874.clean.inflight
1.6 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135730874.clean.requested
2.5 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135731862.commit
0 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135731862.commit.requested
3.0 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135731862.inflight
0 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135734515.commit.requested
3.0 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135734515.inflight
1.5 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135736528.clean
1.6 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135736528.clean.inflight
1.6 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135736528.clean.requested
2.5 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135746974.commit
0 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135746974.commit.requested
3.0 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135746974.inflight
1.5 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135751508.clean
1.6 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135751508.clean.inflight
1.6 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135751508.clean.requested
0 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135753793.commit.requested
3.0 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135753793.inflight
2.5 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135754854.commit
0 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135754854.commit.requested
3.0 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135754854.inflight
1.5 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135802038.clean
1.6 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135802038.clean.inflight
1.6 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135802038.clean.requested
2.5 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135815814.commit
0 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135815814.commit.requested
3.0 K 2022-08-08 13:58 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135815814.inflight
1.5 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135819555.clean
1.6 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135819555.clean.inflight
1.6 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135819555.clean.requested
2.5 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135849144.commit
0 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135849144.commit.requested
3.0 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135849144.inflight
1.5 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135906862.clean
1.6 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135906862.clean.inflight
1.6 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135906862.clean.requested
1.5 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135906879.rollback
0 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135906879.rollback.inflight
1.3 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135906879.rollback.requested
1.5 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135907610.rollback
0 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135907610.rollback.inflight
1.3 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135907610.rollback.requested
2.5 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135911957.commit
0 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135911957.commit.requested
3.0 K 2022-08-08 13:59 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135911957.inflight
1.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135920430.clean
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135920430.clean.inflight
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135920430.clean.requested
2.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135925137.commit
0 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135925137.commit.requested
3.0 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135925137.inflight
1.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135930305.clean
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135930305.clean.inflight
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135930305.clean.requested
2.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135935514.commit
0 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135935514.commit.requested
3.0 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135935514.inflight
1.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135945906.clean
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135945906.clean.inflight
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808135945906.clean.requested
2.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140000922.commit
0 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140000922.commit.requested
3.0 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140000922.inflight
1.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140004948.clean
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140004948.clean.inflight
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140004948.clean.requested
2.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140006350.commit
0 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140006350.commit.requested
3.0 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140006350.inflight
1.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140010972.clean
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140010972.clean.inflight
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140010972.clean.requested
2.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140013016.commit
0 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140013016.commit.requested
3.0 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140013016.inflight
1.5 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140018094.clean
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140018094.clean.inflight
1.6 K 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140018094.clean.requested
2.5 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140020180.commit
0 2022-08-08 14:00 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140020180.commit.requested
3.0 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140020180.inflight
1.5 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140028764.clean
1.6 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140028764.clean.inflight
1.6 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140028764.clean.requested
2.5 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140031809.commit
0 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140031809.commit.requested
3.0 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140031809.inflight
1.5 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140035523.clean
1.6 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140035523.clean.inflight
1.6 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140035523.clean.requested
2.5 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140042109.commit
0 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140042109.commit.requested
3.0 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140042109.inflight
1.5 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140046546.clean
1.6 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140046546.clean.inflight
1.6 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140046546.clean.requested
2.5 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140050105.commit
0 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140050105.commit.requested
3.0 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140050105.inflight
1.5 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140055595.clean
1.6 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140055595.clean.inflight
1.6 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140055595.clean.requested
2.5 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140109135.commit
0 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140109135.commit.requested
3.0 K 2022-08-08 14:01 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140109135.inflight
1.5 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140118730.clean
1.6 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140118730.clean.inflight
1.6 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140118730.clean.requested
2.5 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140132300.commit
0 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140132300.commit.requested
3.0 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140132300.inflight
1.5 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140144323.clean
1.6 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140144323.clean.inflight
1.6 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140144323.clean.requested
2.5 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140204915.commit
0 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140204915.commit.requested
3.0 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140204915.inflight
1.5 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140210226.clean
1.6 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140210226.clean.inflight
1.6 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140210226.clean.requested
0 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140215991.commit.requested
3.0 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140215991.inflight
2.5 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140223090.commit
0 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140223090.commit.requested
3.0 K 2022-08-08 14:02 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140223090.inflight
1.5 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140227701.clean
1.6 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140227701.clean.inflight
1.6 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140227701.clean.requested
2.5 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140232295.commit
0 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140232295.commit.requested
3.0 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140232295.inflight
0 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140238784.commit.requested
3.0 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140238784.inflight
1.5 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140246252.rollback
0 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140246252.rollback.inflight
1.3 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140246252.rollback.requested
1.5 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140247998.rollback
0 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140247998.rollback.inflight
1.3 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140247998.rollback.requested
1.5 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140253101.rollback
0 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140253101.rollback.inflight
1.3 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140253101.rollback.requested
1.5 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140255842.rollback
0 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140255842.rollback.inflight
1.3 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140255842.rollback.requested
1.5 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140303187.rollback
0 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140303187.rollback.inflight
1.3 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140303187.rollback.requested
1.5 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140305407.rollback
0 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140305407.rollback.inflight
1.3 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140305407.rollback.requested
1.5 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140306513.rollback
0 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140306513.rollback.inflight
1.3 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140306513.rollback.requested
0 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140308986.rollback.inflight
1.3 K 2022-08-08 14:03 /tmp/hudi/ss_bucket_dsj/.hoodie/20220808140308986.rollback.requested
0 2022-08-08 13:41 /tmp/hudi/ss_bucket_dsj/.hoodie/archived
781 2022-08-08 13:33 /tmp/hudi/ss_bucket_dsj/.hoodie/hoodie.properties
@codope The Hudi table type is COW, and there was no compaction or clustering config.
@eric9204: just so you are aware of how conflict resolution happens within Hudi.
If two writers are writing to the same file, Hudi will fail one of them and let the other succeed. But if two writers are writing to two different files, both should succeed (assuming both are concurrent writers).
For example, let's say writer1 ingests 100 records in commit C1.
Then at time t10, writer1 updates the same 100 records and ingests to Hudi, and writer2 also updates the same 100 records and ingests to Hudi. Hudi may not be in a position to choose a winner, so it will succeed one and fail the other. In these situations, you will see ConflictResolutionFailed error messages.
But let's say writer1 ingests 100 records in commit C1 and writer2 ingests a different set of 100 records to a different partition in commit C2.
At time t10, writer1 updates the records from C1 and ingests to Hudi, while writer2 updates the records from C2 and ingests to Hudi. Both writes will succeed even if they happen concurrently, because there is no overlap.
Hope that clarifies why one of the writers failed in your case. You can get some additional details here.
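The file-level conflict check described above can be sketched in plain Python. This is illustrative only — the function and field names here are not Hudi's actual API:

```python
# Sketch of Hudi-style optimistic concurrency control: two concurrent
# commits conflict only if they both touched at least one common file group.
# Names below are hypothetical, chosen for illustration.

def files_touched(commit_metadata):
    """Return the set of file-group ids a commit wrote to."""
    return {write["file_id"] for write in commit_metadata["writes"]}

def has_conflict(commit_a, commit_b):
    """Two concurrent commits conflict iff their touched file sets overlap."""
    return bool(files_touched(commit_a) & files_touched(commit_b))

# writer1 and writer2 update the SAME records -> same file groups -> conflict
c1 = {"writes": [{"file_id": "fg-1"}, {"file_id": "fg-2"}]}
c2 = {"writes": [{"file_id": "fg-2"}]}

# writer2 writes to a different partition -> disjoint file groups -> both succeed
c3 = {"writes": [{"file_id": "fg-9"}]}
```

With overlapping file groups one writer is failed (`has_conflict` is true); with disjoint file groups both commits go through.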
@nsivabalan I know what you mean, but there is another problem.
If one of the writers fails, the Hoodie commit transaction fails on the Hudi side, but the batch is reported as successful on the Spark side. So the next micro-batch will consume records starting from the offset of the last successful commit, and the data of a micro-batch that failed on the Hoodie side may actually be lost.
22/08/05 15:01:18 INFO HoodieStreamingSink: Ignore the exception and move on streaming as per hoodie.datasource.write.streaming.ignore.failed.batch configuration
22/08/05 15:01:18 INFO HoodieStreamingSink: Micro batch id=32 succeeded
22/08/05 15:01:18 INFO BlockManager: Removing RDD 159
22/08/05 15:01:18 INFO CheckpointFileManager: Writing atomically to hdfs://host-10-4-6-18:8020/tmp/hudi/ckp1/commits/32 using temp file hdfs://host-10-4-6-18:8020/tmp/hudi/ckp1/commits/.32.a829b364-c1b9-4b4b-8fae-f7d866ed76e5.tmp
22/08/05 15:01:18 INFO CheckpointFileManager: Renamed temp file hdfs://host-10-4-6-18:8020/tmp/hudi/ckp1/commits/.32.a829b364-c1b9-4b4b-8fae-f7d866ed76e5.tmp to hdfs://host-10-4-6-18:8020/tmp/hudi/ckp1/commits/32
Could the community add a retry strategy to make it succeed instead of just discarding it?
Yes, we have a config which you can flip to ensure Spark streaming fails if the Hudi write fails:
hoodie.datasource.write.streaming.ignore.failed.batch
The default value for this config was set to true until two weeks ago; we have flipped the default to false. If you set this config value to false, at least you won't see any data loss. If there are any valid errors, your pipeline should fail.
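The data-loss mechanics behind this flag can be simulated in a few lines of plain Python. This is an illustration of the checkpoint/offset behavior, not Hudi or Spark code:

```python
# Simulation of streaming sink semantics: when a failed Hudi commit is
# ignored, Spark still advances the checkpoint, so the failed batch's
# records are silently skipped on the next micro-batch.

def run_stream(batches, hudi_commit_ok, ignore_failed_batch):
    committed, checkpoint = [], 0
    for batch_id, records in enumerate(batches):
        if hudi_commit_ok(batch_id):
            committed.extend(records)
            checkpoint = batch_id + 1
        elif ignore_failed_batch:
            checkpoint = batch_id + 1   # offset advances; records are gone
        else:
            break                       # pipeline fails; batch can be retried
    return committed, checkpoint

batches = [["a"], ["b"], ["c"]]
fails_batch_1 = lambda batch_id: batch_id != 1  # Hudi commit fails for batch 1
```

With `ignore_failed_batch=True` the stream "succeeds" but batch 1's records never reach the table; with `False` the pipeline stops at the failed batch so nothing is lost.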
@nsivabalan thanks! I've retested with hoodie.datasource.write.streaming.ignore.failed.batch=false, and the Spark micro-batch indeed fails when the Hudi commit fails.
So, should I close this issue?
yes, thanks!
|
gharchive/issue
| 2022-08-05T04:29:49 |
2025-04-01T04:55:58.904704
|
{
"authors": [
"codope",
"eric9204",
"fengjian428",
"nsivabalan"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/issues/6308",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2241777540
|
[HUDI-7617] Fix issues for bulk insert user defined partitioner in StreamSync
Change Logs
NOTE: This PR handles only AVRO code paths, there will be follow-up patch for RowWriter code paths as well.
There are two problems with BULK_INSERT and partitioners.
Passing a user-defined partitioner via hoodie.bulkinsert.user.defined.partitioner.class is not honoured in the StreamSync code path, so the data is written in non-sort mode, which can lead to OOM errors because of too many open write handles.
There is another problem with RDDCustomColumnsSortPartitioner where data is globally sorted but too many files are written, because the partition path is not actually prepended to the sort columns. The unit test fails with this error for the existing code.
org.opentest4j.AssertionFailedError:
Expected :654
Actual :3
// Verify each partition has one base file because parallelism is 1.
assertEquals(baseFiles.size(), partitions.size());
https://github.com/onehouseinc/hudi-internal/blob/master/hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/execution/bulkinsert/RDDCustomColumnsSortPartitioner.java#L60
@Override
public JavaRDD<HoodieRecord<T>> repartitionRecords(JavaRDD<HoodieRecord<T>> records,
int outputSparkPartitions) {
final String[] sortColumns = this.sortColumnNames;
final SerializableSchema schema = this.serializableSchema;
final boolean consistentLogicalTimestampEnabled = this.consistentLogicalTimestampEnabled;
return records.sortBy(
record -> {
Object[] columnValues = record.getColumnValues(schema.get(), sortColumns, consistentLogicalTimestampEnabled);
return FlatLists.ofComparableArray(columnValues);
},
true, outputSparkPartitions);
}
But _hoodie_partition_path is returned as null here by record.getColumnValues (screenshots from the debugger are attached below), because these meta fields are only populated later, as part of HoodieAvroParquetWriter.
https://github.com/onehouseinc/hudi-internal/blob/master/hudi-common/src/main/java/org/apache/hudi/io/storage/HoodieAvroParquetWriter.java#L64
@Override
public void writeAvroWithMetadata(HoodieKey key, IndexedRecord avroRecord) throws IOException {
if (populateMetaFields) {
prepRecordWithMetadata(key, avroRecord, instantTime,
taskContextSupplier.getPartitionIdSupplier().get(), getWrittenRecordCount(), fileName);
super.write(avroRecord);
writeSupport.add(key.getRecordKey());
} else {
super.write(avroRecord);
}
}
default void prepRecordWithMetadata(HoodieKey key, IndexedRecord avroRecord, String instantTime, Integer partitionId, long recordIndex, String fileName) {
String seqId = HoodieRecord.generateSequenceId(instantTime, partitionId, recordIndex);
HoodieAvroUtils.addHoodieKeyToRecord((GenericRecord) avroRecord, key.getRecordKey(), key.getPartitionPath(), fileName);
HoodieAvroUtils.addCommitMetadataToRecord((GenericRecord) avroRecord, instantTime, seqId);
}
Attaching the screenshots below where _hoodie_partition_path column is null.
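The effect of sorting without the partition path prefix can be sketched in plain Python. This is illustrative, not Spark/Hudi code — it just counts how many file handles a single sequential writer would open over the sorted output:

```python
# When the global sort key omits the partition path, rows from the same
# partition interleave in the sorted output, so a sequential writer keeps
# switching partitions (reopening files). Prepending the partition path
# keeps each partition contiguous: one base file per partition.

def count_open_files(records, sort_key):
    """Count the file handles one sequential writer task would open."""
    opened, current = 0, object()  # sentinel: first record always opens a file
    for rec in sorted(records, key=sort_key):
        if rec["partition"] != current:
            opened += 1
            current = rec["partition"]
    return opened

records = [
    {"partition": "p1", "sort_col": 3},
    {"partition": "p2", "sort_col": 1},
    {"partition": "p1", "sort_col": 2},
    {"partition": "p2", "sort_col": 4},
]

interleaved = count_open_files(records, lambda r: r["sort_col"])
contiguous = count_open_files(records, lambda r: (r["partition"], r["sort_col"]))
```

Sorting only by the user columns interleaves p1/p2 and opens more files than partitions; leading with the partition path yields exactly one file per partition, which is what the fixed test asserts.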
Impact
No impact, fixing the bugs related to BULK_INSERT user defined partitioners to ensure it sorts the data correctly.
Risk level (write none, low medium or high below)
Medium.
Documentation Update
None.
Contributor's checklist
[x] Read through contributor's guide
[x] Change Logs and Impact were stated clearly
[x] Adequate tests were added if applicable
[x] CI passed
Azure CI is green.
|
gharchive/pull-request
| 2024-04-13T20:51:23 |
2025-04-01T04:55:58.913779
|
{
"authors": [
"vinishjail97",
"yihua"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/11014",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
668534655
|
[HUDI-1129]: AvroConversionUtils unable to handle avro to row transformation when passing evolved schema
What is the purpose of the pull request
Specific fix this error: https://issues.apache.org/jira/browse/HUDI-1129
This is needed to partially fix https://github.com/apache/hudi/issues/1845
Brief change log
Use avro field names and not indices to convert from avro to GenericRow of catalyst at AvroConversionHelper
Verify this pull request
Run maven test suite
Committer checklist
[ ] Has a corresponding JIRA in PR title & commit
[ ] Commit message is descriptive of the change
[ ] CI is green
[ ] Necessary doc changes done or have another open PR
[ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
Hi @bvaradar the test currently fails because of the EOFException from https://issues.apache.org/jira/browse/HUDI-1128
I found the issue after some debugging. But need your thoughts on whether it is a bug or how to go about fixing it.
@bvaradar @n3nash @vinothchandar
As per the test case linked to reproduce, here is what we are doing.
Generate records with SCHEMA_1 and ingest to Hudi with SCHEMA_1
Generate records with SCHEMA_2 and ingest to Hudi with SCHEMA_2
Generate records with SCHEMA_1 and ingest to Hudi with SCHEMA_2(both source and target schema)// this is where the exception is thrown.
Here is the gist of the issue.
Let's say we have an Avro record with SCHEMA_1
byte[] recordBytes = HoodieAvroUtils.avroToBytes(genericRecord);
Converting this back to GenRec with SCHEMA_1 succeeds. HoodieAvroUtils.bytesToAvro(recordBytes, SCHEMA_1)
But converting this back to GenRec with SCHEMA_2 (which has one additional field compared to SCHEMA_1) fails.
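The failure mode can be illustrated with a toy binary encoding (using struct rather than real Avro serialization): Avro binary carries no per-field markers, so a reader that expects one more field than the writer wrote simply runs off the end of the buffer — the EOFException from HUDI-1128. Proper schema resolution needs both the writer schema and the reader schema.

```python
# Toy fixed-width encoding standing in for Avro binary: fields are written
# in schema order with no markers, so decoding with a longer reader schema
# alone reads past the end of the payload.
import struct

SCHEMA_1 = ["id", "value"]           # two long fields
SCHEMA_2 = ["id", "value", "extra"]  # evolved schema: one extra long field

def to_bytes(record, schema):
    return b"".join(struct.pack(">q", record[f]) for f in schema)

def from_bytes(data, reader_schema):
    out = {}
    for i, field in enumerate(reader_schema):
        out[field] = struct.unpack_from(">q", data, i * 8)[0]
    return out

payload = to_bytes({"id": 1, "value": 2}, SCHEMA_1)

try:
    from_bytes(payload, SCHEMA_2)    # reads past the end of the buffer
    decode_failed = False
except struct.error:
    decode_failed = True
```

Decoding with SCHEMA_1 round-trips fine; decoding the same bytes with SCHEMA_2 alone fails, which is why Avro's resolving decoder takes the writer schema alongside the reader schema.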
This is not required anymore. https://github.com/apache/hudi/pull/2927 handles the schema evol.
|
gharchive/pull-request
| 2020-07-30T09:21:57 |
2025-04-01T04:55:58.921160
|
{
"authors": [
"nsivabalan",
"sbernauer"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/1888",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
746224682
|
[MINOR] clean up and add comments to flink client
What is the purpose of the pull request
Minor code clean up and added some comments.
Brief change log
rename HudiFlinkStreamer to HoodieFlinkStreamer
Verify this pull request
This pull request is a trivial rework / code cleanup without any test coverage.
Committer checklist
[ ] Has a corresponding JIRA in PR title & commit
[ ] Commit message is descriptive of the change
[ ] CI is green
[ ] Necessary doc changes done or have another open PR
[ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
@wangxianghu did some clean up while reading the code. Please take a look when you are free.
Codecov Report
Merging #2261 (f9e2b0f) into master (4d05680) will decrease coverage by 43.13%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #2261 +/- ##
=============================================
- Coverage 53.53% 10.39% -43.14%
+ Complexity 2770 48 -2722
=============================================
Files 348 50 -298
Lines 16109 1779 -14330
Branches 1643 211 -1432
=============================================
- Hits 8624 185 -8439
+ Misses 6786 1581 -5205
+ Partials 699 13 -686
Flag                  Coverage Δ               Complexity Δ
hudicli               ?                        ?
hudiclient            ?                        ?
hudicommon            ?                        ?
hudihadoopmr          ?                        ?
hudispark             ?                        ?
huditimelineservice   ?                        ?
hudiutilities         10.39% <ø> (-59.70%)     0.00 <ø> (ø)
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files                                           Coverage Δ                  Complexity Δ
...va/org/apache/hudi/utilities/IdentitySplitter.java    0.00% <0.00%> (-100.00%)    0.00% <0.00%> (-2.00%)
...va/org/apache/hudi/utilities/schema/SchemaSet.java    0.00% <0.00%> (-100.00%)    0.00% <0.00%> (-3.00%)
...a/org/apache/hudi/utilities/sources/RowSource.java    0.00% <0.00%> (-100.00%)    0.00% <0.00%> (-4.00%)
.../org/apache/hudi/utilities/sources/AvroSource.java    0.00% <0.00%> (-100.00%)    0.00% <0.00%> (-1.00%)
.../org/apache/hudi/utilities/sources/JsonSource.java    0.00% <0.00%> (-100.00%)    0.00% <0.00%> (-1.00%)
...rg/apache/hudi/utilities/sources/CsvDFSSource.java    0.00% <0.00%> (-100.00%)    0.00% <0.00%> (-10.00%)
...g/apache/hudi/utilities/sources/JsonDFSSource.java    0.00% <0.00%> (-100.00%)    0.00% <0.00%> (-4.00%)
...apache/hudi/utilities/sources/JsonKafkaSource.java    0.00% <0.00%> (-100.00%)    0.00% <0.00%> (-6.00%)
...pache/hudi/utilities/sources/ParquetDFSSource.java    0.00% <0.00%> (-100.00%)    0.00% <0.00%> (-5.00%)
...lities/schema/SchemaProviderWithPostProcessor.java    0.00% <0.00%> (-100.00%)    0.00% <0.00%> (-3.00%)
... and 325 more
|
gharchive/pull-request
| 2020-11-19T03:51:16 |
2025-04-01T04:55:58.941208
|
{
"authors": [
"codecov-io",
"garyli1019"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/2261",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1284904688
|
[HUDI-4284] Implement bloom lookup tree as red-black tree
What is the purpose of the pull request
The existing KeyRangeLookupTree implementation is a plain binary search tree.
Although the input is shuffled before insertion, the tree can still become unbalanced. This PR implements it as a red-black tree.
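The motivation can be sketched in a few lines of Python. This is not Hudi's RedBlackTree — it just contrasts the worst-case depth of a naive binary search tree with the logarithmic depth a balanced (e.g., red-black) tree maintains:

```python
# Inserting keys into a plain BST in nearly sorted order degenerates it
# toward a linked list (O(n) lookups); a red-black tree keeps the depth
# within roughly 2*log2(n+1) regardless of insertion order.
import math

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def depth(node):
    return 0 if node is None else 1 + max(depth(node.left), depth(node.right))

n = 128
root = None
for k in range(n):            # sorted insertion: worst case for a plain BST
    root = bst_insert(root, k)

naive_depth = depth(root)                     # a right-leaning chain of length n
balanced_depth = math.ceil(math.log2(n + 1))  # what a balanced tree stays near
```

The red-black invariants bound the depth at about twice the balanced ideal, so key-range lookups stay logarithmic even under adversarial (e.g., sorted) insertion orders.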
Brief change log
Added an abstract red-black tree implementation, RedBlackTree, and implemented KeyRangeLookupTree as a red-black tree.
Verify this pull request
Added new unit test org.apache.hudi.common.util.rbtree.TestRedBlackTree
Committer checklist
[x] Has a corresponding JIRA in PR title & commit
[x] Commit message is descriptive of the change
[x] CI is green
[x] Necessary doc changes done or have another open PR
[x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
@codope Please take a look~
@vinothchandar Can you help review this PR, thanks.
@yabola Thanks for putting up this PR. Sorry for the delay, as I was occupied with Hudi release work. I am going to review it towards the end of the week. Expect feedback by Monday.
@yihua Hi, if you have time, can you help review my PR, thanks~
|
gharchive/pull-request
| 2022-06-26T12:49:58 |
2025-04-01T04:55:58.945759
|
{
"authors": [
"codope",
"yabola"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/5978",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1511271685
|
[HUDI-5477] Optimize timeline loading in Hudi sync client
Change Logs
Before this change, the Hudi archived timeline is always loaded during the metastore sync process if the last sync time is given. Besides, the archived timeline is not cached inside the meta client if the start instant time is given. These cause performance issues and read timeouts on cloud storage, due to request rate limiting while loading the archived timeline from storage, when the archived timeline is huge, e.g., hundreds of log files in the .hoodie/archived folder.
This PR improves the timeline loading by
(1) only reading active timeline if the last sync time is the same as or after the start of the active timeline;
(2) caching the archived timeline based on the start instant time in the meta client, to avoid unnecessary repeated loading of the same archived timeline.
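The two optimizations can be sketched together in plain Python. This is illustrative only — the class and method names below are hypothetical, not the actual HoodieTableMetaClient API (and real Hudi instant times are sortable timestamp strings, which this sketch assumes):

```python
# (1) Skip the archived timeline when the last sync time falls within the
#     active timeline. (2) Memoize archived-timeline loads by start instant
#     so repeated syncs don't re-read the archive from storage.

class TimelineLoader:
    def __init__(self, active_start, load_archived):
        self.active_start = active_start      # first instant on the active timeline
        self._load_archived = load_archived   # expensive storage read
        self._archived_cache = {}             # start instant -> archived timeline

    def instants_since(self, last_sync_time):
        if last_sync_time >= self.active_start:        # (1) active timeline suffices
            return "active-only"
        if last_sync_time not in self._archived_cache: # (2) cache by start instant
            self._archived_cache[last_sync_time] = self._load_archived(last_sync_time)
        return self._archived_cache[last_sync_time]

loads = []
loader = TimelineLoader("t100", lambda s: loads.append(s) or f"archived-from-{s}")
```

A sync whose last sync time is at or after the active timeline's start never touches the archive, and two syncs from the same start instant trigger only one storage read.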
Impact
This PR improves the performance of metastore sync.
Risk level
low
Documentation Update
N/A
Contributor's checklist
[ ] Read through contributor's guide
[ ] Change Logs and Impact were stated clearly
[ ] Adequate tests were added if applicable
[ ] CI passed
CI passes. Merging this PR
|
gharchive/pull-request
| 2022-12-26T22:00:17 |
2025-04-01T04:55:58.949873
|
{
"authors": [
"yihua"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/7561",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2284372256
|
parquet_path_to_id_mapping generates incorrect path for List types
Apache Iceberg version
main (development)
Please describe the bug 🐞
When using the add_files table api, the parquet metadata needs to be read and a mapping of Dict[str, int] is used by data_file_statistics_from_parquet_metadata in order to link the field ID to the name in the parquet file for statistics collection. However during the mapping lookup I was receiving an error that a key was not present.
My schema contains one of the following (it's a subfield of a Details struct, which is important for the full name later):
extras: large_list<item: struct<key: string not null, value: string>> not null
child 0, item: struct<key: string not null, value: string>
child 0, key: string not null
child 1, value: string
Which based on the parquet schema path definition has a path of:
Details.extras.list.item.key
Details.extras.list.item.value
The issue is that the parquet_path_to_id_mapping returns a mapping for these two fields as follows:
Details.extras.list.element.key -> 189
Details.extras.list.element.value -> 190
So, the issue appears to be that the visitor for constructing the schema paths is incorrectly using element instead of item as expected in the parquet schema paths. I am not sure how this manifests yet, as I have not dug into it too closely.
@cgbur Thanks for raising this. Could you share the stack trace that you're seeing?
I tried to reproduce it, but it works on my end:
def test_data_file_statistics_from_parquet_metadata_list(tmp_path_factory: pytest.TempPathFactory) -> None:
pyarrow_list = pa.schema([
pa.field("extras", pa.list_(pa.field("element", pa.struct([pa.field("key", pa.string()), pa.field("value", pa.string())]))))
])
tbl = pa.Table.from_pylist([{'some_list': [{"key": "a", "value": "b"}]}], schema=pyarrow_list)
file_path = tmp_path_factory.mktemp('test_statistics') / "test.parquet"
pq.write_table(tbl, file_path)
parquet_metadata = pq.read_metadata(file_path)
iceberg_schema = Schema(
NestedField(
1,
"extras",
ListType(
10,
StructType(
NestedField(10, "key", StringType()),
NestedField(11, "value", StringType()),
),
element_required=False,
),
)
)
statistics = data_file_statistics_from_parquet_metadata(
parquet_metadata=parquet_metadata,
stats_columns=compute_statistics_plan(iceberg_schema, EMPTY_DICT),
parquet_column_mapping=parquet_path_to_id_mapping(iceberg_schema),
)
assert statistics == DataFileStatistics(
record_count=1,
column_sizes={10: 51, 11: 51},
value_counts={10: 1, 11: 1},
null_value_counts={10: 1, 11: 1},
nan_value_counts={},
column_aggregates={},
split_offsets=[4],
)
Here is a complete example recreating the error. I am using polars to create the table, which results in the same schema that I was producing with pyarrow.
import polars as pl
from pyiceberg.catalog.sql import SqlCatalog
import pyarrow.parquet as pq
import os
import shutil
pl.DataFrame(
{
"a": [[{"a": 1}, {"a": 2}], [{"a": 3}]],
}
).write_parquet("example.parquet")
warehouse_path = "/tmp/warehouse"
# wipe the warehouse
if os.path.exists(warehouse_path):
shutil.rmtree(warehouse_path)
os.makedirs(warehouse_path)
catalog = SqlCatalog(
"default",
**{
"uri": f"sqlite:///{warehouse_path}/pyiceberg_catalog.db",
"warehouse": f"file://{warehouse_path}",
},
)
df = pq.read_table("example.parquet")
catalog.create_namespace("default")
table = catalog.create_table(
"default.webserver",
schema=df.schema,
)
table.add_files(["example.parquet"])
And here is the error. The top two lines were debug statements showing how the mapping file has the incorrect path.
print(f"column mappings {len(parquet_column_mapping)}")
print(parquet_column_mapping)
column mappings 1
{'a.list.element.a': 3}
Traceback (most recent call last):
File "/home/cgbur/pyice-test/failure.py", line 36, in <module>
table.add_files(["example.parquet"])
File "/local/home/cgbur/pyice-test/iceberg-python/pyiceberg/table/__init__.py", line 1355, in add_files
tx.add_files(file_paths=file_paths, snapshot_properties=snapshot_properties)
File "/local/home/cgbur/pyice-test/iceberg-python/pyiceberg/table/__init__.py", line 462, in add_files
for data_file in data_files:
File "/local/home/cgbur/pyice-test/iceberg-python/pyiceberg/table/__init__.py", line 2737, in _parquet_files_to_data_files
yield from parquet_files_to_data_files(io=io, table_metadata=table_metadata, file_paths=iter(file_paths))
File "/local/home/cgbur/pyice-test/iceberg-python/pyiceberg/io/pyarrow.py", line 1869, in parquet_files_to_data_files
statistics = data_file_statistics_from_parquet_metadata(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/local/home/cgbur/pyice-test/iceberg-python/pyiceberg/io/pyarrow.py", line 1734, in data_file_statistics_from_parquet_metadata
field_id = parquet_column_mapping[column.path_in_schema]
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'a.list.item.a'
You can see how the parquet path_in_schema uses item instead of element.
Ah, confusingly, there appear to be writer differences that cause the issue. My Rust pyarrow implementation matches when polars writes with use_pyarrow=True.
import polars as pl
import pyarrow.parquet as pq
df = pl.DataFrame(
{
"a": [[{"a": 1}, {"a": 2}], [{"a": 3}]],
}
)
def print_schema_path(path, col_name):
metadata = pq.read_metadata(path)
for group_number in range(metadata.num_row_groups):
row_group = metadata.row_group(group_number)
for column_number in range(row_group.num_columns):
column = row_group.column(column_number)
if column.path_in_schema.startswith(col_name):
print(f"path_in_schema: {column.path_in_schema}")
df.write_parquet("example.parquet", use_pyarrow=False)
print("with polars")
print(pq.read_schema("example.parquet"))
print_schema_path("example.parquet", "a")
df.write_parquet("example.parquet", use_pyarrow=True)
print("with pyarrow")
print(pq.read_schema("example.parquet"))
print_schema_path("example.parquet", "a")
with polars
a: large_list<item: struct<a: int64>>
child 0, item: struct<a: int64>
child 0, a: int64
path_in_schema: a.list.item.a
with pyarrow
a: large_list<element: struct<a: int64>>
child 0, element: struct<a: int64>
child 0, a: int64
path_in_schema: a.list.element.a
Perhaps the visitor is not respecting the name used in the schema? Or is there a mismatch between the methods the Iceberg and Parquet sides use to obtain it?
Hi, I investigated this a bit further and it seems to be related to the way the visitor works as @cgbur suggested. Here is what I tried:
def test_parquet_path_to_id_mapping():
# set field name to "item"
pyarrow_list = pa.schema([
pa.field("extras", pa.list_(pa.field("item", pa.struct([pa.field("key", pa.string()), pa.field("value", pa.string())]))))
])
# this is called during table creation
schema = Catalog._convert_schema_if_needed(pyarrow_list)
mapping = parquet_path_to_id_mapping(schema)
assert "extras.list.item.key" in mapping
The mapping that Catalog._convert_schema_if_needed creates looks like this:
{'extras.list.element.key': -1, 'extras.list.element.value': -1}
Looking into the visitor, I found that the method dealing with list types sets a default field name of "element".
https://github.com/apache/iceberg-python/blob/20c273104257f0a1ccd74a09f6d4601643115ffd/pyiceberg/io/pyarrow.py#L865-L870
https://github.com/apache/iceberg-python/blob/20c273104257f0a1ccd74a09f6d4601643115ffd/pyiceberg/io/pyarrow.py#L172
So we lose the information on the field name of the value field, setting it to "element".
Unfortunately I haven't found a way to access the field name as both pyarrow.lib.ListType and pyarrow.lib.DataType don't seem to make that available.
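Until the writer differences are reconciled upstream, one possible workaround (a purely illustrative sketch, not part of the PyIceberg API; the function names here are made up) is to normalize the Parquet `path_in_schema` before the lookup, so that both the spec-compliant `element` name and the Arrow-style `item` name resolve to the same field id:

```python
import re

def normalize_parquet_path(path: str) -> str:
    """Rewrite the Arrow-style single-element name 'item' to the
    Parquet-compliant 'element' inside '<col>.list.<name>' segments."""
    return re.sub(r"\.list\.item(\.|$)", r".list.element\1", path)

# Hypothetical mapping, shaped like the output of parquet_path_to_id_mapping
mapping = {"extras.list.element.key": 1, "extras.list.element.value": 2}

def lookup_field_id(mapping: dict, path_in_schema: str) -> int:
    # Try the exact path first, then fall back to the normalized spelling.
    if path_in_schema in mapping:
        return mapping[path_in_schema]
    return mapping[normalize_parquet_path(path_in_schema)]

print(lookup_field_id(mapping, "extras.list.item.key"))  # -> 1
```

This only papers over the naming difference at read time; fixing the visitor to carry the actual field name through would be the more principled solution.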
Hi @cgbur and @felixscherz thank you for raising this and taking this investigation further. I'm not a polars user myself, but the difference in the behavior is quite interesting, and I think there would be value in trying to fix this issue.
I just read the write_table API documentation in pyarrow.parquet and found something rather interesting:
https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_table.html#pyarrow.parquet.write_table
If you check the documentation for use_compliant_nested_type flag, it mentions that having element as the single item field name is the parquet compliant format as specified here on the Parquet Spec for Nested Types. PyArrow defaults to using this flag and writes the list element field name as element.
For some reason it looks like polars has decided to use item, the non-Parquet-compliant list element name, instead. While I'm curious why the polars community decided to go this route, I also think supporting both the element and item names in the visitor may not be the worst thing, just to increase our scope of support.
Here is the code in Polars where the "item" name crops up.
https://github.com/pola-rs/polars/blob/main/crates/polars-core/src/datatypes/dtype.rs#L575
I believe this is because they are not converting to parquet, but to arrow, then parquet. A subtle difference in their internal logic. However, perhaps the confusion arises because in arrow, the List single element name is often item not element.
https://arrow.apache.org/docs/format/Columnar.html#recordbatch-message
import pyarrow as pa
py_list = pa.array([[1, 2, 3], [1, 2]])
print(py_list.type)
list<item: int64>
I'll go ahead and open an issue on the Polars repo and see if they have anything to say, there are multiple ways to fix this in their package and agree they should likely be producing parquet files according to the spec.
|
gharchive/issue
| 2024-05-07T23:09:50 |
2025-04-01T04:55:58.966186
|
{
"authors": [
"Fokko",
"cgbur",
"felixscherz",
"syun64"
],
"repo": "apache/iceberg-python",
"url": "https://github.com/apache/iceberg-python/issues/716",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
151392018
|
Airflow improperly shows task status as 'up for retry' for a task that failed on re-run
Dear Airflow Maintainers,
Environment
Before I tell you about my issue, let me describe my Airflow environment:
Airflow version: 1.7.0
Airflow components: webserver, mysql, scheduler with celery executor
Python Version: 2.7.6
Operating System: Linux Ubuntu 3.19.0-26-generic
Description of Issue
Now that you know a little about me, let me tell you about the issue I am having:
What I expect:
If I re-run a task and it fails, the task should either be retried again (resetting the retry count) and marked accordingly in the GUI, or not retried and marked in the GUI as 'failed'.
What happened instead?
The task in the GUI was presented as 'up_for_retry'; however, it was not retried, even after retry_delay had passed.
Reproducing the Issue
The DAG does not have any strange settings:
concurrency= 3,
max_active_runs = 2,
start_date = datetime(2016,04,03,01),
default_args={
'depends_on_past': False,
'retries': 2,
'retry_delay': timedelta(minutes=3) }
@kretes Can you create a Jira issue for this and provide a reproducible case with it plus screenshots? Thanks
migrated to https://issues.apache.org/jira/browse/AIRFLOW-138
|
gharchive/issue
| 2016-04-27T13:55:11 |
2025-04-01T04:55:59.026343
|
{
"authors": [
"bolkedebruin",
"kretes"
],
"repo": "apache/incubator-airflow",
"url": "https://github.com/apache/incubator-airflow/issues/1441",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
159084363
|
[AIRFLOW-219] Unix user impersonation based on new BaseOperator.run_as_user
[Work in Progress]
https://issues.apache.org/jira/browse/AIRFLOW-219
I'm looking for guidance around:
the bash magic sudo su {ti.run_as_user}; ...
the consideration & requirements I started highlighting in the Security docs in the PR
The sudo principle is quite easy to implement. What I am more worried about is how to manage it correctly. For example, you might want to guard against tasks becoming root, or have an admin be able to configure a list of acceptable userids.
In the case of versioning we will probably hit some issues that are not foreseen yet, but as versioning is slowly being addressed, it should not be too much of a problem. Improvements will happen over time.
Might also want to have a look at Hadoop's container-executor:
http://www.cloudera.com/documentation/archive/cdh/4-x/4-2-0/CDH4-Security-Guide/cdh4sg_topic_18_3.html
https://github.com/apache/hadoop/blob/master/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
[Current coverage][cc-pull] is 64.38%
Merging [#1578][cc-pull] into [master][cc-base-branch] will decrease coverage by <.01%
@@ master #1578 diff @@
==========================================
Files 123 124 +1
Lines 8683 8748 +65
Methods 0 0
Messages 0 0
Branches 0 0
==========================================
+ Hits 5591 5632 +41
- Misses 3092 3116 +24
Partials 0 0
Powered by Codecov. Last updated by [348f25f...dacf41f][cc-compare]
[cc-base-branch]: https://codecov.io/gh/apache/incubator-airflow/branch/master?src=pr
[cc-compare]: https://codecov.io/gh/apache/incubator-airflow/compare/348f25f08af2c02627cec04453564edd2fb69fa3...dacf41f387e994a6c23f99d701207be5334983a2?src=pr
[cc-pull]: https://codecov.io/gh/apache/incubator-airflow/pull/1578?src=pr
For the BashOperator, only the subprocess needs to be run as the specified user, not the entire airflow command. If the specified user doesn't have permissions to airflow.cfg, we should be set.
@mistercrunch you can close this as it lives here now: https://github.com/apache/incubator-airflow/pull/1934
|
gharchive/pull-request
| 2016-06-08T05:48:14 |
2025-04-01T04:55:59.034778
|
{
"authors": [
"aoen",
"bolkedebruin",
"codecov-io",
"criccomini",
"mistercrunch",
"plypaul"
],
"repo": "apache/incubator-airflow",
"url": "https://github.com/apache/incubator-airflow/pull/1578",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
302897428
|
[AIRFLOW-2186] Change the way logging is carried out in few ops
Make sure you have checked all steps below.
JIRA
[x] My PR addresses the following Airflow JIRA issues and references them in the PR title. For example, "[AIRFLOW-XXX] My Airflow PR"
https://issues.apache.org/jira/browse/AIRFLOW-2186
Description
[x] Here are some details about my PR, including screenshots of any UI changes:
Changed the way logging is implemented in PostgresToGoogleCloudStorageOperator and HiveToDynamoDBTransferOperator. Changed logging.info to self.log.info
Tests
[x] My PR adds the following unit tests OR does not need testing for this extremely good reason:
Minor change. Doesn't require testing
Commits
[x] My commits all reference JIRA issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
Subject is separated from body by a blank line
Subject is limited to 50 characters
Subject does not end with a period
Subject uses the imperative mood ("add", not "adding")
Body wraps at 72 characters
Body explains "what" and "why", not "how"
[x] Passes git diff upstream/master -u -- "*.py" | flake8 --diff
cc @Fokko PTAL. The Travis CI fails as apache-beam[gcp] is not available for Python 3.
@kaxil The apache-beam issue has been fixed in master.
@Fokko haha yes, I fixed that in a PR 😉 💃
|
gharchive/pull-request
| 2018-03-06T22:46:32 |
2025-04-01T04:55:59.041490
|
{
"authors": [
"Fokko",
"kaxil"
],
"repo": "apache/incubator-airflow",
"url": "https://github.com/apache/incubator-airflow/pull/3106",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
381393979
|
[AIRFLOW-3355] Fix BigQueryCursor.execute to work with Python3
Make sure you have checked all steps below.
Jira
[x] My PR addresses the following Airflow Jira issues and references them in the PR title. For example, "[AIRFLOW-XXX] My Airflow PR"
https://issues.apache.org/jira/browse/AIRFLOW-3355
In case you are fixing a typo in the documentation you can prepend your commit with [AIRFLOW-XXX], code changes always need a Jira issue.
Description
[x] Here are some details about my PR, including screenshots of any UI changes:
BigQueryCursor.execute uses dict.iteritems internally,
so it fails with Python3 if binding parameters are
provided. This PR fixes this problem.
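For illustration, the incompatibility boils down to `dict.iteritems` having been removed in Python 3; a minimal sketch of the portable pattern (the actual change in the hook may be spelled differently) is:

```python
# Illustrative only: the failing pattern and a portable replacement.
params = {"name": "alice", "age": 30}

# Python 2 only -- on Python 3 this raises:
#   AttributeError: 'dict' object has no attribute 'iteritems'
# pairs = params.iteritems()

# Works on both Python 2 and 3:
bindings = ["%s=%r" % (k, v) for k, v in sorted(params.items())]
print(bindings)  # -> ["age=30", "name='alice'"]
```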
Tests
[x] My PR adds the following unit tests OR does not need testing for this extremely good reason:
tests.contrib.hooks.test_bigquery_hook:TestBigQueryCursor
Commits
[x] My commits all reference Jira issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
Subject is separated from body by a blank line
Subject is limited to 50 characters (not including Jira issue reference)
Subject does not end with a period
Subject uses the imperative mood ("add", not "adding")
Body wraps at 72 characters
Body explains "what" and "why", not "how"
Documentation
[x] In case of new functionality, my PR adds documentation that describes how to use it.
When adding new operators/hooks/sensors, the autoclass documentation generation needs to be added.
Code Quality
[x] Passes flake8
CI failures occur at test_scheduler_sla_miss_callback_exception and test_scheduler_sla_miss_email_exception and are unrelated to this fix.
Codecov Report
Merging #4198 into master will increase coverage by <.01%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #4198 +/- ##
=========================================
+ Coverage 77.69% 77.7% +<.01%
=========================================
Files 199 199
Lines 16309 16309
=========================================
+ Hits 12672 12673 +1
+ Misses 3637 3636 -1
Impacted Files
Coverage Δ
airflow/models.py
92.37% <0%> (+0.04%)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8668ef8...eab19b0. Read the comment docs.
|
gharchive/pull-request
| 2018-11-16T00:14:06 |
2025-04-01T04:55:59.054644
|
{
"authors": [
"codecov-io",
"sekikn"
],
"repo": "apache/incubator-airflow",
"url": "https://github.com/apache/incubator-airflow/pull/4198",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
105705285
|
APEX-22 #commit adding the port object only when it doesn't already e…
…xist
@davidyan74 Please review and merge
|
gharchive/pull-request
| 2015-09-09T23:46:26 |
2025-04-01T04:55:59.056134
|
{
"authors": [
"chandnisingh"
],
"repo": "apache/incubator-apex-core",
"url": "https://github.com/apache/incubator-apex-core/pull/24",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
192711728
|
[BEAM-551] Add TextIO.Write support for ValueProvider
R: @dhalperi
Some issues deferred, but can be resolved orthogonally.
And merged. Thanks.
|
gharchive/pull-request
| 2016-11-30T23:32:51 |
2025-04-01T04:55:59.057637
|
{
"authors": [
"dhalperi",
"sammcveety"
],
"repo": "apache/incubator-beam",
"url": "https://github.com/apache/incubator-beam/pull/1475",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
854322887
|
What is external:ssl in BUILD.bazel
Describe the bug (描述bug)
com_github_brpc/BUILD.bazel:324:11: no such target '//external:ssl': target 'ssl' not declared in package 'external' defined by
To Reproduce (复现方法)
bazel build brpc
Expected behavior (期望行为)
Versions (各种版本)
OS: Ubuntu 20.04.2 LTS
Compiler: bazel 3.7.0
brpc: commit b3a948c9dca29632b3367529488e070852e31f11
protobuf: v3.15.6
Additional context/screenshots (更多上下文/截图)
butil depends on "//conditions:default": ["//external:ssl"],
|
gharchive/issue
| 2021-04-09T08:56:23 |
2025-04-01T04:55:59.060645
|
{
"authors": [
"372046933",
"fuhailin"
],
"repo": "apache/incubator-brpc",
"url": "https://github.com/apache/incubator-brpc/issues/1375",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
480466696
|
brpc log performance
I have a question about the log:
1: Currently I see that brpc's log uses localtime_s/localtime_r. These two functions should call __tz_convert, which holds the global tzset_lock, so every time we fetch the time a kernel-level futex lock is involved. I'm not sure whether my analysis is correct; would gettimeofday be any better here?
#if _MSC_VER >= 1400
localtime_s(&local_tm, &t);
#else
localtime_r(&t, &local_tm);
#endif
2: About log printing: currently, under multi-threading, log printing takes a lock. When a lot of content is printed, this lock becomes a problem. Is there a lock-free design approach?
This log mainly provides an interface; the implementation is only a reference and has no performance optimization at all. In production environments it is recommended to adapt an in-house or third-party logging library.
@jamesge OK, thanks. My current approach is to subclass LogSink, but I don't see anything like glog's AddLogSink(). How do I get brpc to call my LogSink? Or do I have to modify the brpc source?
Found it.
|
gharchive/issue
| 2019-08-14T03:45:33 |
2025-04-01T04:55:59.063059
|
{
"authors": [
"gxkevin",
"jamesge"
],
"repo": "apache/incubator-brpc",
"url": "https://github.com/apache/incubator-brpc/issues/886",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
771779898
|
[fix-4273][UI] The submission time of task instance is empty
What is the purpose of the pull request
#4273
Fix: the submission time of a task instance is empty
Verify this pull request
This change added tests and can be verified as follows:
Manually verified the change by testing locally.
Codecov Report
Merging #4274 (3bebbec) into dev (64dd9db) will decrease coverage by 0.04%.
The diff coverage is n/a.
@@ Coverage Diff @@
## dev #4274 +/- ##
============================================
- Coverage 43.14% 43.09% -0.05%
+ Complexity 3153 3149 -4
============================================
Files 466 466
Lines 21632 21603 -29
Branches 2611 2610 -1
============================================
- Hits 9333 9310 -23
+ Misses 11401 11393 -8
- Partials 898 900 +2
Impacted Files
Coverage Δ
Complexity Δ
...che/dolphinscheduler/common/utils/LoggerUtils.java
62.50% <0.00%> (-8.34%)
5.00% <0.00%> (-1.00%)
...e/dolphinscheduler/common/shell/AbstractShell.java
44.53% <0.00%> (-5.89%)
5.00% <0.00%> (-1.00%)
...e/dolphinscheduler/remote/NettyRemotingClient.java
50.00% <0.00%> (-2.78%)
9.00% <0.00%> (-2.00%)
...eduler/server/worker/runner/TaskExecuteThread.java
54.33% <0.00%> (-2.70%)
13.00% <0.00%> (ø%)
.../apache/dolphinscheduler/common/utils/OSUtils.java
44.57% <0.00%> (-2.41%)
23.00% <0.00%> (ø%)
...inscheduler/server/log/LoggerRequestProcessor.java
56.41% <0.00%> (-1.70%)
8.00% <0.00%> (ø%)
...pache/dolphinscheduler/common/utils/FileUtils.java
48.14% <0.00%> (-0.62%)
13.00% <0.00%> (ø%)
.../server/worker/processor/TaskExecuteProcessor.java
69.11% <0.00%> (ø)
7.00% <0.00%> (ø%)
...phinscheduler/server/worker/task/AbstractTask.java
16.66% <0.00%> (+5.41%)
4.00% <0.00%> (ø%)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 64dd9db...3bebbec. Read the comment docs.
+1
|
gharchive/pull-request
| 2020-12-21T02:04:10 |
2025-04-01T04:55:59.082476
|
{
"authors": [
"chengshiwen",
"codecov-io",
"zhuangchong"
],
"repo": "apache/incubator-dolphinscheduler",
"url": "https://github.com/apache/incubator-dolphinscheduler/pull/4274",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
792971744
|
[Improvement-4550][ui]Fix the problem of Uncaught (in promise) NavigationDuplicated: Avoided redundant navigation when clicking on the same route
issue #4550
Codecov Report
Merging #4551 (2289ba4) into dev (829fdb5) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## dev #4551 +/- ##
=========================================
Coverage 44.73% 44.73%
+ Complexity 3340 3339 -1
=========================================
Files 528 528
Lines 22832 22832
Branches 2669 2669
=========================================
Hits 10214 10214
+ Misses 11698 11696 -2
- Partials 920 922 +2
Impacted Files
Coverage Δ
Complexity Δ
...er/master/dispatch/host/assign/RandomSelector.java
77.77% <0.00%> (-5.56%)
3.00% <0.00%> (-1.00%)
...dolphinscheduler/remote/future/ResponseFuture.java
81.35% <0.00%> (-1.70%)
18.00% <0.00%> (-1.00%)
...inscheduler/service/zk/CuratorZookeeperClient.java
65.85% <0.00%> (+4.87%)
8.00% <0.00%> (+1.00%)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 829fdb5...2289ba4. Read the comment docs.
LGTM.
LGTM.
|
gharchive/pull-request
| 2021-01-25T02:23:32 |
2025-04-01T04:55:59.101377
|
{
"authors": [
"break60",
"codecov-io",
"zhuangchong"
],
"repo": "apache/incubator-dolphinscheduler",
"url": "https://github.com/apache/incubator-dolphinscheduler/pull/4551",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
388060862
|
Analytic function issue
Describe the bug
When using an analytic function in Palo, a partition-by column is required. But if I only want to order by a single column without grouping, this restriction gets in the way, so I added a constant column to use as the partition key, but it reports an error.
The test SQL is as follows:
select row_number() over(partition by dim order by id) rn,id from
(
select id,'test' dim from
(
select 1 id
union all
select 2 id
union all
select 3 id
) t
)t
This returns: ERROR 1064 (HY000): Internal Error, maybe this is a bug, please contact with Palo RD.
But if I write the constant inside the union, the query executes successfully. Rewritten as follows:
select row_number() over(partition by dim order by id) rn,id from
(
select 1 id ,'test' dim
union all
select 2 id ,'test' dim
union all
select 3 id ,'test' dim
)t
This returns the correct result.
It seems analytic functions cannot support partitioning by a constant.
Hope this can be improved~
Analytic functions now support queries without a partition column~
|
gharchive/issue
| 2018-12-06T05:21:41 |
2025-04-01T04:55:59.103945
|
{
"authors": [
"EmmyMiao87",
"lubaolei161"
],
"repo": "apache/incubator-doris",
"url": "https://github.com/apache/incubator-doris/issues/398",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
798032695
|
Custom encoding when exporting query results
Is it possible to specify the encoding format when importing query results? Because the default UTF-8 format will appear Chinese garbled when it is opened by Excel, I need to perform a GBK operation by myself, thank you~
importing query results
Did you mean exporting the query result to file by using "SELECT INTO OUTFILE"?
|
gharchive/issue
| 2021-02-01T06:56:24 |
2025-04-01T04:55:59.105244
|
{
"authors": [
"F-SLoong",
"morningman"
],
"repo": "apache/incubator-doris",
"url": "https://github.com/apache/incubator-doris/issues/5330",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
727514666
|
[Optimize] Improve LRU cache's performance
Proposed changes
When LRUCache inserts and evicts a large number of entries, there are
frequent calls to HandleTable::remove(e->key, e->hash), which looks
up the entry in the hash table. Since we already know the entry 'e' to
remove, we can unlink it directly from the hash table's collision list
if that list is doubly linked.
This patch refactors the collision list into a doubly linked list. The simple
benchmark CacheTest.SimpleBenchmark shows that the time cost is reduced by about
18% in my test environment.
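The idea can be sketched in a few lines (a simplified illustration, not the actual Doris C++ code): if each entry keeps prev/next pointers within its collision list, removing a known entry no longer requires walking the bucket chain.

```python
class Entry:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None  # links within the collision list

class HandleTable:
    def __init__(self, buckets=16):
        self.heads = [None] * buckets

    def _bucket(self, key):
        return hash(key) % len(self.heads)

    def insert(self, entry):
        # Push the entry onto the front of its bucket's doubly linked list.
        b = self._bucket(entry.key)
        entry.prev, entry.next = None, self.heads[b]
        if self.heads[b] is not None:
            self.heads[b].prev = entry
        self.heads[b] = entry

    def remove(self, entry):
        # O(1): unlink directly; no lookup along the bucket chain is needed.
        if entry.prev is not None:
            entry.prev.next = entry.next
        else:
            self.heads[self._bucket(entry.key)] = entry.next
        if entry.next is not None:
            entry.next.prev = entry.prev
```

With a singly linked collision list, `remove` would have to re-traverse the bucket to find the predecessor, which is exactly the cost this patch avoids.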
Types of changes
What types of changes does your code introduce to Doris?
Put an x in the boxes that apply
[] Bugfix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[] Documentation Update (if none of the other choices apply)
[] Code refactor (Modify the code structure, format the code, etc...)
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.
[] I have create an issue on (Fix #ISSUE), and have described the bug/feature there in detail
[x] Compiling and unit tests pass locally with my changes
[x] I have added tests that prove my fix is effective or that my feature works
[] If this change need a document change, I have updated the document
[] Any dependent changes have been merged
Could you rebase the master code to see if unit test can run normally?
Done
|
gharchive/pull-request
| 2020-10-22T15:52:37 |
2025-04-01T04:55:59.111338
|
{
"authors": [
"acelyc111",
"morningman"
],
"repo": "apache/incubator-doris",
"url": "https://github.com/apache/incubator-doris/pull/4781",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
358880500
|
Spring Boot 1.5.10.RELEASE: consumer configuration in the yml file cannot be auto-injected
With Spring Boot 1.5.x, when the consumer is configured in a yml resource file, Dubbo fails to parse it?
For example
Dubbo configuration:
dubbo:
consumer:
check: false
The above configuration has no effect on Spring Boot 1.5.x.
Switching to Spring Boot 2.0.4 with dubbo-spring-boot-starter 0.2.0 solves it, as does manually registering a ConsumerConfig bean.
You also can update Dubbo latest version
Yes, upgrading to Spring Boot 2.x and dubbo-spring-boot-starter 0.2.0 solves it.
After our investigation, we suspect a compatibility bug between the Dubbo version referenced by dubbo-spring-boot-starter 0.1.0 and Spring Boot 1.5.x.
|
gharchive/issue
| 2018-09-11T04:17:23 |
2025-04-01T04:55:59.114277
|
{
"authors": [
"mercyblitz",
"uzdz"
],
"repo": "apache/incubator-dubbo-spring-boot-project",
"url": "https://github.com/apache/incubator-dubbo-spring-boot-project/issues/273",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
601781183
|
In a tree structure, can multiple intermediate nodes converge into the same node, and then branch out from that aggregation node into multiple nodes again?
Version
4.7.0
Steps to reproduce
var data = {
    "name": "flare",
"children": [
{
"name": "data",
"children": [
{
"name": "b",
"children": [
{"name": "a", "value": 721}
]
},
{
"name": "c",
"children": [
{"name": "a", "value": 721}
]
}
]
}
]
};
var option = {
tooltip: {
trigger: 'item',
triggerOn: 'mousemove'
},
series:[
{
type: 'tree',
id: 0,
name: 'tree1',
data: [data],
top: '10%',
left: '8%',
bottom: '22%',
right: '20%',
symbolSize: 7,
edgeShape: 'curve',
edgeForkPosition: '10%',
initialTreeDepth: 3,
lineStyle: {
width: 2,
curveness: 1
},
label: {
backgroundColor: '#fff',
position: 'left',
verticalAlign: 'middle',
align: 'right',
borderColor: 'red',
borderWidth: 1,
borderRadius: 10,
padding: 10
},
leaves: {
label: {
position: 'right',
verticalAlign: 'middle',
align: 'left'
}
},
expandAndCollapse: true,
animationDuration: 550,
animationDurationUpdate: 750
}
]
};
What is expected?
Node a becomes the aggregation node of nodes b and c
What is actually happening?
Nodes b and c each generated their own a node
Could you provide a demo for the issue either with https://gallery.echartsjs.com/editor.html or https://jsfiddle.net/ovilia/n6xc4df3/.
https://gallery.echartsjs.com/editor.html?c=xygd9KCt9o&v=1
If nodes "converge", then in data-structure terms this is no longer a "tree", so it is recommended to implement it with a graph series instead.
|
gharchive/issue
| 2020-04-17T08:00:59 |
2025-04-01T04:55:59.118199
|
{
"authors": [
"Ovilia",
"yufeng04",
"zhuanghongbin"
],
"repo": "apache/incubator-echarts",
"url": "https://github.com/apache/incubator-echarts/issues/12453",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
749657272
|
realtimeSort switch not in time
Version
5.0.0
Steps to reproduce
Example:
incubator-echarts-examples the next branch:
http://127.0.0.1:3002/en/editor.html?c=covid-america
Need to set the animationDurationUpdate as a long time.
Like:
var _timelineDuration = 5500;
var _barDurationUpdate = 5490;
var _axisDurationUpdate = 5000;
And drag the timeline to about 10/7/20. We will see that the bar of Massachusetts growth very fast and does not switch on time.
It depends on the setting on axis:
yAxis: [{
inverse: true,
type: 'category',
max: 9,
// Should be a small value:
animationDurationUpdate: 200,
animationEasingUpdate: 'linear'
}],
|
gharchive/issue
| 2020-11-24T12:16:37 |
2025-04-01T04:55:59.121754
|
{
"authors": [
"100pah"
],
"repo": "apache/incubator-echarts",
"url": "https://github.com/apache/incubator-echarts/issues/13679",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
314186348
|
[GOBBLIN-465] Add support for couchbase client cert auth
The requisite version bump eliminates support for couchbase v3.
Dear Gobblin maintainers,
Please accept this PR. I understand that it will not be reviewed until I have checked off all the steps below!
JIRA
[/] My PR addresses the following Gobblin JIRA issues and references them in the PR title. For example, "[GOBBLIN-465] My Gobblin PR"
https://issues.apache.org/jira/browse/GOBBLIN-465
Description
[/] Here are some details about my PR, including screenshots (if applicable):
Bump API to 2.5.4. Exposes configuration values for performing cert auth. Skips SASL password auth if password is not provided.
Tests
[/] My PR adds the following unit tests OR does not need testing for this extremely good reason: this change is mostly exposing new config values - coverage & flow doesn't really change here, but I'm happy to add tests if they are needed to pass. I have smoke tested the new and old functionality with this build against CB5.
Commits
[/] My commits all reference JIRA issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
Subject is separated from body by a blank line
Subject is limited to 50 characters
Subject does not end with a period
Subject uses the imperative mood ("add", not "adding")
Body wraps at 72 characters
Body explains "what" and "why", not "how"
@wwest4 looks ok. let me know when you have fixed the test cases.
@abti passing now after an update of CouchbaseMock
Thanks. Merged the changes.
|
gharchive/pull-request
| 2018-04-13T17:15:47 |
2025-04-01T04:55:59.128173
|
{
"authors": [
"abti",
"wwest4"
],
"repo": "apache/incubator-gobblin",
"url": "https://github.com/apache/incubator-gobblin/pull/2337",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
127600540
|
HAWQ-344. When resource queue capacity is shrunk, deadlock detection …
When resource queue capacity is shrunk, deadlock detection may not be triggered.
The fix looks good. +1
+1
|
gharchive/pull-request
| 2016-01-20T04:16:56 |
2025-04-01T04:55:59.129349
|
{
"authors": [
"huor",
"jiny2",
"linwen"
],
"repo": "apache/incubator-hawq",
"url": "https://github.com/apache/incubator-hawq/pull/281",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
502022900
|
[HUDI-285] Implement HoodieStorageWriter based on actual file type
see jira: https://jira.apache.org/jira/projects/HUDI/issues/HUDI-285
CC @vinothchandar
LGTM/ merging!
|
gharchive/pull-request
| 2019-10-03T12:01:52 |
2025-04-01T04:55:59.130612
|
{
"authors": [
"leesf",
"vinothchandar"
],
"repo": "apache/incubator-hudi",
"url": "https://github.com/apache/incubator-hudi/pull/936",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1212150185
|
[Bug][Manager] The method of batchSaveAll occurred error as the sink_name was null
What happened
Submitting the form via the batchSaveAll() method throws an exception, because the form has no sink_name field to fill in, while the database defines that column as non-null.
What you expected to happen
No Errors
How to reproduce
Create a hive sink
Environment
centos
InLong version
master
InLong Component
InLong Manager, InLong Dashboard
Are you willing to submit PR?
[ ] Yes, I am willing to submit a PR!
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Already resolved.
|
gharchive/issue
| 2022-04-22T10:10:30 |
2025-04-01T04:55:59.134279
|
{
"authors": [
"Greedyu",
"healchow"
],
"repo": "apache/incubator-inlong",
"url": "https://github.com/apache/incubator-inlong/issues/3888",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
497451227
|
[IOTDB-234] Refactor TsFile storage on HDFS
Refactor TsFile storage on HDFS codes:
Extract HDFS related files (HDFSFile, HDFSInput, HDFSOutput) into Hadoop module from TSFile module, so that TSFile module will not depend on Hadoop libs.
Use Java Reflection to get HDFS related files for factories used in TSFile module.
I think using reflection every time is too heavy and slow when all you want to do is get a new object. To put it simply, reflection should be used as infrequently as possible.
Here is my suggestion: use two factories (the LocalFactory and the HDFSFactory) instead of one, and initialize one of them using reflection according to the configuration at the system start-up. Thus the occurrence of reflections can be reduced to only 1.
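The suggested pattern, resolving the implementation once at start-up so that the reflection (or lookup) cost is paid a single time, can be sketched roughly like this (illustrative Python, not the actual IoTDB Java code; the config value and method names are assumptions):

```python
class LocalFSFactory:
    def get_file(self, path):
        return ("local", path)

class HDFSFactory:
    def get_file(self, path):
        return ("hdfs", path)

_FACTORIES = {"LOCAL": LocalFSFactory, "HDFS": HDFSFactory}

class FSFactoryProducer:
    _instance = None  # chosen exactly once at start-up

    @classmethod
    def init(cls, storage_fs):
        # One-time selection according to the configuration; afterwards
        # get_file() is a plain method call with no per-call lookup.
        cls._instance = _FACTORIES[storage_fs]()

    @classmethod
    def get_fs_factory(cls):
        return cls._instance

FSFactoryProducer.init("LOCAL")
print(FSFactoryProducer.get_fs_factory().get_file("/tmp/x"))  # -> ('local', '/tmp/x')
```

In Java the one-time step could equally be done via reflection on a configured class name; the point is that per-call object creation stays a direct `new()`.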
Hi, many thanks for your useful suggestion!
In the new commit, I tried to reduce the occurrence of reflections to only 1 by initializing private static Class<?> clazz when constructing TSFileFactory. I think this also achieves your main purpose, so I didn't use two factories... Do you think it is acceptable? > <
If you insist on this design, at least you should save the constructor.
And this test will show you that this is still slower compared with a direct call to new().
ReflectionTest.txt
Thanks for the suggested design @jt2594838 ! I have accepted it by using two factories (LocalFSFactory and HDFSFactory) with FSFactoryProducer producing either one of them according to the user config. I think this design is much better: it is lighter and faster, and more scalable for future development.
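The design discussed above can be sketched as follows. This is a minimal, hypothetical Java illustration — names like FSFactory, newInput, and the factory classes are placeholders standing in for the actual IoTDB types, not the real API — showing reflection confined to a single start-up call:

```java
// Illustrative sketch (not the actual IoTDB API): the producer uses
// reflection exactly once, when it loads the configured factory class;
// every later call is an ordinary virtual dispatch with no reflection.
interface FSFactory {
    String newInput(String path); // stands in for creating an HDFSInput/LocalInput
}

class LocalFSFactory implements FSFactory {
    public String newInput(String path) { return "local:" + path; }
}

class HDFSFactory implements FSFactory {
    public String newInput(String path) { return "hdfs:" + path; }
}

class FSFactoryProducer {
    // Initialized once at class-load time; the one and only reflective call.
    private static final FSFactory INSTANCE = create();

    private static FSFactory create() {
        // In the real code this class name would come from the storage config.
        String clazzName = LocalFSFactory.class.getName();
        try {
            return (FSFactory) Class.forName(clazzName)
                    .getDeclaredConstructor()
                    .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot load FS factory " + clazzName, e);
        }
    }

    static FSFactory getFSFactory() { return INSTANCE; }
}
```

Because the chosen factory is cached in a static final field, every subsequent newInput call is a plain method call, which addresses the "reflection every time is too heavy and slow" concern while keeping the file-system choice configurable.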
|
gharchive/pull-request
| 2019-09-24T04:40:51 |
2025-04-01T04:55:59.139309
|
{
"authors": [
"jt2594838",
"samperson1997"
],
"repo": "apache/incubator-iotdb",
"url": "https://github.com/apache/incubator-iotdb/pull/417",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2234090272
|
[new-parser] Class literal not supported by expression parser
Parent issue
#5678
Failing tests
org.drools.traits.compiler.factmodel.traits.TraitTest#testIsAEvaluatorOnClassification
Notes
Rule code snippet
$t : org.drools.base.factmodel.traits.Thing( $c : core, this not isA t.x.E.class, this isA t.x.D.class )
Error output
20:22:51.158 [main] WARN o.d.c.k.builder.impl.KieBuilderImpl.packageNameForFile:396 - File 'file0.drl' is in folder '' but declares package 't.x'. It is advised to have a correspondance between package and folder names.
### parse : ANTLR4_PARSER_ENABLED = true
line 1:19 no viable alternative at input '.class'
20:22:51.189 [main] ERROR o.d.c.k.b.impl.AbstractKieProject.buildKnowledgePackages:280 - Unable to build KieBaseModel:defaultKieBase
Unable to parse pattern expression:
[ERR 101] Line 1:19 no viable alternative at input '.class' : [Rule name='Rule 0 >> http://t/x#D']
java.lang.RuntimeException: [Message [id=1, kieBase=defaultKieBase, level=ERROR, path=file0.drl, line=27, column=0
text=Unable to parse pattern expression:
[ERR 101] Line 1:19 no viable alternative at input '.class']]
at org.kie.internal.utils.KieHelper.getKieContainer(KieHelper.java:127)
at org.kie.internal.utils.KieHelper.build(KieHelper.java:89)
at org.kie.internal.utils.KieHelper.build(KieHelper.java:84)
at org.drools.traits.compiler.factmodel.traits.TraitTest.getSessionFromString(TraitTest.java:145)
at org.drools.traits.compiler.factmodel.traits.TraitTest.testIsAEvaluatorOnClassification(TraitTest.java:2699)
/take
|
gharchive/issue
| 2024-04-09T18:34:37 |
2025-04-01T04:55:59.141711
|
{
"authors": [
"yurloc"
],
"repo": "apache/incubator-kie-drools",
"url": "https://github.com/apache/incubator-kie-drools/issues/5833",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2228352102
|
[kie-issues#963] Remove all references to kogito-task-console and kogito-management-console
Closes https://github.com/apache/incubator-kie-issues/issues/963
This PR removes all references to kogito-task-console and kogito-management-console images, as they are being migrated to kie-tools [1].
It also fixes a small bug in the Makefile that prevented the make command from building all images.
[1] https://github.com/apache/incubator-kie-tools/pull/2226
Are you going to still use the same architecture (Cekit) when moving those to kie-tools?
No, we are using kie-tools tooling (image-builder [1]) to build the image now. Also, it's not a Java-based image anymore, we figured out it's enough to have Apache httpd serve the static webapp files and provide a way to change environment variables (to set the data index endpoint, for example).
If we're missing something or not meeting a requirement, let us know.
[1] https://github.com/apache/incubator-kie-tools/tree/main/packages/image-builder
PR job #137 was: FAILURE
Possible explanation: Pipeline failure or project build failure
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/137/display/redirect
See console log:
Console Logs
[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-postgresql: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jit-runner: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-allinone: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-runtime-native: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: FAILURE[Pipeline] error[Pipeline] }[Pipeline] // stage[Pipeline] }Failed in branch kogito-runtime-nativeBuild KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-builder: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-devmode: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-s2i-builder: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }[Pipeline] // parallel[Pipeline] }[Pipeline] // dir[Pipeline] }[Pipeline] // script[Pipeline] }[Pipeline] 
// stage[Pipeline] stage[Pipeline] { (Declarative: Post Actions)[Pipeline] script[Pipeline] {[Pipeline] sh+ wget --no-check-certificate -qO - 'https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/137/api/json?depth=0'[Pipeline] readJSON[Pipeline] sh
@tiagobento count on me
@ricardozanini I'm not sure why ShellCheck is failing, do you have any ideas?
Try looking for this openBinaryFile method. Maybe it was a script used in the modules you're removing.
PR job #143 was: UNSTABLE
Possible explanation: This should be test failures
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/143/display/redirect
See console log:
Console Logs
[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-postgresql: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-data-index-ephemeral seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jit-runner: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-allinone: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-jobs-service-allinone seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-builder: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-swf-builder seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-devmode: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-swf-devmode seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }[Pipeline] // parallel[Pipeline] }[Pipeline] // dir[Pipeline] }[Pipeline] // script[Pipeline] }[Pipeline] // stage[Pipeline] stage[Pipeline] { (Declarative: Post Actions)[Pipeline] script[Pipeline] {[Pipeline] sh+ wget --no-check-certificate -qO - 
'https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/143/api/json?depth=0'[Pipeline] readJSON[Pipeline] sh
@ricardozanini Apparently, ShellCheck never worked (or I'm missing something), because Bash didn't handle globstar well. After I enabled it, ShellCheck started printing a bunch of warnings and infos regarding the scripts.
I could add a -S error to the ShellCheck command so it only alerts for Error cases, but I can't fix the warnings and infos as I don't have the proper context of these scripts.
PR job #144 was: UNSTABLE
Possible explanation: This should be test failures
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/144/display/redirect
See console log:
Console Logs
[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-postgresql: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-postgresql: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jit-runner: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-allinone: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-builder: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-swf-builder seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-devmode: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-swf-devmode seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }[Pipeline] // parallel[Pipeline] }[Pipeline] // dir[Pipeline] }[Pipeline] // script[Pipeline] }[Pipeline] // stage[Pipeline] stage[Pipeline] { (Declarative: Post Actions)[Pipeline] 
script[Pipeline] {[Pipeline] sh+ wget --no-check-certificate -qO - 'https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/144/api/json?depth=0'[Pipeline] readJSON[Pipeline] sh
PR job #145 was: FAILURE
Possible explanation: Pipeline failure or project build failure
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/145/display/redirect
See console log:
Console Logs
[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-postgresql: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-postgresql: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jit-runner: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-allinone: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-builder: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-swf-builder seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-devmode: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: FAILURE[Pipeline] error[Pipeline] }[Pipeline] // stage[Pipeline] }Failed in branch kogito-swf-devmode[Pipeline] // parallel[Pipeline] }[Pipeline] // dir[Pipeline] }[Pipeline] // script[Pipeline] }[Pipeline] // stage[Pipeline] stage[Pipeline] { (Declarative: Post Actions)[Pipeline] script[Pipeline] {[Pipeline] sh+ wget 
--no-check-certificate -qO - 'https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/145/api/json?depth=0'[Pipeline] readJSON[Pipeline] sh
@thiagoelg please remove the globstar, this action has been working on all the other PRs. I'll take a look after you revert it.
@thiagoelg I think I've found the problem, I'll send a PR later to solve this.
Fixed here: https://github.com/apache/incubator-kie-kogito-images/pull/1756
Once I get green, we can merge, rebase here and we should be good to go.
Awesome! Thanks @ricardozanini
PR job #147 was: UNSTABLE
Possible explanation: This should be test failures
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/147/display/redirect
See console log:
Console Logs
[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jit-runner: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-jobs-service-ephemeral seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-allinone: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-jobs-service-allinone seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-builder: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-swf-builder seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-devmode: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-swf-devmode seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }[Pipeline] // parallel[Pipeline] }[Pipeline] // dir[Pipeline] }[Pipeline] // script[Pipeline] }[Pipeline] // stage[Pipeline] stage[Pipeline] { (Declarative: Post Actions)[Pipeline] script[Pipeline] {[Pipeline] sh+ wget --no-check-certificate -qO - 
'https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/147/api/json?depth=0'[Pipeline] readJSON[Pipeline] sh
PR job #153 was: UNSTABLE
Possible explanation: This should be test failures
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/153/display/redirect
See console log:
Console Logs
[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jit-runner: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-postgresql: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-data-index-ephemeral seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-allinone: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-builder: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-swf-builder seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-devmode: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-swf-devmode seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }[Pipeline] // parallel[Pipeline] }[Pipeline] // dir[Pipeline] }[Pipeline] // script[Pipeline] }[Pipeline] // stage[Pipeline] 
stage[Pipeline] { (Declarative: Post Actions)[Pipeline] script[Pipeline] {[Pipeline] sh+ wget --no-check-certificate -qO - 'https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/153/api/json?depth=0'[Pipeline] readJSON[Pipeline] sh
The images build are broken, please see: https://github.com/apache/incubator-kie-kogito-runtimes/issues/3473
PR job #154 was: FAILURE
Possible explanation: Pipeline failure or project build failure
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/154/display/redirect
See console log:
Console Logs
[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-postgresql: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-allinone: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-postgresql: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-builder: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-swf-builder seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-devmode: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: FAILURE[Pipeline] error[Pipeline] }[Pipeline] // stage[Pipeline] }Failed in branch kogito-swf-devmode[Pipeline] // parallel[Pipeline] }[Pipeline] // dir[Pipeline] }[Pipeline] // script[Pipeline] }[Pipeline] // stage[Pipeline] stage[Pipeline] { (Declarative: Post Actions)[Pipeline] script[Pipeline] 
{[Pipeline] sh+ wget --no-check-certificate -qO - 'https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/154/api/json?depth=0'[Pipeline] readJSON[Pipeline] sh
PR job #155 was: UNSTABLE
Possible explanation: This should be test failures
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/155/display/redirect
See console log:
Console Logs
[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jit-runner: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-postgresql: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-data-index-ephemeral: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-data-index-ephemeral seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-jobs-service-allinone: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: UNSTABLE[Pipeline] unstableWARNING: Tests on kogito-jobs-service-allinone seems to have failed[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-devmode: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }Build KIE » kogito » main » pullrequest » kogito-images.build-image PR #1753 - kogito-swf-builder: https://github.com/apache/incubator-kie-kogito-images/pull/1753 completed: SUCCESS[Pipeline] }[Pipeline] // stage[Pipeline] }[Pipeline] // parallel[Pipeline] }[Pipeline] // dir[Pipeline] }[Pipeline] // script[Pipeline] }[Pipeline] // stage[Pipeline] stage[Pipeline] { (Declarative: Post 
Actions)[Pipeline] script[Pipeline] {[Pipeline] sh+ wget --no-check-certificate -qO - 'https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/155/api/json?depth=0'[Pipeline] readJSON[Pipeline] sh
PR job #159 was: FAILURE
Possible explanation: Pipeline failure or project build failure
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/159/display/redirect
See console log:
Console Logs
From https://github.com/thiagoelg/incubator-kie-kogito-images * branch kie-issues-963 -> FETCH_HEADhint: You have divergent branches and need to specify how to reconcile them.hint: You can do so by running one of the following commands sometime beforehint: your next pull:hint: hint: git config pull.rebase false # merge (the default strategy)hint: git config pull.rebase true # rebasehint: git config pull.ff only # fast-forward onlyhint: hint: You can replace "git config" with "git config --global" to set a defaulthint: preference for all repositories. You can also pass --rebase, --no-rebase,hint: or --ff-only on the command line to override the configured default perhint: invocation.fatal: Need to specify how to reconcile divergent branches.[Pipeline] }[Pipeline] // withCredentials[Pipeline] echo ------------------------------------------------------------- [ERROR] Can't merge source into Target. Please rebase PR branch. ------------------------------------------------------------- Source: git://github.com/thiagoelg/incubator-kie-kogito-images kie-issues-963 Target: af98d962 Temporary removal of kogito-swf-{builder-,devmode} images from release pipelines (#1752) ------------------------------------------------------------- [Pipeline] }[Pipeline] // dir[Pipeline] }[Pipeline] // script[Pipeline] }[Pipeline] // stage[Pipeline] stage[Pipeline] { (Validate CeKit Image and Modules descriptors)Stage "Validate CeKit Image and Modules descriptors" skipped due to earlier failure(s)[Pipeline] }[Pipeline] // stage[Pipeline] stage[Pipeline] { (Build & Test Images)Stage "Build & Test Images" skipped due to earlier failure(s)[Pipeline] }[Pipeline] // stage[Pipeline] stage[Pipeline] { (Declarative: Post Actions)[Pipeline] script[Pipeline] {[Pipeline] sh+ wget --no-check-certificate -qO - 'https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest/job/kogito-images.build-and-test/159/api/json?depth=0'[Pipeline] readJSON[Pipeline] sh
|
gharchive/pull-request
| 2024-04-05T15:53:07 |
2025-04-01T04:55:59.220111
|
{
"authors": [
"kie-ci3",
"ricardozanini",
"thiagoelg"
],
"repo": "apache/incubator-kie-kogito-images",
"url": "https://github.com/apache/incubator-kie-kogito-images/pull/1753",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2585444290
|
[incubator-kie-issues-1485] Reenable event test in ProcessTestEvents in kogito-apps
Closes: https://github.com/apache/incubator-kie-issues/issues/1485
PR job #2 was: UNSTABLE
Possible explanation: This should be test failures
Reproducer
build-chain build full_downstream -f 'https://raw.githubusercontent.com/${AUTHOR:apache}/incubator-kie-kogito-pipelines/${BRANCH:main}/.ci/buildchain-config-pr-cdb.yaml' -o 'bc' -p apache/incubator-kie-kogito-runtimes -u https://github.com/apache/incubator-kie-kogito-runtimes/pull/3722 --skipParallelCheckout
NOTE: To install the build-chain tool, please refer to https://github.com/kiegroup/github-action-build-chain#local-execution
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest_jobs/job/kogito-runtimes-pr/job/PR-3722/2/display/redirect
Test results:
PASSED: 3362
FAILED: 6
Those are the test failures:
org.jbpm.bpmn2.IntermediateEventTest.testTimerBoundaryEventInterrupting
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testIntermediateCatchEventTimerDuration
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testEventBasedSplit2
expected: 2 but was: 1
org.kie.kogito.quarkus.it.openapi.client.ApiWithSecurityContextIT.verifyAuthHeadersOpenApi3_0NoAuth
java.util.concurrent.CompletionException: java.lang.RuntimeException: Unable to start Quarkus test resource class org.kie.kogito.quarkus.it.openapi.client.mocks.AuthSecurityMockService
org.kie.kogito.integrationtests.quarkus.TaskIT.testUpdateTaskInfo
1 expectation failed.Expected status code <200> but was <404>.
org.kie.kogito.integrationtests.springboot.TaskTest.testUpdateTaskInfo
1 expectation failed.Expected status code <200> but was <404>.
PR job #3 was: UNSTABLE
Possible explanation: This should be test failures
Reproducer
build-chain build full_downstream -f 'https://raw.githubusercontent.com/${AUTHOR:apache}/incubator-kie-kogito-pipelines/${BRANCH:main}/.ci/buildchain-config-pr-cdb.yaml' -o 'bc' -p apache/incubator-kie-kogito-runtimes -u https://github.com/apache/incubator-kie-kogito-runtimes/pull/3722 --skipParallelCheckout
NOTE: To install the build-chain tool, please refer to https://github.com/kiegroup/github-action-build-chain#local-execution
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest_jobs/job/kogito-runtimes-pr/job/PR-3722/3/display/redirect
Test results:
PASSED: 3376
FAILED: 5
Those are the test failures:
org.jbpm.bpmn2.IntermediateEventTest.testTimerBoundaryEventInterrupting
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testIntermediateCatchEventTimerDuration
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testEventBasedSplit2
expected: 2 but was: 1
org.kie.kogito.integrationtests.quarkus.TaskIT.testUpdateTaskInfo
1 expectation failed.Expected status code <200> but was <404>.
org.kie.kogito.integrationtests.springboot.TaskTest.testUpdateTaskInfo
1 expectation failed.Expected status code <200> but was <404>.
PR job #4 was: UNSTABLE
Possible explanation: This should be test failures
Reproducer
build-chain build full_downstream -f 'https://raw.githubusercontent.com/${AUTHOR:apache}/incubator-kie-kogito-pipelines/${BRANCH:main}/.ci/buildchain-config-pr-cdb.yaml' -o 'bc' -p apache/incubator-kie-kogito-runtimes -u https://github.com/apache/incubator-kie-kogito-runtimes/pull/3722 --skipParallelCheckout
NOTE: To install the build-chain tool, please refer to https://github.com/kiegroup/github-action-build-chain#local-execution
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest_jobs/job/kogito-runtimes-pr/job/PR-3722/4/display/redirect
Test results:
PASSED: 3378
FAILED: 3
Those are the test failures:
org.jbpm.bpmn2.IntermediateEventTest.testTimerBoundaryEventInterrupting
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testIntermediateCatchEventTimerDuration
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testEventBasedSplit2
expected: 2 but was: 1
PR job #5 was: UNSTABLE
Possible explanation: This should be test failures
Reproducer
build-chain build full_downstream -f 'https://raw.githubusercontent.com/${AUTHOR:apache}/incubator-kie-kogito-pipelines/${BRANCH:main}/.ci/buildchain-config-pr-cdb.yaml' -o 'bc' -p apache/incubator-kie-kogito-runtimes -u https://github.com/apache/incubator-kie-kogito-runtimes/pull/3722 --skipParallelCheckout
NOTE: To install the build-chain tool, please refer to https://github.com/kiegroup/github-action-build-chain#local-execution
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest_jobs/job/kogito-runtimes-pr/job/PR-3722/5/display/redirect
Test results:
PASSED: 2664
FAILED: 3
Those are the test failures:
org.jbpm.bpmn2.IntermediateEventTest.testTimerBoundaryEventInterrupting
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testIntermediateCatchEventTimerDuration
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testEventBasedSplit2
expected: 2 but was: 1
PR job #6 was: UNSTABLE
Possible explanation: This should be test failures
Reproducer
build-chain build full_downstream -f 'https://raw.githubusercontent.com/${AUTHOR:apache}/incubator-kie-kogito-pipelines/${BRANCH:main}/.ci/buildchain-config-pr-cdb.yaml' -o 'bc' -p apache/incubator-kie-kogito-runtimes -u https://github.com/apache/incubator-kie-kogito-runtimes/pull/3722 --skipParallelCheckout
NOTE: To install the build-chain tool, please refer to https://github.com/kiegroup/github-action-build-chain#local-execution
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest_jobs/job/kogito-runtimes-pr/job/PR-3722/6/display/redirect
Test results:
PASSED: 2155
FAILED: 3
Those are the test failures:
org.jbpm.bpmn2.IntermediateEventTest.testTimerBoundaryEventInterrupting
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testIntermediateCatchEventTimerDuration
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testEventBasedSplit2
expected: 2 but was: 1
PR job #7 was: UNSTABLE
Possible explanation: This should be test failures
Reproducer
build-chain build full_downstream -f 'https://raw.githubusercontent.com/${AUTHOR:apache}/incubator-kie-kogito-pipelines/${BRANCH:main}/.ci/buildchain-config-pr-cdb.yaml' -o 'bc' -p apache/incubator-kie-kogito-runtimes -u https://github.com/apache/incubator-kie-kogito-runtimes/pull/3722 --skipParallelCheckout
NOTE: To install the build-chain tool, please refer to https://github.com/kiegroup/github-action-build-chain#local-execution
Please look here: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/pullrequest_jobs/job/kogito-runtimes-pr/job/PR-3722/7/display/redirect
Test results:
PASSED: 3380
FAILED: 3
Those are the test failures:
org.jbpm.bpmn2.IntermediateEventTest.testTimerBoundaryEventInterrupting
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testIntermediateCatchEventTimerDuration
expected: 2 but was: 1
org.jbpm.bpmn2.IntermediateEventTest.testEventBasedSplit2
expected: 2 but was: 1
|
gharchive/pull-request
| 2024-10-14T09:44:26 |
2025-04-01T04:55:59.250749
|
{
"authors": [
"elguardian",
"kie-ci3"
],
"repo": "apache/incubator-kie-kogito-runtimes",
"url": "https://github.com/apache/incubator-kie-kogito-runtimes/pull/3722",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1284399045
|
Add an issue translator to the website repository
This PR is to solve the issue #381
|
gharchive/pull-request
| 2022-06-25T01:24:16 |
2025-04-01T04:55:59.252556
|
{
"authors": [
"Beacontownfc"
],
"repo": "apache/incubator-linkis-website",
"url": "https://github.com/apache/incubator-linkis-website/pull/380",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1167601501
|
About DSS and linkis, in addition to supporting deployment user login, can you configure other user login?
Search before asking
[X] I searched the issues and found no similar issues.
Linkis Component
linkis-commons
What happened + What you expected to happen
About DSS and linkis, in addition to supporting deployment user login, can you configure other user login?
Relevent platform
。
Reproduction script
。
Anything else
This question comes from the QA documentation of the Linkis community
Are you willing to submit a PR?
[X] Yes I am willing to submit a PR!
Solution:
Certainly. The deploy user exists only for convenience. The Linkis gateway supports access by configuring an LDAP service or an SSO service; there is no built-in user verification system. For example, to enable LDAP access, you only need to configure your LDAP server in the Linkis gateway configuration file (linkis.properties) as follows:
wds.linkis.ldap.proxy.url=ldap://127.0.0.1:389/  # your LDAP server URL
wds.linkis.ldap.proxy.baseDN=dc=webank,dc=com    # your LDAP service configuration
If the user needs to run tasks, a user with the corresponding user name must also be created on the Linkis server. For the standard version, if the user needs to run Spark and Hive tasks, a directory named after the user must also be created in the local workspace and in the HDFS directory /tmp/linkis.
|
gharchive/issue
| 2022-03-13T14:46:57 |
2025-04-01T04:55:59.256586
|
{
"authors": [
"Ritakang0451"
],
"repo": "apache/incubator-linkis",
"url": "https://github.com/apache/incubator-linkis/issues/1688",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
343255105
|
All the tests in tools/coreml package are failing
Note: Providing complete information in the most concise form is the best way to get help. This issue template serves as the checklist for essential information to most of the technical issues and bug reports. For non-technical issues and feature requests, feel free to present the information in what you believe is the best form.
For Q & A and discussion, please start a discussion thread at https://discuss.mxnet.io
Description
Currently all the tests for mxnet-to-coreml converter under the tools/coreml package are failing due to lack of maintenance. We need to update them to keep up with the latest release
Environment info (Required)
----------Python Info----------
('Version :', '2.7.15')
('Compiler :', 'GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)')
('Build :', ('default', 'May 1 2018 16:44:37'))
('Arch :', ('64bit', ''))
------------Pip Info-----------
('Version :', '10.0.1')
('Directory :', '/Users/lnyuan/.virtualenvs/mxnet2/lib/python2.7/site-packages/pip')
----------MXNet Info-----------
('Version :', '1.3.0')
('Directory :', '/Users/lnyuan/work/incubator-mxnet/python/mxnet')
Hashtag not found. Not installed from pre-built package.
----------System Info----------
('Platform :', 'Darwin-16.7.0-x86_64-i386-64bit')
('system :', 'Darwin')
('node :', '88e9fe759c49.ant.amazon.com')
('release :', '16.7.0')
('version :', 'Darwin Kernel Version 16.7.0: Thu Jun 21 20:07:39 PDT 2018; root:xnu-3789.73.14~1/RELEASE_X86_64')
----------Hardware Info----------
('machine :', 'x86_64')
('processor :', 'i386')
machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT PREFETCHW RDTSCP TSCI
machdep.cpu.leaf7_features: SMEP ERMS RDWRFSGS TSC_THREAD_OFFSET BMI1 AVX2 BMI2 INVPCID SMAP RDSEED ADX IPT SGX FPU_CSDS MPX CLFSOPT
machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C
machdep.cpu.brand_string: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0121 sec, LOAD: 0.6747 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0117 sec, LOAD: 0.4044 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0217 sec, LOAD: 0.1044 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0195 sec, LOAD: 0.0761 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0173 sec, LOAD: 0.2269 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0216 sec, LOAD: 0.1687 sec.
Package used (Python/R/Scala/Julia):
Python
Build info (Required if built from source)
Compiler (gcc/clang/mingw/visual studio):
g++
MXNet commit hash:
e4134c8270c1b944278b1e0331313074b1d97cc0
Error Message:
======================================================================
ERROR: test_pred_vgg16 (test_mxnet_models.ModelsTest)
Traceback (most recent call last):
File "/Users/lnyuan/work/mxnet-master/tools/coreml/test/test_mxnet_models.py", line 141, in test_pred_vgg16
"http://data.mxnet.io/models/imagenet/vgg/vgg16-0000.params"])
File "/Users/lnyuan/work/mxnet-master/tools/coreml/test/test_mxnet_models.py", line 116, in _test_model
coreml_pred = coreml_model.predict(_mxnet_remove_batch(input_data)).values()[0].flatten()
File "/Users/lnyuan/.virtualenvs/mxnet2/lib/python2.7/site-packages/coremltools/models/model.py", line 267, in predict
raise Exception('Model prediction is only supported on macOS version 10.13 or later.')
Exception: Model prediction is only supported on macOS version 10.13 or later.
Minimum reproducible example
All the tests under tools/coreml/test
Steps to reproduce
cd tools/coreml/test
nosetests -v .
What have you tried to solve it?
@sandeep-krishnamurthy Please help to label this issue Test
Created a JIRA ticket to track this issue. I will work on it.
As displayed in the error message "Exception: Model prediction is only supported on macOS version 10.13 or later.", the tests convert MXNet models into CoreML models and use CoreML to predict. This prediction is not supported on Mac OS 10.12.
I have disabled the prediction and tests will pass.
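The version check behind that kind of guard might look like the following (an illustrative sketch, not the actual patch; the function names are assumptions):

```python
import platform

def version_at_least(version_str, major, minor):
    """Return True if a dotted version string is >= major.minor."""
    parts = [int(p) for p in version_str.split(".")[:2]]
    return parts >= [major, minor]

def coreml_predict_supported():
    # CoreML model prediction requires macOS 10.13 or later (see the error above),
    # so skip the predict step on older systems and on non-Mac platforms.
    return platform.system() == "Darwin" and version_at_least(
        platform.mac_ver()[0], 10, 13)
```

A test could then call `coreml_predict_supported()` and skip only the prediction assertions, keeping the conversion itself covered on every platform.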
@apeforest any update?
@lupesko I have created a simple test that can run on Mac OS 10.12. The PR is https://github.com/apache/incubator-mxnet/pull/11952
Since our CI does not include Mac OS platform, I am working with @marcoabreu to add this to the manual test suite.
@sandeep-krishnamurthy @nswamy Please close this issue.
|
gharchive/issue
| 2018-07-20T21:38:49 |
2025-04-01T04:55:59.267368
|
{
"authors": [
"apeforest",
"lupesko"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/issues/11841",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
650177414
|
MXNet for Scala 2.12 and 2.13
The Scala language binding is badly needed for high performance training/inference around Apache Spark. The current Scala language binding is for 2.11 and the MXNet version is 1.5.1.
Please add 2.12 and 2.13 packages for 1.6.0 and up. TF has eaplatanios/tensorflow_scala, which while being a non-official release, has a lot of traction and builds for both 2.12 and 2.13.
I would like to contribute if anyone can give me a few pointers of what is needed to have this happen.
@cosmincatalin have you tried building from source https://github.com/apache/incubator-mxnet/tree/v1.x/scala-package#build-from-source?
Are you interested in the CPU or GPU packages? There are some licensing issues with the binaries at org.apache.mxnet (GPU packages may be subject to CUDA EULA and thus incompatible with Apache 2 License; CPU packages redistribute libquadmath.so which is GPL and thus incompatible with Apache 2 License). The latter can be easily fixed by not putting libquadmath.so into the jar. So if you are interested in having the CPU packages for 1.6 (and 1.7 release) and Scala 2.12 and 2.13, we can discuss more about how to make it happen. The first step is to verify the build-from-source works locally for 2.12 and 2.13.
For the GPU packages, it depends on NVidia. They have internal discussions considering if they'd be able to make their EULA compatible with Apache License 2.
You can also refer to https://issues.apache.org/jira/browse/INFRA-20442 for more information.
I've tinkered a little with building from source, but wasn't very successful, I guess I need to focus on it more. To answer your second question, yes, I am interested specifically in the CPU packages.
hey, I'm interested on this, count me in if you want help @cosmincatalin
Hey @leezu, I managed to compile the project using make. But when I run mvn compile in the Scala folder I get this error:
[INFO] Compiling 2 source files to /home/gustavo/git/apache-mxnet-src-1.6.0-incubating/scala-package/init/target/classes at 1597332091507
[INFO] compiler plugin: BasicArtifact(org.scalamacros,paradise_2.11.8,2.1.0,null)
[ERROR] error: scala.reflect.internal.MissingRequirementError: object java.lang.Object in compiler mirror not found.
then I realized that the pom.xml is configured like this:
<java.version>1.7</java.version>
I have Java 11 installed and that may be the source of the error above, would that be correct?
Also, is there any reason to set the java.version to 1.7?
@tavoaqp I think that's right. cc @lanking520 who's helping with a build instruction on this.
I have some instructions in this repository https://github.com/cosmincatalin/mxnet-compiler. I use a custom-made Docker image to compile a Linux-based MXNet library and then I compile the Scala 2.11 binding. The image has Java 8 baked in.
Roughly the same procedure can be used to generate 2.12 bindings, by modifying some Maven configs and libraries. But I don't know how to generate both 2.11, 2.12 and 2.13 bindings.
Instead of building from source, you can get the pip wheel for mxnet and put the .so in the lib folder.
I would suggest starting with mvn verify to see if it can run successfully. I tried to upgrade to 2.12 last year but failed due to some dependency mismatch issues. If you can get past them it could be ok.
Another beast in the code is the code generation system. I am not sure whether 2.12 or 2.13 has consistent enough support for quasiquotes to make it work, but it is worth a try. Finally there is the Spark support; you may need to change some code to get Spark fully supported.
thanks @lanking520 ! seems like a lot of work :smile: , @cosmincatalin I will try your setup!
@lanking520 Yeah, that works too. I tried with so's from the wheel, and that worked. That being said, 2.12 and 2.13 are a must since a lot of Scala code is now on 2.12 at least. Spark is now on 2.12. It would have been nice if the whole setup was based on sbt rather than maven
hey @lanking520 I got some progress: I switched to Scala 2.12.12 and fixed the dependencies. Everything compiles (with some warnings though) but when it comes to compile the examples project I get this:
[INFO] --- scala-maven-plugin:3.4.4:doc-jar (compile) @ mxnet-examples ---
/home/gustavo/git/incubator-mxnet/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/benchmark/ObjectDetectionBenchmark.java:35: error: not found: type NDArray$
private NDArray$ NDArray = NDArray$.MODULE$;
^
/home/gustavo/git/incubator-mxnet/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/bert/BertQA.java:52: error: not found: type NDArray$
private static NDArray$ NDArray = NDArray$.MODULE$;
^
/home/gustavo/git/incubator-mxnet/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/predictor/PredictorExample.java:48: error: not found: type NDArray$
private static NDArray$ NDArray = NDArray$.MODULE$;
Any pointers?
Thanks!
Yeah, I've stumbled on this one as well. Come to think of it, maybe I haven't actually been able to generate bindings for 2.12 🤔
hey @lanking520 I got some progress: I switched to Scala 2.12.12 and fixed the dependencies. Everything compiles (with some warnings though) but when it comes to compile the examples project I get this:
[INFO] --- scala-maven-plugin:3.4.4:doc-jar (compile) @ mxnet-examples ---
/home/gustavo/git/incubator-mxnet/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/benchmark/ObjectDetectionBenchmark.java:35: error: not found: type NDArray$
private NDArray$ NDArray = NDArray$.MODULE$;
^
/home/gustavo/git/incubator-mxnet/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/bert/BertQA.java:52: error: not found: type NDArray$
private static NDArray$ NDArray = NDArray$.MODULE$;
^
/home/gustavo/git/incubator-mxnet/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/predictor/PredictorExample.java:48: error: not found: type NDArray$
private static NDArray$ NDArray = NDArray$.MODULE$;
Any pointers?
Thanks!
Actually, this is used for Java to access the NDArray class.
So, is there a way to get past the issue with the NDArray?
I haven't had spare time to look at this! Still not sure why this is happening; I believe the POM project is not generating the Java byte code in the correct path.
October 2021 - is this project release stuck at Scala 2.11 (anno 2014)? I didn't expect it to work with Scala 3, but not even 2.13, come on? Any pointers to other machine learning frameworks that work with contemporary Scala versions?
Does anyone know if mxnet still has any scala or java bindings update on the roadmap? Or is it only python at this point?
|
gharchive/issue
| 2020-07-02T20:09:33 |
2025-04-01T04:55:59.281966
|
{
"authors": [
"Sciss",
"cosmincatalin",
"lanking520",
"leezu",
"markthor",
"szha",
"tavoaqp"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/issues/18655",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
120544043
|
[Discussion] FP16 Support
Ideally, we have everything to support FP16,
While trying to train Inception-7, I find out it is necessary to support FP16 now, as it is so memory consuming.
Let's set a milestone.
The question is do we want it as a global constant or a per tensor variable. The former requires rebuilding to switch and the later requires major refactor of frontend code, since it's currently const there.
we want a per tensor type aware thing, so fp16 can be part of the graph
Then we need to discuss how to handle this in python and cudnn. Currently mx_float is a const.
Also do we want to alway convert or let mshadow support different source/target type?
mshadow is typed, and mshadow accept DataType as flag, we can use mshadow's cast operator to cast between types. There is a type flag in TBlob already, need a patch to support dtype in NDArray
Does cudnn support mixed type operations?
Unlikely, but we can do explicit cast op
Should data type be the property of symbol?
Hmm, I guess we can support some kind of inference, and make datatype one loose property like device
The quickest way is to first make low level operator type aware, and support cast, so things can be added gradually, as opposed to need to support everything
I think we can make mshadow primitives support different input/output type. Then later we can add type inference to minimize casting.
mshadow primitives already support mixed types, for example:
Tensor<xpu, float> a;
Tensor<xpu, fp16> b, out;
out = cast<fp16>(a) + b;
The last line generates one kernel that takes the fp16 and float inputs and adds into the fp16 output.
can we make the cast implicit
Currently no, due to type inference cost, but I guess it works for most of the cases we need
What do you mean by type inference cost? Can it be resolved at compile time?
Yes, it can be, except it costs some engineering to write the type inference code. We can do it gradually: first by explicit cast, then add implicit inference
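The explicit-cast path discussed above can be sketched in plain C++ (an illustrative sketch only, not mshadow's actual implementation; `cast_vec` and `cast_add` are hypothetical names):

```cpp
#include <cstddef>
#include <vector>

// Explicit element-wise cast helper, mirroring the proposed
// out = cast<fp16>(a) + b pattern: mixed-type inputs are reconciled
// by an explicit cast before the arithmetic kernel runs.
template <typename DstT, typename SrcT>
std::vector<DstT> cast_vec(const std::vector<SrcT>& src) {
    std::vector<DstT> out(src.size());
    for (std::size_t i = 0; i < src.size(); ++i)
        out[i] = static_cast<DstT>(src[i]);   // explicit per-element cast
    return out;
}

// Mixed-type add: cast a's elements to b's type, then add element-wise.
// Assumes a and b have the same length (no broadcasting in this sketch).
template <typename DstT, typename SrcT>
std::vector<DstT> cast_add(const std::vector<SrcT>& a,
                           const std::vector<DstT>& b) {
    std::vector<DstT> ac = cast_vec<DstT>(a);
    for (std::size_t i = 0; i < b.size(); ++i) ac[i] += b[i];
    return ac;
}
```

With implicit inference added later, the frontend would insert the `cast_vec` step automatically whenever input types disagree.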
Ok. Let's try to do this properly so we can easily support all types. I think TF also supports int. should be useful for softmax and nlp models.
https://github.com/dmlc/mxnet/pull/1675
@tqchen How can I use cast<> in my code? I cannot find a definition of cast in mshadow.
|
gharchive/issue
| 2015-12-05T09:50:08 |
2025-04-01T04:55:59.293013
|
{
"authors": [
"KiddoZhu",
"antinucleon",
"futurely",
"piiswrong",
"tqchen"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/issues/833",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
403228567
|
[Clojure] Add resource scope to clojure package
Description
Ports ResourceScope from the Scala package and introduces using macro to automatically close any resource in the enclosing forms.
https://github.com/apache/incubator-mxnet/issues/13442
and see https://cwiki.apache.org/confluence/display/MXNET/JVM+Memory+Management for more context
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
[X] Changes are complete (i.e. I finished coding on this PR)
[X] All changes have test coverage:
Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
[X] Code is well-documented:
[X] To the my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
Changes
Also added the resource scope to the imclassification example.
New runs to the example do not show any warnings of memory leaks of this nature:
WARN org.apache.mxnet.WarnIfNotDisposed: LEAK: [one-time warning] ...
I think all the feedback is addressed. Feel free to take another look. If there are no more changes needed, I will merge 😸
|
gharchive/pull-request
| 2019-01-25T16:25:10 |
2025-04-01T04:55:59.297640
|
{
"authors": [
"gigasquid"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/13993",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
608017138
|
Fixed Install page history broken
Description
Fixed issue https://github.com/apache/incubator-mxnet/issues/14583. On the Get Started page, when navigating back through session history with the browser back button, the "Installing MXNet" options block did not update even though the URL was cut shorter.
After the fix, when navigating back through session history, the install options selection is updated to the previous state.
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
[x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
[x] Changes are complete (i.e. I finished coding on this PR)
Changes
[x] Update install options section when navigating back history
[x] Avoid install options section buttons' default css style, the blue outline, when losing focus during selection update
Comments
Preview: http://ec2-52-38-4-82.us-west-2.compute.amazonaws.com/
@mxnet-label-bot add [Website]
@mxnet-label-bot add [pr-awaiting-review]
Thanks @ys2843 for your contributions! Welcome to MXNet community :-)
Congratulations on your first PR @ys2843! Excellent work :)
All test cases passed, it should be all good to merge. Thanks @zachgk
|
gharchive/pull-request
| 2020-04-28T04:34:14 |
2025-04-01T04:55:59.303717
|
{
"authors": [
"connorgoggins",
"sandeep-krishnamurthy",
"ys2843"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/18182",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
279228676
|
Add param 'num_filter' for 'BaseConvRNNCell'.
Description
I think it is improper to use the 'num_hidden' as 'num_filter'. So I add the parameter 'num_filter' for 'BaseConvRNNCell' and its derived classes in rnn_cell.py.
Changes
Add the parameter 'num_filter'.
Change the 'num_filter' in 'Convolution' from 'self._num_hidden*self._num_gates' to 'self._num_filter*self._num_gates'.
@dsqx71
I think num_hidden is fine. Also this breaks backward compatibility
|
gharchive/pull-request
| 2017-12-05T02:59:52 |
2025-04-01T04:55:59.306266
|
{
"authors": [
"ceiba-w",
"piiswrong",
"szha"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/8945",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
267972549
|
[NETBEANS-54] adding schema2beans test data to Rat exceptions
[NETBEANS-54] adding and simplifying Rat exclusions
OK, these updates are modifications to the existing Rat exclusions, and a simplification of part of them; only the build.xml is impacted by this commit, and only the Rat exclusion section in there. These changes should bring the problematic files down to about 500. Merging this now.
|
gharchive/pull-request
| 2017-10-24T10:08:01 |
2025-04-01T04:55:59.307569
|
{
"authors": [
"geertjanw"
],
"repo": "apache/incubator-netbeans",
"url": "https://github.com/apache/incubator-netbeans/pull/187",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1217974065
|
arch/xtensa: Replace the xcp context with stack context to improve context switching
Summary
Apply the same ideas from:
https://github.com/apache/incubator-nuttx/pull/5645
https://github.com/apache/incubator-nuttx/pull/5731
Impact
Xtensa chips.
Testing
ESP32, ESP32-S2 and ESP32-S3.
@zhuyanlin111 could you review the change?
|
gharchive/pull-request
| 2022-04-27T23:17:05 |
2025-04-01T04:55:59.312725
|
{
"authors": [
"Ouss4",
"xiaoxiang781216"
],
"repo": "apache/incubator-nuttx",
"url": "https://github.com/apache/incubator-nuttx/pull/6167",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1772842373
|
feat(services/redb): support redb service
introduce new redb service https://github.com/apache/incubator-opendal/issues/2524
Sorry for the CI failure, fixed in https://github.com/apache/incubator-opendal/pull/2534
@Xuanwo https://github.com/apache/incubator-opendal/actions/runs/5370716502/jobs/9743000170?pr=2526 and this error, it seems we are using old MSRV version.
@Xuanwo https://github.com/apache/incubator-opendal/actions/runs/5370716502/jobs/9743000170?pr=2526 and this error, it seems we are using old MSRV version.
Please remove redb from the default features. It's required that opendal with default features must work on MSRV.
Waiting for https://github.com/apache/incubator-opendal/pull/2534 to be merged, then will rebase to fix the CI failure.
|
gharchive/pull-request
| 2023-06-24T17:16:24 |
2025-04-01T04:55:59.316034
|
{
"authors": [
"Xuanwo",
"oowl"
],
"repo": "apache/incubator-opendal",
"url": "https://github.com/apache/incubator-opendal/pull/2526",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
376884562
|
Testing Improvements
* Support for running :tests:testSystemBasic as part of `helm test`.
* Enable running tests:testSystemBasic for DockerContainerFactory
configurations (ping test still fails with KubernetesContainerFactory
because we need to fix upstream to drop CAP_NET_ADMIN on the user action pod).
* Change error handling strategy in myTask.sh to being mostly
explicit to get better control and allow retries of commands
that might be vulnerable to CouchDB's eventual consistency.
* Remove obsolete script for starting minikube in TravisCI
* Push openwhisk core git tag forward 2 commits to pick up fix
for org.apache package rename in tests:testSystemBasic.
* Separate chart deployment and testing into separate scripts
because we need to use travis_wait when running helm tests,
but we want to be able to eagerly see the output of the deploy
step to enable early aborts of the test run when it gets hung.
Fixes #127.
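The retry-based error handling mentioned above could look roughly like the following (an illustrative sketch; the `retry` helper is an assumption, not the actual myTask.sh code):

```shell
#!/bin/sh
# Retry a command a fixed number of times, backing off between attempts.
# Useful for commands that may transiently fail against an eventually
# consistent store such as CouchDB.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0            # command succeeded; stop retrying
    fi
    i=$((i + 1))
    sleep 1               # back off before the next attempt
  done
  echo "command failed after $attempts attempts: $*" >&2
  return 1
}
```

Wrapping only the consistency-sensitive commands in `retry` keeps the rest of the script's error handling explicit and fail-fast.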
This is finally working reliably.....I've caught my 🐋
Doubles the wall clock time for testing a PR to about 30 minutes, but running the system tests against the kube deploy is worth adding the time.
|
gharchive/pull-request
| 2018-11-02T16:38:14 |
2025-04-01T04:55:59.317458
|
{
"authors": [
"dgrove-oss"
],
"repo": "apache/incubator-openwhisk-deploy-kube",
"url": "https://github.com/apache/incubator-openwhisk-deploy-kube/pull/332",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
655871996
|
prioritize_critical_css loads css twice
Version: 1.13.35.2-0
Configuration:
pagespeed on;
pagespeed FileCachePath /dev/shm/ngx_pagespeed_cache;
pagespeed Statistics on;
pagespeed StatisticsLogging on;
pagespeed LogDir /var/log/pagespeed;
pagespeed MemcachedServers "127.0.0.1:11211";
pagespeed EnableCachePurge on;
pagespeed PreserveUrlRelativity on;
pagespeed InPlaceResourceOptimization off;
pagespeed InPlaceSMaxAgeSec 604800;
pagespeed ModifyCachingHeaders off;
pagespeed FetchHttps enable;
pagespeed CssInlineMaxBytes 4096;
pagespeed JsInlineMaxBytes 16384;
pagespeed ImageRecompressionQuality 70;
pagespeed WebpRecompressionQuality 70;
pagespeed WebpRecompressionQualityForSmallScreens 60;
pagespeed RewriteLevel CoreFilters;
pagespeed EnableFilters prioritize_critical_css,inline_javascript,resize_images,resize_mobile_images,remove_comments,collapse_whitespace,insert_dns_prefetch,hint_preload_subresources,recompress_images;
I noticed the CSS files were loaded twice, which means the prioritize_critical_css did not remove the original tags.
But are the two CSS files that get loaded rewritten by pagespeed, i.e. do they have URLs like full-url-css-file.pagespeed.some-hash.css?
Hmm No.
Could it be that you are observing browser prefetching of the original css?
I'm looking at the source code. Also Inspector indicates duplicate CSS rules exist.
prioritize_critical_css works by removing the CSS file link, replacing it with a CSS snippet (the rules used above the fold), and then putting a JavaScript snippet at the bottom of the page with an onload event to load the rewritten CSS files.
Are you seeing 2 CSS files, both without any rewrite by pagespeed?
I ask because maybe you have a preload hint in the HTTP headers, not in the code. In that case, the browser loads these files twice: once not rewritten by pagespeed (because pagespeed doesn't rewrite Link headers) and a second time because the CSS rewritten by pagespeed is loaded in the code.
These 2 files are different because the header link points to the original (https://yoursite.com/some-path/file.css) while in the HTML code the file is loaded with a different URL (https://yoursite.com/some-path/file.css.ce.pagespeed.HASHCHAIN.css).
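The prioritize_critical_css transformation described above can be sketched like this (a schematic illustration; the file names and hash are assumptions):

```html
<!-- Before rewriting: blocking stylesheet in <head> -->
<link rel="stylesheet" href="style.css">

<!-- After prioritize_critical_css: critical rules inlined,
     full stylesheet deferred until onload -->
<style>/* above-the-fold rules extracted from style.css */</style>
<script>
  window.addEventListener('load', function () {
    var l = document.createElement('link');
    l.rel = 'stylesheet';
    l.href = 'style.css.pagespeed.HASH.css';  // rewritten URL (hash illustrative)
    document.head.appendChild(l);
  });
</script>
```

If the original `<link>` also appears as a preload hint in the response headers, the browser fetches the stylesheet once from the header and once from the deferred loader, which matches the double load reported here.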
Can you share a url to test?
https://winterco.org
^^
I'm pretty sure I see the original stylesheet links.
Is there some form of html caching going on?
I'm curious what happens when cloudflare is taken out of the equation.
Well... there are two big things to look into here:
The site has headers to preload some things, so when the HTML is rewritten by pagespeed you end up with 2 different files with the same content: one loaded via the header and the other loaded from the HTML code. These preload headers can come from Cloudflare.
When the 2 files are the same, the file is loaded only once, but in the Chrome dev tools Network tab you can see in the Initiator column that it is loaded by "other".
The other thing to look at is Cloudflare. Cloudflare is a proxy-cache approach to CDN work. This means that the only User Agent that hits the real server is the UA used by Cloudflare, so some pagespeed optimizations are missed because they are UA dependent.
You need to configure Cloudflare in a way that the real UA of the final user hits the real server and the headers from the real server reach the final user.
For example, on the HTML request (https://winterco.org) pagespeed sets a Cache-Control: max-age=0, no-cache header, which forbids storing the HTML in any cache. Why? Because the HTML may or may not have been rewritten by pagespeed.
As a first step to make it work, you can put Cloudflare in bypass mode so that requests from final users go to the real server; once you have the server with pagespeed working well, you enable Cloudflare and face the issues it introduces.
If I remember correctly, Cloudflare can be configured to respect the origin's headers.
Is there some form of html caching going on?
I'm curious what happens when cloudflare is taken out of the equation.
Yes, some HTML caching is going on. The HTML is served with cache-control: max-age=3600, must-revalidate
Negative. No cloudflare this time:
root@localhost ~ # curl -vv --insecure --resolve "winterco.org:443:127.0.0.1" https://winterco.org
* Expire in 0 ms for 6 (transfer 0x55d944151f50)
* Added winterco.org:443:127.0.0.1 to DNS cache
* Hostname winterco.org was found in DNS cache
* Trying 127.0.0.1...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55d944151f50)
* Connected to winterco.org (127.0.0.1) port 443 (#0)
...(garbage)...
> GET / HTTP/2
> Host: winterco.org
> User-Agent: curl/7.64.0
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200
< server: nginx
< date: Thu, 16 Jul 2020 09:40:51 GMT
< content-type: text/html; charset=UTF-8
< cache-control: max-age=3600, must-revalidate
< vary: Accept-Encoding, Cookie
< hummingbird-cache: Served
< x-content-type-options: nosniff
< x-xss-protection: 1; mode=block;
< x-powered-by: ASP.NET
< x-aspnet-version: 2.0.50727
< x-aspnetmvc-version: 2.0
< x-recruiting: Like hacking? Join us: winterco.org/careers
< x-page-speed: 1.13.35.2-0
< link: </wp-content/themes/creativeily-winterco-mod/style.css?ver=5.4.2>; rel=preload; as=style; nopush
< link: </wp-content/uploads/bws-custom-code/bws-custom-code.css?ver=5.4.2>; rel=preload; as=style; nopush
< link: </wp-content/themes/creativeily-winterco-mod/assets/js/creativeily.js?ver=5.4.2>; rel=preload; as=script; nopush
< link: </wp-content/themes/creativeily-winterco-mod/assets/js/accessibility.js?ver=5.4.2>; rel=preload; as=script; nopush
< link: </wp-includes/js/wp-embed.min.js?ver=5.4.2>; rel=preload; as=script; nopush
<
<!DOCTYPE html>
<html lang="zh-CN" class="no-js no-svg">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="profile" href="http://gmpg.org/xfn/11">
<title>凛冬华人联盟 – Winter Coalition</title>
<link rel='dns-prefetch' href='//www.googletagmanager.com'/>
<link rel='dns-prefetch' href='//fonts.googleapis.com'/>
<link rel='dns-prefetch' href='//cdn.jsdelivr.net'/>
<link rel='dns-prefetch' href='//recaptcha.net'/>
<link rel='dns-prefetch' href='//fonts.gstatic.com'/>
<link rel='dns-prefetch' href='//ajax.googleapis.com'/>
<link rel='dns-prefetch' href='//google-analytics.com'/>
<link rel='dns-prefetch' href='//www.google-analytics.com'/>
<link rel='dns-prefetch' href='//ssl.google-analytics.com'/>
<link rel="alternate" type="application/rss+xml" title="凛冬华人联盟 » Feed" href="https://winterco.org/feed/"/>
<link rel="alternate" type="application/rss+xml" title="凛冬华人联盟 » 评论Feed" href="https://winterco.org/comments/feed/"/>
<link rel='stylesheet' id='wp-block-library-css' href='https://cdn.jsdelivr.net/gh/WordPress/WordPress@5.4.2/wp-includes/css/dist/block-library/style.min.css?ver=5.4.2' type='text/css' media='all'/>
<link rel='stylesheet' id='comment_styles-css' href='https://cdn.jsdelivr.net/wp/plugins/wp-discourse/tags/2.0.6/lib/../css/comments.min.css?ver=1594583367' type='text/css' media='all'/>
<link rel='stylesheet' id='creativeily-google-fonts-css' href='https://fonts.googleapis.com/css2?family=Roboto%3Aital%2Cwght%400%2C400%3B0%2C500%3B0%2C700%3B0%2C900%3B1%2C400%3B1%2C500&display=swap&ver=5.4.2' type='text/css' media='all'/>
<link rel='stylesheet' id='dashicons-css' href='https://cdn.jsdelivr.net/gh/WordPress/WordPress@5.4.2/wp-includes/css/dashicons.min.css?ver=5.4.2' type='text/css' media='all'/>
<link rel='stylesheet' id='creativeily-style-css' href='https://winterco.org/wp-content/themes/creativeily-winterco-mod/style.css?ver=5.4.2' type='text/css' media='all'/>
<link rel='stylesheet' id='bws-custom-style-css' href='https://winterco.org/wp-content/uploads/bws-custom-code/bws-custom-code.css?ver=5.4.2' type='text/css' media='all'/>
<script type='text/javascript' src='https://cdn.jsdelivr.net/gh/WordPress/WordPress@5.4.2/wp-includes/js/jquery/jquery.js?ver=1.12.4-wp'></script>
<script type='text/javascript' src='https://cdn.jsdelivr.net/gh/WordPress/WordPress@5.4.2/wp-includes/js/jquery/jquery-migrate.min.js?ver=1.4.1'></script>
<script type='text/javascript' src='https://winterco.org/wp-content/themes/creativeily-winterco-mod/assets/js/creativeily.js?ver=5.4.2'></script>
<script type='text/javascript' src='https://winterco.org/wp-content/themes/creativeily-winterco-mod/assets/js/accessibility.js?ver=5.4.2'></script>
<script type='text/javascript' src='https://www.googletagmanager.com/gtag/js?id=UA-144673995-3'></script>
<script type='text/javascript'>window.dataLayer=window.dataLayer||[];function gtag(){dataLayer.push(arguments);}gtag('js',new Date());gtag('config','UA-144673995-3',{"anonymize_ip":true});</script>
<link rel='https://api.w.org/' href='https://winterco.org/wp-json/'/>
<link rel="EditURI" type="application/rsd+xml" title="RSD" href="https://winterco.org/xmlrpc.php?rsd"/>
<link rel="wlwmanifest" type="application/wlwmanifest+xml" href="https://winterco.org/wp-includes/wlwmanifest.xml"/>
<meta name="generator" content="WordPress 5.4.2"/>
<meta name="generator" content="Site Kit by Google 1.11.1"/> <style type="text/css" id="custom-theme-css">body{font-style:normal;font-weight:400;padding:0;margin:0;position:relative;-webkit-tap-highlight-color:transparent;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;-webkit-text-size-adjust:100%}#start{background-color:#f3f3f3}.header{position:relative;overflow:visible;display:-webkit-flex;-webkit-flex-wrap:wrap;justify-content:center;align-items:-webkit-flex-start;align-content:-webkit-flex-start;height:700px;height:100vh;max-height:100%;min-height:200px;min-width:300px;color:#eee}.image-creativeily-header{width:100%;height:100%;position:fixed;top:0;left:0;-webkit-backface-visibility:hidden;backface-visibility:hidden;-webkit-transform:translateZ(0) scale(1.0,1.0);transform:translateZ(0);-ms-transform:translateZ(0);background:;background-size:cover;background-attachment:scroll;-webkit-animation:grow 60s linear 10ms infinite;animation:grow 60s linear 10ms infinite;-webkit-transition:all .2s ease-in-out;transition:all .2s ease-in-out;z-index:-2}</style>
<style type="text/css">.has-sidebar #secondary{background: }.has-sidebar #secondary h2,.has-sidebar #secondary th{color: }.has-sidebar #secondary .widget,.has-sidebar #secondary li,.has-sidebar #secondary ul,.has-sidebar #secondary span,.has-sidebar #secondary div{color: }.has-sidebar #secondary button.search-submit{background: ;color:#fff}.has-sidebar #secondary a{color: }.has-sidebar #secondary *,.has-sidebar #secondary .widget h2{border-color: }.blog .wrapmain article{background: }.blog .wrapmain article h2,.blog .wrapmain article h2 a{color: }.postinfo,.postinfo *{color: }.blog .wrapmain article .entry-content p{color: }a.button.button-readmore{background: }a.button.button-readmore{color: }.wrapmain .search-submit{background: }.wrapmain .search-submit{color: }footer{background: }.site-info{color: }</style>
<style type="text/css">.image-creativeily-header{background:#222 url(https://winterco.org/wp-content/uploads/2020/07/2020.07.13.12.22.2-optimized.jpg) center center no-repeat}.header .info h1,.header .meta p{color:#fff}https://winterco.org/wp-content/uploads/2020/07/2020.07.13.12.22.2-optimized.jpg"
.header .info h1, .header .meta p {color:#fff}</style>
<link rel="icon" href="https://winterco.org/wp-content/uploads/2020/07/cropped-11355daf357ecd7cd74ad2c64b3de3e4b15ba35c-32x32.png" sizes="32x32"/>
<link rel="icon" href="https://winterco.org/wp-content/uploads/2020/07/cropped-11355daf357ecd7cd74ad2c64b3de3e4b15ba35c-192x192.png" sizes="192x192"/>
<link rel="apple-touch-icon" href="https://winterco.org/wp-content/uploads/2020/07/cropped-11355daf357ecd7cd74ad2c64b3de3e4b15ba35c-180x180.png"/>
<meta name="msapplication-TileImage" content="https://winterco.org/wp-content/uploads/2020/07/cropped-11355daf357ecd7cd74ad2c64b3de3e4b15ba35c-270x270.png"/>
<style type="text/css" id="wp-custom-css">@media screen and (min-width:1280px){.info{width:50%;right:3%;left:auto}}.info{text-shadow:2px 2px 5px black}.custom-logo{width:60px}.header .meta p{font-size:30px}.image-creativeily-header{background-position:23% center}@media screen and (max-width:1280px){.header .info h1{font-size:40px}.header .meta p{font-size:24px}}.site-info{font-size:12px;font-weight:300}</style>
</head>
<body class="home blog wp-custom-logo">
<a class="skip-link screen-reader-text" href="#content">Skip to content</a>
<div id="page" class="site has-sidebar">
<div class="header">
<div class="image-creativeily-header"></div>
<div class="header-top">
<a href="https://winterco.org/" class="custom-logo-link" rel="home"><img width="120" height="120" src="https://winterco.org/wp-content/uploads/2020/07/dc93e8bf55b0efd7a0b6c74c4f62a426e98fd5e3.png" class="custom-logo" alt="凛冬华人联盟" srcset="https://winterco.org/wp-content/uploads/2020/07/dc93e8bf55b0efd7a0b6c74c4f62a426e98fd5e3.png 120w, https://winterco.org/wp-content/uploads/2020/07/dc93e8bf55b0efd7a0b6c74c4f62a426e98fd5e3-100x100.png 100w" sizes="(max-width: 120px) 100vw, 120px"/></a>
</div>
<div class="info">
...(garbage)...
</div>
</div>
<script type='text/javascript'>var wpdc={"commentsURL":"https:\/\/winterco.org\/wp-json\/wp-discourse\/v1\/discourse-comments"};</script>
<script type='text/javascript' src='https://cdn.jsdelivr.net/gh/discourse/wp-discourse@1.8.7/js/load-comments.min.js?ver=1594583367'></script>
<script type='text/javascript' src='https://winterco.org/wp-includes/js/wp-embed.min.js?ver=5.4.2'></script>
<noscript class="psa_add_styles"><link rel='stylesheet' id='wp-block-library-css' href='https://cdn.jsdelivr.net/gh/WordPress/WordPress@5.4.2/wp-includes/css/dist/block-library/style.min.css?ver=5.4.2' type='text/css' media='all'/><link rel='stylesheet' id='comment_styles-css' href='https://cdn.jsdelivr.net/wp/plugins/wp-discourse/tags/2.0.6/lib/../css/comments.min.css?ver=1594583367' type='text/css' media='all'/><link rel='stylesheet' id='creativeily-google-fonts-css' href='https://fonts.googleapis.com/css2?family=Roboto%3Aital%2Cwght%400%2C400%3B0%2C500%3B0%2C700%3B0%2C900%3B1%2C400%3B1%2C500&display=swap&ver=5.4.2' type='text/css' media='all'/><link rel='stylesheet' id='dashicons-css' href='https://cdn.jsdelivr.net/gh/WordPress/WordPress@5.4.2/wp-includes/css/dashicons.min.css?ver=5.4.2' type='text/css' media='all'/><link rel='stylesheet' id='creativeily-style-css' href='https://winterco.org/wp-content/themes/creativeily-winterco-mod/style.css?ver=5.4.2' type='text/css' media='all'/><link rel='stylesheet' id='bws-custom-style-css' href='https://winterco.org/wp-content/uploads/bws-custom-code/bws-custom-code.css?ver=5.4.2' type='text/css' media='all'/><style type="text/css" id="custom-theme-css">body{font-style:normal;font-weight:400;padding:0;margin:0;position:relative;-webkit-tap-highlight-color:transparent;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;-webkit-text-size-adjust:100%}#start{background-color:#f3f3f3}.header{position:relative;overflow:visible;display:-webkit-flex;-webkit-flex-wrap:wrap;justify-content:center;align-items:-webkit-flex-start;align-content:-webkit-flex-start;height:700px;height:100vh;max-height:100%;min-height:200px;min-width:300px;color:#eee}#top-menu li:after{content:"";display:block;margin:0 auto;width:30px;margin-bottom:.7em;margin-top:.7em}.image-creativeily-header{width:100%;height:100%;position:fixed;top:0;left:0;-webkit-backface-visibility:hidden;backface-visibility:hidden;-webkit-transform:translateZ(0) 
scale(1.0,1.0);transform:translateZ(0);-ms-transform:translateZ(0);background:;background-size:cover;background-attachment:scroll;-webkit-animation:grow 60s linear 10ms infinite;animation:grow 60s linear 10ms infinite;-webkit-transition:all .2s ease-in-out;transition:all .2s ease-in-out;z-index:-2}</style><style type="text/css">.header a.logo,.logo:hover{color: }.has-sidebar #secondary{background: }.has-sidebar #secondary h2,.has-sidebar #secondary h1,.has-sidebar #secondary h3,.has-sidebar #secondary h4,.has-sidebar #secondary h5,.has-sidebar #secondary h6,.has-sidebar #secondary th{color: }.has-sidebar #secondary p,.has-sidebar #secondary .widget,.has-sidebar #secondary li,.has-sidebar #secondary ol,.has-sidebar #secondary ul,.has-sidebar #secondary dd,.has-sidebar #secondary span,.has-sidebar #secondary div{color: }.has-sidebar #secondary button.search-submit{background: ;color:#fff}.has-sidebar #secondary a{color: }.has-sidebar #secondary *,.has-sidebar #secondary .widget h2{border-color: }.blog .wrapmain article,.archive .wrapmain article,.search-results .wrapmain article{background: }.blog .wrapmain article h2,.archive .wrapmain article h2,.search-results .wrapmain article h2,.blog .wrapmain article h2 a,.archive .wrapmain article h2 a,.search-results .wrapmain article h2 a{color: }.postinfo,.postinfo *{color: }.blog .wrapmain article .entry-content p,.archive .wrapmain article .entry-content p,.search-results .wrapmain article .entry-content p{color: }a.button.button-readmore{background: }a.button.button-readmore{color: }.error404 .content-area,.search-no-results .content-area,.single .wrapmain article,.page .wrapmain article,#commentform{background: }#commentform label,h3#reply-title,h2.comments-title,.page .wrapmain article h1,.page .wrapmain article h2,.page .wrapmain article h3,.page .wrapmain article h4,.page .wrapmain article h5,.page .wrapmain article h6,.page .wrapmain article th,.single .wrapmain article h1,.single .wrapmain article h2,.single 
.wrapmain article h3,.single .wrapmain article h4,.single .wrapmain article h5,.single .wrapmain article h6,.single .wrapmain article th{color: }.error404 .content-area p,.search-no-results .content-area p,.single .wrapmain article,.single .wrapmain article p,.single .wrapmain article dd,.single .wrapmain article li,.single .wrapmain article ul,.single .wrapmain article ol,.single .wrapmain article address,.single .wrapmain article table,.single .wrapmain article th,.single .wrapmain article td,.single .wrapmain article blockquote,.single .wrapmain article span,.single .wrapmain article div .page .wrapmain article,.page .wrapmain article p,.page .wrapmain article dd,.page .wrapmain article li,.page .wrapmain article ul,.page .wrapmain article ol,.page .wrapmain article address,.page .wrapmain article table,.page .wrapmain article th,.page .wrapmain article td,.page .wrapmain article blockquote,.page .wrapmain article span,.page .wrapmain article div{color: }.single .wrapmain article a,.page .wrapmain article a{color: }.wrapmain .search-submit,.page .wrapmain article a.button,.single .wrapmain article a.button,.nav-links span.button,form#commentform input#submit{background: }.wrapmain .search-submit,.nav-links span.button,form#commentform input#submit{color: }.page .wrapmain article td,.single .wrapmain article td,.page .wrapmain article th,.single .wrapmain article th,.single .wrapmain article *,.page .wrapmain article *{border-color: }.footer-column-three h3{color: }footer{background: }.footer-column-wrapper .widget a{color: }.footer-column-wrapper .widget *{border-color: }.footer-column-wrapper .widget .search-submit{background: }.footer-column-wrapper .widget .search-submit{color: }.site-info,.site-info *,.footer-column-wrapper .widget,.footer-column-wrapper .widget li,.footer-column-wrapper .widget p,.footer-column-wrapper abbr,.footer-column-wrapper cite,.footer-column-wrapper table caption,.footer-column-wrapper td,.footer-column-wrapper th{color: 
}</style><style type="text/css">.image-creativeily-header{background:#222 url(https://winterco.org/wp-content/uploads/2020/07/2020.07.13.12.22.2-optimized.jpg) center center no-repeat}.header .info h1,.header .meta p{color:#fff}https://winterco.org/wp-content/uploads/2020/07/2020.07.13.12.22.2-optimized.jpg"
.header .info h1, .header .meta p {color:#fff}</style><style type="text/css" id="wp-custom-css">@media screen and (min-width:1280px){.info{width:50%;right:3%;left:auto}}.info{text-shadow:2px 2px 5px black}.custom-logo{width:60px}.header .meta p{font-size:30px}.image-creativeily-header{background-position:23% center}@media screen and (max-width:1280px){.header .info h1{font-size:40px}.header .meta p{font-size:24px}}.site-info,.site-info *{font-size:12px;font-weight:300}</style></noscript><script data-pagespeed-no-defer>(function(){function b(){var a=window,c=e;if(a.addEventListener)a.addEventListener("load",c,!1);else if(a.attachEvent)a.attachEvent("onload",c);else{var d=a.onload;a.onload=function(){c.call(this);d&&d.call(this)}}};var f=!1;function e(){if(!f){f=!0;for(var a=document.getElementsByClassName("psa_add_styles"),c=0,d;d=a[c];++c)if("NOSCRIPT"==d.nodeName){var k=document.createElement("div");k.innerHTML=d.textContent;document.body.appendChild(k)}}}function g(){var a=window.requestAnimationFrame||window.webkitRequestAnimationFrame||window.mozRequestAnimationFrame||window.oRequestAnimationFrame||window.msRequestAnimationFrame||null;a?a(function(){window.setTimeout(e,0)}):b()}
var h=["pagespeed","CriticalCssLoader","Run"],l=this;h[0]in l||!l.execScript||l.execScript("var "+h[0]);for(var m;h.length&&(m=h.shift());)h.length||void 0===g?l[m]?l=l[m]:l=l[m]={}:l[m]=g;})();
pagespeed.CriticalCssLoader.Run();</script></body>
</html>
* Connection #0 to host winterco.org left intact
The Cache-Control header is likely provided by Hummingbird, a WordPress plugin, and it does not interfere with PageSpeed.
These Link headers are provided by hint_preload_subresources, I believe.
Note that there are two sets of <link> tags, one in <head> and the other in <noscript>. They also got repeated. DevTools says:
though I honestly have no idea why they are all tagged as <style>
With prioritize_critical_css, the links inside the <noscript> tag are expected behaviour. These links are then loaded by the pagespeed.CriticalCssLoader.Run(); javascript snippet.
The unusual things are the headers with non-rewritten urls (if they are set by pagespeed), and that the links in the html are not removed and replaced by the above-the-fold css code.
But for now it is working: I get the css files rewritten and no duplicated css tags.
and the css code is inlined in the html
Maybe you need to disable hint_preload_subresources and get some hits to get the inlined css.
NOTE: Files from domains other than yours are not optimized by pagespeed
If these Link headers work, then I'll get 3 working copies of this CSS, but currently I only get 2.
URL rewriting is out of scope here, as it was broken for weeks and only fixed just now. In this process, the problem neither goes away nor changes behavior.
The specific problem I currently have is that while PageSpeed moves these unimportant styles to the bottom of the page, there are still copies of these styles in <head>. They should be removed for this filter to actually work.
Maybe it is some type of problem with the Cloudflare cache; it has stored some html without the optimizations or half-optimized.
Now it is serving the optimized version: cache-control is set to max-age=0, no-cache, the css above the fold is inlined, and the css files are loaded by the javascript snippet pagespeed.CriticalCssLoader.Run(); in the onload event.
To be sure the Link headers can't return, unset them in the pagespeed config file.
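In ngx_pagespeed that could look something like the sketch below (a hypothetical server block; hint_preload_subresources is the filter name mentioned earlier in this thread, and DisableFilters is the standard pagespeed directive for turning a filter off):

```nginx
server {
    pagespeed on;
    # Stop pagespeed from emitting "Link: ...; rel=preload" headers
    # for subresources, so only the rewritten html references remain.
    pagespeed DisableFilters hint_preload_subresources;
}
```

After reloading nginx (and flushing the pagespeed cache), the Link headers should no longer appear in the response.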
^^ I have provided the curl result without cloudflare there. Problem persists.
https://github.com/apache/incubator-pagespeed-ngx/issues/1697#issuecomment-659301893
But the last images I have posted are with Cloudflare and it works as expected. CSS above the fold is inlined and the css files are loaded only once.
#1697 (comment)
It is fetched once and loaded multiple times. If still confused, take these <style>s as an example instead of <link>s.
For example, now we have 2 copies of <style type="text/css" id="wp-custom-css">.
I'll give you a minimal code snippet to start with. Take the example located at PageSpeed Docs:
If the HTML document looks like this:
<html>
<head>
<style>
.blue {color: blue;}
</style>
<link rel="stylesheet" href="small.css">
</head>
<body>
<div class="blue">
Hello, world!
</div>
</body>
</html>
And the resource small.css is like this:
.yellow {background-color: yellow;}
.big { font-size: 8em; }
.bold { font-weight: bold; }
We expect PageSpeed to rewrite it into:
<html>
<head>
<style>
.blue{color:blue;}
</style>
</head>
<body>
<div class="blue">
Hello, world!
</div>
</body>
</html>
<noscript><link rel="stylesheet" href="small.css"></noscript>
However, now we are getting this instead:
<html>
<head>
<style>
.blue{color:blue;}
</style>
<link rel="stylesheet" href="small.css">
</head>
<body>
<div class="blue">
Hello, world!
</div>
</body>
</html>
<noscript><link rel="stylesheet" href="small.css"></noscript>
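One quick way to make the duplication visible is to count the stylesheet hrefs in the page source; a minimal sketch in Python (the inline html string is a stand-in for a saved copy of the served page):

```python
import re
from collections import Counter

# Stand-in for the served page source; in practice, read the curl output from a file.
html = """
<head>
<style>.blue{color:blue;}</style>
<link rel="stylesheet" href="small.css">
</head>
<noscript><link rel="stylesheet" href="small.css"></noscript>
"""

# Count each stylesheet URL; any count above 1 means a duplicated <link>.
hrefs = Counter(re.findall(r'<link[^>]*href=["\']([^"\']+)["\']', html))
duplicates = {url: n for url, n in hrefs.items() if n > 1}
print(duplicates)  # for the sample above this prints {'small.css': 2}
```

On a correctly rewritten page the duplicates dict should come out empty.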
As far as I can see your site is working: pagespeed does its job, css files are loaded once, css above the fold is inlined in the html....
And yes, some css rules are set twice: once in the inlined css and once more in the css files.
And I'm using a browser, so Cloudflare is in play.
So why does it link css files multiple times? I understand that it extracts a few rules into <style> and loads the rest at the bottom of the page, but there are <link>s duplicated as-is in these two places. Is there any specific reason or restriction that prevents deduping these <link> tags?
Something is changing the url of the files. See these images:
I think if you are using Cloudflare you don't need a 2nd CDN, as Cloudflare is a CDN itself.
And I only see <link> tags when the file comes from cdn.jsdelivr.net, because pagespeed leaves it untouched. When the file comes from winterco.org, the css above the fold gets inlined and the file is loaded by the javascript snippet.
|
gharchive/issue
| 2020-07-13T13:54:23 |
2025-04-01T04:55:59.347196
|
{
"authors": [
"Lofesa",
"oschaaf",
"wfjsw"
],
"repo": "apache/incubator-pagespeed-ngx",
"url": "https://github.com/apache/incubator-pagespeed-ngx/issues/1697",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
256484427
|
Update install-sourcecode.html.md.erb
minor typo with 'following'.
Thanks for catching!
|
gharchive/pull-request
| 2017-09-10T04:29:43 |
2025-04-01T04:55:59.349769
|
{
"authors": [
"aayush142128",
"dszeto"
],
"repo": "apache/incubator-predictionio",
"url": "https://github.com/apache/incubator-predictionio/pull/434",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1208336183
|
The MybatisPlus base-class insert method cannot roll back data
The problem is as follows:
After seata (both versions 1.3.0 and 1.4.2) was installed, configured, and working normally, testing showed that with the insert method of the mybatis-plus BaseMapper class, if the primary key id is not specified when inserting data, the inserted data cannot be rolled back. Updated data, meanwhile, rolls back normally.
See the commented line in the test method below:
public void insertOrder(int userId, int goodsId, String goods, int count) {
    GoodsOrder goodsOrder = new GoodsOrder();
    // goodsOrder.setId(101); // the primary key id must be specified, otherwise the inserted record cannot be rolled back
    goodsOrder.setUserId(userId);
    goodsOrder.setGoodsId(goodsId);
    goodsOrder.setGoods(goods);
    goodsOrder.setCount(count);
    goodsOrderMapper.insert(goodsOrder);
}
Here is the evidence of inserted data that could not be rolled back:
Triggering the rollback:
There was an unexpected error (type=Internal Server Error, status=500). [500] during [GET] to [http://BATIS-SEATA-FEIGN-STOCK/stock/update_stock?goodsId=56&count=101] [StockFeignService#insertOrUpdateStock(int,int)]: [{"timestamp":"2022-04-19T12:54:21.572+0000","status":500,"error":"Internal Server Error","message":"库存下限错误","path":"/stock/update_stock"}]
The data was not rolled back and is still there:
107 398 56 北欧沙发 101
The following operations roll back normally:
Setting id=101 on the entity instance (i.e. uncommenting that line)
Or using the native SQL statement in mapper.xml
That is the whole problem description.
Looking forward to your reply! Thanks!
Please provide valid logs showing the failed rollback.
I ran into the same problem: when the primary key id is not specified, the insertUndoLog method in MySQLUndoLogManager is never invoked at all.
Please provide the driver version.
In my case, undo_log is not inserted when integrating tk.mybatis...
This problem does indeed exist.
When handling an auto-increment insert without a specified id, mybatis-plus by default uses 0 and lets the database handle it itself.
Not sure whether this is the cause?
Does anyone have a simpler solution (without changing the table's auto-increment id)?
|
gharchive/issue
| 2022-04-19T12:57:30 |
2025-04-01T04:55:59.354336
|
{
"authors": [
"SoloAlien",
"a364176773",
"caisf",
"caohdgege",
"ftdmao",
"iloveleeyan"
],
"repo": "apache/incubator-seata",
"url": "https://github.com/apache/incubator-seata/issues/4556",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1168317472
|
[Improvement][Config] Print config in origin order
Purpose of this pull request
close #1473
Check list
[ ] Code changed are covered with tests, or it does not need tests for reason:
[ ] If any new Jar binary package adding in you PR, please add License Notice according
New License Guide
[ ] If necessary, please update the documentation to describe the new feature. https://github.com/apache/incubator-seatunnel/tree/dev/docs
You need to add a reference to this:
https://github.com/apache/incubator-seatunnel/blob/159eb8423c99f21f951abe1a37753bafa206c4dc/LICENSE#L216
Thanks for your reminder. I added these notices; please take a look.
I see many code improvements in this PR. It's great to improve the code; if there aren't many changes we can just use one PR. If the code improvements change too much, I suggest splitting them into two PRs next time. That is good for code review and PR checks. Thanks
@CalvinKirs LGTM
|
gharchive/pull-request
| 2022-03-14T12:36:55 |
2025-04-01T04:55:59.358452
|
{
"authors": [
"BenJFan",
"ruanwenjun"
],
"repo": "apache/incubator-seatunnel",
"url": "https://github.com/apache/incubator-seatunnel/pull/1484",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1366809980
|
[Feature][Connector-V2] Add mongodb connecter sink
Purpose of this pull request
Check list
[x] Code changed are covered with tests, or it does not need tests for reason:
[ ] If any new Jar binary package adding in your PR, please add License Notice according
New License Guide
[x] If necessary, please update the documentation to describe the new feature. https://github.com/apache/incubator-seatunnel/tree/dev/docs
add spark e2e-testcase?
add spark e2e-testcase?
OK, thank you for your reminder. I'll add it later
|
gharchive/pull-request
| 2022-09-08T18:29:40 |
2025-04-01T04:55:59.361615
|
{
"authors": [
"hailin0",
"wuchunfu"
],
"repo": "apache/incubator-seatunnel",
"url": "https://github.com/apache/incubator-seatunnel/pull/2694",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
528581640
|
Refactor and enhance sharding-scaling-core
[ ] Refactor SyncExecutor to SyncExecuteEngine
[ ] Refactor DataSyncTask
[ ] Implement SyncTaskProgress
[ ] Use Callback to report completed sync task
I am working on task one; could you assign it to me?
|
gharchive/issue
| 2019-11-26T09:02:30 |
2025-04-01T04:55:59.363396
|
{
"authors": [
"KomachiSion",
"avalon566"
],
"repo": "apache/incubator-shardingsphere",
"url": "https://github.com/apache/incubator-shardingsphere/issues/3602",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
421472085
|
Make merge module and execute module independent
Fixes #1864.
Pull Request Test Coverage Report for Build 7170
0 of 9 (0.0%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-0.001%) to 20.069%
Changes Missing Coverage
Covered Lines
Changed/Added Lines
%
sharding-jdbc/sharding-jdbc-core/src/main/java/org/apache/shardingsphere/shardingjdbc/jdbc/core/statement/EncryptPreparedStatement.java
0
9
0.0%
Totals
Change from base Build 7169:
-0.001%
Covered Lines:
8170
Relevant Lines:
40709
💛 - Coveralls
|
gharchive/pull-request
| 2019-03-15T11:22:57 |
2025-04-01T04:55:59.369833
|
{
"authors": [
"coveralls",
"tristaZero"
],
"repo": "apache/incubator-shardingsphere",
"url": "https://github.com/apache/incubator-shardingsphere/pull/2042",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
582888524
|
Fix 4782.
Fixes https://github.com/apache/incubator-shardingsphere/issues/4782
Pull Request Test Coverage Report for Build 10317
0 of 0 changed or added relevant lines in 0 files are covered.
3 unchanged lines in 1 file lost coverage.
Overall coverage decreased (-0.005%) to 59.861%
Files with Coverage Reduction
New Missed Lines
%
sharding-orchestration/sharding-orchestration-core/sharding-orchestration-core-registrycenter/src/main/java/org/apache/shardingsphere/orchestration/core/registrycenter/util/IpUtils.java
3
76.0%
Totals
Change from base Build 1096:
-0.005%
Covered Lines:
12545
Relevant Lines:
20957
💛 - Coveralls
|
gharchive/pull-request
| 2020-03-17T09:52:28 |
2025-04-01T04:55:59.376762
|
{
"authors": [
"coveralls",
"yu199195"
],
"repo": "apache/incubator-shardingsphere",
"url": "https://github.com/apache/incubator-shardingsphere/pull/4803",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1158342392
|
[Question] shenyu-admin
Question
When I restart admin, the original configuration is lost. Why is this?
could you please detail your question, with version, snapshot, procedure and so on, many thanks.
Set spring.profiles.active=mysql, then try it again.
could you please detail your question, with version, snapshot, procedure and so on, many thanks.
could you please detail your question, with version, snapshot, procedure and so on, many thanks.
Set spring.profiles.active=mysql, then try it again.
ok, got it, many thanks
done
|
gharchive/issue
| 2022-03-03T12:18:56 |
2025-04-01T04:55:59.379357
|
{
"authors": [
"AhahaGe",
"KevinClair",
"chariles",
"midnight2104"
],
"repo": "apache/incubator-shenyu",
"url": "https://github.com/apache/incubator-shenyu/issues/2972",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
340051833
|
Fix #1441 add icon files notice
Please answer these questions before submitting pull request
Why submit this pull request?
[x] Bug fix
[ ] New feature provided
[ ] Improve performance
Related issues
Fix #1441
We should not add the ASF License header to iconfont.svg if the file is just copied from antd.
Yes. @WillemJiang already removed it in the UI repository and the submodule is updated.
|
gharchive/pull-request
| 2018-07-11T00:33:04 |
2025-04-01T04:55:59.381941
|
{
"authors": [
"WillemJiang",
"hanahmily",
"wu-sheng"
],
"repo": "apache/incubator-skywalking",
"url": "https://github.com/apache/incubator-skywalking/pull/1442",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1388869956
|
[Bug] mvn package failed
Search before asking
[X] I had searched in the issues and found no similar issues.
What happened
git clone the code, then check out the dev branch, open it with IDEA, then run mvn clean install -Dscala.version=2.11.12 -Dscala.binary.version=2.11 -DskipTests; then something goes wrong:
[ERROR] ## Exception when compiling 61 sources to D:\code\incubator-streampark\streampark-common\target\classes
java.lang.NoSuchMethodError: org.fusesource.jansi.AnsiConsole.wrapOutputStream(Ljava/io/OutputStream;)Ljava/io/OutputStream;
jline.AnsiWindowsTerminal.detectAnsiSupport(AnsiWindowsTerminal.java:57)
jline.AnsiWindowsTerminal.<init>(AnsiWindowsTerminal.java:27)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
java.lang.Class.newInstance(Class.java:442)
jline.TerminalFactory.getFlavor(TerminalFactory.java:205)
jline.TerminalFactory.create(TerminalFactory.java:96)
jline.TerminalFactory.get(TerminalFactory.java:180)
jline.TerminalFactory.get(TerminalFactory.java:186)
sbt.internal.util.ConsoleAppender$.ansiSupported(ConsoleAppender.scala:292)
sbt.internal.util.ConsoleAppender$.useColorDefault$1(ConsoleAppender.scala:127)
sbt.internal.util.ConsoleAppender$.$anonfun$formatEnabledInEnv$4(ConsoleAppender.scala:143)
scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
scala.Option.getOrElse(Option.scala:138)
sbt.internal.util.ConsoleAppender$.<init>(ConsoleAppender.scala:143)
sbt.internal.util.ConsoleAppender$.<clinit>(ConsoleAppender.scala)
sbt.internal.inc.MixedAnalyzingCompiler.compile(MixedAnalyzingCompiler.scala:150)
sbt.internal.inc.IncrementalCompilerImpl.$anonfun$compileInternal$1(IncrementalCompilerImpl.scala:343)
sbt.internal.inc.IncrementalCompilerImpl.$anonfun$compileInternal$1$adapted(IncrementalCompilerImpl.scala:343)
sbt.internal.inc.Incremental$.doCompile(Incremental.scala:120)
sbt.internal.inc.Incremental$.$anonfun$compile$4(Incremental.scala:100)
sbt.internal.inc.IncrementalCommon.recompileClasses(IncrementalCommon.scala:180)
sbt.internal.inc.IncrementalCommon.cycle(IncrementalCommon.scala:98)
sbt.internal.inc.Incremental$.$anonfun$compile$3(Incremental.scala:102)
sbt.internal.inc.Incremental$.manageClassfiles(Incremental.scala:155)
sbt.internal.inc.Incremental$.compile(Incremental.scala:92)
sbt.internal.inc.IncrementalCompile$.apply(Compile.scala:75)
sbt.internal.inc.IncrementalCompilerImpl.compileInternal(IncrementalCompilerImpl.scala:348)
sbt.internal.inc.IncrementalCompilerImpl.$anonfun$compileIncrementally$1(IncrementalCompilerImpl.scala:301)
sbt.internal.inc.IncrementalCompilerImpl.handleCompilationError(IncrementalCompilerImpl.scala:168)
sbt.internal.inc.IncrementalCompilerImpl.compileIncrementally(IncrementalCompilerImpl.scala:248)
sbt.internal.inc.IncrementalCompilerImpl.compile(IncrementalCompilerImpl.scala:74)
sbt_inc.SbtIncrementalCompiler.compile(SbtIncrementalCompiler.java:173)
scala_maven.ScalaCompilerSupport.incrementalCompile(ScalaCompilerSupport.java:297)
scala_maven.ScalaCompilerSupport.compile(ScalaCompilerSupport.java:109)
scala_maven.ScalaCompilerSupport.doExecute(ScalaCompilerSupport.java:91)
scala_maven.ScalaMojoSupport.execute(ScalaMojoSupport.java:554)
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
org.apache.maven.lifecycle.internal.MojoExecutor.doExecute(MojoExecutor.java:301)
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:211)
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:165)
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:157)
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:121)
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:127)
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:294)
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
org.apache.maven.cli.MavenCli.execute(MavenCli.java:960)
org.apache.maven.cli.MavenCli.doMain(MavenCli.java:293)
org.apache.maven.cli.MavenCli.main(MavenCli.java:196)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
StreamPark Version
git clone the latest code
Java Version
1.8
Flink Version
unknown
Scala Version of Flink
2.11.12
Error Exception
[ERROR] ## Exception when compiling 61 sources to D:\code\incubator-streampark\streampark-common\target\classes
java.lang.NoSuchMethodError: org.fusesource.jansi.AnsiConsole.wrapOutputStream(Ljava/io/OutputStream;)Ljava/io/OutputStream;
jline.AnsiWindowsTerminal.detectAnsiSupport(AnsiWindowsTerminal.java:57)
jline.AnsiWindowsTerminal.<init>(AnsiWindowsTerminal.java:27)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
java.lang.Class.newInstance(Class.java:442)
jline.TerminalFactory.getFlavor(TerminalFactory.java:205)
jline.TerminalFactory.create(TerminalFactory.java:96)
jline.TerminalFactory.get(TerminalFactory.java:180)
jline.TerminalFactory.get(TerminalFactory.java:186)
sbt.internal.util.ConsoleAppender$.ansiSupported(ConsoleAppender.scala:292)
sbt.internal.util.ConsoleAppender$.useColorDefault$1(ConsoleAppender.scala:127)
sbt.internal.util.ConsoleAppender$.$anonfun$formatEnabledInEnv$4(ConsoleAppender.scala:143)
scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
scala.Option.getOrElse(Option.scala:138)
sbt.internal.util.ConsoleAppender$.<init>(ConsoleAppender.scala:143)
sbt.internal.util.ConsoleAppender$.<clinit>(ConsoleAppender.scala)
sbt.internal.inc.MixedAnalyzingCompiler.compile(MixedAnalyzingCompiler.scala:150)
sbt.internal.inc.IncrementalCompilerImpl.$anonfun$compileInternal$1(IncrementalCompilerImpl.scala:343)
sbt.internal.inc.IncrementalCompilerImpl.$anonfun$compileInternal$1$adapted(IncrementalCompilerImpl.scala:343)
sbt.internal.inc.Incremental$.doCompile(Incremental.scala:120)
sbt.internal.inc.Incremental$.$anonfun$compile$4(Incremental.scala:100)
sbt.internal.inc.IncrementalCommon.recompileClasses(IncrementalCommon.scala:180)
sbt.internal.inc.IncrementalCommon.cycle(IncrementalCommon.scala:98)
sbt.internal.inc.Incremental$.$anonfun$compile$3(Incremental.scala:102)
sbt.internal.inc.Incremental$.manageClassfiles(Incremental.scala:155)
sbt.internal.inc.Incremental$.compile(Incremental.scala:92)
sbt.internal.inc.IncrementalCompile$.apply(Compile.scala:75)
sbt.internal.inc.IncrementalCompilerImpl.compileInternal(IncrementalCompilerImpl.scala:348)
sbt.internal.inc.IncrementalCompilerImpl.$anonfun$compileIncrementally$1(IncrementalCompilerImpl.scala:301)
sbt.internal.inc.IncrementalCompilerImpl.handleCompilationError(IncrementalCompilerImpl.scala:168)
sbt.internal.inc.IncrementalCompilerImpl.compileIncrementally(IncrementalCompilerImpl.scala:248)
sbt.internal.inc.IncrementalCompilerImpl.compile(IncrementalCompilerImpl.scala:74)
sbt_inc.SbtIncrementalCompiler.compile(SbtIncrementalCompiler.java:173)
scala_maven.ScalaCompilerSupport.incrementalCompile(ScalaCompilerSupport.java:297)
scala_maven.ScalaCompilerSupport.compile(ScalaCompilerSupport.java:109)
scala_maven.ScalaCompilerSupport.doExecute(ScalaCompilerSupport.java:91)
scala_maven.ScalaMojoSupport.execute(ScalaMojoSupport.java:554)
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
org.apache.maven.lifecycle.internal.MojoExecutor.doExecute(MojoExecutor.java:301)
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:211)
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:165)
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:157)
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:121)
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:127)
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:294)
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
org.apache.maven.cli.MavenCli.execute(MavenCli.java:960)
org.apache.maven.cli.MavenCli.doMain(MavenCli.java:293)
org.apache.maven.cli.MavenCli.main(MavenCli.java:196)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
Screenshots
No response
Are you willing to submit PR?
[ ] Yes I am willing to submit a PR!
Code of Conduct
[X] I agree to follow this project's Code of Conduct
bug fixed
|
gharchive/issue
| 2022-09-28T07:29:50 |
2025-04-01T04:55:59.397349
|
{
"authors": [
"gitfortian"
],
"repo": "apache/incubator-streampark",
"url": "https://github.com/apache/incubator-streampark/issues/1706",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
537173253
|
[fix] Adding time grains to Table
CATEGORY
Choose one
[x] Bug Fix
[ ] Enhancement (new features, refinement)
[ ] Refactor
[ ] Add tests
[ ] Build / Development Environment
[ ] Documentation
SUMMARY
This PR re-adds the time grain controls back to the Table visualization type as this is needed by the include_time control. Note this is the only visualization type which uses this control (per here).
BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF
TEST PLAN
CI.
ADDITIONAL INFORMATION
[ ] Has associated issue:
[ ] Changes UI
[ ] Requires DB Migration.
[ ] Confirm DB Migration upgrade and downgrade tested.
[ ] Introduces new feature or API
[ ] Removes existing feature or API
REVIEWERS
to: @etr2460
Codecov Report
Merging #8825 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #8825 +/- ##
=======================================
Coverage 65.88% 65.88%
=======================================
Files 483 483
Lines 24178 24178
Branches 2778 2778
=======================================
Hits 15930 15930
Misses 8070 8070
Partials 178 178
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ec43609...db85432. Read the comment docs.
|
gharchive/pull-request
| 2019-12-12T19:36:12 |
2025-04-01T04:55:59.406662
|
{
"authors": [
"codecov-io",
"john-bodley"
],
"repo": "apache/incubator-superset",
"url": "https://github.com/apache/incubator-superset/pull/8825",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
563533367
|
[fix] Fix table viz column order
CATEGORY
Choose one
[x] Bug Fix
[ ] Enhancement (new features, refinement)
[ ] Refactor
[ ] Add tests
[ ] Build / Development Environment
[ ] Documentation
SUMMARY
This PR fixes a couple of issues related to the table visualization type. Originally the intent was to ensure that the table column orders adhered to the UI. The PR somewhat morphed to contain:
Updated the validity matrix to ensure that defining percent metrics is not viable when NOT GROUPED BY columns are defined.
Replaced outdated Python filter/maps with list comprehensions and Pandas column operators.
Ensure that the resulting data frame is ordered according to group-by or non-group-by column, metrics, and percent metrics.
BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF
TEST PLAN
CI and manual testing.
ADDITIONAL INFORMATION
[ ] Has associated issue:
[ ] Changes UI
[ ] Requires DB Migration.
[ ] Confirm DB Migration upgrade and downgrade tested.
[ ] Introduces new feature or API
[ ] Removes existing feature or API
REVIEWERS
to: @michellethomas @mistercrunch @villebro
Codecov Report
Merging #9122 into master will decrease coverage by <.01%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #9122 +/- ##
=========================================
- Coverage 59.1% 59.1% -0.01%
=========================================
Files 372 372
Lines 11920 11922 +2
Branches 2917 2919 +2
=========================================
+ Hits 7045 7046 +1
- Misses 4693 4694 +1
Partials 182 182
Impacted Files
Coverage Δ
superset-frontend/src/chart/chartAction.js
43.33% <0%> (+0.09%)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2913063...a05bafd. Read the comment docs.
|
gharchive/pull-request
| 2020-02-11T22:25:40 |
2025-04-01T04:55:59.417605
|
{
"authors": [
"codecov-io",
"john-bodley"
],
"repo": "apache/incubator-superset",
"url": "https://github.com/apache/incubator-superset/pull/9122",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
475353009
|
[TOPI] Update softmax compute and CPU schedule
This change improves performance for softmax by simplifying the computation and writing a schedule that supports better parallelization.
Compute: Currently, exp(input - max) is computed twice: once in the _compute_expsum stage and once in the _normalize stage. This change adds an extra stage to compute this tensor once. It is then re-used in the _compute_expsum and _normalize stages.
Schedule: Currently, the schedule only parallelizes the _normalize stage of the computation. This change puts all stages of computation under a common root and parallelizes the outer dimensions.
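The restructured compute can be sketched in NumPy (illustrative only — the actual TOPI implementation is written as TVM tensor expressions, and the function name here is hypothetical): the shared exp(input - max) tensor is computed once and then reused by both the expsum and normalize stages.

```python
import numpy as np

def softmax_last_axis(x):
    # Stage 1: max over the softmax axis, for numerical stability.
    x_max = np.max(x, axis=-1, keepdims=True)
    # Stage 2: exp(input - max), computed ONCE and reused below
    # (previously this tensor was recomputed in two separate stages).
    exp_shifted = np.exp(x - x_max)
    # Stage 3: expsum reuses the shared tensor instead of recomputing exp.
    exp_sum = np.sum(exp_shifted, axis=-1, keepdims=True)
    # Stage 4: normalize also reuses the shared tensor.
    return exp_shifted / exp_sum
```

Since the exponentials dominate softmax cost, sharing this stage roughly halves the arithmetic, independent of the scheduling improvements.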
The following results are with a tensor of shape (1,12,128,128) and axis=-1. This simulates the softmax in BERT base. The CPU is Intel Xeon E5-2650, and the Relay target string is llvm -mcpu=core-avx2.
TVM_NUM_THREADS
Latency in ms (master branch)
Latency in ms (new branch)
1
4.7
3.0
2
3.8
1.8
4
3.3
1.0
8
3.1
0.74
16
3.2
0.55
@kevinthesun @vinx13 can you please review and add any other reviewers you think are necessary?
Thank you @soiferj , can you check the CI problem?
Yeah, I'm taking a look at the CI failure now. It seems to be an issue in the CUDA schedule. I will work on it.
The CI issue is fixed.
@kevinthesun feel free to merge the PR given you are managing it
Thank you for contributing!
Another suggestion - https://discuss.tvm.ai/t/softmax-sequence-of-relay-ops/5686
|
gharchive/pull-request
| 2019-07-31T20:54:48 |
2025-04-01T04:55:59.424551
|
{
"authors": [
"anijain2305",
"kevinthesun",
"soiferj",
"tqchen"
],
"repo": "apache/incubator-tvm",
"url": "https://github.com/apache/incubator-tvm/pull/3680",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1588946313
|
[FOLLOWUP] fix: don't recreate base dir if it's already existed
What changes were proposed in this pull request?
Optimize the base dir init logic: it now skips recreating the base dir if the path already exists as a directory.
Why are the changes needed?
This handles some corner cases, such as when the base dir is a mount point root path.
Does this PR introduce any user-facing change? The RSS shuffle server can use a mounted path as the base dir directly.
How was this patch tested?
Existing UTs.
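The guard can be sketched as follows (a Python sketch of the idea only — the actual change is in the Java LocalStorage class, and the function name is hypothetical): the base directory is created only when it does not already exist as a directory, so an existing mount-point root is left untouched.

```python
import os

def init_base_dir(path):
    # Skip recreation if the path already exists as a directory
    # (e.g. the base dir is a mount point root that must not be removed
    # and recreated).
    if os.path.isdir(path):
        return False  # already initialized, nothing to do
    os.makedirs(path, exist_ok=True)
    return True
```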
Codecov Report
Merging #622 (bc7f285) into branch-0.7 (7f9b561) will increase coverage by 2.30%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## branch-0.7 #622 +/- ##
================================================
+ Coverage 60.87% 63.17% +2.30%
+ Complexity 1801 1797 -4
================================================
Files 214 200 -14
Lines 12387 10409 -1978
Branches 1044 1041 -3
================================================
- Hits 7540 6576 -964
+ Misses 4443 3490 -953
+ Partials 404 343 -61
Impacted Files
Coverage Δ
...rg/apache/uniffle/storage/common/LocalStorage.java
51.31% <100.00%> (+2.25%)
:arrow_up:
...e/uniffle/server/storage/SingleStorageManager.java
70.58% <0.00%> (-2.95%)
:arrow_down:
deploy/kubernetes/operator/pkg/utils/rss.go
deploy/kubernetes/operator/pkg/webhook/manager.go
...y/kubernetes/operator/pkg/webhook/inspector/pod.go
...eploy/kubernetes/operator/pkg/utils/coordinator.go
...oy/kubernetes/operator/pkg/utils/shufflerserver.go
...oy/kubernetes/operator/pkg/controller/util/util.go
deploy/kubernetes/operator/pkg/utils/util.go
...bernetes/operator/pkg/controller/controller/rss.go
... and 7 more
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
|
gharchive/pull-request
| 2023-02-17T08:35:40 |
2025-04-01T04:55:59.438620
|
{
"authors": [
"advancedxy",
"codecov-commenter"
],
"repo": "apache/incubator-uniffle",
"url": "https://github.com/apache/incubator-uniffle/pull/622",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
215810659
|
Class common's problem
I use <style src="assets/common.styl" lang="stylus"/> to require the classes, expecting them to be shared, but it doesn't work: when I inspect the built dist file, these classes are all repeated instead of being packed into one module.
We will merge 0.11-dev to master when v0.11 is released. So this PR will not proceed currently. Thank you.
|
gharchive/pull-request
| 2017-03-21T17:02:29 |
2025-04-01T04:55:59.440031
|
{
"authors": [
"nicefan",
"sospartan"
],
"repo": "apache/incubator-weex",
"url": "https://github.com/apache/incubator-weex/pull/141",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
665949257
|
[android] android targetSdkVersion 29 ashmem issue
As Google Play warns, all apps need to update targetSdkVersion to 29 before 2020.10, so we must handle this issue: https://github.com/apache/incubator-weex/issues/2706
Successful self-test
Just checking the API level is a problem on Android R. The target SDK version and the API level should be checked together:
if (g_targetSDKInt >= 29 && device_api_level() == 29)
As Google Play warns, all apps need to update targetSdkVersion to 29 before 2020.10, so we must handle this issue, #2706
Successful self-test
After modifying the C file, how do I repackage it?
Has the targetSdkVersion 29 issue been resolved? I repackaged the AAR with the modified C file and it still doesn't work.
Now that Google Play has started enforcing this, can we accelerate completing this work?
Has the targetSdkVersion 29 issue been resolved? We urgently need to submit our package, and submissions that don't support 29 are rejected.
Successful self-test
Our project urgently needs to go live. What do I need to do to apply this fix, or can you give me your contact info so I can ask questions?
I repackaged the AAR with the modified C file and it still doesn't work.
Has the targetSdkVersion 29 issue been resolved? I repackaged the AAR with the modified C file and it still doesn't work.
Same here, it doesn't work.
Successful self-test
Are you sure it succeeded?
|
gharchive/pull-request
| 2020-07-27T03:33:50 |
2025-04-01T04:55:59.444625
|
{
"authors": [
"hualiang0537",
"hxs2mr",
"ikantech",
"leif0419",
"lovemyapple",
"lzq879069670",
"neuyu",
"roger2380"
],
"repo": "apache/incubator-weex",
"url": "https://github.com/apache/incubator-weex/pull/3246",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1607926621
|
[INLONG-7508][Sort] Carry right RowKind when cdc-base sends RowData to sink
Prepare a Pull Request
(Change the title refer to the following example)
Title Example: [INLONG-XYZ][Component] Title of the pull request
(The following XYZ should be replaced by the actual GitHub Issue number)
Fixes #7508
Motivation
Explain here the context, and why you're making that change. What is the problem you're trying to solve?
cdc-base sends RowData to sink, should carry right RowKind
Modifications
Describe the modifications you've done.
AppendMetadataCollector make a new GenericRowData instance, should pass right RowKind.
Verifying this change
(Please pick either of the following options)
[ ] This change is a trivial rework/code cleanup without any test coverage.
[ ] This change is already covered by existing tests, such as:
(please describe tests)
[ ] This change added tests and can be verified as follows:
(example:)
Added integration tests for end-to-end deployment with large payloads (10MB)
Extended integration test for recovery after broker failure
Documentation
Does this pull request introduce a new feature? (yes / no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
If a feature is not applicable for documentation, explain why?
If a feature is not documented yet in this PR, please create a follow-up issue for adding the documentation
Currently the sink parses the RowKind from the data during whole-database migration, so the RowKind is always INSERT.
When the source database has an update operation, CDC sends the update type in the JSON data, but the sink connector cannot tell whether a row is Update_Before or Update_After.
The 3rd screenshot shows that the sink connector always gets a null update-before node from the JSON data.
When the source table has an update operation, CDC sends the update type in the JSON data, but the sink connector cannot tell whether a row is Update_Before or Update_After.
The 3rd screenshot shows that the sink connector always gets a null updateBeforeNode from the JSON data.
@liaorui @EMsnap have a PR issue this problem. https://github.com/apache/inlong/issues/7397
The opType in allmigrate is represented as the parameter TYPE in canal json. maybe you can get the optype from there
Close this PR since PR #7397 has solved this problem.
|
gharchive/pull-request
| 2023-03-03T04:35:22 |
2025-04-01T04:55:59.457593
|
{
"authors": [
"EMsnap",
"gong",
"liaorui"
],
"repo": "apache/inlong",
"url": "https://github.com/apache/inlong/pull/7509",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1235363323
|
[IOTDB-3179] Printing logs when get/getOrCreate Partition in ConfigNode
Currently, we don't know what PartitionTable the ConfigNode returns to the DataNode, which is not conducive to debugging during development. So I have temporarily added logs for the getSchemaPartition, getOrCreateSchemaPartition, getDataPartition, and getOrCreateDataPartition interfaces.
Why not check out a new branch?
|
gharchive/pull-request
| 2022-05-13T15:06:19 |
2025-04-01T04:55:59.458852
|
{
"authors": [
"CRZbulabula",
"chinausers"
],
"repo": "apache/iotdb",
"url": "https://github.com/apache/iotdb/pull/5902",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1260214286
|
OAK-9790 - Implement parallel indexing for speeding up oak run indexing command
Indexing was single-threaded, which is slow for a large repository, so in order to improve indexing performance we need to implement parallel indexing.
The work covers both Lucene and Elastic indexing. In order to support parallel indexing, the big flat file store file needs to be split ahead of time, which adds a big overhead but makes parallel indexing possible and much faster overall.
Another change bundled in is support for LZ4 compression, since it is much faster compared to gzip.
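The idea can be illustrated with a minimal sketch (hypothetical Python, not the actual Oak code, which operates on the flat file store and Lucene/Elastic writers): split the big file into chunks up front — the added overhead — then hand each chunk to its own indexing worker.

```python
from concurrent.futures import ThreadPoolExecutor

def split_lines(lines, num_parts):
    # Pre-split the flat file (here: a list of lines) into roughly equal
    # chunks ahead of time; this costs one extra pass over the data but
    # makes parallel indexing possible.
    size = max(1, len(lines) // num_parts)
    return [lines[i:i + size] for i in range(0, len(lines), size)]

def index_chunk(chunk):
    # Stand-in for indexing one chunk into Lucene/Elastic.
    return {line.split("|")[0]: line for line in chunk}

def parallel_index(lines, workers=4):
    chunks = split_lines(lines, workers)
    index = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker indexes its chunk independently; results are merged
        # in the main thread.
        for partial in pool.map(index_chunk, chunks):
            index.update(partial)
    return index
```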
New PR which incorporates the review comments https://github.com/apache/jackrabbit-oak/pull/715
superseded by PR https://github.com/apache/jackrabbit-oak/pull/715
|
gharchive/pull-request
| 2022-06-03T18:20:46 |
2025-04-01T04:55:59.461537
|
{
"authors": [
"Ewocker",
"amit-jain"
],
"repo": "apache/jackrabbit-oak",
"url": "https://github.com/apache/jackrabbit-oak/pull/587",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1408904955
|
OAK-9966 : Internal code calls Node.isCheckedOut and VersionManager.isCheckedOut
hi @joerghoh , @mreutegg , @reschke , @Joscorbe , i would highly appreciate if you could take a careful look at this PR for the version-mgt implementation.
as outlined in the ticket the implementation calling JCR API (and by doing so resolving the node again) is suboptimal..... so the goal was to get rid of all JCR calls that create another operation-object inside a JCR call and avoid resolving the node again.
the check for the tree.existing in ReadOnlyVersionManager is IMHO redundant (and not relevant. i added an additional line in the comment illustrating how it could be fixed if really needed and added a test illustrating the mismatch that we had/have. i verified that the test both passed before and after my change)
please pay attention to the following subtle changes and let me know if you see any issue with that:
ItemImpl.getVersionManager no longer calls JCR API on Workspace but the internal variant that returns the impl and does not perform a check if the session is still alive (that is handled in preconditions of the individual JCR calls)
all internal checks for the node being checkedout should now call the internal method taking a nodedelegate without creating a separate operation object
@mreutegg, i haven't seen org.apache.jackrabbit.oak.plugins.document.VersionGCWithSplitTest failing when i ran the build locally. does that sound like being related to my patch?
i haven't seen org.apache.jackrabbit.oak.plugins.document.VersionGCWithSplitTest failing when i ran the build locally. does that sound like being related to my patch?
The test failure is most likely unrelated.
@reschke ..... uuuuhhhhh thanks..... not sure it is justified :-)
|
gharchive/pull-request
| 2022-10-14T07:32:24 |
2025-04-01T04:55:59.465739
|
{
"authors": [
"anchela",
"mreutegg"
],
"repo": "apache/jackrabbit-oak",
"url": "https://github.com/apache/jackrabbit-oak/pull/732",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
841567603
|
KAFKA-12561: don't set timeout for future.get
These tests failed quite often because of TimeoutException: we could not get the future result in time. I think the flaky tests tell us that we should not expect the results to return as quickly as we expected on Jenkins. Like other tests, we don't usually set a timeout on future.get, so remove the timeout setting to make these tests reliable.
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
@mumrah , @hachikuji , could you review this PR? It failed a lot in recent builds. Thanks.
@mumrah , @hachikuji , could you review this PR? It keeps failing recent builds. Thanks.
@showuon Could you merge trunk to trigger QA again? I'd like to merge it if QA pass :)
kafka.server.ListOffsetsRequestTest.testResponseIncludesLeaderEpoch() is traced by #10389 and ConnectionQuotasTest.testListenerConnectionRateLimitWhenActualRateAboveLimit is unrelated flaky. will merge to trunk
@showuon thanks for this fix!
|
gharchive/pull-request
| 2021-03-26T03:59:44 |
2025-04-01T04:55:59.469063
|
{
"authors": [
"chia7712",
"showuon"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/10410",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
967414516
|
KAFKA-13192: Prevent inconsistent broker.id/node.id values
If both broker.id and node.id are set, and they are set inconsistently (e.g. broker.id=0, node.id=1), then currently the value of node.id is used and the broker.id value is left at the original value. The server should detect this inconsistency, throw a ConfigException, and fail to start. This patch adds the check and a test for it.
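The intended behaviour reduces to a small validation, sketched here in Python (the real patch lives in Kafka's Scala config code; the function name and message text are illustrative only):

```python
class ConfigException(Exception):
    """Raised when the static broker configuration is invalid."""

def resolve_node_id(props):
    broker_id = props.get("broker.id")
    node_id = props.get("node.id")
    # If both are set they must agree, otherwise fail startup instead of
    # silently preferring node.id.
    if broker_id is not None and node_id is not None and broker_id != node_id:
        raise ConfigException(
            f"broker.id ({broker_id}) and node.id ({node_id}) are "
            f"inconsistent; they must be set to the same value")
    # Otherwise use whichever one is set.
    return node_id if node_id is not None else broker_id
```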
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
This PR is likely going to be replaced by https://github.com/apache/kafka/pull/11256. Will probably eventually close this assuming that other PR gets merged.
|
gharchive/pull-request
| 2021-08-11T20:34:27 |
2025-04-01T04:55:59.472034
|
{
"authors": [
"rondagostino"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/11200",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2445852819
|
MINOR: Reduce log levels for transactions_mixed_versions_test with 3.2 due to bug in that version
https://github.com/apache/kafka/commit/7496e6243410ca851f4b8502270cf4222bd89a2b fixed an error that caused an exception to be thrown on broker startup when debug logs were on. This made it to every version except 3.2.
The Kraft upgrade tests totally turn off debug logs, but I think we only need to remove them for the broken version.
I have not seen it on other versions. I thought I ran the tests, but looks like I didn't. I will update the PR when I see the results.
I ran the transactions_upgrade_test without any changes and everything passed.
I ran the mixed_version_test but it is failing due to https://issues.apache.org/jira/browse/KAFKA-17250 :(
@jolshan Could you please take a look at my previous comment: https://github.com/apache/kafka/pull/16235#discussion_r1672760435?
Looks like the other PR is merged. I will try rerunning the tests now
All the tests pass now. We don't need it for 3.1 since 3.1 doesn't have this line (I traced back all the components of delta and it eventually gets to a BrokerRegistration):
3.2:
https://github.com/apache/kafka/blob/e4ca066680296ea29d443efb626baecc837083f6/core/src/main/scala/kafka/server/metadata/BrokerMetadataListener.scala#L274
3.1:
https://github.com/apache/kafka/blob/2b57b38f9306ac73f18d94d55c0158530fd444df/core/src/main/scala/kafka/server/metadata/BrokerMetadataListener.scala#L254
I will backport this to 3.9 for consistency before backporting #17067
|
gharchive/pull-request
| 2024-08-02T22:48:52 |
2025-04-01T04:55:59.477695
|
{
"authors": [
"chia7712",
"jolshan"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/16787",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
598100891
|
KAFKA-9846: Filter active tasks for running state in KafkaStreams#allLocalStorePartitionLags()
Added check that only treats running active tasks as having 0 lag
Tasks that are neither restoring nor running will report 0 as the current offset position
Fixed LagFetchIntegrationTest to wait till thread/instance reaches RUNNING before checking lag
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
test this please
@guozhangwang So, I still need to add a test case around this specific scenario, tasks stuck in created state.. Seems it needs some engineering to create that scenario. (sophie gave me some pointers, yet to try them)..
if we need to just fix the test issue, I can open a simpler one with just the test fix.. That's probably better?
Opened a simple backport here #8534.. We can focus on that for fixing the test flakiness itself.
Opened a simple backport here #8534.. We can focus on that for fixing the test flakiness itself.
Sounds good, thanks!
Closing this PR as outdated.
|
gharchive/pull-request
| 2020-04-10T21:22:53 |
2025-04-01T04:55:59.482129
|
{
"authors": [
"guozhangwang",
"mjsax",
"vinothchandar"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/8462",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
603837233
|
KAFKA-9885; Evict last members of a group when the maximum allowed is reached
This PR updates the algorithm which limits the number of members within a group (group.max.size) to fix the following two issues:
As described in KAFKA-9885, we found out that multiple members of a group can be evicted if the leader of the consumer offset partition changes before the group is persisted. This happens because the current eviction logic always evicts the first member rejoining the group.
We also found out that dynamic members, when required to have a known member id, are not always limited. The caveat is that the current logic only considers unknown members and uses the group size, which does not include the so-called pending members, to accept or reject a member. In this case, when they rejoin, they are no longer unknown members and thus could bypass the limit. See testDynamicMembersJoinGroupWithMaxSizeAndRequiredKnownMember for the whole scenario.
This PR extends the test coverage to cover all the member types.
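The corrected admission check can be sketched like this (illustrative Python, not the coordinator's actual Scala code — the function and variable names are hypothetical): the size limit counts pending members too, so a dynamic member holding a known member id can no longer slip past group.max.size.

```python
def accepts_joining_member(members, pending_members, member_id, max_size):
    # A member already in the group is always allowed to rejoin; evicting
    # it here is what caused multiple evictions after leader changes.
    if member_id in members:
        return True
    # New members (including dynamic members with a known-but-pending id)
    # are admitted only if the group has room, counting pending members.
    return len(members) + len(pending_members) < max_size
```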
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
cc @hachikuji @abbccdda
Related to #8437.
ok to test
|
gharchive/pull-request
| 2020-04-21T09:09:08 |
2025-04-01T04:55:59.485576
|
{
"authors": [
"dajac",
"hachikuji"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/8525",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1441277651
|
Fix the envKey in Config.java
fix #10
@iskey can you add a test to validate please?
Thanks, let me take a look.
@iskey can you add a test to validate please?
ok, let me do some tests.
Better to do a mock test. Post two snapshots for contrast for now:
(before/after screenshots)
Sounds good!
If possible try avoiding a mock please, it is like not testing at all at the end.
Using surefire you can force some env var so it is then easy to check it is used or not cases IMHO.
I think we don't need a mock for the test. We can directly test the ConfigService and setting a system property to test.
Actually, I think for this kind of "simple" bug, we can just move forward without test.
If no objection, I will merge this PR.
I guess the regex should be extracted (compiled once) if the config service is used at runtime, instead of being recompiled each time.
@rmannibucau good point, we can have the Pattern compile as static.
@jbonofre agree, no mock is needed, go for merge.
Agree, no mock here. Actually, it is hard to mock without extracting the getEnv as another method.
Let me make the pattern static.
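The "make the pattern static" idea discussed above can be sketched as follows. This is a minimal illustration, not Minho's actual Config.java: the regex, class, and method names are assumptions.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: resolve "${ENV_NAME}" placeholders against environment variables.
class EnvKeyResolver {
    // Pattern is immutable and thread-safe, so compiling it once in a
    // static final field avoids recompiling the regex on every lookup.
    private static final Pattern ENV_KEY = Pattern.compile("\\$\\{(.+)\\}");

    static String resolve(String value) {
        Matcher m = ENV_KEY.matcher(value);
        if (m.matches()) {
            String fromEnv = System.getenv(m.group(1));
            return fromEnv != null ? fromEnv : value;
        }
        return value; // not a placeholder: return as-is
    }
}
```

With this shape, testing needs no mock: setting (or reading) a real environment variable in the test environment exercises the lookup path directly.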
|
gharchive/pull-request
| 2022-11-09T03:05:14 |
2025-04-01T04:55:59.491125
|
{
"authors": [
"fpapon",
"iskey",
"jbonofre",
"rmannibucau"
],
"repo": "apache/karaf-minho",
"url": "https://github.com/apache/karaf-minho/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1094951103
|
18098: KUDU-3197 [tserver] optimal Schema's memory used, using std::s…
create the pr, just for code-review
Thanks for your contribution!
Please submit code review on gerrit, see: https://kudu.apache.org/docs/contributing.html#_contributing_patches_using_gerrit
For draft, you can open a pull request on your own repo.
|
gharchive/pull-request
| 2022-01-06T04:15:50 |
2025-04-01T04:55:59.492868
|
{
"authors": [
"acelyc111",
"shenxingwuying"
],
"repo": "apache/kudu",
"url": "https://github.com/apache/kudu/pull/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2256253392
|
Add support of hash field expiration
Search before asking
[X] I had searched in the issues and found no similar issues.
Motivation
Refer to https://github.com/redis/redis/pull/13172, Redis has supported this feature.
Solution
TBD
Are you willing to submit a PR?
[ ] I'm willing to submit a PR!
I'll do this, and will propose a solution later.
This feature was added in Redis 7.4 RC.
@git-hulk Hi, I have done some work on this feature before. May I submit a PR for this?
@jjz921024 Thank you!
|
gharchive/issue
| 2024-04-22T11:08:01 |
2025-04-01T04:55:59.496104
|
{
"authors": [
"PragmaTwice",
"Yangsx-1",
"aleksraiden",
"git-hulk",
"jjz921024"
],
"repo": "apache/kvrocks",
"url": "https://github.com/apache/kvrocks/issues/2269",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2204335236
|
Fix missing migrating/importing information in the CLUSTER NODES command
In Redis, it will add the migrating/importing slot section
for the source and target node. For the migrating source node,
it will contain the below section:
[{slot_id}->-{target_node_id}]
And for the importing node, it will add:
[{slot_id}-<-{source_node_id}]
In this PR, I also removed the unused import_fd_ field from the importer.
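The section formats above can be rendered with two small helpers. This is an illustrative Java sketch (kvrocks itself is C++, and these helper names are made up):

```java
// Extra sections a Redis-compatible server appends to a CLUSTER NODES line
// for slots that are being migrated away or imported.
class ClusterNodesSections {
    // Source node of a migration: [{slot_id}->-{target_node_id}]
    static String migrating(int slotId, String targetNodeId) {
        return "[" + slotId + "->-" + targetNodeId + "]";
    }

    // Importing node: [{slot_id}-<-{source_node_id}]
    static String importing(int slotId, String sourceNodeId) {
        return "[" + slotId + "-<-" + sourceNodeId + "]";
    }
}
```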
|
gharchive/pull-request
| 2024-03-24T12:47:41 |
2025-04-01T04:55:59.497693
|
{
"authors": [
"git-hulk"
],
"repo": "apache/kvrocks",
"url": "https://github.com/apache/kvrocks/pull/2196",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2267052515
|
Add index selection pass for KQIR planning
We add a simple index selection algorithm to the KQIR planning part, based on a simple cost model.
Also, we introduce range-v3 since it's essential for modern C++ and is a part of C++20.
TODO:
[x] more tests
Ready for review now : )
|
gharchive/pull-request
| 2024-04-27T15:01:29 |
2025-04-01T04:55:59.499075
|
{
"authors": [
"PragmaTwice"
],
"repo": "apache/kvrocks",
"url": "https://github.com/apache/kvrocks/pull/2278",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1633817025
|
Bump Iceberg from 1.1.0 to 1.2.0
Why are the changes needed?
Iceberg 1.2.0 release notes: https://iceberg.apache.org/releases/#120-release
How was this patch tested?
[ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
[ ] Add screenshots for manual tests if appropriate
[x] Run test locally before make a pull request
Will defer this PR until Iceberg announces release notes and changes of 1.2.0 on its official website.
Thanks, merged to master
|
gharchive/pull-request
| 2023-03-21T12:29:52 |
2025-04-01T04:55:59.502855
|
{
"authors": [
"bowenliang123",
"pan3793"
],
"repo": "apache/kyuubi",
"url": "https://github.com/apache/kyuubi/pull/4572",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1699885642
|
[feat] Added support for postgresql based on dev-1.4.0
What is the purpose of the change
Added postgresql support for linkis metadata.
Related issues/PRs
Related issues: #4190
Brief change log
Modify MybatisConfigurationFactory.java for support multiple data sources.
Modify linkis-cg-linkismanager.properties, linkis-mg-gateway.properties and linkis-cg-linkismanager.properties for support multiple data sources.
To better distinguish between different data sources, change the original mapper file path. For example, move resources/mapper/common/contextHistoryMapper.xml to resources/mapper/common/mysql/contextHistoryMapper.xml
Add mapper files for postgresql. For example, resources/mapper/common/postgresql/contextHistoryMapper.xml
Some test classes were added or modified.
How to use postgresql
pom.xml
remove test in postgresql's dependency.
linkis.properties
Add database connection information.
Add linkis.server.mybatis.pagehelper.dialect=postgresql. The configuration's default value is mysql.
linkis-cg-linkismanager.properties, linkis-mg-gateway.properties and linkis-cg-linkismanager.properties
Select mapper files that postgresql can execute.
If you want to use postgresql in test classes, you should change the properties used by the test class.
Add database connection information.
Select mapper files that postgresql can execute.
Note: Please use the existing postgresql database to test.
Checklist
[x] I have read the Contributing Guidelines on pull requests.
[x] I have explained the need for this PR and the problem it solves
[x] I have explained the changes or the new features added to this PR
[x] I have added tests corresponding to this change
[x] I have updated the documentation to reflect this change
[x] I have verified that this change is backward compatible (If not, please discuss on the Linkis mailing list first)
[x] If this is a code change: I have written unit tests to fully verify the new behavior.
Please resolve the conflict
ping @peacewong
|
gharchive/pull-request
| 2023-05-08T09:25:11 |
2025-04-01T04:55:59.511679
|
{
"authors": [
"aiceflower",
"jackxu2011",
"sjgllgh"
],
"repo": "apache/linkis",
"url": "https://github.com/apache/linkis/pull/4524",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2135960992
|
Can we decrease the overhead of skipping?
Description
On top-k queries, Lucene is now competitive with Tantivy/PISA on https://tantivy-search.github.io/bench/, but it's still quite slower on counting queries. This made me want to run a similar experiment as https://github.com/Tony-X/search-benchmark-game/issues/44, though with a few more changes to how skipping works:
Single level of skip lists.
Skip data and impacts are inlined between blocks of postings.
Less overhead:
no separate SkipReader abstraction that gets lazily instantiated: the skipping logic is more lightweight and within the postings/impacts enum logic,
checking whether to skip and decode a new block is now a single check on BlockDocsEnum while it requires two different checks today.
A hacky implementation of this can be found at https://github.com/apache/lucene/compare/main...jpountz:lucene:skip_experiment?expand=1:
It doesn't replace existing skip data, just adds additional skip data and impacts inlined between blocks.
Only BlockDocsEnum and BlockImpactsDocsEnum switched to this new skip data, other impls still use existing skip data. So term queries will see a change, but not phrase queries.
It's quite naive, we could probably do something that is a bit more efficient. Yet results on wikibigall are interesting:
TaskQPS baseline StdDevQPS my_modified_version StdDev Pct diff p-value
OrHighNotLow 327.87 (7.9%) 183.13 (5.4%) -44.1% ( -53% - -33%) 0.000
OrHighNotMed 374.44 (6.7%) 242.58 (5.0%) -35.2% ( -43% - -25%) 0.000
HighTermMonthSort 4971.86 (1.6%) 3322.82 (1.1%) -33.2% ( -35% - -30%) 0.000
HighTerm 480.63 (6.5%) 343.22 (5.2%) -28.6% ( -37% - -18%) 0.000
OrHighNotHigh 226.71 (7.4%) 169.91 (6.2%) -25.1% ( -36% - -12%) 0.000
MedTerm 705.95 (5.8%) 617.27 (5.1%) -12.6% ( -22% - -1%) 0.000
OrNotHighHigh 183.15 (6.7%) 161.14 (6.3%) -12.0% ( -23% - 1%) 0.000
CountOrHighHigh 56.83 (16.6%) 53.32 (10.5%) -6.2% ( -28% - 25%) 0.160
OrHighHigh 73.45 (1.9%) 71.56 (1.3%) -2.6% ( -5% - 0%) 0.000
TermDTSort 259.01 (4.1%) 253.64 (5.8%) -2.1% ( -11% - 8%) 0.193
HighTermTitleBDVSort 15.83 (6.0%) 15.60 (4.9%) -1.4% ( -11% - 10%) 0.417
AndHighHigh 71.57 (2.2%) 70.60 (1.8%) -1.4% ( -5% - 2%) 0.032
CountTerm 14081.51 (4.0%) 13918.96 (3.9%) -1.2% ( -8% - 6%) 0.353
HighTermTitleSort 112.66 (5.3%) 112.15 (2.4%) -0.5% ( -7% - 7%) 0.724
Prefix3 340.35 (3.4%) 339.24 (2.9%) -0.3% ( -6% - 6%) 0.745
Wildcard 106.77 (3.2%) 106.51 (3.5%) -0.2% ( -6% - 6%) 0.824
PKLookup 282.16 (2.0%) 281.52 (1.3%) -0.2% ( -3% - 3%) 0.667
LowSpanNear 13.84 (2.8%) 13.81 (2.6%) -0.2% ( -5% - 5%) 0.839
CountPhrase 3.19 (8.0%) 3.19 (9.1%) -0.1% ( -15% - 18%) 0.974
HighPhrase 29.89 (2.9%) 29.90 (4.7%) 0.0% ( -7% - 7%) 0.985
MedSpanNear 9.92 (2.3%) 9.93 (2.3%) 0.0% ( -4% - 4%) 0.952
HighSpanNear 5.37 (3.7%) 5.37 (3.4%) 0.1% ( -6% - 7%) 0.923
LowTerm 1154.36 (4.2%) 1155.62 (3.6%) 0.1% ( -7% - 8%) 0.930
MedSloppyPhrase 25.88 (2.7%) 25.93 (1.8%) 0.2% ( -4% - 4%) 0.789
Respell 59.03 (1.6%) 59.16 (1.6%) 0.2% ( -2% - 3%) 0.661
LowPhrase 48.93 (2.4%) 49.10 (4.7%) 0.4% ( -6% - 7%) 0.763
HighTermDayOfYearSort 437.52 (1.2%) 440.03 (1.7%) 0.6% ( -2% - 3%) 0.227
CountOrHighMed 108.93 (11.3%) 109.65 (7.7%) 0.7% ( -16% - 22%) 0.830
MedPhrase 26.25 (2.6%) 26.44 (5.1%) 0.7% ( -6% - 8%) 0.580
HighSloppyPhrase 6.35 (3.5%) 6.40 (2.1%) 0.8% ( -4% - 6%) 0.361
Fuzzy2 72.65 (1.1%) 73.27 (1.2%) 0.9% ( -1% - 3%) 0.022
Fuzzy1 90.62 (1.1%) 91.49 (1.2%) 1.0% ( -1% - 3%) 0.008
LowIntervalsOrdered 15.56 (3.6%) 15.74 (3.4%) 1.2% ( -5% - 8%) 0.287
HighIntervalsOrdered 3.05 (5.0%) 3.10 (5.0%) 1.6% ( -8% - 12%) 0.323
LowSloppyPhrase 18.12 (5.2%) 18.41 (2.3%) 1.6% ( -5% - 9%) 0.217
OrHighLow 658.61 (2.4%) 670.64 (1.9%) 1.8% ( -2% - 6%) 0.008
MedIntervalsOrdered 18.21 (4.2%) 18.59 (3.7%) 2.1% ( -5% - 10%) 0.096
OrNotHighMed 337.44 (4.1%) 350.80 (4.5%) 4.0% ( -4% - 13%) 0.004
OrHighMed 147.53 (2.6%) 153.47 (2.1%) 4.0% ( 0% - 8%) 0.000
IntNRQ 130.99 (21.2%) 139.01 (21.2%) 6.1% ( -29% - 61%) 0.360
AndHighMed 270.68 (2.4%) 291.75 (2.6%) 7.8% ( 2% - 13%) 0.000
AndHighLow 903.88 (2.2%) 1000.22 (3.1%) 10.7% ( 5% - 16%) 0.000
OrNotHighLow 793.34 (2.3%) 935.56 (2.1%) 17.9% ( 13% - 22%) 0.000
CountAndHighHigh 43.99 (2.1%) 51.94 (3.3%) 18.1% ( 12% - 23%) 0.000
CountAndHighMed 129.42 (2.3%) 155.03 (3.4%) 19.8% ( 13% - 26%) 0.000
CountAndHighHigh and CountAndHighMed became almost 20% faster! These are the main cases I was targeting with this change, so it's good to see a significant speedup. This confirms that we have some non-negligible overhead for skipping today, though it's not easy to tell how much comes from the additional abstractions vs. multiple levels of skip lists.
OrNotHighLow and OrNotHighMed are faster. This is because the bottleneck of these queries is advancing the MUST_NOT clause, which are not scoring. So it's very similar to the speedup we're seeing on the counting queries.
AndHighLow and AndHighMed are 8%-11% faster. Again, I would attribute this to the faster skipping logic since this is about clauses that have different doc frequencies, so the higher frequency clause will need to do a lot of skipping to catch up with the leading clause. Interestingly, the fact that we are storing a single level of impact data doesn't hurt.
AndHighHigh and OrHighHigh are slightly slower (or is it noise?). I could believe that there is a small performance hit on this one due to having a single level of impact data. This forces Lucene to use the maximum score across the entire doc ID space as a score upper bound for the clause that has the higher cost. Maybe it could be enough to compute global impacts to have better performance on these queries by having slightly better score upper bounds for the following clause.
HighTerm, MedTerm, OrHighNotLow, OrHighNotMed, OrHighNotHigh, OrNotHighHigh are slower. This is expected as these are queries that have a single positive clause, which in turn are queries where the score upper bounds we compute are very close to the actual produced scores, which in turn enables these queries to take advantage of the higher levels of impacts to skip more docs at once.
HighTermMonthSort is slower. This is because the sort dynamically introduces a filter that is so selective that the term query can take advantage of skip data on higher levels to skip more docs at once.
CountOrHighHigh is a bit slower because there's a bit more overhead to collect postings lists exhaustively now that skip data and impacts are inlined.
It's not a net win, but this suggests that we have some room for improvement here.
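The single-level inline-skip idea can be sketched with a toy model. This is not the actual patch: real postings interleave the skip entry (and impacts) in the on-disk stream before each encoded block, whereas here the entries are precomputed, but it shows why advancing needs only one comparison per block.

```java
// Toy postings enum: each block of sorted docIDs carries one inline skip
// entry (its last docID), so advance() can reject a whole block with a
// single check and no separate SkipReader abstraction.
class InlineSkipPostings {
    private final int[][] blocks;
    private final int[] skipDocs; // last docID of each block, stored inline
    private int b = 0, pos = 0;

    InlineSkipPostings(int[][] sortedBlocks) {
        blocks = sortedBlocks;
        skipDocs = new int[blocks.length];
        for (int i = 0; i < blocks.length; i++) {
            skipDocs[i] = blocks[i][blocks[i].length - 1];
        }
    }

    /** First doc >= target, or -1 if exhausted. Targets must be increasing. */
    int advance(int target) {
        // One comparison per block: skip blocks entirely below target
        // without decoding their contents.
        while (b < blocks.length && skipDocs[b] < target) {
            b++;
            pos = 0;
        }
        if (b == blocks.length) return -1;
        while (blocks[b][pos] < target) pos++;
        return blocks[b][pos];
    }
}
```

The trade-off visible in the benchmark follows directly: exhaustive counting touches one extra entry per block (slight overhead), while skip-heavy conjunctions avoid both a second check and a lazily-created skip reader.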
Whoa, very cool @jpountz! This reminds me of this longstanding issue/paper which also inlined skip data directly in the postings, but maybe was still multi-level?
Fixed by https://github.com/apache/lucene/pull/13585
|
gharchive/issue
| 2024-02-15T08:53:49 |
2025-04-01T04:55:59.530250
|
{
"authors": [
"jpountz",
"mikemccand"
],
"repo": "apache/lucene",
"url": "https://github.com/apache/lucene/issues/13106",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2105774562
|
Replace new HashSet<>(Arrays.asList()) with EnumSet.of()
For sets with enum values EnumSet provides a more efficient implementation.
Besides, EnumSet.of() results in a shorter, easier to understand code than new HashSet<>(Arrays.asList()).
When backporting this to Lucene 9.x, it conflicted because the sets have different contents in the older version. I fixed this.
Nevertheless, the static final constants should be unmodifiable sets, so we should possibly use Java 9+ Set.of() instead of EnumSet.of()(which is modifiable) or wrap the EnumSet with Collections.unmodifiableSet().
Could you open an issue about this?
static final constants should be unmodifiable sets
Could you open an issue about this?
#13055
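The before/after shape discussed in this thread can be illustrated as follows (the enum and field names are made up, not Lucene types):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.EnumSet;
import java.util.HashSet;
import java.util.Set;

enum Flag { ALPHA, DIGIT, SYNONYM }

class SetStyles {
    // Before: generic hash-based set, verbose to construct.
    static final Set<Flag> OLD =
            new HashSet<>(Arrays.asList(Flag.ALPHA, Flag.DIGIT));

    // After: EnumSet stores members as bits of a long, so contains() is a
    // mask test. EnumSet.of() is still mutable, hence the unmodifiable
    // wrapper suggested for static final constants.
    static final Set<Flag> MODERN =
            Collections.unmodifiableSet(EnumSet.of(Flag.ALPHA, Flag.DIGIT));
}
```

Both sets are equal by the Set contract; only the representation and mutability differ.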
|
gharchive/pull-request
| 2024-01-29T15:36:54 |
2025-04-01T04:55:59.534117
|
{
"authors": [
"sabi0",
"uschindler"
],
"repo": "apache/lucene",
"url": "https://github.com/apache/lucene/pull/13051",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
361906672
|
METRON-1785 Automate deployment of packet capture for development environment
While trying to test #1201, I fixed some issues with the Ansible install of the components required for testing packet capture. I added instructions for how to do this in the README.
Testing
Spin up the development environment and validate that alerts are visible in the Alerts UI and run the Metron Service Check in Ambari.
Follow the instruction in the README, to install and start all of the components for capturing packets. Ensure that you can search and find these packets using the Alerts UI > PCAP tab.
Pull Request Checklist
[x] Is there a JIRA ticket associated with this PR? If not one needs to be created at Metron Jira.
[x] Does your PR title start with METRON-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
[x] Has your PR been rebased against the latest commit within the target branch (typically master)?
[x] Have you included steps to reproduce the behavior or problem that is being changed or addressed?
[x] Have you included steps or a guide to how the change may be verified and tested manually?
[x] Have you ensured that the full suite of tests and checks have been executed in the root metron folder via:
[x] Have you written or updated unit tests and or integration tests to verify your changes?
[x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[x] Have you verified the basic functionality of the build by building and running locally with Vagrant full-dev environment or the equivalent?
Deployed full dev; I executed the set of instructions as in the docs:
vagrant up
vagrant --ansible-tags="pcap" provision
# Stopped the Parser, Enrichment, Indexing, and Profiler topologies to free up resources.
vagrant ssh
sudo su -
source /etc/default/metron
yum -y install wireshark
I see that the pcap-replay and pycapa services were not deployed:
[root@node1 ~]# service pcap-replay start
pcap-replay: unrecognized service
[root@node1 ~]# service pycapa start
pycapa: unrecognized service
Yes, you are right @MohanDV. Somehow the default tags are interacting badly with the tags that are passed in. I'll try to figure out what's going on.
|
gharchive/pull-request
| 2018-09-19T20:05:17 |
2025-04-01T04:55:59.540984
|
{
"authors": [
"MohanDV",
"nickwallen"
],
"repo": "apache/metron",
"url": "https://github.com/apache/metron/pull/1205",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2150248184
|
rptun: remove notify_wait in remoteproc ops
Summary
Remove notify_wait from the remoteproc ops; rptun implements notify_wait through rpmsg_notify_wait_cb.
Impact
simplify code logic.
Testing
tested in sim vela.
@wyr8899 We should also modify nuttx/openamp/0004-openamp-add-new-ops-notify_wait-support.patch to pass CI.
|
gharchive/pull-request
| 2024-02-23T02:07:50 |
2025-04-01T04:55:59.610386
|
{
"authors": [
"CV-Bowen",
"wyr8899"
],
"repo": "apache/nuttx",
"url": "https://github.com/apache/nuttx/pull/11754",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
774804143
|
HDDS-4624. Fix set configs in SCMHAConfigration
What changes were proposed in this pull request?
java.lang.IllegalArgumentException: Attempt to get double field "org.apache.hadoop.hdds.scm.ha.SCMHAConfiguration.raftSegmentSize" with illegal data type conversion to long
Via reflection, it seems that a double Field cannot be read as a long at https://github.com/apache/ozone/blob/master/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigurationReflectionUtil.java#L247
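A minimal reproduction of the error above: Field.getLong only performs widening conversions, and double -> long is a narrowing conversion, so reading a double field as a long throws IllegalArgumentException. The class below is a stand-in, not the real SCMHAConfiguration.

```java
import java.lang.reflect.Field;

class SCMHAConfLike {
    double raftSegmentSize = 16.0;
}

class ReflectionRepro {
    // Returns true when reading the double field as long fails, mirroring
    // "illegal data type conversion to long" from the stack trace.
    static boolean failsAsLong() {
        try {
            Field f = SCMHAConfLike.class.getDeclaredField("raftSegmentSize");
            f.getLong(new SCMHAConfLike());
            return false; // no exception: conversion unexpectedly succeeded
        } catch (IllegalArgumentException e) {
            return true;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }
}
```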
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-4624
How was this patch tested?
UT
R: @nandakumar131 @GlenGeng @ChenSammi
+1
LGTM. Thanks for the fix!
|
gharchive/pull-request
| 2020-12-26T00:58:47 |
2025-04-01T04:55:59.620238
|
{
"authors": [
"GlenGeng",
"amaliujia"
],
"repo": "apache/ozone",
"url": "https://github.com/apache/ozone/pull/1739",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1047187806
|
HDDS-5656. Move old objects to delete table on overwriting multipart objects
What changes were proposed in this pull request?
This pull request aims at porting my work in HDDS-5461 ( https://github.com/apache/ozone/pull/2433 ) to multipart-uploaded objects. The change is almost the same: on committing multipart objects (moving the OmKeyInfo from the multipart table to the key table), move the previous version of the key info to the delete table.
CC: @bharatviswa504
What is the link to the Apache JIRA
HDDS-5656
How was this patch tested?
Unit tests updated
Hi guys, I'll fix the conflicts, but any updates on this? @bharatviswa504 @ChenSammi
@hanishakoneru Thank you for the review. I think I've addressed all of your comments.
@kuenishi, I still see that we are reading the key from DB twice.
For example, in OMKeyCommitRequest lines#217-224:
RepeatedOmKeyInfo oldKeyVersionsToDelete = null;
OmKeyInfo keyToDelete =
omMetadataManager.getKeyTable(getBucketLayout()).get(dbOzoneKey);
if (keyToDelete != null && !omBucketInfo.getIsVersionEnabled()) {
oldKeyVersionsToDelete = getOldVersionsToCleanUp(dbOzoneKey,
omMetadataManager, omBucketInfo.getIsVersionEnabled(),
trxnLogIndex, ozoneManager.isRatisEnabled());
}
dbOzoneKey is read from DB on line#219 and again in getOldVersionsToCleanUp() (OMKeyRequest line#801).
Also, we do not need to do versioning enabled check in getOldVersionsToCleanUp() as it is being checked before that method is called.
@hanishakoneru Thanks for picking up nits which I missed. I updated my pull request.
Both normal objects and multipart objects share the OmKeyInfo#addNewVersion() method to create a new version. So the fix in HDDS-6261 should work for this change, too.
From what I understand, versioning is not supported in Multipart keys currently.
In S3MultipartUploadCompleteRequest#getOmKeyInfo():
// Already a version exists, so we should add it as a new version.
// But now as versioning is not supported, just following the commit
// key approach. When versioning support comes, then we can uncomment
// below code keyInfo.addNewVersion(locations);
// As right now versioning is not supported, we can set encryption info
// at KeyInfo level, but once we start supporting versioning,
// encryption info needs to be set at KeyLocation level, as each version
// will have it's own file encryption info.
As such, the bug reported in HDDS-6261 should not affect Multipart keys.
cc. @smengcl @errose28
@kuenishi I will give it a day before committing in case Siyao or Ethan have any other comments. Thanks.
Since keyInfo.addNewVersion(locations) is commented out in the snippet you shared we should be ok here. That was the affected call.
Thanks @errose28. I will commit it now.
Thank you @kuenishi for working on this.
Thank you for the review, too @hanishakoneru
|
gharchive/pull-request
| 2021-11-08T09:04:59 |
2025-04-01T04:55:59.627535
|
{
"authors": [
"errose28",
"hanishakoneru",
"kuenishi"
],
"repo": "apache/ozone",
"url": "https://github.com/apache/ozone/pull/2813",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1429866718
|
HDDS-7442. [Release] Revert "HDDS-7116. Avoid leaking RocksObject from DBProfile (#3673)"
What changes were proposed in this pull request?
As we discussed. We should revert HDDS-7116 in ozone-1.3.
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-7442
Hi @ChenSammi @duongkame, in this revert there's a small conflict in one method.
Because HDDS-7116 added a try/catch to this method and another PR changed this line, there was only one conflict.
I've resolved the conflict, just in case, can you help double-check?
|
gharchive/pull-request
| 2022-10-31T13:38:32 |
2025-04-01T04:55:59.630504
|
{
"authors": [
"captainzmc"
],
"repo": "apache/ozone",
"url": "https://github.com/apache/ozone/pull/3919",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|