id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
1129485720 | [FLINK-25782] [docs] Translate datastream filesystem.md page into Chinese.
What is the purpose of the change
(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)
Brief change log
(for example:)
The TaskInfo is stored in the blob store on job creation time as a persistent artifact
Deployments RPC transmits only the blob storage reference
TaskManagers retrieve the TaskInfo from the blob cache
Verifying this change
Please make sure both new and modified tests in this PR follow the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
(Please pick either of the following options)
This change is a trivial rework / code cleanup without any test coverage.
(or)
This change is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(example:)
Added integration tests for end-to-end deployment with large payloads (100MB)
Extended integration test for recovery after master (JobManager) failure
Added test that validates that TaskInfo is transferred only once across recoveries
Manually verified the change by running a 4-node cluster with 2 JobManagers and 4 TaskManagers, a stateful streaming program, and killing one JobManager and two TaskManagers during the execution, verifying that recovery happens correctly.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (yes / no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
The serializers: (yes / no / don't know)
The runtime per-record code paths (performance sensitive): (yes / no / don't know)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
The S3 file system connector: (yes / no / don't know)
Documentation
Does this pull request introduce a new feature? (yes / no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
Hi, @Thesharing @RocMarshal , May I get your help to review it? Thanks.
@flinkbot run Azure
| gharchive/pull-request | 2022-02-10T05:44:20 | 2025-04-01T06:37:52.842039 | {
"authors": [
"MrWhiteSike"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/18698",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1329644627 | [FLINK-28837][chinese-translation] Translate "Hybrid Source" page of …
What is the purpose of the change
Translate "Hybrid Source" page of "DataStream Connectors" into Chinese
Brief change log
Translate "Hybrid Source" page of "DataStream Connectors" into Chinese
Verifying this change
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not documented)
@flinkbot run azure
@flinkbot run azure
@wuchong please help review when you are free, thanks.
| gharchive/pull-request | 2022-08-05T08:49:40 | 2025-04-01T06:37:52.846459 | {
"authors": [
"JasonLeeCoding"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/20466",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1516131539 | [FLINK-30538][SQL gateway/client] Improve error handling of stop job operation
What is the purpose of the change
Currently, the stop-job operation produces rather verbose error messages and doesn't handle exceptions in stop-without-savepoint gracefully.
This PR fixes the problem.
Brief change log
Wrap the simple cancel call with try-catch.
Wait for the simple cancel's Acknowledge before returning 'OK'.
Simplify the exception message for stop-job operations (a sketch of this pattern follows below).
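A rough illustration of the pattern listed in the change log above: wrap the cancel call, wait for the acknowledgement before reporting success, and keep the error message short. The types and names below are assumptions for the sketch, not the actual Flink SQL gateway code.

```java
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for the real cluster client; illustrative only.
interface JobClient {
    CompletableFuture<Void> cancel(String jobId);
}

class StopJobHelper {
    // Wrap the cancel call, wait for the acknowledgement, and keep the error message concise.
    static String stopJob(JobClient client, String jobId, Duration timeout) {
        try {
            client.cancel(jobId).get(timeout.toMillis(), TimeUnit.MILLISECONDS);
            return "OK"; // only returned after the cancel has been acknowledged
        } catch (Exception e) {
            // Surface a short message instead of the full chain of wrapped causes.
            throw new RuntimeException("Could not stop job " + jobId + ": " + e.getMessage(), e);
        }
    }
}
```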
Verifying this change
Please make sure both new and modified tests in this PR follow the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
This change is a trivial rework / code cleanup without any test coverage.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): no
The public API, i.e., is any changed class annotated with @Public(Evolving): no
The serializers: no
The runtime per-record code paths (performance sensitive): no
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
The S3 file system connector: no
Documentation
Does this pull request introduce a new feature? no
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
@flinkbot run azure
@flinkbot run azure
Please kindly take a look @fsk119
CI ran into https://issues.apache.org/jira/browse/FLINK-30328. Re-run CI.
@flinkbot run azure
@fsk119 CI turned green. Please kindly take a look at your convenience.
ping @fsk119 . It should be a quick one :)
ping @fsk119
| gharchive/pull-request | 2023-01-02T08:09:41 | 2025-04-01T06:37:52.853527 | {
"authors": [
"link3280"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/21581",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2251933405 | [FLINK-35166][runtime] Make the SortBufferAccumulator use more buffers when the parallelism is small
What is the purpose of the change
Improve the performance of hybrid shuffle when memory decoupling is enabled and the parallelism is small.
Brief change log
Make the SortBufferAccumulator use more buffers when the parallelism is small
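As a rough sketch of the idea (the threshold and buffer counts below are illustrative assumptions, not Flink's actual values), the accumulator could pick its buffer budget based on the parallelism:

```java
// Illustrative sketch only; the constants and method are assumptions, not Flink's actual code.
class AccumulatorBufferPolicy {
    private static final int DEFAULT_NUM_BUFFERS = 2;            // hypothetical baseline budget
    private static final int SMALL_PARALLELISM_THRESHOLD = 512;  // threshold mentioned in the discussion below
    private static final int EXPANDED_NUM_BUFFERS = 64;          // hypothetical larger budget

    // Give the sort-based accumulator more buffers when the parallelism is small, so it can
    // batch more data before spilling and close the gap to the hash-based accumulator.
    static int numBuffersFor(int parallelism) {
        return parallelism < SMALL_PARALLELISM_THRESHOLD ? EXPANDED_NUM_BUFFERS : DEFAULT_NUM_BUFFERS;
    }
}
```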
Verifying this change
Please make sure both new and modified tests in this PR follow the conventions for tests defined in our code quality guide.
This change is a trivial rework / code cleanup without any test coverage.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not applicable)
Is there no longer a TPC-DS performance regression after the fix?
@reswqa The regression still exists because we replace the HashBufferAccumulator with the SortBufferAccumulator when the decoupling is enabled and the parallelism is less than 512, but this PR reduces the regression. According to the previous discussion, the regression is acceptable if the feature is enabled.
| gharchive/pull-request | 2024-04-19T02:11:46 | 2025-04-01T06:37:52.859217 | {
"authors": [
"jiangxin369"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/24683",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2652904072 | Upgrade com.squareup.okio:okio
What is the purpose of the change
(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)
Brief change log
(for example:)
The TaskInfo is stored in the blob store on job creation time as a persistent artifact
Deployments RPC transmits only the blob storage reference
TaskManagers retrieve the TaskInfo from the blob cache
Verifying this change
Please make sure both new and modified tests in this PR follow the conventions for tests defined in our code quality guide.
(Please pick either of the following options)
This change is a trivial rework / code cleanup without any test coverage.
(or)
This change is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(example:)
Added integration tests for end-to-end deployment with large payloads (100MB)
Extended integration test for recovery after master (JobManager) failure
Added test that validates that TaskInfo is transferred only once across recoveries
Manually verified the change by running a 4 node cluster with 2 JobManagers and 4 TaskManagers, a stateful streaming program, and killing one JobManager and two TaskManagers during the execution, verifying that recovery happens correctly.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (yes / no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
The serializers: (yes / no / don't know)
The runtime per-record code paths (performance sensitive): (yes / no / don't know)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
The S3 file system connector: (yes / no / don't know)
Documentation
Does this pull request introduce a new feature? (yes / no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
Reviewed by Chi on 21/11/24. Asked submitter questions
Please could you raise a Jira detailing the reason you want to upgrade this component (e.g. is there a particular bug that this would fix)
| gharchive/pull-request | 2024-11-12T17:45:56 | 2025-04-01T06:37:52.867797 | {
"authors": [
"davidradl",
"g-s-eire"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/25649",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
185880994 | [FLINK-4631] Avoided NPE in OneInputStreamTask.
Added an additional condition to guard against a possible NPE. This PR solves FLINK-4631.
+1 to merge
merging
| gharchive/pull-request | 2016-10-28T09:41:37 | 2025-04-01T06:37:52.869522 | {
"authors": [
"chermenin",
"zentol"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/2709",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
227601507 | [FLINK-6483] [table] Support time materialization
This PR adds support for time materialization. It also fixes several bugs related to time handling in the Table API & SQL.
Thanks for the update @twalthr!
Looks very good. Will merge this.
| gharchive/pull-request | 2017-05-10T08:21:29 | 2025-04-01T06:37:52.870624 | {
"authors": [
"fhueske",
"twalthr"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/3862",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
239051744 | [FLINK-6522] Add ZooKeeper cleanup logic to ZooKeeperHaServices
Thanks for contributing to Apache Flink. Before you open your pull request, please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your pull request. For more information and/or questions please refer to the How To Contribute guide.
In addition to going through the list, please provide a meaningful description of your changes.
[ ] General
The pull request references the related JIRA issue ("[FLINK-XXX] Jira title text")
The pull request addresses only one issue
Each commit in the PR has a meaningful commit message (including the JIRA id)
[ ] Documentation
Documentation has been added for new functionality
Old documentation affected by the pull request has been updated
JavaDoc for public methods has been added
[ ] Tests & Build
Functionality added by the pull request is covered by tests
mvn clean verify has been executed successfully locally or a Travis build has passed
Hi @tillrohrmann , I have created this PR for issue FLINK-6522. Could you please have a look when you're free, thanks
@tillrohrmann Thank you for your review. I use the prefix as the name of the subdirectory, and added a test case for FileSystemStateStorageHelper#closeAndCleanupAllData. I have also fixed the problem you mentioned, thanks
Solved by FLINK-11336.
| gharchive/pull-request | 2017-06-28T04:46:43 | 2025-04-01T06:37:52.875990 | {
"authors": [
"tillrohrmann",
"zjureel"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/4204",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
280668091 | [FLINK-7797] [table] Add support for windowed outer joins for streaming tables
What is the purpose of the change
This PR adds support for windowed outer joins for streaming tables.
Brief change log
Adjusts the plan translation logic to accept stream window outer join.
Attaches an ever-emitted flag to each row. When a row is removed from the cache (or detected as not cached), a null-padding join result will be emitted if necessary.
Adds a custom JoinAwareCollector to track whether there is a successfully joined result for both sides in each join loop (see the sketch after this change log).
Adds table/SQL translation tests, and also join integration tests. Since the runtime logic is built on the existing window inner join, no new harness tests are added.
Updates the SQL/Table API docs.
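A simplified illustration of the ever-emitted flag idea from the change log; the class below is a stand-in for the custom JoinAwareCollector, not Flink's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the "ever emitted" idea; not Flink's JoinAwareCollector.
class EverEmittedCollector<ROW> {
    private final List<ROW> out = new ArrayList<>();
    private boolean emitted = false;

    // Called for every successfully joined result of this row.
    void collect(ROW joinedRow) {
        emitted = true;
        out.add(joinedRow);
    }

    // Called when the row leaves the cache: pad with nulls only if it never joined.
    void emitNullPaddedIfNeeded(ROW nullPaddedRow) {
        if (!emitted) {
            out.add(nullPaddedRow);
        }
    }

    List<ROW> results() {
        return out;
    }
}
```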
Verifying this change
This PR can be verified by the cases added in JoinTest and JoinITCase.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (yes)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (yes)
If yes, how is the feature documented? (remove the restriction notes)
Thanks for the PR @xccui.
I'll try to have a look at it sometime this week.
Best, Fabian
| gharchive/pull-request | 2017-12-09T02:29:29 | 2025-04-01T06:37:52.969547 | {
"authors": [
"fhueske",
"xccui"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/5140",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
372512734 | [FLINK-10431] Extraction of scheduling-related code from SlotPool into preliminary Scheduler
What is the purpose of the change
This PR extracts the scheduling-related code (e.g. slot sharing logic) from the slot pool into a preliminary version of a future scheduler component. Our primary goal is fixing the scheduling logic for local recovery. Changes in this PR open up potential for more code cleanups (e.g. removing all scheduling concerns from the slot pool, removing ProviderAndOwner, moving away from some CompletableFuture return types, etc.). This cleanup and some test rewrites will happen in a followup PR.
Brief change log
SlotPool is no longer an RpcEndpoint; we need to take care that all state modification happens in the component's main thread now.
Introduced SlotInfo and moved the slot sharing code into a scheduler component. Slot pool code can now deal with single slot requests. The pattern of interaction is more explicit; we have 3 main new methods: getAvailableSlotsInformation to list available slots, allocateAvailableSlot to allocate a listed/available slot, and requestNewAllocatedSlot to request a new slot from the resource manager (see the sketch after this change log). The old codepaths currently still co-exist in the slot pool and will be removed in followup work.
Introduced a collection of all previous allocations through ExecutionGraph::computeAllPriorAllocationIds. This serves as the basis to compute a "blacklist" of allocation ids that we use to fix the scheduling of local recovery.
Provided an improved version of the scheduling for local recovery that uses this blacklist.
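A hypothetical sketch of the interaction pattern behind the three methods named above; the interfaces are stand-ins for illustration, not the real Flink types:

```java
import java.util.Collection;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

// Assumed, simplified types sketching the list / allocate / request pattern.
interface SlotInfo { String allocationId(); }
interface PhysicalSlot extends SlotInfo {}

interface SlotPoolSketch {
    Collection<SlotInfo> getAvailableSlotsInformation();               // 1. list what is already available
    Optional<PhysicalSlot> allocateAvailableSlot(String allocationId); // 2. claim a listed slot
    CompletableFuture<PhysicalSlot> requestNewAllocatedSlot();         // 3. otherwise ask the resource manager
}

class SchedulerSketch {
    // Try to reuse an available slot first, and fall back to requesting a new one.
    static CompletableFuture<PhysicalSlot> allocateSlot(SlotPoolSketch pool) {
        for (SlotInfo info : pool.getAvailableSlotsInformation()) {
            Optional<PhysicalSlot> slot = pool.allocateAvailableSlot(info.allocationId());
            if (slot.isPresent()) {
                return CompletableFuture.completedFuture(slot.get());
            }
        }
        return pool.requestNewAllocatedSlot();
    }
}
```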
Verifying this change
This change is already covered by existing tests, but we still need to rewrite tests for the slot pool and add more additional tests in followup work.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not applicable)
Hi @StefanRRichter , I am just wondering why SlotPool is no longer an RpcEndpoint?
@Clarkkkkk background is that it will make things easier and otherwise you have concurrency between two components that want to interact in transactional ways: if the scheduler runs in a different thread than the slot pool there can be concurrent modifications to the slot pool (e.g. slots added/removed) between the scheduler asking for the available slots and the scheduler requesting the available slot. All of this has to be resolved and it becomes harder to understand and reason about the code. This can be avoided if scheduler and slot pool run in the same thread, and we are also aiming at having all modifications to the execution graph in the same thread as well. The threading model would then be that blocking or expensive operations run in their own thread so that the main thread is never blocked, but the results are always synced back to a main thread that runs all the modifications in scheduler, slot pool, execution graph, etc.
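A minimal sketch of that threading model, with assumed names rather than Flink's actual classes: expensive work runs off the main thread, and state updates are synced back onto it.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

// Rough illustration of the threading model described above; names are assumptions, not Flink code.
class MainThreadModelSketch {
    private final Executor mainThread = Executors.newSingleThreadExecutor(); // runs all state mutation
    private final Executor workerPool = Executors.newFixedThreadPool(4);     // runs blocking/expensive work

    CompletableFuture<Void> runExpensiveThenUpdateState(Runnable expensiveWork, Runnable stateUpdate) {
        return CompletableFuture
                .runAsync(expensiveWork, workerPool)    // blocking or expensive part off the main thread
                .thenRunAsync(stateUpdate, mainThread); // scheduler / slot pool / execution graph mutation on the main thread
    }
}
```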
Closed because there is an updated version of this PR in #7662.
| gharchive/pull-request | 2018-10-22T13:04:57 | 2025-04-01T06:37:52.977138 | {
"authors": [
"Clarkkkkk",
"StefanRRichter"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/6898",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
190340170 | Remove hostHeader = hostname property from Host interceptor example
We override the host header name from host to hostname in the example usage section. Following this example, users override the header name too but still use the %{host} substitution shown in the HDFS Sink section, which won't work for them.
This change removes this config line.
+1, LGTM
I'll leave some time for others to review this, then commit it if nobody disagrees.
I'm about to commit this.
@peterableda : thank you for the patch!
| gharchive/pull-request | 2016-11-18T15:03:19 | 2025-04-01T06:37:52.979704 | {
"authors": [
"bessbd",
"peterableda"
],
"repo": "apache/flume",
"url": "https://github.com/apache/flume/pull/87",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
497524222 | GEODE-6927 make getThreadOwnedConnection code thread safe
Thank you for submitting a contribution to Apache Geode.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
[x] Has your PR been rebased against the latest commit within the target branch (typically develop)?
[x] Is your initial contribution a single, squashed commit?
[x] Does gradlew build run cleanly?
[ ] Have you written or updated unit tests to verify your changes?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
Note:
Please ensure that once the PR is submitted, check Concourse for build issues and
submit an update to your PR as soon as possible. If you need help, please send an
email to dev@geode.apache.org.
Instead of adding synchronizations and null checks everywhere let's just make that field final and change the close() method to clear it.
Even so, I think there is still a race in what to do with calls to this method that run concurrently with close. The map would be cleared by close but then updated by this method. Having little background in what this is tracking, I can't say if this is good or bad; it is simply a case the original code seemed to try to avoid.
I don't think we should be using the state of a collection as an indication of whether the service is open or closed. There should be other state that we use to avoid creating a new connection and putting it in the connection map if ConnectionTable is closed.
I don't think we should be using the state of a collection as an indication of whether the service is open or closed. There should be other state that we use to avoid creating a new connection and putting it in the connection map if ConnectionTable is closed.
Couldn't agree more! If you think it's safe for connection references that race into the map on close to sit there, then this is an easy fix. A simple check of whether the collection is closed after the put completes could accomplish this. The close itself would empty the map, and any race on that operation would get cleared up on the back end after the put by checking for the closed state and removing the entry just put.
@bschuchardt @pivotal-jbarrett Are there still concerns with this PR?
@bschuchardt @pivotal-jbarrett Are there still concerns with this PR?
My concern over the threadConnectionMap null checks hasn't been addressed. I've been fighting against the nulling-out of instance variables like this forever. It's always causing NPEs for unsuspecting programmers who don't recognize that this anti-pattern is being used. The instance variable ought to be "final" and some other state should be added and consulted to see if the connection table has been closed.
@bschuchardt @pivotal-jbarrett Are there still concerns with this PR?
My concern over the threadConnectionMap null checks hasn't been addressed. I've been fighting against the nulling-out of instance variables like this forever. It's always causing NPEs for unsuspecting programmers who don't recognize that this anti-pattern is being used. The instance variable ought to be "final" and some other state should be added and consulted to see if the connection table has been closed.
Hi @bschuchardt ,
I think that the best way is to make this threadConnectionMap final and change close() to iterate over the map, close all connections, and clear the map. In this way, we don't need these null checks.
Writing it to a local map and then executing the operation on it isn't good, as computeIfAbsent() can still throw an NPE: we don't know whether someone deleted it, since we are only checking our local copy.
Tnx @bschuchardt! :)
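A small sketch of the approach discussed in this thread: a final map plus a separate closed flag, with a post-put check to clean up entries that race with close(). The names are assumptions for illustration, not Geode's actual ConnectionTable code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch with assumed names; not the actual Geode implementation.
class ConnectionRegistrySketch<C extends AutoCloseable> {
    private final Map<Long, C> threadConnectionMap = new ConcurrentHashMap<>(); // final, never nulled out
    private final AtomicBoolean closed = new AtomicBoolean(false);              // separate "closed" state

    void register(long threadId, C connection) throws Exception {
        threadConnectionMap.put(threadId, connection);
        // Close the race with close(): if the registry was closed while we were putting,
        // remove the entry we just added and close it ourselves.
        if (closed.get() && threadConnectionMap.remove(threadId, connection)) {
            connection.close();
        }
    }

    void close() throws Exception {
        closed.set(true);
        for (C c : threadConnectionMap.values()) {
            c.close();
        }
        threadConnectionMap.clear();
    }
}
```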
| gharchive/pull-request | 2019-09-24T08:02:00 | 2025-04-01T06:37:52.994123 | {
"authors": [
"bschuchardt",
"mhansonp",
"mkevo",
"pivotal-jbarrett"
],
"repo": "apache/geode",
"url": "https://github.com/apache/geode/pull/4085",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
849748989 | GROOVY-8983: STC: support "Type[] array = collectionOfTypeOrSubtype"
https://issues.apache.org/jira/browse/GROOVY-8983
NOTE: GenericsUtils.parameterizeType("List<? super Type>","Collection") returns Collection<Type> and not Collection<? super Type>. I attempted to address this, but was not successful. This should probably be fixed at some point because it breaks the semantics of "? super Type".
Merged. Thanks!
| gharchive/pull-request | 2021-04-03T22:06:49 | 2025-04-01T06:37:52.996657 | {
"authors": [
"danielsun1106",
"eric-milles"
],
"repo": "apache/groovy",
"url": "https://github.com/apache/groovy/pull/1541",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
509125099 | [SUBMARINE-248]. Add websocket interface to submarine workbench server.
What is this PR for?
Add WebSocket interface to the submarine workbench server. So that the frontend and backend can have bidirectional communications.
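For context, a minimal JSR-356-style endpoint shows the kind of bidirectional channel this adds; it is an illustrative sketch, not the actual Submarine workbench implementation (the "/ws" path is an assumption).

```java
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

// Illustrative echo endpoint: the server can push messages back over the same connection.
@ServerEndpoint("/ws")
public class EchoEndpoint {
    @OnMessage
    public void onMessage(String message, Session session) throws IOException {
        session.getBasicRemote().sendText("echo: " + message);
    }
}
```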
What type of PR is it?
Feature
What is the Jira issue?
https://issues.apache.org/jira/browse/SUBMARINE-248
How should this be tested?
https://travis-ci.org/yuanzac/hadoop-submarine/builds/599666968
Questions:
Does the licenses files need update? No
Is there breaking changes for older versions? No
Does this needs documentation? No
Thanks @liuxunorg and @jiwq for the review~
Will merge if no more comments
| gharchive/pull-request | 2019-10-18T14:41:25 | 2025-04-01T06:37:52.999742 | {
"authors": [
"liuxunorg",
"yuanzac"
],
"repo": "apache/hadoop-submarine",
"url": "https://github.com/apache/hadoop-submarine/pull/56",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
463957156 | HADOOP-16409. Allow authoritative mode on non-qualified paths.
This addresses whitespace nits from Gabor's review of https://github.com/apache/hadoop/pull/1043, and allows non-qualified paths to be specified in the config.
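In other words, a bare path from the config can be qualified against the current filesystem before it is compared; a hedged sketch (the helper below is illustrative, not the actual S3Guard code):

```java
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustration only: qualify a configured, non-qualified path against the active filesystem.
class AuthoritativePathSketch {
    static Path qualify(FileSystem fs, String configuredPath) {
        // e.g. "/tables/data" becomes "s3a://bucket/tables/data" for the current filesystem.
        return fs.makeQualified(new Path(configuredPath));
    }
}
```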
:confetti_ball: +1 overall
| Vote | Subsystem | Runtime | Comment |
|:----:|:---------:|--------:|:--------|
| 0 | reexec | 31 | Docker mode activated. |
| | | | _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
| | | | _ trunk Compile Tests _ |
| +1 | mvninstall | 1085 | trunk passed |
| +1 | compile | 30 | trunk passed |
| +1 | checkstyle | 23 | trunk passed |
| +1 | mvnsite | 41 | trunk passed |
| +1 | shadedclient | 684 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 28 | trunk passed |
| 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 55 | trunk passed |
| | | | _ Patch Compile Tests _ |
| +1 | mvninstall | 31 | the patch passed |
| +1 | compile | 27 | the patch passed |
| +1 | javac | 27 | the patch passed |
| -0 | checkstyle | 16 | hadoop-tools/hadoop-aws: The patch generated 1 new + 40 unchanged - 0 fixed = 41 total (was 40) |
| +1 | mvnsite | 32 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 707 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 20 | the patch passed |
| +1 | findbugs | 56 | the patch passed |
| | | | _ Other Tests _ |
| +1 | unit | 285 | hadoop-aws in the patch passed. |
| +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
| | | 3267 | |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1054/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1054 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux fff5b7977c40 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 8965ddc |
| Default Java | 1.8.0_212 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1054/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1054/1/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1054/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
This message was automatically generated.
Test result against ireland: 4 known testMRJob failures, no others.
+1 on this.
| gharchive/pull-request | 2019-07-03T21:17:13 | 2025-04-01T06:37:53.022674 | {
"authors": [
"bgaborg",
"hadoop-yetus",
"mackrorysd"
],
"repo": "apache/hadoop",
"url": "https://github.com/apache/hadoop/pull/1054",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1189017299 | HADOOP-18169. getDelegationTokens in ViewFs should also fetch the token from fallback FS
Description of PR
cherry-pick of 15a5ea2c955a7d1b89aea0cb127727a57db76c76 from trunk to branch-2.10 and created TestViewFsLinkFallback.java file with one test case included.
All other test cases in TestViewFsLinkFallback.java from trunk are removed, as the implementation of InternalDirOfViewFs (createInternal function) is out of date and these test cases won't pass. Leave the fix and the inclusion of these other unit tests as a future pull request.
How was this patch tested?
mvn test -Dtest="TestViewFsLinkFallback"
@omalley,
Here is the backport for DelegationToken for 2.10. Could you take a look? Thanks,
:broken_heart: -1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|--------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 35s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ branch-2.10 Compile Tests _ |
| +0 :ok: | mvndep | 2m 21s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 12m 12s | | branch-2.10 passed |
| +1 :green_heart: | compile | 13m 8s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | compile | 10m 45s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | checkstyle | 1m 50s | | branch-2.10 passed |
| +1 :green_heart: | mvnsite | 2m 30s | | branch-2.10 passed |
| +1 :green_heart: | javadoc | 2m 43s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javadoc | 2m 2s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| -1 :x: | spotbugs | 2m 2s | /branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html | hadoop-common-project/hadoop-common in branch-2.10 has 2 extant spotbugs warnings. |
| -1 :x: | spotbugs | 2m 31s | /branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 1 extant spotbugs warnings. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 20s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 37s | | the patch passed |
| +1 :green_heart: | compile | 12m 21s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javac | 12m 21s | | the patch passed |
| +1 :green_heart: | compile | 10m 46s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | javac | 10m 46s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 50s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 28s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 40s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javadoc | 2m 4s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | spotbugs | 4m 51s | | the patch passed |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 8m 24s | /patch-unit-hadoop-common-project_hadoop-common.txt | hadoop-common in the patch passed. |
| -1 :x: | unit | 62m 56s | /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. |
| | | 169m 34s | | |

| Reason | Tests |
|:------:|:------|
| Failed junit tests | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
| | hadoop.io.compress.TestCompressorDecompressor |
| | hadoop.fs.sftp.TestSFTPFileSystem |
| | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4128 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 2d6802dd4aa1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-2.10 / 88a556b79fdc6952d10a6c771f7b436349830a5c |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/1/testReport/ |
| Max. process+thread count | 2709 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/1/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
This change alone seems fine. It would be good and helpful for other reviewers to identify the differences in viewfs that explain why the rest of the tests couldn't be backported, but that's more of a nice-to-have, not required to ship it.
@mccormickt12 thanks for taking a quick review.
The create() method in InternalDirOfViewFs diverged between trunk and branch-2.10. In trunk, when a fallback FS is configured, we can create a file, but branch-2.10 does not check whether a fallback FS exists and simply throws a "read-only fs" exception, thus failing these unit tests.
The InternalDirOfViewFs class in trunk has two more members than in branch-2.10. Without access to fsState, we cannot check whether a fallback FS is set or not. It does not seem trivial to bring InternalDirOfViewFs in branch-2.10 in sync with trunk, so that is left as a separate patch for later.
private final boolean showMountLinksAsSymlinks;
private InodeTree<FileSystem> fsState;
Thanks for backporting this @xinglin and explaining what's not present in 2.10. As the rest of the functionality doesn't exist, I am good with having the required test backported. Can you please rebase your branch and push it again so Yetus gives a positive run? I am +1 on merging this PR after that.
:broken_heart: -1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|--------:|:-------:|:--------|
| +0 :ok: | reexec | 8m 12s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ branch-2.10 Compile Tests _ |
| +0 :ok: | mvndep | 4m 3s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 13m 58s | | branch-2.10 passed |
| +1 :green_heart: | compile | 13m 12s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | compile | 10m 50s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | checkstyle | 2m 6s | | branch-2.10 passed |
| +1 :green_heart: | mvnsite | 2m 36s | | branch-2.10 passed |
| +1 :green_heart: | javadoc | 2m 47s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javadoc | 2m 9s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| -1 :x: | spotbugs | 2m 28s | /branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html | hadoop-common-project/hadoop-common in branch-2.10 has 2 extant spotbugs warnings. |
| -1 :x: | spotbugs | 2m 33s | /branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 1 extant spotbugs warnings. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 38s | | the patch passed |
| +1 :green_heart: | compile | 12m 23s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javac | 12m 23s | | the patch passed |
| +1 :green_heart: | compile | 10m 46s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | javac | 10m 46s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 2m 2s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 39s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javadoc | 2m 10s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | spotbugs | 4m 56s | | the patch passed |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 8m 30s | /patch-unit-hadoop-common-project_hadoop-common.txt | hadoop-common in the patch passed. |
| -1 :x: | unit | 63m 37s | /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | 183m 18s | | |

| Reason | Tests |
|:------:|:------|
| Failed junit tests | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
| | hadoop.io.compress.TestCompressorDecompressor |
| | hadoop.fs.sftp.TestSFTPFileSystem |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
| | hadoop.hdfs.TestDataTransferKeepalive |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4128 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux c9dcbe54b6da 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-2.10 / 17b677c42e1dcd5b4236389e1e6735133f698c7e |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/2/testReport/ |
| Max. process+thread count | 2364 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/2/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
@virajith Rebased last night, but I guess we are seeing the same set of unit test failures. These tests have been failing before and don't seem to be related to our patch. Please see another example, where we see the same set of unit test failures for another backport to branch-2.10. I think we can commit this PR.
https://github.com/apache/hadoop/pull/4124
The failures in Yetus in the last run are unrelated to the changes in this PR. I will be merging this. Thanks for the backport @xinglin !
| gharchive/pull-request | 2022-03-31T23:55:34 | 2025-04-01T06:37:53.101301 | {
"authors": [
"hadoop-yetus",
"virajith",
"xinglin"
],
"repo": "apache/hadoop",
"url": "https://github.com/apache/hadoop/pull/4128",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1809651422 | HDFS-17094. EC: Fix bug in block recovery when there are stale datanodes.
Description of PR
When a block recovery occurs, RecoveryTaskStriped in the datanode expects rBlock.getLocations() and rBlock.getBlockIndices() to be in one-to-one correspondence. However, if there are locations in a stale state when the NameNode handles a heartbeat, this correspondence is disrupted. In detail, there is no stale location in recoveryLocations, but the block indices array is still complete (i.e. it contains the indices of all the locations).
https://github.com/apache/hadoop/blob/c44823dadb73a3033f515329f70b2e3126fcb7be/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1720-L1724
https://github.com/apache/hadoop/blob/c44823dadb73a3033f515329f70b2e3126fcb7be/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1754-L1757
This will cause BlockRecoveryWorker.RecoveryTaskStriped#recover() to generate a wrong internal block ID, and the corresponding datanode cannot find the replica, thus making the recovery process fail.
https://github.com/apache/hadoop/blob/c44823dadb73a3033f515329f70b2e3126fcb7be/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java#L407-L416
This bug needs to be fixed.
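The fix direction implied above is to keep the two arrays aligned: when a stale location is skipped, its block index must be skipped as well. A rough sketch with assumed types, not the actual HDFS code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of the idea: only keep block indices for locations that are actually
// handed to the recovery task, so the two arrays stay in one-to-one correspondence.
class RecoveryLocationsSketch {
    static class Location {
        final boolean stale;
        final byte blockIndex;
        Location(boolean stale, byte blockIndex) { this.stale = stale; this.blockIndex = blockIndex; }
    }

    static byte[] indicesForRecovery(List<Location> locations) {
        List<Byte> indices = new ArrayList<>();
        for (Location loc : locations) {
            if (!loc.stale) {              // skip stale locations and their indices together
                indices.add(loc.blockIndex);
            }
        }
        byte[] result = new byte[indices.size()];
        for (int i = 0; i < result.length; i++) {
            result[i] = indices.get(i);
        }
        return result;
    }
}
```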
How was this patch tested?
Add a new unit test.
:confetti_ball: +1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|--------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 43s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 45m 26s | | trunk passed |
| +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 1m 19s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | checkstyle | 1m 16s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 31s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 12s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 39s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 3m 20s | | trunk passed |
| +1 :green_heart: | shadedclient | 35m 43s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 13s | | the patch passed |
| +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 1m 16s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 2s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 19s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 56s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 3m 13s | | the patch passed |
| +1 :green_heart: | shadedclient | 35m 58s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 214m 54s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 55s | | The patch does not generate ASF License warnings. |
| | | 357m 49s | | |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5854 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux f7f9bf4d0ae1 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 5b72c1d7c5ef527c328245874ab5f5d7ab86e9ab |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/1/testReport/ |
| Max. process+thread count | 3374 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
@Hexiaoqiao @tomscut Thanks for your review. I've updated this PR according to the suggestions. Please take a look, thanks again.
:broken_heart: -1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|--------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 42s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 52m 58s | | trunk passed |
| +1 :green_heart: | compile | 1m 42s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 1m 29s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | checkstyle | 1m 23s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 21s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 2m 0s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 4m 5s | | trunk passed |
| -1 :x: | shadedclient | 42m 13s | | branch has errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| -1 :x: | mvninstall | 0m 23s | /patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | compile | 1m 32s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 1m 32s | | the patch passed |
| +1 :green_heart: | compile | 1m 26s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 1m 26s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 13s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 6s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 36s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 3m 48s | | the patch passed |
| +1 :green_heart: | shadedclient | 36m 38s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 223m 42s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. |
| | | 382m 58s | | |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5854 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 79c529060291 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 446ddffc53cb891e0a410bd76a6864666f22ff11 |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/2/testReport/ |
| Max. process+thread count | 3594 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
:confetti_ball: +1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|--------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 42s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 49m 27s | | trunk passed |
| +1 :green_heart: | compile | 1m 27s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | checkstyle | 1m 14s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 29s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 12s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 38s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 3m 20s | | trunk passed |
| +1 :green_heart: | shadedclient | 35m 43s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 16s | | the patch passed |
| +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 1m 12s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 1s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 16s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 57s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 31s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 3m 14s | | the patch passed |
| +1 :green_heart: | shadedclient | 36m 2s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 215m 51s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. |
| | | 362m 17s | | |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5854 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 3913e9b84c85 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 446ddffc53cb891e0a410bd76a6864666f22ff11 |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/3/testReport/ |
| Max. process+thread count | 3028 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
@zhangshuyan0 Could you please backport this to branch-3.3? Thanks!
@zhangshuyan0 Could you please backport this to branch-3.3? Thanks!
Ok, I'll do this later.
@tomscut This PR can be cherry-picked to branch-3.3 cleanly. Please cherry-pick it directly if you judge that branch-3.3 also needs this fix, rather than submitting another PR. Thanks.
@tomscut This PR can be cherry-picked to branch-3.3 cleanly. Please cherry-pick it directly if you judge that branch-3.3 also needs this fix, rather than submitting another PR. Thanks.
OK, I have backported it to branch-3.3. I thought it would be safer to trigger Jenkins, but for this PR it's really not necessary. Thank you for your advice.
| gharchive/pull-request | 2023-07-18T10:30:54 | 2025-04-01T06:37:53.202241 | {
"authors": [
"Hexiaoqiao",
"hadoop-yetus",
"tomscut",
"zhangshuyan0"
],
"repo": "apache/hadoop",
"url": "https://github.com/apache/hadoop/pull/5854",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1942869576 | YARN-11592. Add timeout to GPGUtils#invokeRMWebService.
Description of PR
JIRA: YARN-11592. Add timeout to GPGUtils#invokeRMWebService.
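For illustration only (this is not the actual GPGUtils code, and the timeout values are placeholders), adding a timeout to an HTTP call in Java boils down to setting connect and read timeouts before the request is issued:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Generic sketch of a web-service GET with connect/read timeouts; assumed helper, not Hadoop code.
class TimedWebServiceCall {
    static String get(String url, int connectTimeoutMs, int readTimeoutMs) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(connectTimeoutMs); // fail fast if the RM is unreachable
        conn.setReadTimeout(readTimeoutMs);       // fail fast if the RM hangs mid-response
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        } finally {
            conn.disconnect();
        }
    }
}
```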
How was this patch tested?
For code changes:
[ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
[ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?
:confetti_ball: +1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|--------:|:-------:|:--------|
| +0 :ok: | reexec | 11m 33s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 19s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 22s | | trunk passed |
| +1 :green_heart: | compile | 4m 30s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 4m 3s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 1m 13s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 11s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 13s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 7s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 34s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 48s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 9s | | the patch passed |
| +1 :green_heart: | compile | 4m 0s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 4m 0s | | the patch passed |
| +1 :green_heart: | compile | 3m 50s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 3m 50s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 6s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 54s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 56s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 53s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 43s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 7s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 0m 55s | | hadoop-yarn-api in the patch passed. |
| +1 :green_heart: | unit | 4m 49s | | hadoop-yarn-common in the patch passed. |
| +1 :green_heart: | unit | 0m 55s | | hadoop-yarn-server-globalpolicygenerator in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | 141m 23s | | |

| Subsystem | Report/Notes |
|:---------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6189 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 4b7c9e4364ed 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 3f3f91ce98cd23e9a14a7af041e635179617c5a8 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/1/testReport/ |
| Max. process+thread count | 553 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 26s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 1s
codespell was not available.
+0 :ok:
detsecrets
0m 1s
detect-secrets was not available.
+0 :ok:
xmllint
0m 1s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
15m 59s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
19m 51s
trunk passed
+1 :green_heart:
compile
4m 29s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 1s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 13s
trunk passed
+1 :green_heart:
mvnsite
2m 51s
trunk passed
+1 :green_heart:
javadoc
2m 54s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 45s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 24s
trunk passed
+1 :green_heart:
shadedclient
20m 48s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 26s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 27s
the patch passed
+1 :green_heart:
compile
3m 54s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 54s
the patch passed
+1 :green_heart:
compile
3m 53s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 53s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
-0 :warning:
checkstyle
1m 8s
/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 164 unchanged - 0 fixed = 165 total (was 164)
+1 :green_heart:
mvnsite
2m 32s
the patch passed
+1 :green_heart:
javadoc
2m 31s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 27s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 40s
the patch passed
+1 :green_heart:
shadedclient
21m 22s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 57s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 50s
hadoop-yarn-common in the patch passed.
-1 :x:
unit
0m 36s
/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
hadoop-yarn-client in the patch failed.
+1 :green_heart:
unit
0m 55s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 47s
The patch does not generate ASF License warnings.
137m 2s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/2/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux 4db31f3a3fa2 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 2bac4178db80535ab2b252690b6bbf535f52d067
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/2/testReport/
Max. process+thread count
617 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/2/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 26s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+0 :ok:
xmllint
0m 0s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
16m 5s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
20m 15s
trunk passed
+1 :green_heart:
compile
4m 32s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 1s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 13s
trunk passed
+1 :green_heart:
mvnsite
2m 51s
trunk passed
+1 :green_heart:
javadoc
2m 52s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 44s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 25s
trunk passed
+1 :green_heart:
shadedclient
21m 9s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 25s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 29s
the patch passed
+1 :green_heart:
compile
3m 53s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 53s
the patch passed
+1 :green_heart:
compile
3m 56s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 56s
the patch passed
+1 :green_heart:
blanks
0m 1s
The patch has no blanks issues.
-0 :warning:
checkstyle
1m 8s
/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 164 unchanged - 0 fixed = 165 total (was 164)
+1 :green_heart:
mvnsite
2m 28s
the patch passed
+1 :green_heart:
javadoc
2m 26s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 21s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 41s
the patch passed
+1 :green_heart:
shadedclient
21m 12s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 56s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 49s
hadoop-yarn-common in the patch passed.
-1 :x:
unit
0m 36s
/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
hadoop-yarn-client in the patch failed.
+1 :green_heart:
unit
0m 55s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 47s
The patch does not generate ASF License warnings.
137m 56s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/3/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux af5599409b7d 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 16d99187211c620092dc2aaa593e196bb94b3359
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/3/testReport/
Max. process+thread count
755 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/3/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 27s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+0 :ok:
xmllint
0m 0s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
15m 4s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
19m 59s
trunk passed
+1 :green_heart:
compile
4m 30s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 5s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 16s
trunk passed
+1 :green_heart:
mvnsite
2m 51s
trunk passed
+1 :green_heart:
javadoc
2m 52s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 44s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 25s
trunk passed
+1 :green_heart:
shadedclient
21m 6s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 25s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 28s
the patch passed
+1 :green_heart:
compile
3m 52s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 52s
the patch passed
+1 :green_heart:
compile
3m 53s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 53s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 4s
the patch passed
+1 :green_heart:
mvnsite
2m 30s
the patch passed
+1 :green_heart:
javadoc
2m 30s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 26s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 33s
the patch passed
+1 :green_heart:
shadedclient
21m 9s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 56s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 48s
hadoop-yarn-common in the patch passed.
-1 :x:
unit
0m 36s
/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
hadoop-yarn-client in the patch failed.
+1 :green_heart:
unit
0m 56s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 47s
The patch does not generate ASF License warnings.
136m 30s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/4/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux 36fc2cc59e18 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / c38f58c72e9d4bc15ba8236c956169669a5d0bbd
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/4/testReport/
Max. process+thread count
613 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/4/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 26s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 1s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+0 :ok:
xmllint
0m 0s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
15m 53s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
22m 9s
trunk passed
+1 :green_heart:
compile
5m 11s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 24s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 13s
trunk passed
+1 :green_heart:
mvnsite
2m 40s
trunk passed
+1 :green_heart:
javadoc
2m 45s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 35s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 9s
trunk passed
+1 :green_heart:
shadedclient
25m 50s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 25s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 17s
the patch passed
+1 :green_heart:
compile
3m 54s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 54s
the patch passed
+1 :green_heart:
compile
4m 30s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
4m 30s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 15s
the patch passed
+1 :green_heart:
mvnsite
2m 17s
the patch passed
+1 :green_heart:
javadoc
2m 32s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 23s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 44s
the patch passed
+1 :green_heart:
shadedclient
23m 40s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 52s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 48s
hadoop-yarn-common in the patch passed.
-1 :x:
unit
0m 34s
/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
hadoop-yarn-client in the patch failed.
+1 :green_heart:
unit
0m 57s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 48s
The patch does not generate ASF License warnings.
147m 23s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6189/1/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux 602f21487dad 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / c38f58c72e9d4bc15ba8236c956169669a5d0bbd
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6189/1/testReport/
Max. process+thread count
554 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6189/1/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 24s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+0 :ok:
xmllint
0m 0s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
15m 39s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
20m 10s
trunk passed
+1 :green_heart:
compile
4m 29s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 1s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 13s
trunk passed
+1 :green_heart:
mvnsite
2m 50s
trunk passed
+1 :green_heart:
javadoc
2m 52s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 45s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 26s
trunk passed
+1 :green_heart:
shadedclient
21m 5s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 26s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 28s
the patch passed
+1 :green_heart:
compile
3m 53s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 53s
the patch passed
+1 :green_heart:
compile
3m 58s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 58s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 5s
the patch passed
+1 :green_heart:
mvnsite
2m 30s
the patch passed
+1 :green_heart:
javadoc
2m 30s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 26s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 35s
the patch passed
+1 :green_heart:
shadedclient
20m 57s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 56s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 48s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
26m 6s
hadoop-yarn-client in the patch passed.
+1 :green_heart:
unit
0m 58s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 50s
The patch does not generate ASF License warnings.
162m 39s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/5/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux 740c2c885900 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 0836b2b76db98ef0b298bc2580e71b278aa46cc2
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/5/testReport/
Max. process+thread count
577 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/5/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
@goiri Can you help review this PR? Thank you very much!
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 25s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 1s
codespell was not available.
+0 :ok:
detsecrets
0m 1s
detect-secrets was not available.
+0 :ok:
xmllint
0m 1s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
16m 13s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
20m 16s
trunk passed
+1 :green_heart:
compile
4m 39s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 0s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 12s
trunk passed
+1 :green_heart:
mvnsite
2m 51s
trunk passed
+1 :green_heart:
javadoc
2m 51s
trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 44s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 23s
trunk passed
+1 :green_heart:
shadedclient
21m 13s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 25s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 28s
the patch passed
+1 :green_heart:
compile
3m 53s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 53s
the patch passed
+1 :green_heart:
compile
3m 57s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 57s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 6s
the patch passed
+1 :green_heart:
mvnsite
2m 32s
the patch passed
+1 :green_heart:
javadoc
2m 30s
the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 27s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 33s
the patch passed
+1 :green_heart:
shadedclient
20m 56s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 56s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 49s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
26m 10s
hadoop-yarn-client in the patch passed.
+1 :green_heart:
unit
0m 59s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 51s
The patch does not generate ASF License warnings.
163m 39s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/6/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux a7bba7fed213 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 6256a3c5e19afd9e1afc1f8b7e4d236db5aaca86
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/6/testReport/
Max. process+thread count
726 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/6/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
@goiri Thank you very much for your help in reviewing the code!
| gharchive/pull-request | 2023-10-14T01:59:03 | 2025-04-01T06:37:53.450590 | {
"authors": [
"hadoop-yetus",
"slfan1989"
],
"repo": "apache/hadoop",
"url": "https://github.com/apache/hadoop/pull/6189",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
480077860 | HBASE-22810 Initialize a separate ThreadPoolExecutor for taking/restoring snapshot
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
38
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
+1
mvninstall
337
master passed
+1
compile
54
master passed
+1
checkstyle
80
master passed
+1
shadedjars
274
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
37
master passed
0
spotbugs
254
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
250
master passed
_ Patch Compile Tests _
+1
mvninstall
304
the patch passed
+1
compile
56
the patch passed
+1
javac
56
the patch passed
-1
checkstyle
77
hbase-server: The patch generated 4 new + 167 unchanged - 2 fixed = 171 total (was 169)
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
274
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
946
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
33
the patch passed
+1
findbugs
255
the patch passed
_ Other Tests _
+1
unit
6636
hbase-server in the patch passed.
+1
asflicense
27
The patch does not generate ASF License warnings.
10077
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/1/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux 4d977315dc04 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / 8c1edb3bba
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/1/artifact/out/diff-checkstyle-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/1/testReport/
Max. process+thread count
5094 (vs. ulimit of 10000)
modules
C: hbase-server U: hbase-server
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/1/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
46
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
+1
mvninstall
331
master passed
+1
compile
52
master passed
+1
checkstyle
75
master passed
+1
shadedjars
275
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
36
master passed
0
spotbugs
256
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
253
master passed
_ Patch Compile Tests _
+1
mvninstall
301
the patch passed
+1
compile
53
the patch passed
+1
javac
53
the patch passed
-1
checkstyle
75
hbase-server: The patch generated 4 new + 167 unchanged - 2 fixed = 171 total (was 169)
+1
whitespace
1
The patch has no whitespace issues.
+1
shadedjars
265
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
970
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
33
the patch passed
+1
findbugs
249
the patch passed
_ Other Tests _
-1
unit
6854
hbase-server in the patch failed.
+1
asflicense
32
The patch does not generate ASF License warnings.
10271
Reason
Tests
Failed junit tests
hadoop.hbase.master.assignment.TestOpenRegionProcedureHang
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/2/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux 499a55e83057 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / e69af5affe
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/2/artifact/out/diff-checkstyle-hbase-server.txt
unit
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/2/artifact/out/patch-unit-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/2/testReport/
Max. process+thread count
4524 (vs. ulimit of 10000)
modules
C: hbase-server U: hbase-server
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/2/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
37
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
+1
mvninstall
323
master passed
+1
compile
53
master passed
+1
checkstyle
74
master passed
+1
shadedjars
261
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
38
master passed
0
spotbugs
207
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
206
master passed
_ Patch Compile Tests _
+1
mvninstall
297
the patch passed
+1
compile
52
the patch passed
+1
javac
52
the patch passed
-1
checkstyle
72
hbase-server: The patch generated 4 new + 167 unchanged - 2 fixed = 171 total (was 169)
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
263
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
905
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
32
the patch passed
+1
findbugs
243
the patch passed
_ Other Tests _
+1
unit
6706
hbase-server in the patch passed.
+1
asflicense
24
The patch does not generate ASF License warnings.
9937
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/3/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux aec1cc3557ba 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / 27ed2ac071
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/3/artifact/out/diff-checkstyle-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/3/testReport/
Max. process+thread count
4995 (vs. ulimit of 10000)
modules
C: hbase-server U: hbase-server
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/3/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
84
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
0
mvndep
40
Maven dependency ordering for branch
+1
mvninstall
455
master passed
+1
compile
96
master passed
+1
checkstyle
136
master passed
+1
shadedjars
376
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
64
master passed
0
spotbugs
308
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
366
master passed
_ Patch Compile Tests _
0
mvndep
17
Maven dependency ordering for patch
+1
mvninstall
413
the patch passed
+1
compile
100
the patch passed
+1
javac
100
the patch passed
-1
checkstyle
104
hbase-server: The patch generated 3 new + 162 unchanged - 7 fixed = 165 total (was 169)
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
371
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
1282
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
65
the patch passed
+1
findbugs
393
the patch passed
_ Other Tests _
+1
unit
204
hbase-common in the patch passed.
-1
unit
13817
hbase-server in the patch failed.
+1
asflicense
66
The patch does not generate ASF License warnings.
18990
Reason
Tests
Failed junit tests
hadoop.hbase.util.TestFromClientSide3WoUnsafe
hadoop.hbase.client.TestFromClientSide3
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/4/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux cd6837ae2514 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / 27ed2ac071
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/4/artifact/out/diff-checkstyle-hbase-server.txt
unit
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/4/artifact/out/patch-unit-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/4/testReport/
Max. process+thread count
4725 (vs. ulimit of 10000)
modules
C: hbase-common hbase-server U: .
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/4/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
38
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
0
mvndep
39
Maven dependency ordering for branch
+1
mvninstall
340
master passed
+1
compile
74
master passed
+1
checkstyle
93
master passed
+1
shadedjars
263
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
54
master passed
0
spotbugs
257
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
305
master passed
_ Patch Compile Tests _
0
mvndep
14
Maven dependency ordering for patch
+1
mvninstall
286
the patch passed
+1
compile
71
the patch passed
+1
javac
71
the patch passed
-1
checkstyle
69
hbase-server: The patch generated 3 new + 162 unchanged - 7 fixed = 165 total (was 169)
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
264
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
893
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
55
the patch passed
+1
findbugs
310
the patch passed
_ Other Tests _
+1
unit
174
hbase-common in the patch passed.
-1
unit
6870
hbase-server in the patch failed.
+1
asflicense
38
The patch does not generate ASF License warnings.
10643
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/5/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux 2b37abec7345 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / 53db390f60
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/5/artifact/out/diff-checkstyle-hbase-server.txt
unit
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/5/artifact/out/patch-unit-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/5/testReport/
Max. process+thread count
4656 (vs. ulimit of 10000)
modules
C: hbase-common hbase-server U: .
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/5/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
But please fix the checkstyle issues before merging...
I don't think the checkstyle warning is a real issue: the '(' is preceded by a whitespace because we want to keep the code aligned, as before:
/**
* Messages originating from Client to Master.<br>
* C_M_CREATE_TABLE<br>
* Client asking Master to create a table.
*/
C_M_CREATE_TABLE (47, ExecutorType.MASTER_TABLE_OPERATIONS),
/**
* Messages originating from Client to Master.<br>
* C_M_SNAPSHOT_TABLE<br>
* Client asking Master to snapshot an offline table.
*/
C_M_SNAPSHOT_TABLE (48, ExecutorType.MASTER_SNAPSHOT_OPERATIONS),
/**
* Messages originating from Client to Master.<br>
* C_M_RESTORE_SNAPSHOT<br>
* Client asking Master to restore a snapshot.
*/
C_M_RESTORE_SNAPSHOT (49, ExecutorType.MASTER_SNAPSHOT_OPERATIONS),
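For comparison, the checkstyle-clean form simply drops the space before the parenthesis, at the cost of the vertical alignment kept above — a hypothetical reformatting, not something proposed in this PR:
// Checkstyle-clean spelling (hypothetical); loses the column alignment above.
C_M_SNAPSHOT_TABLE(48, ExecutorType.MASTER_SNAPSHOT_OPERATIONS),
C_M_RESTORE_SNAPSHOT(49, ExecutorType.MASTER_SNAPSHOT_OPERATIONS),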
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
58
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
0
mvndep
39
Maven dependency ordering for branch
+1
mvninstall
426
master passed
+1
compile
100
master passed
+1
checkstyle
139
master passed
+1
shadedjars
343
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
70
master passed
0
spotbugs
327
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
390
master passed
_ Patch Compile Tests _
0
mvndep
18
Maven dependency ordering for patch
+1
mvninstall
405
the patch passed
+1
compile
107
the patch passed
+1
javac
107
the patch passed
-1
checkstyle
104
hbase-server: The patch generated 3 new + 162 unchanged - 7 fixed = 165 total (was 169)
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
359
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
1294
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
70
the patch passed
+1
findbugs
432
the patch passed
_ Other Tests _
+1
unit
217
hbase-common in the patch passed.
-1
unit
19484
hbase-server in the patch failed.
+1
asflicense
58
The patch does not generate ASF License warnings.
24660
Reason
Tests
Failed junit tests
hadoop.hbase.client.TestFromClientSide
hadoop.hbase.replication.TestReplicationDisableInactivePeer
hadoop.hbase.snapshot.TestFlushSnapshotFromClient
hadoop.hbase.client.TestFromClientSide3
hadoop.hbase.replication.TestReplicationKillSlaveRSWithSeparateOldWALs
hadoop.hbase.replication.TestReplicationSmallTests
hadoop.hbase.master.TestAssignmentManagerMetrics
hadoop.hbase.replication.TestReplicationSmallTestsSync
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/6/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux 8e8f7bc61373 4.4.0-154-generic #181-Ubuntu SMP Tue Jun 25 05:29:03 UTC 2019 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / d9d5f69fc6
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/6/artifact/out/diff-checkstyle-hbase-server.txt
unit
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/6/artifact/out/patch-unit-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/6/testReport/
Max. process+thread count
4829 (vs. ulimit of 10000)
modules
C: hbase-common hbase-server U: .
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/6/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
| gharchive/pull-request | 2019-08-13T10:23:24 | 2025-04-01T06:37:53.602043 | {
"authors": [
"Apache-HBase",
"Apache9",
"openinx"
],
"repo": "apache/hbase",
"url": "https://github.com/apache/hbase/pull/486",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1442214493 | HBASE-27309 Add major compact table or region operation on master web table page
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
1m 15s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
_ branch-2 Compile Tests _
-1 :x:
mvninstall
0m 19s
root in branch-2 failed.
+1 :green_heart:
spotless
0m 51s
branch has no errors when running spotless:check.
_ Patch Compile Tests _
-1 :x:
mvninstall
0m 6s
root in the patch failed.
+1 :green_heart:
whitespace
0m 0s
The patch has no whitespace issues.
+1 :green_heart:
spotless
0m 40s
patch has no errors when running spotless:check.
_ Other Tests _
+1 :green_heart:
asflicense
0m 11s
The patch does not generate ASF License warnings.
4m 41s
Subsystem
Report/Notes
Docker
ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/1/artifact/yetus-general-check/output/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/4870
JIRA Issue
HBASE-27309
Optional Tests
dupname asflicense javac spotless
uname
Linux ef68985a85d6 5.4.0-1083-aws #90~18.04.1-Ubuntu SMP Fri Aug 5 08:12:44 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/hbase-personality.sh
git revision
branch-2 / ecf3debd42
Default Java
Eclipse Adoptium-11.0.17+8
mvninstall
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/1/artifact/yetus-general-check/output/branch-mvninstall-root.txt
mvninstall
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/1/artifact/yetus-general-check/output/patch-mvninstall-root.txt
Max. process+thread count
44 (vs. ulimit of 30000)
modules
C: hbase-server U: hbase-server
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/1/console
versions
git=2.34.1 maven=3.8.6
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
0m 51s
Docker mode activated.
-0 :warning:
yetus
0m 5s
Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ branch-2 Compile Tests _
+1 :green_heart:
mvninstall
2m 32s
branch-2 passed
+1 :green_heart:
javadoc
0m 24s
branch-2 passed
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
2m 4s
the patch passed
+1 :green_heart:
javadoc
0m 22s
the patch passed
_ Other Tests _
+1 :green_heart:
unit
196m 45s
hbase-server in the patch passed.
207m 33s
Subsystem
Report/Notes
Docker
ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/4870
JIRA Issue
HBASE-27309
Optional Tests
javac javadoc unit
uname
Linux 6f9a8c17aa1e 5.4.0-1085-aws #92~18.04.1-Ubuntu SMP Wed Aug 31 17:21:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/hbase-personality.sh
git revision
branch-2 / ecf3debd42
Default Java
Temurin-1.8.0_352-b08
Test Results
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/1/testReport/
Max. process+thread count
2577 (vs. ulimit of 30000)
modules
C: hbase-server U: hbase-server
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/1/console
versions
git=2.34.1 maven=3.8.6
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
1m 25s
Docker mode activated.
-0 :warning:
yetus
0m 10s
Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ branch-2 Compile Tests _
+1 :green_heart:
mvninstall
2m 46s
branch-2 passed
+1 :green_heart:
javadoc
0m 28s
branch-2 passed
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
2m 41s
the patch passed
+1 :green_heart:
javadoc
0m 26s
the patch passed
_ Other Tests _
+1 :green_heart:
unit
229m 35s
hbase-server in the patch passed.
242m 58s
Subsystem
Report/Notes
Docker
ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/4870
JIRA Issue
HBASE-27309
Optional Tests
javac javadoc unit
uname
Linux 0cf2918f4d9d 5.4.0-131-generic #147-Ubuntu SMP Fri Oct 14 17:07:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/hbase-personality.sh
git revision
branch-2 / ecf3debd42
Default Java
Eclipse Adoptium-11.0.17+8
Test Results
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/1/testReport/
Max. process+thread count
2355 (vs. ulimit of 30000)
modules
C: hbase-server U: hbase-server
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/1/console
versions
git=2.34.1 maven=3.8.6
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
3m 32s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
_ branch-2 Compile Tests _
+1 :green_heart:
mvninstall
3m 15s
branch-2 passed
+1 :green_heart:
spotless
0m 44s
branch has no errors when running spotless:check.
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
2m 39s
the patch passed
+1 :green_heart:
whitespace
0m 0s
The patch has no whitespace issues.
+1 :green_heart:
spotless
0m 41s
patch has no errors when running spotless:check.
_ Other Tests _
+1 :green_heart:
asflicense
0m 9s
The patch does not generate ASF License warnings.
12m 27s
Subsystem
Report/Notes
Docker
ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/2/artifact/yetus-general-check/output/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/4870
Optional Tests
dupname asflicense javac spotless
uname
Linux 9338b06d7ebd 5.4.0-1083-aws #90~18.04.1-Ubuntu SMP Fri Aug 5 08:12:44 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/hbase-personality.sh
git revision
branch-2 / e007e23625
Default Java
Eclipse Adoptium-11.0.17+8
Max. process+thread count
60 (vs. ulimit of 30000)
modules
C: hbase-server U: hbase-server
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/2/console
versions
git=2.34.1 maven=3.8.6
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
1m 44s
Docker mode activated.
-0 :warning:
yetus
0m 5s
Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ branch-2 Compile Tests _
+1 :green_heart:
mvninstall
2m 23s
branch-2 passed
+1 :green_heart:
javadoc
0m 24s
branch-2 passed
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
2m 6s
the patch passed
+1 :green_heart:
javadoc
0m 20s
the patch passed
_ Other Tests _
+1 :green_heart:
unit
191m 2s
hbase-server in the patch passed.
202m 28s
Subsystem
Report/Notes
Docker
ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/4870
Optional Tests
javac javadoc unit
uname
Linux f340ffaa5ce6 5.4.0-1088-aws #96~18.04.1-Ubuntu SMP Mon Oct 17 02:57:48 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/hbase-personality.sh
git revision
branch-2 / e007e23625
Default Java
Temurin-1.8.0_352-b08
Test Results
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/2/testReport/
Max. process+thread count
2804 (vs. ulimit of 30000)
modules
C: hbase-server U: hbase-server
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/2/console
versions
git=2.34.1 maven=3.8.6
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
+0 :ok:
reexec
6m 7s
Docker mode activated.
-0 :warning:
yetus
0m 5s
Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ branch-2 Compile Tests _
+1 :green_heart:
mvninstall
2m 52s
branch-2 passed
+1 :green_heart:
javadoc
0m 27s
branch-2 passed
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
2m 36s
the patch passed
+1 :green_heart:
javadoc
0m 26s
the patch passed
_ Other Tests _
+1 :green_heart:
unit
207m 46s
hbase-server in the patch passed.
224m 48s
Subsystem
Report/Notes
Docker
ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/4870
Optional Tests
javac javadoc unit
uname
Linux 3cc9890179e5 5.4.0-131-generic #147-Ubuntu SMP Fri Oct 14 17:07:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/hbase-personality.sh
git revision
branch-2 / e007e23625
Default Java
Eclipse Adoptium-11.0.17+8
Test Results
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/2/testReport/
Max. process+thread count
2443 (vs. ulimit of 30000)
modules
C: hbase-server U: hbase-server
Console output
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4870/2/console
versions
git=2.34.1 maven=3.8.6
Powered by
Apache Yetus 0.12.0 https://yetus.apache.org
This message was automatically generated.
| gharchive/pull-request | 2022-11-09T14:34:34 | 2025-04-01T06:37:53.693117 | {
"authors": [
"Apache-HBase",
"SiCheng-Zheng"
],
"repo": "apache/hbase",
"url": "https://github.com/apache/hbase/pull/4870",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
480281328 | HBASE-22845 Revert MetaTableAccessor#makePutFromTableState access to public
HBCK2 is dependent on it
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
62
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
-0
test4tests
0
The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ master Compile Tests _
+1
mvninstall
387
master passed
+1
compile
23
master passed
+1
checkstyle
30
master passed
+1
shadedjars
269
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
24
master passed
0
spotbugs
72
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
69
master passed
_ Patch Compile Tests _
+1
mvninstall
294
the patch passed
+1
compile
24
the patch passed
+1
javac
24
the patch passed
+1
checkstyle
28
the patch passed
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
263
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
955
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
21
the patch passed
+1
findbugs
77
the patch passed
_ Other Tests _
+1
unit
108
hbase-client in the patch passed.
+1
asflicense
11
The patch does not generate ASF License warnings.
3044
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-489/1/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/489
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux cb8f6c7cb6e6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-489/out/precommit/personality/provided.sh
git revision
master / 8c1edb3bba
Default Java
1.8.0_181
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-489/1/testReport/
Max. process+thread count
291 (vs. ulimit of 10000)
modules
C: hbase-client U: hbase-client
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-489/1/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
| gharchive/pull-request | 2019-08-13T17:41:50 | 2025-04-01T06:37:53.716679 | {
"authors": [
"Apache-HBase",
"jatsakthi"
],
"repo": "apache/hbase",
"url": "https://github.com/apache/hbase/pull/489",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1440089068 | HIVE-26656:Remove hsqldb dependency in hive due to CVE-2022-41853
What changes were proposed in this pull request?
Remove hsqldb dependency in hive
Why are the changes needed?
fix cve
Does this PR introduce any user-facing change?
no
How was this patch tested?
manual
@saihemanth-cloudera please see this
| gharchive/pull-request | 2022-11-08T12:17:43 | 2025-04-01T06:37:53.737222 | {
"authors": [
"devaspatikrishnatri"
],
"repo": "apache/hive",
"url": "https://github.com/apache/hive/pull/3740",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1666166720 | HIVE-26930: Support for increased retention of Notification Logs and Change Manager entries
What changes does this PR contain?
To support the Planned/Unplanned Failover, we need the capability to increase the retention period for both the Notification Logs and Change Manager entries until the successful reverse replication is done (i.e. the Optimized Bootstrap). A database-level property 'repl.db.under.failover.sync.pending' was introduced to signify this state. This PR contains the changes for,
selective deletion of notification events that are not relevant to the database(s) in failover
skipping the CM clearer thread execution until the Optimized Bootstrap is done
Why this change is needed?
The change is needed to make the Optimized Bootstrap and Point-in-time consistency possible. If the relevant Notification logs and Change Manager entries are not retained, we can't perform the Optimized Bootstrap.
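For orientation only (this is not code from the PR): 'repl.db.under.failover.sync.pending' is a database-level property, so it lives in the database's DBPROPERTIES and is normally managed by the replication flow itself. A hedged Java/JDBC sketch of where such a property sits, with made-up host and database names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FailoverPropertySketch {
    public static void main(String[] args) throws Exception {
        // Assumes HiveServer2 on localhost and a database called "sales"; both are hypothetical.
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
             Statement stmt = conn.createStatement()) {
            // Normally set by the REPL tooling, not by hand; shown only to illustrate the property's scope.
            stmt.execute("ALTER DATABASE sales SET DBPROPERTIES "
                    + "('repl.db.under.failover.sync.pending'='true')");
            // The property then shows up among the database's properties.
            stmt.execute("DESCRIBE DATABASE EXTENDED sales");
        }
    }
}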
Were the changes tested?
I have included relevant unit tests in this PR, and will also perform manual verification after deploying the changes on a cluster.
Does this PR introduce any user-facing change?
No
Closing this PR, will raise another.
| gharchive/pull-request | 2023-04-13T10:40:30 | 2025-04-01T06:37:53.740410 | {
"authors": [
"subhasisgorai"
],
"repo": "apache/hive",
"url": "https://github.com/apache/hive/pull/4230",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1577983566 | [Task]: update MSSQL JDBC Driver
What needs to happen?
Update version of MSSQL jdbc driver included in Hop installation.
https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc
Issue Priority
Priority: 2
Issue Component
Component: Database
already done with #2445
| gharchive/issue | 2023-02-09T14:22:18 | 2025-04-01T06:37:53.742472 | {
"authors": [
"hansva"
],
"repo": "apache/hop",
"url": "https://github.com/apache/hop/issues/2291",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2069132555 | 5.3.x - HTTPCLIENT-2314
Make sure an UnknownHostException is thrown, even if a custom DnsResolver implementation is used.
Fixes the issue of HTTPCLIENT-2314
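A minimal, hedged sketch of the scenario this fix targets (not the project's own test code): a custom DnsResolver is plugged into the connection manager, and host-resolution failures are still expected to surface to the caller as java.net.UnknownHostException. The delegating resolver below is purely illustrative.

import java.net.InetAddress;
import java.net.UnknownHostException;
import org.apache.hc.client5.http.DnsResolver;
import org.apache.hc.client5.http.SystemDefaultDnsResolver;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManagerBuilder;

public class CustomDnsResolverSketch {
    public static void main(String[] args) throws Exception {
        DnsResolver delegating = new DnsResolver() {
            @Override
            public InetAddress[] resolve(String host) throws UnknownHostException {
                // Custom resolution logic would go here; delegate to the system resolver for the sketch.
                return SystemDefaultDnsResolver.INSTANCE.resolve(host);
            }

            @Override
            public String resolveCanonicalHostname(String host) throws UnknownHostException {
                return SystemDefaultDnsResolver.INSTANCE.resolveCanonicalHostname(host);
            }
        };

        PoolingHttpClientConnectionManager cm = PoolingHttpClientConnectionManagerBuilder.create()
                .setDnsResolver(delegating)
                .build();
        try (CloseableHttpClient client = HttpClients.custom().setConnectionManager(cm).build()) {
            // A request to a non-existent host should still fail with UnknownHostException, custom resolver or not.
        }
    }
}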
@ok2c Thanks for the review. Done
| gharchive/pull-request | 2024-01-07T13:33:38 | 2025-04-01T06:37:53.743637 | {
"authors": [
"phax"
],
"repo": "apache/httpcomponents-client",
"url": "https://github.com/apache/httpcomponents-client/pull/533",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2351276506 | [SUPPORT] URI too long error
Describe the problem you faced
I'm using Spark 3.5 + Hudi 0.15.0 with a partitioned table. When I choose req_date and req_hour as the partition column names, I get this error, although the task eventually finishes successfully;
when I choose date and hour as the partition column names, the error disappears.
Expected behavior
We should get no errors when we just make partition column names a bit longer.
Environment Description
Hudi version : 0.15.0
Spark version : 3.5.0
Hive version : NA
Hadoop version : 3.3.6
Storage (HDFS/S3/GCS..) : GCS
Running on Docker? (yes/no) : no
Stacktrace
2024-06-13 13:21:13 ERROR PriorityBasedFileSystemView:129 - Got error running preferred function. Trying secondary
org.apache.hudi.exception.HoodieRemoteException: URI Too Long
at org.apache.hudi.common.table.view.RemoteHoodieTableFileSystemView.loadPartitions(RemoteHoodieTableFileSystemView.java:447) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.common.table.view.RemoteHoodieTableFileSystemView.loadPartitions(RemoteHoodieTableFileSystemView.java:465) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.common.table.view.PriorityBasedFileSystemView.lambda$loadPartitions$6e5c444d$1(PriorityBasedFileSystemView.java:187) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.common.table.view.PriorityBasedFileSystemView.execute(PriorityBasedFileSystemView.java:69) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.common.table.view.PriorityBasedFileSystemView.loadPartitions(PriorityBasedFileSystemView.java:185) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.requestClean(CleanPlanActionExecutor.java:133) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.requestClean(CleanPlanActionExecutor.java:174) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.execute(CleanPlanActionExecutor.java:200) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.table.HoodieSparkCopyOnWriteTable.scheduleCleaning(HoodieSparkCopyOnWriteTable.java:212) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieTableServiceClient.scheduleTableServiceInternal(BaseHoodieTableServiceClient.java:647) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieTableServiceClient.clean(BaseHoodieTableServiceClient.java:746) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:843) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:816) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:847) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.autoCleanOnCommit(BaseHoodieWriteClient.java:581) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.mayBeCleanAndArchive(BaseHoodieWriteClient.java:560) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:251) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:108) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.HoodieSparkSqlWriterInternal.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:1082) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.HoodieSparkSqlWriterInternal.writeInternal(HoodieSparkSqlWriter.scala:508) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.HoodieSparkSqlWriterInternal.write(HoodieSparkSqlWriter.scala:187) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:125) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:168) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75) ~[spark-sql_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73) ~[spark-sql_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84) ~[spark-sql_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:107) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900) ~[spark-sql_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:107) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:473) ~[spark-catalyst_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76) ~[spark-sql-api_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:473) ~[spark-catalyst_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32) ~[spark-catalyst_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267) ~[spark-catalyst_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263) ~[spark-catalyst_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32) ~[spark-catalyst_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32) ~[spark-catalyst_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:449) ~[spark-catalyst_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:98) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:85) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:83) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:142) ~[spark-sql_2.12-3.5.0.jar:0.15.0]
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:859) ~[spark-sql_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:388) ~[spark-sql_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:361) ~[spark-sql_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:240) ~[spark-sql_2.12-3.5.0.jar:3.5.0]
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) [scala-library-2.12.18.jar:?]
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) [scala-library-2.12.18.jar:?]
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) [scala-library-2.12.18.jar:?]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) [spark-core_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1032) [spark-core_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:194) [spark-core_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:217) [spark-core_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91) [spark-core_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1124) [spark-core_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1133) [spark-core_2.12-3.5.0.jar:3.5.0]
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) [spark-core_2.12-3.5.0.jar:3.5.0]
Caused by: org.apache.hudi.org.apache.http.client.HttpResponseException: URI Too Long
at org.apache.hudi.org.apache.http.impl.client.AbstractResponseHandler.handleResponse(AbstractResponseHandler.java:69) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.org.apache.http.client.fluent.Response.handleResponse(Response.java:90) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.org.apache.http.client.fluent.Response.returnContent(Response.java:97) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.common.table.view.RemoteHoodieTableFileSystemView.executeRequest(RemoteHoodieTableFileSystemView.java:189) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.common.table.view.RemoteHoodieTableFileSystemView.loadPartitions(RemoteHoodieTableFileSystemView.java:445) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
... 71 more
@michael1991 Thanks for raising this. Can you help me reproduce this issue? I tried the script below, but it was working fine for me.
from faker import Faker
import pandas as pd

fake = Faker()
data = [{"ID": fake.uuid4(), "EventTime": "2023-03-04 14:44:42.046661",
         "FullName": fake.name(), "Address": fake.address(),
         "CompanyName": fake.company(), "JobTitle": fake.job(),
         "EmailAddress": fake.email(), "PhoneNumber": fake.phone_number(),
         "RandomText": fake.sentence(), "CityNameDummyBigFieldName": fake.city(), "ts": "1",
         "StateNameDummyBigFieldName": fake.state(), "Country": fake.country()} for _ in range(1000)]
pandas_df = pd.DataFrame(data)

hoodie_properties = {
    'hoodie.datasource.write.table.type': 'COPY_ON_WRITE',
    'hoodie.datasource.write.operation': 'upsert',
    'hoodie.datasource.write.hive_style_partitioning': 'true',
    'hoodie.datasource.write.recordkey.field': 'ID',
    'hoodie.datasource.write.partitionpath.field': 'StateNameDummyBigFieldName,CityNameDummyBigFieldName',
    'hoodie.table.name': 'test'
}

# `spark` (an active SparkSession) and `PATH` (the target table path) are assumed to be defined.
spark.sparkContext.setLogLevel("WARN")
df = spark.createDataFrame(pandas_df)
df.write.format("hudi").options(**hoodie_properties).mode("overwrite").save(PATH)
for i in range(1, 50):
    df.write.format("hudi").options(**hoodie_properties).mode("append").save(PATH)
Hi @ad1happy2go , glad to see you again ~
Could you try column names with underscores? I'm not sure whether enabling URL encoding for partitions, combined with partition column names that contain underscores, could make this happen.
@michael1991
How many partitions are there in the table? Is it possible to get the URI? I was not able to reproduce this though.
@ad1happy2go Partitions are hourly, for example gs://bucket/tables/hudi/r_date=2024-06-17/r_hour=00. The problem only occurs with two partition columns whose names contain underscores; with a single partition column like yyyyMMddHH it works fine. I'm not sure of the exact cause.
Can you try reproducing this issue with the sample code, @michael1991? That will help us triage it better.
I had the same problem when I deleted the data of some partitions. The table uses a two-level partition, event_date=2024-12-03/event_name=active, with about 200 partitions every day.
Environment Description
Hudi version : 0.15.0
Spark version : 3.5.1
Hive version : NA
Hadoop version : 3.3.6
Storage (HDFS/S3/GCS..) : GCS
24/12/06 06:22:39 WARN HttpParser: URI is too large >8192
24/12/06 06:22:39 ERROR PriorityBasedFileSystemView: Got error running preferred function. Trying secondary
org.apache.hudi.exception.HoodieRemoteException: URI Too Long
at org.apache.hudi.common.table.view.RemoteHoodieTableFileSystemView.loadPartitions(RemoteHoodieTableFileSystemView.java:447) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.common.table.view.RemoteHoodieTableFileSystemView.loadPartitions(RemoteHoodieTableFileSystemView.java:465) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.common.table.view.PriorityBasedFileSystemView.lambda$loadPartitions$6e5c444d$1(PriorityBasedFileSystemView.java:187) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.common.table.view.PriorityBasedFileSystemView.execute(PriorityBasedFileSystemView.java:69) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.common.table.view.PriorityBasedFileSystemView.loadPartitions(PriorityBasedFileSystemView.java:185) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.requestClean(CleanPlanActionExecutor.java:133) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.requestClean(CleanPlanActionExecutor.java:174) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.execute(CleanPlanActionExecutor.java:200) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.table.HoodieSparkCopyOnWriteTable.scheduleCleaning(HoodieSparkCopyOnWriteTable.java:212) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieTableServiceClient.scheduleTableServiceInternal(BaseHoodieTableServiceClient.java:647) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieTableServiceClient.clean(BaseHoodieTableServiceClient.java:746) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:843) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:816) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:847) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.autoCleanOnCommit(BaseHoodieWriteClient.java:581) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.mayBeCleanAndArchive(BaseHoodieWriteClient.java:560) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:251) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:108) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.HoodieSparkSqlWriterInternal.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:1082) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.HoodieSparkSqlWriterInternal.writeInternal(HoodieSparkSqlWriter.scala:508) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.HoodieSparkSqlWriterInternal.write(HoodieSparkSqlWriter.scala:187) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:125) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:168) ~[hudi-spark3.5-bundle_2.12-0.15.0.jar:0.15.0]
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:107) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:107) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:473) ~[spark-catalyst_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76) [spark-sql-api_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:473) [spark-catalyst_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32) [spark-catalyst_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267) [spark-catalyst_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263) [spark-catalyst_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32) [spark-catalyst_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32) [spark-catalyst_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:449) [spark-catalyst_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:98) [spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:85) [spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:83) [spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:142) [spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:859) [spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:388) [spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:361) [spark-sql_2.12-3.5.1.jar:3.5.1]
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:240) [spark-sql_2.12-3.5.1.jar:3.5.1]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) [py4j-0.10.9.7.jar:?]
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374) [py4j-0.10.9.7.jar:?]
at py4j.Gateway.invoke(Gateway.java:282) [py4j-0.10.9.7.jar:?]
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) [py4j-0.10.9.7.jar:?]
at py4j.commands.CallCommand.execute(CallCommand.java:79) [py4j-0.10.9.7.jar:?]
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) [py4j-0.10.9.7.jar:?]
at py4j.ClientServerConnection.run(ClientServerConnection.java:106) [py4j-0.10.9.7.jar:?]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
| gharchive/issue | 2024-06-13T14:16:34 | 2025-04-01T06:37:53.783841 | {
"authors": [
"ad1happy2go",
"clp007",
"michael1991"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/issues/11446",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2238815549 | [HUDI-7576] Improve efficiency of getRelativePartitionPath, reduce computation of partitionPath in AbstractTableFileSystemView
Change Logs
Improve the efficiency of getRelativePartitionPath by reducing the number of operations on the path object that are required to get the final result
Reduce the number of times a partitionPath is computed by supplying a partition path argument where possible in the AbstractFileSystemView
Impact
Reduces overhead of building FSViews with large numbers of files
Risk level (write none, low medium or high below)
None
Documentation Update
Describe any necessary documentation update if there is any new feature, config, or user-facing change. If not, put "none".
The config description must be updated if new configs are added or the default value of the configs are changed
Any new feature or user-facing change requires updating the Hudi website. Please create a Jira ticket, attach the
ticket number here and follow the instruction to make
changes to the website.
Contributor's checklist
[ ] Read through contributor's guide
[ ] Change Logs and Impact were stated clearly
[ ] Adequate tests were added if applicable
[ ] CI passed
@the-other-tim-brown Can you fix the Azure CI failure?
@danny0405 error is:
TestUpsertPartitioner.testUpsertPartitionerWithSmallFileHandlingPickingMultipleCandidates:470 expected: <[BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-1, partitionPath=2016/03/15}, BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-2, partitionPath=2016/03/15}, BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-3, partitionPath=2016/03/15}]> but was: <[BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-3, partitionPath=2016/03/15}, BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-2, partitionPath=2016/03/15}, BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-1, partitionPath=2016/03/15}]>
I'll put up a separate minor pr to make the ordering deterministic for small file handling
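A hedged illustration of that follow-up idea, not the actual PR #11008: the expected and actual bucket lists above differ only in order, so one way to make the small-file handling deterministic is to sort candidates by a stable key such as the file id prefix. The BucketInfo holder below is a stand-in, not Hudi's class.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class BucketOrderingSketch {
    // Stand-in for the bucket metadata seen in the failure message (fileIdPrefix values like "fg-1").
    static final class BucketInfo {
        final String fileIdPrefix;
        BucketInfo(String fileIdPrefix) { this.fileIdPrefix = fileIdPrefix; }
    }

    static List<BucketInfo> deterministicOrder(List<BucketInfo> buckets) {
        List<BucketInfo> sorted = new ArrayList<>(buckets);
        // Sorting by a stable key removes the order-dependence that made the assertion flaky.
        sorted.sort(Comparator.comparing(b -> b.fileIdPrefix));
        return sorted;
    }

    public static void main(String[] args) {
        List<BucketInfo> buckets = List.of(new BucketInfo("fg-3"), new BucketInfo("fg-1"), new BucketInfo("fg-2"));
        deterministicOrder(buckets).forEach(b -> System.out.println(b.fileIdPrefix)); // fg-1, fg-2, fg-3
    }
}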
@danny0405 https://github.com/apache/hudi/pull/11008
@danny0405 can you take another look when you get a chance? I have updated a few spots in the code
| gharchive/pull-request | 2024-04-12T00:25:32 | 2025-04-01T06:37:53.791398 | {
"authors": [
"danny0405",
"the-other-tim-brown"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/11001",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1084417784 | [HUDI-3069] compact improve
Brief change log
Improve compaction.
I found that when the compaction plan is generated, the delta log files under each file group are arranged in the natural (ascending) order of instant time. In the majority of cases the latest data is in the latest delta log file, so if we sort the log files by instant time from largest to smallest, we can largely avoid rewriting data during compaction and thereby reduce the compaction time.
In addition, when reading a delta log file, we compare the data in the external spillable map with the delta log data. If the old record is selected, there is no need to rewrite the data in the external spillable map; rewriting data wastes a lot of resources when the data is spilled to disk.
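A hedged sketch of the ordering idea described above, using plain strings rather than Hudi's log file classes: instant times are fixed-width timestamps, so sorting them in reverse lexicographic order puts the newest delta log file first.

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LogFileOrderingSketch {
    public static void main(String[] args) {
        // Hypothetical base instant times of a file group's delta log files.
        List<String> logInstantTimes = Arrays.asList("20211218010101", "20211220063846", "20211219120000");
        // The plan previously used natural (ascending) order; reverse it so the newest file is merged first.
        logInstantTimes.sort(Comparator.reverseOrder());
        System.out.println(logInstantTimes); // [20211220063846, 20211219120000, 20211218010101]
    }
}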
This pull request is already covered by existing tests, such as (please describe tests).
Committer checklist
[*] Has a corresponding JIRA in PR title & commit()
[*] Commit message is descriptive of the change
[ ] CI is green
[ ] Necessary doc changes done or have another open PR
[ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
@yihua : Can you follow up on the review please.
| gharchive/pull-request | 2021-12-20T06:38:46 | 2025-04-01T06:37:53.795401 | {
"authors": [
"nsivabalan",
"scxwhite"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/4400",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1221813896 | [HUDI-3931][DOCS] Guide to setup async metadata indexing
@nsivabalan @bhasudha @xushiyan Added this guide to go under Services tab. We can land this doc. I'll update the blog #5449 with more design elements. That can go after multi-modal index blog.
| gharchive/pull-request | 2022-04-30T11:57:19 | 2025-04-01T06:37:53.797208 | {
"authors": [
"codope"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/5476",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1293816447 | [HUDI-4360] Fix HoodieDropPartitionsTool based on refactored meta sync
What is the purpose of the pull request
This PR fixes HoodieDropPartitionsTool based on refactored meta sync and the failed Java CI on master.
Brief change log
Fix the usage of old configs and APIs in HoodieDropPartitionsTool.
Verify this pull request
This pull request is a trivial rework / code cleanup without any test coverage.
Committer checklist
[ ] Has a corresponding JIRA in PR title & commit
[ ] Commit message is descriptive of the change
[ ] CI is green
[ ] Necessary doc changes done or have another open PR
[ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
The changes are limited to the HoodieDropPartitionsTool only and should not affect others. Java CI passes. Landing this to fix the master soon.
| gharchive/pull-request | 2022-07-05T05:45:53 | 2025-04-01T06:37:53.800610 | {
"authors": [
"yihua"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/6043",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1953339575 | [HUDI-6798] Add record merging mode and implement event-time ordering in the new file group reader
Change Logs
This PR adds a new table config hoodie.record.merge.mode to control the record merging mode and behavior in the new file group reader (HoodieFileGroupReader) and implements event-time ordering in it. The table config hoodie.record.merge.mode is going to be the single config that determines how the record merging happens in release 1.0 and beyond. Detailed changes include:
Adds RecordMergeMode to define three merging modes:
OVERWRITE_WITH_LATEST: using transaction time to merge records, i.e., the record from later transaction overwrites the earlier record with the same key. This corresponds to the behavior of existing payload class OverwriteWithLatestAvroPayload.
EVENT_TIME_ORDERING: using event time as the ordering to merge records, i.e., the record with the larger event time overwrites the record with the smaller event time on the same key, regardless of transaction time. The event time or preCombine field needs to be specified by the user. This corresponds to the behavior of existing payload class DefaultHoodieRecordPayload.
CUSTOM: using custom merging logic specified by the user. When a user specifies a custom record merger strategy or payload class with Avro record merger, this is going to be specified so the record merging follows user-defined logic as before.
As of now, setting hoodie.record.merge.mode is not mandatory (HUDI-7850 as a follow-up to make it mandatory in release 1.0). This PR adds the inference logic based on the payload class name, payload type, and record merger strategy in HoodieTableMetaClient to properly set hoodie.record.merge.mode in the table config.
Adds merging logic for OVERWRITE_WITH_LATEST and EVENT_TIME_ORDERING in HoodieBaseFileGroupRecordBuffer that does not have to go through the record merger APIs, to simplify the implementation (opening up for further optimization when possible). As a fallback, users can always set CUSTOM as the record merge mode to leverage a payload class or record merger implementation for transaction- or event-time-based merging.
Adds a custom compareTo API in HoodieBaseFileGroupRecordBuffer to compare ordering field values of different types, due to an issue around ordering values in the delete records (HUDI-7848).
Adjusts tests to cover the new record merge modes.
New unit and functional tests are added around the new logic. Existing unit and functional tests using file group readers on Spark cover all different merging modes.
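As a hedged illustration only (not code from the PR), a Spark job could request the new merge mode through write options roughly as below; the table path, record key, and ordering field are invented, and the accepted value string is assumed to follow the enum names listed above.

import java.util.Collections;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class MergeModeSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("hudi-merge-mode-sketch")
                .master("local[1]")
                .getOrCreate();

        Dataset<Row> df = spark.read().json(spark.createDataset(
                Collections.singletonList("{\"uuid\":\"1\",\"ts\":10,\"fare\":42.0}"),
                Encoders.STRING()));

        df.write()
          .format("hudi")
          .option("hoodie.table.name", "trips")
          .option("hoodie.datasource.write.recordkey.field", "uuid")
          .option("hoodie.datasource.write.precombine.field", "ts")   // event-time / ordering field
          .option("hoodie.record.merge.mode", "EVENT_TIME_ORDERING")  // the table config added by this PR
          .mode(SaveMode.Append)
          .save("/tmp/hudi/trips");

        spark.stop();
    }
}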
Impact
Add record merging mode and implement event-time ordering in the new file group reader
Risk level
medium
Documentation Update
HUDI-7842 to update the docs on the website
Contributor's checklist
[ ] Read through contributor's guide
[ ] Change Logs and Impact were stated clearly
[ ] Adequate tests were added if applicable
[ ] CI passed
CI is green.
| gharchive/pull-request | 2023-10-20T01:46:16 | 2025-04-01T06:37:53.808540 | {
"authors": [
"yihua"
],
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/9894",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2592998408 | IO Implementation using Go CDK
Extends PR #111
Implements #92. The Go CDK has well-maintained implementations for accessing objects stores from S3, Azure, and GCS via a io/fs.Fs-like interface. However, their file interface doesn't support the io.ReaderAt interface or the Seek() function that Iceberg-Go requires for files. Furthermore, the File components are private. So we copied the wrappers and implement the remaining functions inside of Iceberg-Go directly.
In addition, we add support for S3 Read IO using the CDK, providing the option to choose between the existing and new implementation using an extra property.
GCS connection options can be passed in properties map.
@dwilson1988 I saw your note about wanting to work on the CDK features, if you're able to provide some feedback that would be great.
@loicalleyne - happy to take a look. We use this internally in some of our software with Parquet and implemented a ReaderAt. I'll do a more thorough review when I get a chance, but my first thought was to leave it completely separate from the blob.Bucket implementation and let the Create/New funcs simple accept a *blob.Bucket and leave the rest as an exercise to the user. This keeps it more or less completely isolated from the implementation. Thoughts on this direction?
My goal today was just to "get something on paper" to move this forward since the other PR has been stalled since July, I used the other PR as a starting point so I mostly followed the existing patterns. Very open to moving things around if it makes sense. Do you have any idea how your idea would work with the interfaces defined in io.go?
Understood! I'll dig into your last question and get back to you.
Okay, played around a bit and here's where my head is at.
The main reason I'd like to isolated the creation of a *blob.Bucket is I've found that the particular implementation of bucket access can get tricky and rather than support it in this package for all situations, support the most common usage in io.LoadFS/inferFileIOFromSchema and change io.CreateBlobFileIO to accept a *url.URL and a *blob.Bucket. This enables a user to open a bucket with whatever implementation they so choose (GCS, Azure, S3, MinIO, Mem, FileSystem, etc) and there's less code here to maintain.
What I came up with is changing CreateBlobFileIO to:
// CreateBlobFileIO creates a new BlobFileIO instance
func CreateBlobFileIO(parsed *url.URL, bucket *blob.Bucket) (*BlobFileIO, error) {
ctx := context.Background()
return &BlobFileIO{Bucket: bucket, ctx: ctx, opts: &blob.ReaderOptions{}, prefix: parsed.Host + parsed.Path}, nil
}
The URL is still critical there, but now we don't have to concern ourselves with credentials to open the bucket except for in LoadFS.
Thoughts on this?
@dwilson1988
Sounds good, I've made the changes, please take a look.
@loicalleyne is this still on your radar?
hi @dwilson1988
yes, I'm wrapping up some work on another project and will be jumping back on this in a day or two.
Cool - just checking. I'll be patient. 🙂
@dwilson1988 made the suggested changes, there's a deprecation warning on the S3 config EndpointResolver methods that I haven't had time to look into, maybe you could take a look?
Hi @dwilson1988, do you think you'll have time to take a look at this?
Hi @dwilson1988, do you think you'll have time to take a look at this?
I opened a PR on your branch earlier today
@zeroshade hoping you can review when you've got time.
I should be able to give this a review tomorrow or Friday. In the meantime can you resolve the conflict in the go.mod? Thanks!
@loicalleyne looks like the integration tests are failing, unable to read the manifest files from the minio instance.
I did some debugging by copying some of the test scenarios into a regular Go program (if anyone can tell me how to run Delve in VsCode on a test that uses testify please let me know), running the docker compose file and manually running the commands in iceberg-go\.github\workflows\go-integration.yml (note: to point to the local Minio in Docker I had to run export AWS_S3_ENDPOINT=http://127.0.0.1:9000).
It seems there's something wrong with the bucket prefix and how it interacts with subsequent calls, the prefix is assigned here
ie. it's trying to HEAD object
default/test_null_nan/metadata/00000-770ce240-af4c-49dd-bae9-6871f55f8be1.metadata.jsonwarehouse/default/test_null_nan/metadata/snap-2616202072048292962-1-6c011b0d-0f2a-4b62-bc17-158f94b1c470.avro
Unfortunately I don't have time to investigate any further right now, @dwilson1988 if you've seen this before please let me know.
I've been able to replicate and debug the issue myself locally. Aside from needing to make a bunch of changes to fix the prefix, bucket and key strings, I was still unable to get gocloud.dev/blob/s3blob to find the file appropriately. I followed it down to the call to clientV2.GetObject and the s3v2.GetObjectInput has all the correct values: Bucket: "warehouse", Key: "default/test_all_types/....." etc. and yet minio still reports a 404. So I'm not sure what's going on.
I'll try poking at this tomorrow a bit more and see if i can make a small mainprog that is able to use s3blob to access a file from minio locally as a place to start.
Then I suspect it might be the s3ForcePathStyle option referred to here. It affected Minio in particular once they moved to s3 V2.
@loicalleyne I haven't dug too far into the blob code, is it a relatively easy fix to handle that s3ForcePathStyle?
My understanding is that it's just another property to pass in props. Would also have to add it as a recognized property/constant in io/s3.go I should think.
s3.UsePathStyle
// Allows you to enable the client to use path-style addressing, i.e.,
// https://s3.amazonaws.com/BUCKET/KEY . By default, the S3 client will use virtual
// hosted bucket addressing when possible( https://bucket.s3.amazonaws.com/KEY ).
UsePathStyle [bool](https://pkg.go.dev/builtin#bool)
@loicalleyne can you take a look at the latest changes I made here?
Is it intended to not provide the choice between virtual hosted bucket addressing and path-style addressing?
LGTM otherwise - the tests are passing :)
@loicalleyne following pyiceberg's example, I've added an option to force virtual addressing. That work for you?
LGTM 👍
@dwilson1988 When you get a chance, can you take a look at the changes I made here. I liked your thought on isolating things, but there was still a bunch of specific options for particular bucket types that needed to get accounted for as the options are not always passed via URL due to how Iceberg config properties work.
So I'd like your thoughts or comments on what I ultimately came up with to simplify what @loicalleyne already had while solving the failing tests and whether it fits what you were thinking and using internally. Once this is merged, I'd definitely greatly appreciate contributions for Azure as you said :)
@zeroshade - I'll take a look this weekend!
| gharchive/pull-request | 2024-10-16T20:34:32 | 2025-04-01T06:37:53.825584 | {
"authors": [
"dwilson1988",
"loicalleyne",
"zeroshade"
],
"repo": "apache/iceberg-go",
"url": "https://github.com/apache/iceberg-go/pull/176",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2235050946 | Implement caching of manifest-files
Feature Request / Improvement
We currently loop over the manifests of a snapshot just once in most cases. But now that we're compounding operations (DELETE+APPEND), there is a fair chance that we read a manifest more than once. The spec states that manifest files are immutable, which means that we can cache them locally using a method annotated with lru_cache.
I would like to work on this; is it possible to assign it to me?
@swapdewalkar Thanks for picking this up! I've just assigned it to you
Hi @swapdewalkar I wanted to check in and see if you have any updates on this task. If you need any assistance or if there are any obstacles, please let me know—I will be happy to help!
Hi, can we increase the scope of this issue to cache/store all_manifests, data_manifests & delete_manifests? Or do I create a new issue for this? This feature would be useful for tasks like Incremental Scans (Append, Changelog, etc) where we frequently access manifest files. I imagine this to be similar to the java implementation.
Also, since @swapdewalkar hasn't responded and if they do not have the time/bandwidth for the issue, I'm happy to give this a shot! :)
@chinmay-bhat I think we can generalize this quite easily, since from the spec:
Once written, data and metadata files are immutable until they are deleted.
I think we could go as simple as having an lru_cache keyed on the path of the metadata :)
Thanks @Fokko for the quick response.
based on the path to the metadata to cache it
I'm not clear on this. Are you saying we can simply add lru_cache to def manifests(self, io: FileIO) in class Snapshot? And then whenever we need data manifests or delete manifests, we iterate over the cached manifests? Wouldn't it be better to cache those too, since as you said, the files are immutable?
For ex:
@lru_cache
def manifests(self, io: FileIO):
    ......

@lru_cache
def data_manifests(self, io: FileIO):
    return [manifest_file for manifest_file in self.manifests(io) if manifest_file.content == ManifestContent.DATA]
@chinmay-bhat I don't think it is as easy as that. We should ensure that the manifest_list path is part of the cache. We could share the cache between calls, since if you do subsequent queries, and the snapshot hasn't been updated, this would speed up the call quite a bit.
We could also make the FileIO part of the caching key. I don't think that's strictly required, but if something changed in the FileIO we might want to invalidate the cache; I'm open to arguments here.
Thank you for clarifying! Here's how I imagine manifests() would look like :)
@lru_cache()
def manifests(self, manifest_location: str) -> List[ManifestFile]:
if manifest_location is not None:
file = load_file_io().new_input(manifest_location)
return list(read_manifest_list(file))
return []
When we call snapshot.manifests(snapshot.manifest_list), if manifest_list is the same, we simply query the cached files. But if the snapshot is updated, manifest_list is also updated, and calling manifests() triggers a re-read of manifest files.
Is this similar to what you have in mind?
| gharchive/issue | 2024-04-10T08:35:21 | 2025-04-01T06:37:53.833841 | {
"authors": [
"Fokko",
"MehulBatra",
"chinmay-bhat",
"swapdewalkar"
],
"repo": "apache/iceberg-python",
"url": "https://github.com/apache/iceberg-python/issues/595",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2698638819 | Extend the DataFileWriterBuilder tests
In data_file_writer we write out a schema but don't have any tests for that. I think it would be good to write out a schema to validate that the field-IDs are there (they are, I checked by hand). And also add a test where we write a DataFile that has a partition.
@Fokko I would like to try working on this, may I be assigned this?
| gharchive/issue | 2024-11-27T13:55:19 | 2025-04-01T06:37:53.835504 | {
"authors": [
"Fokko",
"jonathanc-n"
],
"repo": "apache/iceberg-rust",
"url": "https://github.com/apache/iceberg-rust/issues/726",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
174940547 | [AIRFLOW-467] Allow defining of project_id in BigQueryHook
Dear Airflow Maintainers,
Please accept this PR that addresses the following issues:
https://issues.apache.org/jira/browse/AIRFLOW-467
Testing Done:
Unit tests are added including backward compatibility tests
Awesome
| gharchive/pull-request | 2016-09-04T09:46:27 | 2025-04-01T06:37:53.846158 | {
"authors": [
"alexvanboxel",
"bolkedebruin"
],
"repo": "apache/incubator-airflow",
"url": "https://github.com/apache/incubator-airflow/pull/1781",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
289619431 | Update README.md
Make sure you have checked all steps below.
JIRA
[ ] My PR addresses the following Airflow JIRA issues and references them in the PR title. For example, "[AIRFLOW-XXX] My Airflow PR"
https://issues.apache.org/jira/browse/AIRFLOW-XXX
Description
[ ] Here are some details about my PR, including screenshots of any UI changes:
Tests
[ ] My PR adds the following unit tests OR does not need testing for this extremely good reason:
Commits
[ ] My commits all reference JIRA issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
Subject is separated from body by a blank line
Subject is limited to 50 characters
Subject does not end with a period
Subject uses the imperative mood ("add", not "adding")
Body wraps at 72 characters
Body explains "what" and "why", not "how"
[ ] Passes git diff upstream/master -u -- "*.py" | flake8 --diff
I'm closing this as there has been no movement from the submitter.
| gharchive/pull-request | 2018-01-18T12:57:17 | 2025-04-01T06:37:53.851826 | {
"authors": [
"r39132",
"topedmaria"
],
"repo": "apache/incubator-airflow",
"url": "https://github.com/apache/incubator-airflow/pull/2953",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
115411872 |
MLHR-1880 #resolve fixed default value documentation for setMaxLength method.
@ilooner Please merge to 3.2
| gharchive/pull-request | 2015-11-06T01:03:36 | 2025-04-01T06:37:53.852907 | {
"authors": [
"chandnisingh"
],
"repo": "apache/incubator-apex-malhar",
"url": "https://github.com/apache/incubator-apex-malhar/pull/83",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1789841860 | [CELEBORN-769] Change default value of celeborn.client.push.maxReqsInFlight to 16
What changes were proposed in this pull request?
Change default value of celeborn.client.push.maxReqsInFlight to 16.
Why are the changes needed?
The previous value of 4 is too small; 16 is more reasonable.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Pass GA.
cc @AngersZhuuuu @pan3793
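As a hedged illustration (not part of the PR), a job that prefers the old, more conservative behaviour could still override the new default through its Spark configuration; the property name below assumes the usual spark.-prefixed form of Celeborn client settings.

import org.apache.spark.SparkConf;

public class CelebornPushConfigSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("celeborn-push-config-sketch")
                // Override the new default of 16 back to the previous value of 4.
                .set("spark.celeborn.client.push.maxReqsInFlight", "4");
        System.out.println(conf.get("spark.celeborn.client.push.maxReqsInFlight"));
    }
}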
changing the key from celeborn.client.push.maxReqsInFlight to celeborn.client.push.maxRequestsInFlight? it's better to avoid using abbreviations in configurations. WDYT? @pan3793 @waitinfuture
@cfmcgrady We use Reqs in several configurations, and "max" also is an abbr of "maximum" :)
> @cfmcgrady We use Reqs in several configurations, and "max" also is an abbr of "maximum" :)
ok, Spark also has the key like spark.reducer.maxReqsInFlight
> changing the key from celeborn.client.push.maxReqsInFlight to celeborn.client.push.maxRequestsInFlight? it's better to avoid using abbreviations in configurations. WDYT? @pan3793 @waitinfuture
+1 personally, but I think it's OK to use abbreviations, sometimes whole word is too long 😄
| gharchive/pull-request | 2023-07-05T15:51:09 | 2025-04-01T06:37:53.867601 | {
"authors": [
"cfmcgrady",
"pan3793",
"waitinfuture"
],
"repo": "apache/incubator-celeborn",
"url": "https://github.com/apache/incubator-celeborn/pull/1683",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
536759331 | [BUG] dolphinscheduler compile failed
Problem description:
I have cloned the latest code from GitHub, but when I try to compile it, the build fails.
Compile command:
mvn -U clean package -Prelease -Dmaven.test.skip=true
The following is the compile error information:
Current version:
1.2.0 dev branch
Compiling environment:
MacOS 10.15.2
Expected results:
I hope the project can be compiled and run successfully in any environment.
Please read README.md; it contains a 'how to build' section.
The .proto files need to be compiled into Java classes; IDEA has a plugin for that.
I changed the compile command to mvn clean install -Prelease, but it still failed to compile.
Current version:
1.2.0 dev branch? Do you mean the dev branch or the 1.2.0 branch?
I compiled the dev branch, the 1.2.0 branch, and the upcoming Apache 1.2.0 release with no problem.
The current branch is dev
I can't import the Maven dependencies in IntelliJ IDEA; it reports the error 'Cannot resolve io.grpc:grpc-core:1.9.0'.
| gharchive/issue | 2019-12-12T05:04:23 | 2025-04-01T06:37:53.873485 | {
"authors": [
"lenboo",
"qiaozhanwei",
"sunnerrr",
"wuchunfu"
],
"repo": "apache/incubator-dolphinscheduler",
"url": "https://github.com/apache/incubator-dolphinscheduler/issues/1456",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
841616320 | A Linux distribution for docker-compose
As far as I know, the apache/dolphinscheduler image is based on the Alpine image. But when I install Python dependencies, it takes a lot of time, at least an hour. What's more, it needs many other dependencies, for example:
apk update && apk add --no-cache gcc g++ python3-dev libffi-dev librdkafka librdkafka-dev mariadb-connector-c-dev musl-dev libxml2-utils libxslt libxslt-dev py3-numpy py3-pandas
But in the end, I failed to install the Python dependencies because of pandas.
So could you build another image based on a different Linux distribution, such as slim? (See the image size contrast above.)
Thank you very much !
@Joder5 This is not really a problem: on any bare Linux image, none of the libraries you mentioned will exist by default.
In order to improve the update speed, you can set the mirror source of alpine as follows:
# 1. install command/library/software
# If install slowly, you can replcae alpine's mirror with aliyun's mirror, Example:
# RUN sed -i "s/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g" /etc/apk/repositories
# RUN sed -i 's/dl-cdn.alpinelinux.org/mirror.tuna.tsinghua.edu.cn/g' /etc/apk/repositories
As for pip, you can also use the mirror source like https://pypi.tuna.tsinghua.edu.cn/simple
pip install --no-cache-dir -i https://pypi.tuna.tsinghua.edu.cn/simple
As for slim, we will consider it later.
Here is a version of the Dockerfile based on debian:slim; it will be added to the repository after further optimization.
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
FROM openjdk:8-jdk-slim
ARG VERSION
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ Asia/Shanghai
ENV LANG C.UTF-8
ENV DOCKER true
# 1. install command/library/software
# If install slowly, you can replcae alpine's mirror with aliyun's mirror, Example:
RUN echo \
"deb http://mirrors.tuna.tsinghua.edu.cn/debian/ buster main contrib non-free\n\
deb http://mirrors.tuna.tsinghua.edu.cn/debian/ buster-updates main contrib non-free\n\
deb http://mirrors.tuna.tsinghua.edu.cn/debian/ buster-backports main contrib non-free\n\
deb http://mirrors.tuna.tsinghua.edu.cn/debian-security buster/updates main contrib non-free" > /etc/apt/sources.list
RUN apt-get update && \
apt-get install -y tzdata dos2unix python python3 procps netcat sudo tini postgresql-client && \
echo "Asia/Shanghai" > /etc/timezone && \
rm -f /etc/localtime && \
dpkg-reconfigure tzdata && \
rm -rf /var/lib/apt/lists/* /tmp/*
# 2. add dolphinscheduler
ADD ./apache-dolphinscheduler-incubating-${VERSION}-dolphinscheduler-bin.tar.gz /opt/
RUN ln -s /opt/apache-dolphinscheduler-incubating-${VERSION}-dolphinscheduler-bin /opt/dolphinscheduler
ENV DOLPHINSCHEDULER_HOME /opt/dolphinscheduler
# 3. add configuration and modify permissions and set soft links
COPY ./checkpoint.sh /root/checkpoint.sh
COPY ./startup-init-conf.sh /root/startup-init-conf.sh
COPY ./startup.sh /root/startup.sh
COPY ./conf/dolphinscheduler/*.tpl /opt/dolphinscheduler/conf/
COPY ./conf/dolphinscheduler/logback/* /opt/dolphinscheduler/conf/
COPY ./conf/dolphinscheduler/env/dolphinscheduler_env.sh /opt/dolphinscheduler/conf/env/
RUN dos2unix /root/checkpoint.sh && \
dos2unix /root/startup-init-conf.sh && \
dos2unix /root/startup.sh && \
dos2unix /opt/dolphinscheduler/conf/env/dolphinscheduler_env.sh && \
dos2unix /opt/dolphinscheduler/script/*.sh && \
dos2unix /opt/dolphinscheduler/bin/*.sh && \
rm -rf /bin/sh && \
ln -s /bin/bash /bin/sh && \
mkdir -p /tmp/xls /usr/lib/jvm && \
ln -sf /usr/local/openjdk-8 /usr/lib/jvm/java-1.8-openjdk && \
echo "Set disable_coredump false" >> /etc/sudo.conf
# 4. expose port
EXPOSE 5678 1234 12345 50051
ENTRYPOINT ["/usr/bin/tini", "--", "/root/startup.sh"]
Closed by #5158.
| gharchive/issue | 2021-03-26T05:55:47 | 2025-04-01T06:37:53.878513 | {
"authors": [
"Joder5",
"chengshiwen"
],
"repo": "apache/incubator-dolphinscheduler",
"url": "https://github.com/apache/incubator-dolphinscheduler/issues/5155",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
848066617 | [Question] Stop the workflow, but can't stop the yarn application
I run a shell task that uses Sqoop to load data into Hive, and it runs as a MapReduce job. When I stop the workflow, I expect the MR application to be killed, but it was not: the MR application is still RUNNING, and I don't know why.
Which version of DolphinScheduler:
-[1.3.5]
This is the log
It is the same as #4862
Fixed by #4936
| gharchive/issue | 2021-04-01T06:47:20 | 2025-04-01T06:37:53.881164 | {
"authors": [
"CalvinKirs",
"xingchun-chen",
"zjw-zjw"
],
"repo": "apache/incubator-dolphinscheduler",
"url": "https://github.com/apache/incubator-dolphinscheduler/issues/5194",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
989590768 | [FOLLOWUP] create table like clause support copy rollup
Proposed changes
for issue #6474
create table test.table1 like test.table with rollup (r1,r2) -- copy some rollup
create table test.table1 like test.table with rollup -- copy all rollup
create table test.table1 like test.table -- only copy base table
Types of changes
What types of changes does your code introduce to Doris?
Put an x in the boxes that apply
[ ] Bugfix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Documentation Update (if none of the other choices apply)
[ ] Code refactor (Modify the code structure, format the code, etc...)
[ ] Optimization. Including functional usability improvements and performance improvements.
[ ] Dependency. Such as changes related to third-party components.
[x] Other.
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.
[x] I have created an issue on (Fix #6474) and described the bug/feature there in detail
[x] Compiling and unit tests pass locally with my changes
[x] I have added tests that prove my fix is effective or that my feature works
[x] If these changes need document changes, I have updated the document
[x] Any dependent changes have been merged
Further comments
If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...
Please rebase to master
| gharchive/pull-request | 2021-09-07T04:52:54 | 2025-04-01T06:37:53.887193 | {
"authors": [
"liutang123",
"qzsee"
],
"repo": "apache/incubator-doris",
"url": "https://github.com/apache/incubator-doris/pull/6580",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
380522516 | Properly reset total size of segmentsToCompact in NewestSegmentFirstIterator
When NewestSegmentFirstIterator searches for segments to compact, it sometimes needs to clear the segments found so far and starts again. The total size of segments is also needed to be reset properly.
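For illustration, a minimal sketch of the reset pattern described here (hypothetical field names, not Druid's actual code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: whenever the candidate list is cleared, the running total must be reset too.
class SegmentCandidates {
    private final List<String> segmentsToCompact = new ArrayList<>();
    private long totalSegmentSize = 0L;

    void add(String segmentId, long size) {
        segmentsToCompact.add(segmentId);
        totalSegmentSize += size;
    }

    // The class of bug this PR fixes: clearing the list without resetting the accumulated size
    // leaves a stale total behind for the next search.
    void reset() {
        segmentsToCompact.clear();
        totalSegmentSize = 0L;
    }
}
```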
@gianm thanks for the quick review. Added a test.
| gharchive/pull-request | 2018-11-14T03:40:39 | 2025-04-01T06:37:53.888436 | {
"authors": [
"jihoonson"
],
"repo": "apache/incubator-druid",
"url": "https://github.com/apache/incubator-druid/pull/6622",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
313715022 | The application.properties file specifies the dubbo.protocol.port attribute, but it has no effect.
Specifying the dubbo.protocol.port attribute in the application.properties file has no effect.
The dubbo-spring-boot-starter version is 0.0.1.
You can use dubbo.protocol.${name}.port instead.
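As a hedged sketch, the ${name} segment is the protocol name, so the property becomes, for example, dubbo.protocol.dubbo.port=20881. A rough programmatic equivalent (class and package names depend on the Dubbo version: com.alibaba.dubbo.config in the 0.0.x starter era, org.apache.dubbo.config in newer releases) would be:

```java
import com.alibaba.dubbo.config.ProtocolConfig;

public class ProtocolPortExample {
    public static ProtocolConfig dubboProtocol() {
        ProtocolConfig protocol = new ProtocolConfig();
        protocol.setName("dubbo");  // the ${name} part of dubbo.protocol.${name}.port
        protocol.setPort(20881);    // what dubbo.protocol.dubbo.port=20881 would configure
        return protocol;
    }
}
```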
| gharchive/issue | 2018-04-12T12:56:10 | 2025-04-01T06:37:53.889933 | {
"authors": [
"huangxincheng",
"mercyblitz"
],
"repo": "apache/incubator-dubbo-spring-boot-project",
"url": "https://github.com/apache/incubator-dubbo-spring-boot-project/issues/105",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
325975446 | I am confused with config server, transporter config.
In my understanding:
If we configure transporter="netty4" on the provider side, it will affect the consumer.
So we can use server="netty4" on the provider side instead.
But why does Dubbo check the server key in DubboProtocol.initClient?
A consumer on a lower version (without netty4 support) will fail to start.
And the transporter key does not affect the consumer now.
The transport config is not overridable.
The configs that can be overridden are ones like timeout, retries, loadbalance, actives, and so on.
@whanice
If we configure transporter="netty4" on the provider side, it will affect the consumer.
Transporter is a ProtocolConfig attribute; the client does not have a ProtocolConfig concept, so the transporter set on the provider side will not be passed to the client.
But why does Dubbo check the server key in DubboProtocol.initClient?
Because the value stored in the server key is passed to the client, the client will try to take the value of the server key first.
A consumer on a lower version (without netty4 support) will fail to start.
I have tried netty4 as the provider and netty3 as the client, and it works normally. Could you provide a demo program for verification?
Thank you very much for your support of dubbo.
Thanks for your replies, guys.
Sure, the TCP connection should not be tied to the NIO framework; no matter whether the server uses netty3, netty4, or mina, it should not affect the client.
What I care about is migration, when the consumer uses dubbo-2.5.x, which does not have the netty4 extension.
When the provider wants to upgrade the Dubbo version to dubbo-2.6.x and use netty4, it will affect the consumer.
So I think changing DubboProtocol.initClient to the following would be better:
// client type setting.
String str = url.getParameter(Constants.CLIENT_KEY, url.getParameter(Constants.TRANSPORTER_KEY, Constants.DEFAULT_REMOTING_CLIENT));
And in the future, I think the server key should not be passed to the consumer.
You are right, I will confirm this question again.
After discussion: as long as we keep netty as the extension point name, there will be no problem.
And in the future, I think the server key should not be passed to the consumer.
Agree, and this can be solved with #2030
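For reference, a hedged sketch of the provider-side configuration being discussed, using the attribute names of <dubbo:protocol> (the exact package depends on the Dubbo version; this is illustrative rather than a recommended setup):

```java
import com.alibaba.dubbo.config.ProtocolConfig;

public class ProviderTransportExample {
    public static ProtocolConfig nettyFourProvider() {
        ProtocolConfig protocol = new ProtocolConfig();
        protocol.setName("dubbo");
        protocol.setPort(20880);
        // Equivalent of <dubbo:protocol server="netty4"/>. As discussed above, this value currently
        // ends up in the URL, so a 2.5.x consumer without the netty4 extension can still be affected.
        protocol.setServer("netty4");
        // protocol.setTransporter("netty4"); // transporter is intended to apply to both sides
        return protocol;
    }
}
```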
| gharchive/issue | 2018-05-24T06:02:58 | 2025-04-01T06:37:53.896324 | {
"authors": [
"carryxyh",
"chickenlj",
"whanice",
"zonghaishang"
],
"repo": "apache/incubator-dubbo",
"url": "https://github.com/apache/incubator-dubbo/issues/1841",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
349050059 | Disable the execution of a specific Filter
Since we inherit someone else's module, is there a way to disable the execution of a specific Filter in our own module without modifying their source code?
Maybe this documentation can help you. http://dubbo.apache.org/#!/docs/dev/impls/filter.md?lang=zh-cn
But that approach can only disable Dubbo's built-in filters; it cannot disable user-defined ones.
This is restricted directly in AbstractInterfaceConfig, so filters that are not part of the Dubbo framework will hit a "No such extension ..." exception:
public void setFilter(String filter) {
checkMultiExtension(Filter.class, "filter", filter);
this.filter = filter;
}
protected static void checkMultiExtension(Class<?> type, String property, String value) {
checkMultiName(property, value);
if (value != null && value.length() > 0) {
String[] values = value.split("\\s*[,]+\\s*");
String[] var4 = values;
int var5 = values.length;
for(int var6 = 0; var6 < var5; ++var6) {
String v = var4[var6];
if (v.startsWith("-")) {
v = v.substring(1);
}
if (!"default".equals(v) && !ExtensionLoader.getExtensionLoader(type).hasExtension(v)) {
throw new IllegalStateException("No such extension " + v + " for " + property + "/" + type.getName());
}
}
}
}
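As a hedged illustration of the two points above: the documented way to disable a filter is the "-" prefix, and the checkMultiExtension shown here is what rejects names that are not registered as extensions on the consumer side:

```java
import com.alibaba.dubbo.config.ReferenceConfig;

public class DisableFilterExample {
    public static void main(String[] args) {
        ReferenceConfig<Object> reference = new ReferenceConfig<>();
        reference.setInterface("com.example.DemoService"); // hypothetical service interface
        // Disables the built-in cache filter; "-myFilter" only works if "myFilter" is registered
        // via the SPI file (META-INF/dubbo/...Filter), otherwise checkMultiExtension throws
        // "No such extension myFilter for filter/...".
        reference.setFilter("-cache,-myFilter");
    }
}
```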
| gharchive/issue | 2018-08-09T09:32:49 | 2025-04-01T06:37:53.899196 | {
"authors": [
"diecui1202",
"luyunfeng"
],
"repo": "apache/incubator-dubbo",
"url": "https://github.com/apache/incubator-dubbo/issues/2222",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
585040952 | Hover bug on 3D scatter when given single-value data
Hey, when I set the data as an object (not directly a two-dimensional array) and the object has only one value, the 3D scatter's hover event does not work. You can try it with the option code below on this example page. I just take the first item of the current data and wrap it in an array again.
series: [{
type: 'scatter3D',
dimensions: [
config.xAxis3D,
config.yAxis3D,
config.zAxis3D,
config.color,
config.symbolSize
],
data: [data.map(function (item, idx) {
return [
item[fieldIndices[config.xAxis3D]],
item[fieldIndices[config.yAxis3D]],
item[fieldIndices[config.zAxis3D]],
item[fieldIndices[config.color]],
item[fieldIndices[config.symbolSize]],
idx
];
})[0]],
symbolSize: 12,
// symbol: 'triangle',
itemStyle: {
borderWidth: 1,
borderColor: 'rgba(255,255,255,0.8)'
},
emphasis: {
itemStyle: {
color: '#fff'
}
}
}]
Is there any solution?
| gharchive/issue | 2020-03-20T12:35:32 | 2025-04-01T06:37:53.901782 | {
"authors": [
"ljjeseller",
"resulyrt93"
],
"repo": "apache/incubator-echarts",
"url": "https://github.com/apache/incubator-echarts/issues/12308",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
327956842 | Can bar charts be grouped?
Can something like the figure be achieved?
See http://gallery.echartsjs.com/editor.html?c=xBk7TY_hWx
| gharchive/issue | 2018-05-31T01:13:32 | 2025-04-01T06:37:53.903633 | {
"authors": [
"htjn",
"pissang"
],
"repo": "apache/incubator-echarts",
"url": "https://github.com/apache/incubator-echarts/issues/8427",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
395839429 | [4.2.0][bug] After setting the legend's backgroundColor, the legend is still covered.
Version
[4.2.0]
Steps to reproduce
legend: {
data: ['line', 'bar'],
textStyle: {
color: ['#fff', '#ccc'],
backgroundColor: 'rgba(255,0,0,.5)'
}
},
Setting textStyle: {
color: ['#fff', '#ccc'],
backgroundColor:'red'
}
The main issue is that when color is an array and backgroundColor is set at the same time, the backgroundColor covers the legend.
Screenshot
Tested: this cannot be reproduced in version 4.2.1.
| gharchive/issue | 2019-01-04T08:00:27 | 2025-04-01T06:37:53.907331 | {
"authors": [
"Ovilia",
"ihwf"
],
"repo": "apache/incubator-echarts",
"url": "https://github.com/apache/incubator-echarts/issues/9683",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1522398549 | fix call the same method twice
Fixes #2670 .
Motivation
Explain the content here.
Explain why you want to make the changes and what problem you're trying to solve.
Modifications
Describe the modifications you've done.
Documentation
Does this pull request introduce a new feature? (yes / no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
If a feature is not applicable for documentation, explain why?
If a feature is not documented yet in this PR, please create a followup issue for adding the documentation
Fix for issue #2670.
Please add the Motivation and Modifications descriptions, and fix the build error. @joaovitoras
| gharchive/pull-request | 2023-01-06T11:11:24 | 2025-04-01T06:37:53.910733 | {
"authors": [
"chenyi19851209",
"jonyangx"
],
"repo": "apache/incubator-eventmesh",
"url": "https://github.com/apache/incubator-eventmesh/pull/2838",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
129918242 | hawqextract column content does not exist error
When running hawq extract, a Python stack trace is returned because pg_aoseg no longer has a column called content.
[gpadmin@node2 ~]$ hawq extract -o rank_table.yaml foo
20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to connect database localhost:5432 gpadmin
20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to extract metadata of table 'foo'
20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- detect FileFormat: AO
20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- extract AO_FileLocations
Traceback (most recent call last):
  File "/usr/local/hawq-master/bin/hawqextract", line 551, in <module>
    sys.exit(main())
  File "/usr/local/hawq-master/bin/hawqextract", line 528, in main
    metadata = extract_metadata(conn, args[0])
  File "/usr/local/hawq-master/bin/hawqextract", line 444, in extract_metadata
    cases[file_format]()
  File "/usr/local/hawq-master/bin/hawqextract", line 363, in extract_AO_metadata
    'Files': get_ao_table_files(rel_pgclass['oid'], rel_pgclass['relfilenode'])
  File "/usr/local/hawq-master/bin/hawqextract", line 322, in get_ao_table_files
    for f in accessor.get_aoseg_files(oid):
  File "/usr/local/hawq-master/bin/hawqextract", line 164, in get_aoseg_files
    return self.exec_query(qry)
  File "/usr/local/hawq-master/bin/hawqextract", line 129, in exec_query
    return self.conn.query(sql).dictresult()
pg.ProgrammingError: ERROR: column "content" does not exist
LINE 2: SELECT content, segno as fileno, eof as filesize
LGTM.
Verified this fixes the yaml file generation.
Would you please file a jira and update the commit message? Like below:
HAWQ-XXX. hawqextract column context does not exist error
BTW, did you test running with the generated yaml file? Thanks.
When attempting to create an Apache JIRA I get a timeout or a null pointer exception. I will try again later.
I only ran the YAML file through yamllint but did not test it in a MapReduce job or anything. I stumbled on the error while working on something else. Also, comparing it to GPDB, the output looks similar.
Fixing the yaml file generation is good enough for this pull request. We can check whether it works with a MapReduce job in a separate jira.
Please update the commit message, then we can merge it in. Thanks.
jira created
https://issues.apache.org/jira/browse/HAWQ-535
You might need to update the git commit message with the jira number, so that when we finish the merge, it can keep the original author information.
Thanks, I amended the commit message.
LGTM
Now the fix is in. Thanks.
@randomtask1155 , please close this pull request since it's already been merged. Thanks.
| gharchive/pull-request | 2016-01-30T00:20:09 | 2025-04-01T06:37:53.919473 | {
"authors": [
"radarwave",
"randomtask1155",
"yaoj2"
],
"repo": "apache/incubator-hawq",
"url": "https://github.com/apache/incubator-hawq/pull/305",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
206155833 | [HIVEMALL-54][SPARK] Add an easy-to-use script for spark-shell
What changes were proposed in this pull request?
This pr added a script to automatically download the latest Spark version, compile Hivemall for the version, and invoke spark-shell with the compiled Hivemall binary.
This pr also included a documentation for hivemall-on-spark installation.
What type of PR is it?
Improvement
What is the Jira issue?
https://issues.apache.org/jira/browse/HIVEMALL-54
How was this patch tested?
Manually tested.
Coverage increased (+0.3%) to 36.142% when pulling 5433fd56db138880c9e0ed0f2d20cf7396d4e5e8 on maropu:AddScriptForSparkShell into 85f8e173a2a97005c00b84140f4b9150060c4a56 on apache:master.
@amaya382 @Lewuathe Could you confirm that the updated bin/spark-shell properly works, when you have some spare time?
@maropu BTW, better to update incubator-hivemall-site once this PR is merged.
yea, I'll update just after this merged.
@myui 👌
Coverage remained the same at 35.844% when pulling 17f2dcf375a1c4c0ad979c80c6591143aefe8a1b on maropu:AddScriptForSparkShell into 85f8e173a2a97005c00b84140f4b9150060c4a56 on apache:master.
@amaya382 Have you confirmed?
@maropu please merge this PR if he says it's fine.
Looks mostly good, but I found minor issues not directly related to this PR.
Some declarations in define-all.spark are incorrect and duplicated.
e.g. train_arowh: https://github.com/maropu/incubator-hivemall/blob/AddScriptForSparkShell/resources/ddl/define-all.spark#L32-L36
@amaya382 Can you make prs to fix them?
Merged.
@amaya382 I made a JIRA ticket: https://issues.apache.org/jira/browse/HIVEMALL-65
@maropu okay, I'll do in a few days
Coverage remained the same at 35.814% when pulling ff03adf1de142fb9cdbe373bed718dda2e8e840e on maropu:AddScriptForSparkShell into 247e1aef87ddc789dacfb74e469263e7e2ab603e on apache:master.
@maropu Could you update incubator-hivemall-site? This feature should be documented.
| gharchive/pull-request | 2017-02-08T10:30:11 | 2025-04-01T06:37:53.931994 | {
"authors": [
"amaya382",
"coveralls",
"maropu",
"myui"
],
"repo": "apache/incubator-hivemall",
"url": "https://github.com/apache/incubator-hivemall/pull/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
238519866 | [HIVEMALL-96-2] Added Geo Spatial UDFs
What changes were proposed in this pull request?
This PR added 5 geo-spatial UDFs: lat2tiley, lon2tilex, tilex2lon, tiley2lat, and haversine_distance.
What type of PR is it?
Feature
What is the Jira issue?
https://issues.apache.org/jira/browse/HIVEMALL-96
How was this patch tested?
Unit tests and manual tests
How to use this feature?
WITH data as (
select 51.51202 as lat, 0.02435 as lon, 17 as zoom
union all
select 51.51202 as lat, 0.02435 as lon, 4 as zoom
union all
select null as lat, 0.02435 as lon, 17 as zoom
)
select
lat, lon, zoom,
tile(lat, lon, zoom) as tile,
(lon2tilex(lon,zoom) + lat2tiley(lat,zoom) * cast(pow(2, zoom) as bigint)) as tile2,
lon2tilex(lon, zoom) as xtile,
lat2tiley(lat, zoom) as ytile,
tiley2lat(lat2tiley(lat, zoom), zoom) as lat2, -- tiley2lat returns center of the tile
tilex2lon(lon2tilex(lon, zoom), zoom) as lon2 -- tilex2lon returns center of the tile
from
data;
select
haversine_distance(35.6833, 139.7667, 34.6603, 135.5232) as km,
haversine_distance(35.6833, 139.7667, 34.6603, 135.5232, true) as mile;
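For reference, a standalone sketch of the haversine great-circle distance that haversine_distance computes (illustrative only, not Hivemall's implementation; the Earth radius is assumed to be 6371 km):

```java
public class HaversineSketch {
    private static final double EARTH_RADIUS_KM = 6371.0;

    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Tokyo to Osaka, roughly 400 km, matching the second SQL example above.
        System.out.println(haversineKm(35.6833, 139.7667, 34.6603, 135.5232));
    }
}
```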
Coverage increased (+0.2%) to 40.19% when pulling 6c391786cf877ef3080db9403192c55700491bae on myui:HIVEMALL-96-2 into c06378a81723e3998f90c08ec7444ead5b6f2263 on apache:master.
| gharchive/pull-request | 2017-06-26T11:54:34 | 2025-04-01T06:37:53.935851 | {
"authors": [
"coveralls",
"myui"
],
"repo": "apache/incubator-hivemall",
"url": "https://github.com/apache/incubator-hivemall/pull/90",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1484477213 | Config the rest frontend service max worker thread
Why are the changes needed?
How was this patch tested?
[ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
[ ] Add screenshots for manual tests if appropriate
[ ] Run test locally before make a pull request
thanks, merged to master
| gharchive/pull-request | 2022-12-08T11:56:39 | 2025-04-01T06:37:53.958441 | {
"authors": [
"turboFei"
],
"repo": "apache/incubator-kyuubi",
"url": "https://github.com/apache/incubator-kyuubi/pull/3946",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1275752618 | The Linkis website needs a GitHub Action to check whether the Linkis website links are available
Some links on the Linkis website are broken.
#341 fixes the issue.
| gharchive/issue | 2022-06-18T10:14:10 | 2025-04-01T06:37:53.959344 | {
"authors": [
"Beacontownfc"
],
"repo": "apache/incubator-linkis-website",
"url": "https://github.com/apache/incubator-linkis-website/issues/354",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1411368691 | [Feature] Dev 1.3.1 metadatasource driver profiles not loaded by default
The linkis-metadata-query-jdbc module's driver profiles are not loaded by default.
It would be best to add some background explaining why these were not added as provided dependencies.
It is suggested to keep these dependency POM configurations in the pom file in a commented-out, test-style form, so users only need to remove the comments to compile and use them.
Add a list of download URLs for the corresponding driver jars, so that users of the official installation package can use them directly, and state which directory they should be copied into.
| gharchive/pull-request | 2022-10-17T11:00:11 | 2025-04-01T06:37:53.960670 | {
"authors": [
"casionone",
"dlimeng"
],
"repo": "apache/incubator-linkis-website",
"url": "https://github.com/apache/incubator-linkis-website/pull/544",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1194150622 | Add junit5 test code for module [linkis-storage]
Add junit5 test code for module [linkis-storage]
Module: linkis-storage
Location:linkis-commons/linkis-storage
Task status:✗(unfinished)/ ✓(finished)
Test code guide:https://linkis.apache.org/zh-CN/community/how-to-write-unit-test-code/
Test code example: https://github.com/apache/incubator-linkis/tree/master/linkis-public-enhancements/linkis-publicservice/linkis-jobhistory/src/test
| Task status | Class | Type | Level of Difficulty | Note |
| --- | --- | --- | --- | --- |
| ✗ | org.apache.linkis.storage.excel.* | utils | normal | |
| ✗ | org.apache.linkis.storage.exception.* | Exceptions | normal | |
| ✗ | org.apache.linkis.storage.factory.impl.BuildHDFSFileSystem | Factories | normal | |
| ✗ | org.apache.linkis.storage.factory.impl.BuildLocalFileSystem | Factories | normal | |
| ✗ | org.apache.linkis.storage.fs.impl.HDFSFileSystem | Factories | normal | |
| ✗ | org.apache.linkis.storage.fs.impl.LocalFileSystem | Factories | normal | |
| ✗ | org.apache.linkis.storage.pipeline.PipelineReader | Factories | normal | |
| ✗ | org.apache.linkis.storage.pipeline.PipelineWriter | Factories | normal | |
| ✗ | org.apache.linkis.storage.csv.* | Cvs | normal | |
| ✗ | org.apache.linkis.storage.domain.DataType | Domain | normal | |
| ✗ | org.apache.linkis.storage.domain.Dolphin | Domain | normal | |
| ✗ | org.apache.linkis.storage.domain.MethodEntity | Domain | normal | |
| ✗ | org.apache.linkis.storage.excel.* | Utils | normal | scala |
| ✗ | org.apache.linkis.storage.io.IOClient | Utils | normal | |
| ✗ | org.apache.linkis.storage.io.IOMethodInterceptorCreator | Utils | normal | |
| ✗ | org.apache.linkis.storage.resultset.html.* | ResultSet | normal | |
| ✗ | org.apache.linkis.storage.resultset.io.* | ResultSet | normal | |
| ✗ | org.apache.linkis.storage.resultset.picture.* | ResultSet | normal | |
| ✗ | org.apache.linkis.storage.resultset.table.* | ResultSet | normal | |
| ✗ | org.apache.linkis.storage.resultset.txt.* | ResultSet | normal | |
| ✗ | org.apache.linkis.storage.resultset.* | ResultSet | normal | |
| ✗ | org.apache.linkis.storage.script.compaction.* | Script | normal | |
| ✗ | org.apache.linkis.storage.script.parser.* | Script | normal | |
| ✗ | org.apache.linkis.storage.script.reader.* | Script | normal | |
| ✗ | org.apache.linkis.storage.script.writer.* | Script | normal | |
| ✗ | org.apache.linkis.storage.script.* | Script | normal | |
| ✗ | org.apache.linkis.storage.source.* | Source | normal | |
| ✗ | org.apache.linkis.storage.utils.* | Utils | normal | |
| ✗ | org.apache.linkis.storage.FSFactory | Utils | normal | |
| ✗ | org.apache.linkis.storage.LineMetaData | Utils | normal | |
| ✗ | org.apache.linkis.storage.LineRecord | Utils | normal | |
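A minimal JUnit 5 skeleton for one of the classes listed above, as a starting point only. It assumes LineRecord exposes a single-argument constructor and a getLine() accessor, as the name suggests; adjust it to the real API and follow the guide linked above:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.apache.linkis.storage.LineRecord;

class LineRecordTest {

    @Test
    void testLineRoundTrip() {
        // Placeholder case; replace with real positive and negative cases per the guide.
        LineRecord record = new LineRecord("hello linkis");
        assertEquals("hello linkis", record.getLine());
    }
}
```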
Completed in dev-1.3.1
| gharchive/issue | 2022-04-06T07:10:42 | 2025-04-01T06:37:53.985627 | {
"authors": [
"husofskyzy",
"ruY9527"
],
"repo": "apache/incubator-linkis",
"url": "https://github.com/apache/incubator-linkis/issues/1908",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
240548199 | Minor. Fix deprecated conf warning log issue
./livy-sshao-server.out.5:17/05/08 16:50:44 WARN LivyConf: The configuration key livy.spark.deployMode has been deprecated as of Livy 0.4 and may be removed in the future. Please use the new key livy.spark.deploy-mode instead.
./livy-sshao-server.out.5:17/05/08 16:50:45 WARN LivyConf: The configuration key livy.spark.scalaVersion has been deprecated as of Livy 0.4 and may be removed in the future. Please use the new key livy.spark.scala-version instead.
./livy-sshao-server.out.5:17/05/08 16:51:04 WARN RSCConf: The configuration key livy.rsc.driver_class has been deprecated as of Livy 0.4 and may be removed in the future. Please use the new key livy.rsc.driver-class instead.
This log is emitted incorrectly even when we use the new configuration key. This is mainly because the logic in logDeprecationWarning that checks the alternative configurations is not correct.
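A hypothetical sketch (not Livy's actual code) of the kind of check logDeprecationWarning needs: only warn about a deprecated key when that key itself is set and its replacement is not.

```java
import java.util.Map;
import java.util.Properties;

class DeprecationWarningSketch {
    // Hypothetical mapping: deprecated key -> replacement key.
    private static final Map<String, String> DEPRECATED_TO_ALT = Map.of(
        "livy.spark.deployMode", "livy.spark.deploy-mode",
        "livy.spark.scalaVersion", "livy.spark.scala-version");

    static void maybeWarn(Properties conf, String key) {
        String alt = DEPRECATED_TO_ALT.get(key);
        // Warn only if the deprecated key was actually used, not when the new key is in place.
        if (alt != null && conf.containsKey(key) && !conf.containsKey(alt)) {
            System.err.printf(
                "The configuration key %s has been deprecated; please use %s instead.%n", key, alt);
        }
    }
}
```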
Codecov Report
Merging #13 into master will increase coverage by 0.18%.
The diff coverage is 94.73%.
@@ Coverage Diff @@
## master #13 +/- ##
============================================
+ Coverage 70.47% 70.65% +0.18%
- Complexity 729 733 +4
============================================
Files 96 96
Lines 5161 5177 +16
Branches 779 781 +2
============================================
+ Hits 3637 3658 +21
+ Misses 1006 996 -10
- Partials 518 523 +5
| Impacted Files | Coverage Δ | Complexity Δ |
| --- | --- | --- |
| ...java/org/apache/livy/client/common/ClientConf.java | 99% <94.73%> (-1%) | 44 <6> (+3) |
| rsc/src/main/java/org/apache/livy/rsc/RSCConf.java | 86.73% <0%> (-1.03%) | 7% <0%> (ø) |
| ...in/java/org/apache/livy/rsc/driver/JobWrapper.java | 80.64% <0%> (ø) | 8% <0%> (+1%) :arrow_up: |
| rsc/src/main/java/org/apache/livy/rsc/rpc/Rpc.java | 78.61% <0%> (+0.62%) | 12% <0%> (ø) :arrow_down: |
| ...ain/java/org/apache/livy/rsc/driver/RSCDriver.java | 78.44% <0%> (+0.86%) | 40% <0%> (-1%) :arrow_down: |
| rsc/src/main/java/org/apache/livy/rsc/Utils.java | 85.71% <0%> (+2.38%) | 16% <0%> (ø) :arrow_down: |
| ...in/java/org/apache/livy/rsc/rpc/RpcDispatcher.java | 69.56% <0%> (+3.26%) | 20% <0%> (+1%) :arrow_up: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 412ccc8...e3a584d. Read the comment docs.
Thanks @ajbozarth , merging to master.
| gharchive/pull-request | 2017-07-05T06:17:46 | 2025-04-01T06:37:53.999130 | {
"authors": [
"codecov-io",
"jerryshao"
],
"repo": "apache/incubator-livy",
"url": "https://github.com/apache/incubator-livy/pull/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
375791871 | Something wrong when creating .rec from my own data
I am trying to create the .rec and .idx files for my own data.
First, run im2rec.py to generate the list:
python im2rec.py --list --recursive --train-ratio 1 [list folder] [images folder]
Second, run im2rec.py to generate the .rec and .idx files:
python im2rec.py --num-thread 4 [list folder] [images folder]
But when I read the index,
imgrec = mx.recordio.MXIndexedRecordIO(args.idx_path, args.bin_path, 'r')
s = imgrec.read_idx(0)
header, _ = mx.recordio.unpack(s)
The "header" has no label value.
Where is the problem?
Thanks.
@mxnet-label-bot [Question]
Thanks a lot for reporting this issue. I tried it out and you are right, that the header does not have the correct values. Using imgrec.read instead of imgrec.read_idx seems to solve this problem.
Thanks a lot for reporting this issue. Is every label in the file incorrect or just a few?
| gharchive/issue | 2018-10-31T04:16:29 | 2025-04-01T06:37:54.004676 | {
"authors": [
"NRauschmayr",
"frankfliu",
"larsonwu0220"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/issues/13053",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
837445662 | OSError: libopenblas.so.0: cannot open shared object file: No such file or directory in mxnet 1.8.0
Description
mxnet 1.8.0 emits the following error when running import mxnet:
OSError: libopenblas.so.0: cannot open shared object file: No such file or directory
Error Message
(generated from the dockerfile attached in the to-reproduce section)
+ Step 1/3 : FROM python:3.7
---> 7fefbebd95b5
+ Step 2/3 : RUN pip install mxnet
---> Running in fc634966f9aa
Collecting mxnet
Downloading mxnet-1.8.0-py2.py3-none-manylinux2014_x86_64.whl (38.7 MB)
Collecting graphviz<0.9.0,>=0.8.1
Downloading graphviz-0.8.4-py2.py3-none-any.whl (16 kB)
Collecting numpy<2.0.0,>1.16.0
Downloading numpy-1.20.1-cp37-cp37m-manylinux2010_x86_64.whl (15.3 MB)
Collecting requests<3,>=2.20.0
Downloading requests-2.25.1-py2.py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB)
Collecting chardet<5,>=3.0.2
Downloading chardet-4.0.0-py2.py3-none-any.whl (178 kB)
Collecting idna<3,>=2.5
Downloading idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.4-py2.py3-none-any.whl (153 kB)
Installing collected packages: urllib3, idna, chardet, certifi, requests, numpy, graphviz, mxnet
+ Successfully installed certifi-2020.12.5 chardet-4.0.0 graphviz-0.8.4 idna-2.10 mxnet-1.8.0 numpy-1.20.1 requests-2.25.1 urllib3-1.26.4
WARNING: You are using pip version 20.3.1; however, version 21.0.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
Removing intermediate container fc634966f9aa
---> b1c12d9f4376
+ Step 3/3 : RUN python -c "import mxnet"
---> Running in 8362fc58b280
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/mxnet/__init__.py", line 23, in <module>
from .context import Context, current_context, cpu, gpu, cpu_pinned
File "/usr/local/lib/python3.7/site-packages/mxnet/context.py", line 23, in <module>
from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
File "/usr/local/lib/python3.7/site-packages/mxnet/base.py", line 351, in <module>
_LIB = _load_lib()
File "/usr/local/lib/python3.7/site-packages/mxnet/base.py", line 342, in _load_lib
lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
File "/usr/local/lib/python3.7/ctypes/__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
+ OSError: libopenblas.so.0: cannot open shared object file: No such file or directory
The command '/bin/sh -c python -c "import mxnet"' returned a non-zero code: 1
(key lines are colored green)
To Reproduce
Steps to reproduce
Prepare the following dockerfile:
FROM python:3.7
RUN pip install mxnet
RUN python -c "import mxnet"
Then, run docker build .
What have you tried to solve it?
Environment
We recommend using our script for collecting the diagnostic information with the following command
curl --retry 10 -s https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py | python3
Environment Information
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Stepping: 13
CPU MHz: 2400.000
BogoMIPS: 4800.00
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 16384K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht pbe syscall nx pdpe1gb lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq dtes64 ds_cpl ssse3 sdbg fma cx16 xtpr pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase bmi1 avx2 bmi2 erms xsaveopt arat
----------Python Info----------
Version : 3.7.9
Compiler : GCC 8.3.0
Build : ('default', 'Nov 18 2020 14:10:47')
Arch : ('64bit', 'ELF')
------------Pip Info-----------
Version : 20.3.1
Directory : /usr/local/lib/python3.7/site-packages/pip
----------MXNet Info-----------
No MXNet installed.
----------System Info----------
Platform : Linux-4.19.76-linuxkit-x86_64-with-debian-10.6
system : Linux
node : 9bcb86c4d6cf
release : 4.19.76-linuxkit
version : #1 SMP Tue May 26 11:42:35 UTC 2020
----------Hardware Info----------
machine : x86_64
processor :
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0155 sec, LOAD: 0.9034 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1894 sec, LOAD: 0.2668 sec.
Error open Gluon Tutorial(cn): https://zh.gluon.ai, <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1091)>, DNS finished in 0.3720698356628418 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0193 sec, LOAD: 0.5068 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0474 sec, LOAD: 0.5751 sec.
Error open Conda: https://repo.continuum.io/pkgs/free/, HTTP Error 403: Forbidden, DNS finished in 0.01907634735107422 sec.
----------Environment----------
Removing intermediate container 9bcb86c4d6cf
I think this is due to a change in CD such that openblas is no longer statically linked into libmxnet. For now you can install openblas separately.
cc @mseth10 @leezu
@harupy it's unclear what you are doing. You need to provide more details.
We observe the same issue when trying to use pip-installed MXNet 1.8 (ubuntu, CPU): https://github.com/awslabs/sockeye/runs/2161521442?check_suite_focus=true
@mseth10 @access2rohit please take a look why the CD didn't package the libopenblas.so
tools/pip/setup.py includes instructions for copying libopenblas.so in v1.8.x, v1.x, and master. But apparently that didn't work in v1.8.x, potentially due to some missing library file names in the jenkins files?
v1.8.x https://github.com/apache/incubator-mxnet/blob/a0535ddfb0246f53f7b851baf861fc06d3ff48c3/tools/pip/setup.py#L170-L172
v1.x https://github.com/apache/incubator-mxnet/blob/cfa1c890a7ecb8b5e29ff4e90d6784141f09c4cd/tools/pip/setup.py#L164-L166
master https://github.com/apache/incubator-mxnet/blob/4d706e8c19b3354878eda9467b149c0ce1fd6d47/tools/pip/setup.py#L165-L167
However, I noted that v1.8.x also attempts to copy libquadmath, which MUST NOT happen due to license of libquadmath. That should be fixed. Fortunately that code didn't run due to the current bug.
The problem is that https://github.com/apache/incubator-mxnet/pull/19514 is missing on v1.8.x
My container suddenly started failing to build, and upon looking, this was the error. I switched to the previous version, 1.7.0.post2, which works perfectly.
@praneethkv we are working on patching the wheels to fix the problem
| gharchive/issue | 2021-03-22T08:11:05 | 2025-04-01T06:37:54.016221 | {
"authors": [
"fhieber",
"harupy",
"leezu",
"praneethkv",
"szha"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/issues/20068",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
312172693 | Improve row_sparse tutorial
Description
@haojin2
Checklist
Essentials
[ ] Passed code style checking (make lint)
[ ] Changes are complete (i.e. I finished coding on this PR)
[ ] All changes have test coverage:
Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
[ ] Code is well-documented:
For user-facing API changes, API doc string has been updated.
For new C++ functions in header files, their functionalities and arguments are documented.
For new examples, README.md is added to explain the what the example does, the source of the dataset, expected performance on test set and reference to the original paper if applicable
[ ] To the my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
Changes
[ ] Feature1, tests, (and when applicable, API doc)
[ ] Feature2, tests, (and when applicable, API doc)
Comments
If this change is a backward incompatible change, why must this change be made.
Interesting edge cases to note here
LGTM!
can you associate this with a jira ?
I don’t think Tiny PR like this needs one but yeah I can create a JIRA item..
| gharchive/pull-request | 2018-04-07T05:40:02 | 2025-04-01T06:37:54.022234 | {
"authors": [
"anirudh2290",
"eric-haibin-lin",
"haojin2"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/10454",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
406569032 | Add dtype visualization to plot_network
Description
Add possibility to print type information alongside shape in mx.vis.plot_network.
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
[x] Changes are complete (i.e. I finished coding on this PR)
[x] All changes have test coverage:
Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
[x] Code is well-documented:
For user-facing API changes, API doc string has been updated.
Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
[x] To the my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
@mxnet-label-bot add [pr-awaiting-review, Visualization]
@szha Does anything else need to be done with this PR?
Merged. Thanks for your contribution!
| gharchive/pull-request | 2019-02-04T23:39:59 | 2025-04-01T06:37:54.026248 | {
"authors": [
"ptrendx",
"vandanavk",
"wkcn"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/14066",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
444137161 | Silence excessive mkldnn logging output on tests.
Description
Silenced excessive logging output:
http://jenkins.mxnet-ci.amazon-ml.com/blue/rest/organizations/jenkins/pipelines/mxnet-validation/pipelines/unix-cpu/branches/PR-14940/runs/1/nodes/283/steps/749/log/?start=0
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
[x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
[x] Changes are complete (i.e. I finished coding on this PR)
[x] All changes have test coverage:
Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
[x] Code is well-documented:
For user-facing API changes, API doc string has been updated.
For new C++ functions in header files, their functionalities and arguments are documented.
For new examples, README.md is added to explain the what the example does, the source of the dataset, expected performance on test set and reference to the original paper if applicable
Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
[x] To the my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
@pengzhao-intel I saw this excessive log output and had a 3h timeout on tests, wanted to see if the PR validation time goes down because of this, probably not related.
@larroy Sorry, I just noticed that you're trying to save CI time? I remember MXNET_MKLDNN_DEBUG is turned on explicitly in CI, so my suggestion might not help in this case.
How about turning it off because we have enough test cases in the CI for MKLDNN now?
Thank you for your improvement. Merge now.
| gharchive/pull-request | 2019-05-14T21:38:18 | 2025-04-01T06:37:54.033504 | {
"authors": [
"TaoLv",
"larroy",
"pengzhao-intel"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/14947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
273198275 | fix row_sparse demo tutorials doc
Description
The row sparse demo tutorial has a mistake: x should be 1x2 and w should be 2x3.
The CI is a bit brittle. Do you mind syncing with master and triggering the CI again?
Ok! I will try @eric-haibin-lin
why is there no change?
Looks like this is already merged in. Closing
| gharchive/pull-request | 2017-11-12T04:25:57 | 2025-04-01T06:37:54.035656 | {
"authors": [
"burness",
"eric-haibin-lin",
"piiswrong"
],
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/8621",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
261870969 | [NETBEANS-54] Module Review defaults
no external library
checked Rat report; unrecognized license headers manually changed
skimmed through the module, did not notice additional problems
Merged into master with 1 positive review. Thank you for reviewing.
| gharchive/pull-request | 2017-09-30T20:44:49 | 2025-04-01T06:37:54.037103 | {
"authors": [
"matthiasblaesing"
],
"repo": "apache/incubator-netbeans",
"url": "https://github.com/apache/incubator-netbeans/pull/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
544282176 | net: tcp: Fix compile error in tcp.h
During build testing for spresense with the latest master, I encountered the following compile error.
tcp/tcp_netpoll.c: In function 'tcp_pollsetup':
./tcp/tcp.h:1191:42: error: expected ')' before ';' token
# define tcp_backlogavailable(c) (false);
^
tcp/tcp_netpoll.c:308:40: note: in expansion of macro 'tcp_backlogavailable'
if (!IOB_QEMPTY(&conn->readahead) || tcp_backlogavailable(conn))
^~~~~~~~~~~~~~~~~~~~
Makefile:102: recipe for target 'tcp_netpoll.o' failed
Actually, line 1191 in tcp/tcp.h was added on 2014-07-06 but has not been used so far.
And line 308 in tcp/tcp_netpoll.c was not reached in my environment before merging the following commit:
commit 90c52e6f8f7efce97ac718c0f98addc13ec880d2
Author: Xiang Xiao <xiaoxiang@xiaomi.com>
Date: Tue Dec 31 09:26:14 2019 -0600
The fix in this PR is very trivial, and I think @xiaoxiang781216 enabled CONFIG_NET_TCPBACKLOG on his environment, which is why he did not encounter the error.
BTW: CONFIG_NET_TCPBACKLOG probably should always be enabled. Otherwise, you will miss connection requests. That configuration is a candidate to be removed and just have connection backlog support enabled at all times.
@patacongo Thanks for the comment. I will modify our defconfigs in separate PR later.
@masayuki2009 Removing support for CONFIG_NET_TCPBACKLOG altogether might be a better solution? Anyone else have an opinion to the contrary?
The TCP backlog was conditioned originally only to support super-tiny networking. But with all of the growth in networking, I think super-tiny networking is not really an option and is certainly not advised in this case since the consequences of disabling backlog are so bad. NuttX now really only supports "small" networking, not super-tiny networking.
@patacongo I think that's a good idea, because we will not use NuttX networking with super-tiny environment. If nobody has an objection on it, please remove support for CONFIG_NET_TCPBACKLOG.
| gharchive/pull-request | 2019-12-31T22:35:45 | 2025-04-01T06:37:54.042425 | {
"authors": [
"masayuki2009",
"patacongo"
],
"repo": "apache/incubator-nuttx",
"url": "https://github.com/apache/incubator-nuttx/pull/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
277190319 | Refactor Whisk Deploy errors into new package to avoid cyclic dependencies
Currently, wskdeployerror.go (and its unit test file) are in the "utils" package; however, if we want to do better unit testing we need to be able to test errors against manifests and the YAML parser. This would mean importing "parsers" into "utils", which leads to cyclic dependency errors in Go.
To overcome this cyclic error, we must refactor the error modules into a new "wskdeployerror" package.
Fixed with wskderrors at https://github.com/apache/incubator-openwhisk-wskdeploy/blob/master/wskderrors/wskdeployerror.go
| gharchive/issue | 2017-11-27T22:22:10 | 2025-04-01T06:37:54.044407 | {
"authors": [
"mrutkows",
"pritidesai"
],
"repo": "apache/incubator-openwhisk-wskdeploy",
"url": "https://github.com/apache/incubator-openwhisk-wskdeploy/issues/650",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2187118195 | support config for jackson buffer recycler pool
the buffer recycler is an important performance feature in Jackson
Jackson 2.17 also changes the default pool implementation and this has proved an issue - see https://github.com/FasterXML/jackson-module-scala/issues/672
my plan for Pekko is to keep the existing ThreadLocal implementation as the default even if Jackson has a different default
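A hedged sketch of what keeping the thread-local pool looks like with the Jackson 2.16+ API; the ObjectMapper wiring here is illustrative and is not Pekko's actual configuration code:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.util.JsonRecyclerPools;
import com.fasterxml.jackson.databind.ObjectMapper;

public class RecyclerPoolExample {
    public static ObjectMapper threadLocalPooledMapper() {
        JsonFactory factory = JsonFactory.builder()
            // Keep the pre-2.17 default: one BufferRecycler per thread via ThreadLocal.
            .recyclerPool(JsonRecyclerPools.threadLocalPool())
            .build();
        return new ObjectMapper(factory);
    }
}
```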
It looks good.
Is it necessary to supplement the documentation?
https://pekko.apache.org/docs/pekko/current/serialization-jackson.html#additional-features
| gharchive/pull-request | 2024-03-14T19:27:35 | 2025-04-01T06:37:54.046643 | {
"authors": [
"laglangyue",
"pjfanning"
],
"repo": "apache/incubator-pekko",
"url": "https://github.com/apache/incubator-pekko/pull/1192",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
104154849 | [REEF-624] Replace all use of DriverBridgeConfiguration with DriverConfiguration
This addressed the issue by
replacing all usages of DriverBridgeConfiguration in examples and tests
with DriverConfiguration
adding IObservable and necessary methods to test drivers
JIRA:
REEF-624
Pull Request:
Closes #
I'll test and merge
There are test failures:
BroadcastReduceTest.TestBroadcastAndReduceOnLocalRuntime:
Cleaning up test.
Org.Apache.REEF.Driver.Bridge.ClrHandlerHelper Start: 0 : 2015-08-31T17:42:23.2005420-07:00 0016
START: 2015-08-31 17:42:23 ClrHandlerHelper::CopyDllsToAppDirectory
Org.Apache.REEF.Driver.Bridge.ClrHandlerHelper Stop: 0 : 2015-08-31T17:42:23.2070308-07:00 0016
EXIT: 2015-08-31 17:42:23 ClrHandlerHelper::CopyDllsToAppDirectory. Duration: [00:00:00.0062184].
Org.Apache.REEF.Driver.Bridge.ClrHandlerHelper Start: 0 : 2015-08-31T17:42:23.2100389-07:00 0016
START: 2015-08-31 17:42:23 ClrHandlerHelper::GetAssembliesListForReefDriverApp
Org.Apache.REEF.Driver.Bridge.ClrHandlerHelper Stop: 0 : 2015-08-31T17:42:23.3015332-07:00 0016
EXIT: 2015-08-31 17:42:23 ClrHandlerHelper::GetAssembliesListForReefDriverApp. Duration: [00:00:00.0916312].
GetLogFile, driverContainerDirectory:C:\src\reef\lang\cs\TestResults\Deploy_mweimer 2015-08-31 17_42_20\Out\REEF_LOCAL_RUNTIME\ReefClrBridge-1441068147372\driver
Lines read from log file : 181
Cleaning up test.
Assert.IsTrue failed.
at Org.Apache.REEF.Tests.Functional.ReefFunctionalTest.ValidateSuccessForLocalRuntime(Int32 numberOfEvaluatorsToClose, String testFolder) in ReefFunctionalTest.cs: line 161
Assert.IsTrue failed.
at Org.Apache.REEF.Tests.Functional.ReefFunctionalTest.ValidateSuccessForLocalRuntime(Int32 numberOfEvaluatorsToClose, String testFolder) in ReefFunctionalTest.cs: line 161
Assert.IsTrue failed.
at Org.Apache.REEF.Tests.Functional.ReefFunctionalTest.ValidateSuccessForLocalRuntime(Int32 numberOfEvaluatorsToClose, String testFolder) in ReefFunctionalTest.cs: line 161
PipelinedBroadcastReduceTest.TestPipelinedBroadcastAndReduceOnLocalRuntime:
Cleaning up test.
GetLogFile, driverContainerDirectory:C:\src\reef\lang\cs\TestResults\Deploy_mweimer 2015-08-31 17_42_20\Out\REEF_LOCAL_RUNTIME\ReefClrBridge-1441068245737\driver
Lines read from log file : 183
Cleaning up test.
Assert.IsTrue failed.
at Org.Apache.REEF.Tests.Functional.ReefFunctionalTest.ValidateSuccessForLocalRuntime(Int32 numberOfEvaluatorsToClose, String testFolder) in ReefFunctionalTest.cs: line 161
ScatterReduceTest.TestScatterAndReduceOnLocalRuntime:
Cleaning up test.
GetLogFile, driverContainerDirectory:C:\src\reef\lang\cs\TestResults\Deploy_mweimer 2015-08-31 17_42_20\Out\REEF_LOCAL_RUNTIME\ReefClrBridge-1441068234174\driver
Lines read from log file : 181
Cleaning up test.
Assert.IsTrue failed.
at Org.Apache.REEF.Tests.Functional.ReefFunctionalTest.ValidateSuccessForLocalRuntime(Int32 numberOfEvaluatorsToClose, String testFolder) in ReefFunctionalTest.cs: line 161
TestTaskMessage.TestSendTaskMessage():
Cleaning up test.
Running test edf0b053. If failed AND log uploaded is enabled, log can be find in 2015-08-31\edf0b053
GetLogFile, driverContainerDirectory:C:\src\reef\lang\cs\TestResults\Deploy_mweimer 2015-08-31 17_42_20\Out\REEF_LOCAL_RUNTIME\ReefClrBridge-1441068183300\driver
Lines read from log file : 165
Cleaning up test.
Assert.IsTrue failed.
at Org.Apache.REEF.Tests.Functional.ReefFunctionalTest.ValidateSuccessForLocalRuntime(Int32 numberOfEvaluatorsToClose, String testFolder) in ReefFunctionalTest.cs: line 161
Assert.IsTrue failed.
at Org.Apache.REEF.Tests.Functional.ReefFunctionalTest.ValidateSuccessForLocalRuntime(Int32 numberOfEvaluatorsToClose, String testFolder) in ReefFunctionalTest.cs: line 161
This might be a problem with our master branch, not just with my change. I'm trying to run tests on various recent commits, and I'm getting these test failures starting with https://github.com/apache/incubator-reef/commit/e374351a957c437f29fe9b61f3cf9f703e40ee33 (and not getting them in earlier commits). Could you please double-check?
I've fixed the tests
@tcNickolas Thanks, I will test again.
| gharchive/pull-request | 2015-08-31T23:44:19 | 2025-04-01T06:37:54.052421 | {
"authors": [
"markusweimer",
"tcNickolas"
],
"repo": "apache/incubator-reef",
"url": "https://github.com/apache/incubator-reef/pull/450",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2647179605 | [Feature alignment] Sort out the work needed to align seata-go with Seata Java 1.6-2.x
What would you like to be added:
https://github.com/apache/incubator-seata/issues?q=label%3Amultilingual+is%3Aclosed lists all the features from 1.6 to 2.x (the latest development branch) and some of the optimizations that need to be synchronized; these still need to be organized into the work to be supported in seata-go.
Why is this needed:
The review has been completed.
| gharchive/issue | 2024-11-10T12:18:08 | 2025-04-01T06:37:54.054469 | {
"authors": [
"luky116"
],
"repo": "apache/incubator-seata-go",
"url": "https://github.com/apache/incubator-seata-go/issues/688",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2122280836 | optimize:compatible with integration-tx-api module and spring module
[x] I have registered the PR changes.
Ⅰ. Describe what this PR did
Ⅱ. Does this pull request fix one issue?
fixes #6334
fixes #6335
Ⅲ. Why don't you add test cases (unit test/integration test)?
Ⅳ. Describe how to verify it
Ⅴ. Special notes for reviews
Some checkstyle checks failed.
ok
Codecov Report
Attention: 33 lines in your changes are missing coverage. Please review.
Comparison is base (74785d2) 51.95% compared to head (45b667b) 51.21%.
Report is 6 commits behind head on 2.x.
Additional details and impacted files
@@ Coverage Diff @@
## 2.x #6342 +/- ##
============================================
- Coverage 51.95% 51.21% -0.75%
+ Complexity 5171 5117 -54
============================================
Files 918 921 +3
Lines 32039 32166 +127
Branches 3866 3874 +8
============================================
- Hits 16647 16474 -173
- Misses 13768 14102 +334
+ Partials 1624 1590 -34
| Files | Coverage Δ |
| --- | --- |
| ...in/java/org/apache/seata/common/DefaultValues.java | 0.00% <ø> (ø) |
| ...che/seata/common/exception/JsonParseException.java | 100.00% <ø> (ø) |
| .../org/apache/seata/common/metadata/ClusterRole.java | 100.00% <ø> (ø) |
| ...ava/org/apache/seata/common/metadata/Metadata.java | 100.00% <ø> (ø) |
| ...apache/seata/common/metadata/MetadataResponse.java | 80.00% <ø> (ø) |
| ...in/java/org/apache/seata/common/metadata/Node.java | 100.00% <ø> (ø) |
| ...java/org/apache/seata/common/util/ConfigTools.java | 100.00% <ø> (ø) |
| ...ava/org/apache/seata/common/util/DurationUtil.java | 83.33% <ø> (ø) |
| ...a/org/apache/seata/common/util/HttpClientUtil.java | 52.45% <ø> (+4.91%) :arrow_up: |
| ...main/java/org/apache/seata/common/util/IOUtil.java | 70.00% <ø> (ø) |
... and 91 more
... and 165 files with indirect coverage changes
Done. @slievrly
| gharchive/pull-request | 2024-02-07T06:26:01 | 2025-04-01T06:37:54.072065 | {
"authors": [
"codecov-commenter",
"xingfudeshi"
],
"repo": "apache/incubator-seata",
"url": "https://github.com/apache/incubator-seata/pull/6342",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
372841515 | Split integrated test cases to a new project
[x] Create new repo named sharding-sphere-integrated-test
[ ] Move JDBC integrated test cases to sharding-sphere-integrated-test
[ ] Add MySQL & PostgreSQL docker image for integrated test cases
[ ] Comb JDBC test cases
[ ] Comb JDBC test cases for sharding-rule(sharding, masterslave) with raw bootstrap
[ ] Comb JDBC test cases for other bootstrap(spring namespace, springboot)
[ ] Comb JDBC test cases for database pool(DBCP, HikariCP, C3P0)
[ ] Comb JDBC test cases for ORM(Spring JDBC Template, JPA, Mybatis, Hibernate)
[ ] Comb Proxy test cases
[ ] Add Sharding-Proxy docker image for integrated test cases
[ ] Comb Proxy test cases for sharding-rule(sharding, masterslave) with raw bootstrap
[ ] Comb Proxy test cases for other bootstrap(spring namespace, springboot)
[ ] Comb Proxy test cases for database pool(DBCP, HikariCP, C3P0)
[ ] Comb Proxy test cases for ORM(Spring JDBC Template, JPA, Mybatis, Hibernate)
[ ] Comb Orchestration test cases
[ ] Add Zookeeper docker image for integrated test cases
[ ] Add Etcd docker image for integrated test cases
[ ] Comb Orchestration test cases for jdbc and proxy
expired
| gharchive/issue | 2018-10-23T07:13:11 | 2025-04-01T06:37:54.077938 | {
"authors": [
"terrymanu"
],
"repo": "apache/incubator-shardingsphere",
"url": "https://github.com/apache/incubator-shardingsphere/issues/1366",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
347817164 | apm-spring-annotation-plugin grpc exception
Please answer these questions before submitting your issue.
Why do you submit this issue?
[ ] Question or discussion
[ ] Bug
[ ] Requirement
[ ] Feature or performance improvement
Question
What do you want to know?
When the plugin is not used, everything is OK. After using this plugin, apm-spring-annotation-plugin-5.0.0-RC-SNAPSHOT.jar, an exception occurs.
Bug
Which version of SkyWalking, OS and JRE?
SkyWalking: 5.0.0.RC
CentOS 7
Which company or project?
What happen?
After using this plugin apm-spring-annotation-plugin-5.0.0-RC-SNAPSHOT.jar:
ERROR 2018-08-06 15:39:25 TraceSegmentServiceClient : Send UpstreamSegment to collector fail with a grpc internal exception.
org.apache.skywalking.apm.dependencies.io.grpc.StatusRuntimeException: UNKNOWN
at org.apache.skywalking.apm.dependencies.io.grpc.Status.asRuntimeException(Status.java:526)
at org.apache.skywalking.apm.dependencies.io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:419)
at org.apache.skywalking.apm.dependencies.io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
at org.apache.skywalking.apm.dependencies.io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
at org.apache.skywalking.apm.dependencies.io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
at org.apache.skywalking.apm.dependencies.io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:391)
at org.apache.skywalking.apm.dependencies.io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:475)
at org.apache.skywalking.apm.dependencies.io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at org.apache.skywalking.apm.dependencies.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:557)
at org.apache.skywalking.apm.dependencies.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:478)
at org.apache.skywalking.apm.dependencies.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:590)
at org.apache.skywalking.apm.dependencies.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at org.apache.skywalking.apm.dependencies.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Requirement or improvement
Please describe about your requirements or improvement suggestions.
This is nothing about the apm-spring-annotation-plugin. This exception means agent can't connect or uplink data to backend/collector.
Maybe apm-spring-annotation-plugin creates many more spans. I have tried it many times: after removing this plugin everything is OK, and after adding it the exceptions occur.
No. I am pretty sure, this error is nothing about the plugin. If spans trigger something wrong, should have collector side's log or memory related error.
gRPC transports bytes to backend, even don't know the span concept.
adding this plugin there is another exception when start app with agent:
WARN TracingContext : More than 300 spans required to create
java.lang.RuntimeException: Shadow tracing context. Thread dump
    at org.apache.skywalking.apm.agent.core.context.TracingContext.isLimitMechanismWorking(TracingContext.java:515)
    at org.apache.skywalking.apm.agent.core.context.TracingContext.createLocalSpan(TracingContext.java:290)
    at org.apache.skywalking.apm.agent.core.context.ContextManager.createLocalSpan(ContextManager.java:110)
    at org.apache.skywalking.apm.plugin.spring.annotations.SpringAnnotationInterceptor.beforeMethod(SpringAnnotationInterceptor.java:32)
    at org.apache.skywalking.apm.agent.core.plugin.interceptor.enhance.InstMethodsInter.intercept(InstMethodsInter.java:89)
    at springfox.documentation.schema.property.ObjectMapperBeanPropertyNamingStrategy.nameForDeserialization(ObjectMapperBeanPropertyNamingStrategy.java)
    at springfox.documentation.schema.property.BeanPropertyDefinitions.name(BeanPropertyDefinitions.java:48)
    at springfox.documentation.schema.property.OptimizedModelPropertiesProvider.beanModelProperty(OptimizedModelPropertiesProvider.java:281)
    at springfox.documentation.schema.property.OptimizedModelPropertiesProvider.access$200(OptimizedModelPropertiesProvider.java:79)
    at springfox.documentation.schema.property.OptimizedModelPropertiesProvider$2.apply(OptimizedModelPropertiesProvider.java:163)
    at springfox.documentation.schema.property.OptimizedModelPropertiesProvider$2.apply(OptimizedModelPropertiesProvider.java:155)
    at com.google.common.base.Present.transform(Present.java:79)
That [warn] means you are trying to create over 300 spans in a single segment. I doubt this is your real purpose; it is very dangerous for your memory.
These are automatically generated. what should I do?
I don't know what your application does, or why you have so many spans in a single request. Only you or your team can say whether this is reasonable or some incompatibility bug. I don't know, sorry.
| gharchive/issue | 2018-08-06T07:56:20 | 2025-04-01T06:37:54.086886 | {
"authors": [
"wu-sheng",
"ylywyn"
],
"repo": "apache/incubator-skywalking",
"url": "https://github.com/apache/incubator-skywalking/issues/1526",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
244064620 | Redshift could not connect to the server
I tried to connect to my Redshift database. There are millions of rows. When the query time is longer than one minute, the Superset response is "could not connect to the server". I checked Allow Run Async, but there is another message: "Failed to start remote query on worker. Tell your administrator to verify the availability of the message queue." Could you help me with this issue?
You can start by reading the documentation https://superset.incubator.apache.org/installation.html#sql-lab
thanks, i missed it
I installed Redis
pip3 install redis
dependencies
pip install -U "celery[redis]"
run Redis
redis-server
Paste this code to /usr/local/lib/python3.4/dist-packages/superset/config.py
Configure the class "CeleryConfig", by adding the URL of your Redis installation (in my case, localhost:6379):
class CeleryConfig(object):
BROKER_URL = 'redis://localhost:6379/'
CELERY_IMPORTS = ('superset.sql_lab', )
CELERY_RESULT_BACKEND = 'redis://localhost:6379/'
CELERY_ANNOTATIONS = {'tasks.add': {'rate_limit': '10/s'}}
CELERY_CONFIG = CeleryConfig
HTTP_HEADERS = {
'super': 'header!'
}
# comment the current RESULTS_BACKEND value
#RESULTS_BACKEND = None
# assign a new value to RESULTS_BACKEND
RESULTS_BACKEND = FileSystemCache('/tmp/sqllab_cache', default_timeout=60*24*7)
When i try superset worker
Starting SQL Celery worker.
Traceback (most recent call last):
File "/home/rko/venv/bin/superset", line 15, in <module>
manager.run()
File "/home/rko/venv/lib/python3.4/site-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/home/rko/venv/lib/python3.4/site-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/home/rko/venv/lib/python3.4/site-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/home/rko/venv/lib/python3.4/site-packages/superset/cli.py", line 189, in worker
'broker': config.get('CELERY_CONFIG').BROKER_URL,
AttributeError: 'NoneType' object has no attribute 'BROKER_URL'
Can you give me any tips?
i've installed Redis, run superset worker and run superset server..
SQLlab gives me "Failed to start remote query on worker. Tell your administrator to verify the availability of the message queue."
and console:
/usr/local/lib/python3.4/dist-packages/sqlalchemy/sql/compiler.py:624: SAWarning: Can't resolve label reference 'changed_on desc'; converting to text() (this warning may be suppressed after 10 occurrences)
util.ellipses_string(element.element))
/usr/local/lib/python3.4/dist-packages/sqlalchemy/sql/compiler.py:624: SAWarning: Can't resolve label reference 'database_name asc'; converting to text() (this warning may be suppressed after 10 occurrences)
util.ellipses_string(element.element))
2017-07-20 14:15:42,256:INFO:root:Parsing with sqlparse statement SELECT count(uid) as pocet
FROM events2
where event = 'pageview' and time like '%2017-06-30%'
2017-07-20 14:15:42,300:INFO:root:Triggering query_id: 43
/usr/local/lib/python3.4/dist-packages/sqlalchemy/sql/sqltypes.py:596: SAWarning: Dialect sqlite+pysqlite does not support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.
'storage.' % (dialect.name, dialect.driver))
2017-07-20 14:15:42,980:ERROR:root:[Errno 111] Connection refused
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/functional.py", line 36, in call
return self.value
AttributeError: 'ChannelPromise' object has no attribute 'value'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 494, in _ensured
return fun(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 187, in _publish
channel = self.channel
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 209, in _get_channel
channel = self._channel = channel()
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/functional.py", line 38, in call
value = self.value = self.contract()
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 224, in
channel = ChannelPromise(lambda: connection.default_channel)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 819, in default_channel
self.connection
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 802, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 757, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
conn.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 296, in connect
self.transport.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 123, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 164, in _connect
self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 414, in _reraise_as_library_errors
yield
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 515, in _ensured
reraise_as_library_errors=False,
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 405, in ensure_connection
callback)
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/functional.py", line 333, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 261, in connect
return self.connection
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 802, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 757, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
conn.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 296, in connect
self.transport.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 123, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 164, in _connect
self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/superset/views/core.py", line 2011, in sql_json
store_results=not query.select_as_cta)
File "/usr/local/lib/python3.4/dist-packages/celery/app/task.py", line 412, in delay
return self.apply_async(args, kwargs)
File "/usr/local/lib/python3.4/dist-packages/celery/app/task.py", line 535, in apply_async
**options
File "/usr/local/lib/python3.4/dist-packages/celery/app/base.py", line 737, in send_task
amqp.send_task_message(P, name, message, **options)
File "/usr/local/lib/python3.4/dist-packages/celery/app/amqp.py", line 558, in send_task_message
**properties
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 181, in publish
exchange_name, declare,
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 527, in _ensured
errback and errback(exc, 0)
File "/usr/lib/python3.4/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 419, in _reraise_as_library_errors
sys.exc_info()[2])
File "/usr/local/lib/python3.4/dist-packages/vine/five.py", line 178, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 414, in _reraise_as_library_errors
yield
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 515, in _ensured
reraise_as_library_errors=False,
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 405, in ensure_connection
callback)
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/functional.py", line 333, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 261, in connect
return self.connection
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 802, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 757, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
conn.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 296, in connect
self.transport.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 123, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 164, in _connect
self.sock.connect(sa)
kombu.exceptions.OperationalError: [Errno 111] Connection refused
[2017-07-20 14:15:51 +0200] [1746] [INFO] Handling signal: winch
[2017-07-20 14:15:51 +0200] [1746] [INFO] Handling signal: winch
You have misconfigured celery, it's looking for an amqp broker while you said you want to use redis.
i changed that and worker console:
[2017-07-20 14:35:59,067: ERROR/ForkPoolWorker-29] Task superset.sql_lab.get_sql_results[dc953f93-5cf6-4de6-9432-0d09a354ca2e] raised unexpected: Exception("Results backend isn't configured.",)
Traceback (most recent call last):
File "/home/rko/venv/lib/python3.4/site-packages/celery/app/trace.py", line 367, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/rko/venv/lib/python3.4/site-packages/celery/app/trace.py", line 622, in protected_call
return self.run(*args, **kwargs)
File "/home/rko/venv/lib/python3.4/site-packages/superset/sql_lab.py", line 81, in get_sql_results
handle_error("Results backend isn't configured.")
File "/home/rko/venv/lib/python3.4/site-packages/superset/sql_lab.py", line 78, in handle_error
raise Exception(query.error_message)
Exception: Results backend isn't configured.
i have this config:
/superset/config.py
Configure the class "CeleryConfig", by adding the URL of your Redis installation (in my case, localhost:6379):
class CeleryConfig(object):
BROKER_URL = 'redis://localhost:6379/'
CELERY_IMPORTS = ('superset.sql_lab', )
CELERY_RESULT_BACKEND = 'redis://localhost:6379/'
CELERY_ANNOTATIONS = {'tasks.add': {'rate_limit': '10/s'}}
CELERY_CONFIG = CeleryConfig
from werkzeug.contrib.cache import RedisCache
RESULTS_BACKEND = RedisCache(
host='localhost', port=6379, key_prefix='superset_results')
You have to add proper quoting to your code excerpts, otherwise it's impossible to help you.
i changed that and worker console:
[2017-07-20 14:35:59,067: ERROR/ForkPoolWorker-29] Task superset.sql_lab.get_sql_results[dc953f93-5cf6-4de6-9432-0d09a354ca2e] raised unexpected: Exception("Results backend isn't configured.",)
Traceback (most recent call last):
File "/home/rko/venv/lib/python3.4/site-packages/celery/app/trace.py", line 367, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/rko/venv/lib/python3.4/site-packages/celery/app/trace.py", line 622, in protected_call
return self.run(*args, **kwargs)
File "/home/rko/venv/lib/python3.4/site-packages/superset/sql_lab.py", line 81, in get_sql_results
handle_error("Results backend isn't configured.")
File "/home/rko/venv/lib/python3.4/site-packages/superset/sql_lab.py", line 78, in handle_error
raise Exception(query.error_message)
Exception: Results backend isn't configured.
i have this config:
/superset/config.py
Configure the class "CeleryConfig", by adding the URL of your Redis installation (in my case, localhost:6379):
class CeleryConfig(object):
BROKER_URL = 'redis://localhost:6379/'
CELERY_IMPORTS = ('superset.sql_lab', )
CELERY_RESULT_BACKEND = 'redis://localhost:6379/'
CELERY_ANNOTATIONS = {'tasks.add': {'rate_limit': '10/s'}}
CELERY_CONFIG = CeleryConfig
from werkzeug.contrib.cache import RedisCache
RESULTS_BACKEND = RedisCache(
host='localhost', port=6379, key_prefix='superset_results')
is this better?
What does "Results backend isn't configured." mean?
| gharchive/issue | 2017-07-19T14:40:16 | 2025-04-01T06:37:54.128449 | {
"authors": [
"toncek87",
"xrmx"
],
"repo": "apache/incubator-superset",
"url": "https://github.com/apache/incubator-superset/issues/3162",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
351775295 | Refactor treemap
Decouple the visualization code from slice and formData
Test
Ran a development instance with the code above and verified with production instance that they produce the same results.
@williaster @conglei @graceguo-supercat
Codecov Report
Merging #5670 into master will decrease coverage by 0.03%.
The diff coverage is 0%.
@@ Coverage Diff @@
## master #5670 +/- ##
==========================================
- Coverage 63.51% 63.48% -0.04%
==========================================
Files 360 360
Lines 22904 22915 +11
Branches 2551 2551
==========================================
Hits 14548 14548
- Misses 8341 8352 +11
Partials 15 15
Impacted Files | Coverage Δ
superset/assets/src/visualizations/treemap.js | 0% <0%> (ø) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cdd348a...723f82b. Read the comment docs.
| gharchive/pull-request | 2018-08-18T00:44:42 | 2025-04-01T06:37:54.136659 | {
"authors": [
"codecov-io",
"kristw"
],
"repo": "apache/incubator-superset",
"url": "https://github.com/apache/incubator-superset/pull/5670",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
381773405 | [fix] view results in sql lab
Clicking "view results" from SQL Lab shows JS exceptions:
The exception is from this line:
https://github.com/apache/incubator-superset/blob/69e8df404d46e35bf686cc92992d6e0415172d90/superset/assets/src/SqlLab/components/ExploreResultsButton.jsx#L171
@mistercrunch @michellethomas @kristw
Codecov Report
Merging #6405 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #6405 +/- ##
=======================================
Coverage 77.31% 77.31%
=======================================
Files 67 67
Lines 9581 9581
=======================================
Hits 7408 7408
Misses 2173 2173
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c42bcf8...7cebf7b. Read the comment docs.
LGTM
| gharchive/pull-request | 2018-11-16T21:51:43 | 2025-04-01T06:37:54.143654 | {
"authors": [
"codecov-io",
"graceguo-supercat",
"mistercrunch"
],
"repo": "apache/incubator-superset",
"url": "https://github.com/apache/incubator-superset/pull/6405",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
453242667 | [epoch] Remove non-UTC epoch logic
CATEGORY
Choose one
[x] Bug Fix
[ ] Enhancement (new features, refinement)
[ ] Refactor
[ ] Add tests
[ ] Build / Development Environment
[ ] Documentation
SUMMARY
As @agrawaldevesh correctly identified in https://github.com/apache/incubator-superset/pull/6721 previously we were computing the Unix timestamp for the right-hand-side (RHS) of the temporal filter condition using the local time zone as opposed to UTC which is the definition of Unix (or epoch) time.
@agrawaldevesh's change was behind a feature flag and disabled by default; however, this clearly is a bug, and I sense we should remedy the problem by merely replacing the previously incorrect logic. Note I strongly believe users were probably unaware of the issue, as Unix timestamps aren't human readable.
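The point is language-agnostic; here is a minimal, hypothetical Java illustration (not the Superset code itself, which lives in the Python SQLA connector) of why interpreting a wall-clock datetime in the local zone gives a different "epoch" value than interpreting it in UTC, which is how Unix time is defined:

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class EpochDemo {
    public static void main(String[] args) {
        // The same wall-clock datetime, converted two ways.
        LocalDateTime dt = LocalDateTime.of(2019, 6, 6, 0, 0);
        long localEpoch = dt.atZone(ZoneId.systemDefault()).toEpochSecond(); // buggy variant
        long utcEpoch = dt.toEpochSecond(ZoneOffset.UTC);                    // Unix-time definition
        // The two values differ by the local zone's UTC offset unless the machine runs in UTC.
        System.out.println(localEpoch + " vs " + utcEpoch);
    }
}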
TEST PLAN
CI.
ADDITIONAL INFORMATION
[ ] Has associated issue:
[ ] Changes UI
[ ] Requires DB Migration.
[ ] Confirm DB Migration upgrade and downgrade tested.
[ ] Introduces new feature or API
[ ] Removes existing feature or API
REVIEWERS
to: @agrawaldevesh @betodealmeida @michellethomas @mistercrunch @villebro
https://github.com/apache/incubator-superset/issues/7656
Codecov Report
Merging #7667 into master will increase coverage by <.01%.
The diff coverage is 75%.
@@ Coverage Diff @@
## master #7667 +/- ##
==========================================
+ Coverage 65.57% 65.58% +<.01%
==========================================
Files 435 435
Lines 21754 21749 -5
Branches 2394 2394
==========================================
- Hits 14266 14264 -2
+ Misses 7367 7364 -3
Partials 121 121
Impacted Files | Coverage Δ
superset/config.py | 93.97% <ø> (-0.04%) :arrow_down:
superset/connectors/sqla/models.py | 82.39% <75%> (+0.41%) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d62c37b...e93f33b. Read the comment docs.
@agrawaldevesh are you onboard with this change?
Go for it! I only introduced the flag since I did not want to break existing use cases. I have no issues with making this the default.
| gharchive/pull-request | 2019-06-06T21:18:40 | 2025-04-01T06:37:54.160631 | {
"authors": [
"agrawaldevesh",
"codecov-io",
"john-bodley"
],
"repo": "apache/incubator-superset",
"url": "https://github.com/apache/incubator-superset/pull/7667",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
223519389 | [SYSTEMML-1554] IPA Scalar Transient Read Replacement
Currently, during IPA we collect all variables (scalars & matrices)
eligible for propagation across blocks (i.e. not updated in block), and
then propagate the only the matrix sizes across the blocks. It seems
plausible that we could also replace all eligible scalar transient reads
with literals based on the variables that have already been collected.
The benefit is that many ops will be able to determine their respective
output sizes during regular compilation, instead of having to wait until
dynamic recompilation, and thus we can reduce the pressure on dynamic
recompilation.
Are there drawbacks to this approach? The use case is that I was seeing a large number of memory warnings while training a convolutional net due to the sizes being unknown during regular compilation, yet the engine only having CP versions of the ops. Additionally, I was running into actual heap space OOM errors for situations that should not run out of memory, and thus I started exploring.
I've attached an example script and the explain plan (hops & runtime) w/ and w/o the IPA scalar replacement to the associated JIRA issue.
cc @mboehm7
thanks @dusenberrymw I gave it a try on our ARIMA application testcase which historically was challenging for scalar propagation. Unfortunately, it failed due to an - probably unrelated - issue in replaceLiteralFullUnaryAggregate. Once I resolved this, I'll play around with it a bit more.
Once this PR is in, we should also think about the problem of propagating scalars into functions if functions are called once or with consistent scalar inputs.
@mboehm7 Great, interested to see what else is needed for this to be generally applicable. Also, definitely +1 for propagating scalars into functions. In particular, we should allow for the case of functions for which any subset of the inputs are consistent scalars. I.e., a function may have an unknown matrix size as an input, but then have several other scalar inputs that are always consistent.
Thanks, @mboehm7. Looks like it is failing a test still -- org.apache.sysml.test.integration.functions.misc.DataTypeChangeTest#testDataTypeChangeValidate4c. Looking into it, it fails due to trying to cast a Matrix to a Scalar object. At a deeper level, it looks like the propagated variable map is holding onto the "matrix" X, rather than dropping it as it should, since X is turned into a scalar by the call X = foo(X). Interestingly, the FunctionOp for the foo function is marked as having an Unknown datatype and valuetype. That to me seems to be big issue, but I'm not sure exactly where that is failing. Thoughts? Overall, this seems like a bug that was just hidden before, rather than being newly introduced.
yes this is almost certainly a bug - originally we allowed data type changes in conditional control flow (e.g., if branch assigns a scalar and else branch a matrix), in which case we assign UNKNOWN for subsequent references. However, I modified this years ago because SystemML could not compile valid instructions for these scenarios unless we extend the recompiler to actually update the data type and block sizes there.
By the way, aside from that test, everything else passed.
cc @mboehm7 Can you review this fix for the datatype conversion issue? I'm also waiting for the full testing with Jenkins.
Refer to this link for build results (access rights to CI server needed):
https://sparktc.ibmcloud.com/jenkins/job/SystemML-PullRequestBuilder/1437/
Thanks, @mboehm7. I'll update the docs and merge.
Refer to this link for build results (access rights to CI server needed):
https://sparktc.ibmcloud.com/jenkins/job/SystemML-PullRequestBuilder/1442/
| gharchive/pull-request | 2017-04-21T23:09:27 | 2025-04-01T06:37:54.168840 | {
"authors": [
"akchinSTC",
"dusenberrymw",
"mboehm7"
],
"repo": "apache/incubator-systemml",
"url": "https://github.com/apache/incubator-systemml/pull/468",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
473885670 | [iOS] PR #2394 covered by PR #2520 so release v0.26.0 still have thread issue
#2394
#2520
Try release 0.28
If this still bothers you and you find a solution, you could give us a PR; I am very happy to discuss implementation details with you or review your PR on the mailing list.
I have a busy schedule and I can't read GitHub issues every day, but I check the mailing list every day. I am sorry if this bothers you.
| gharchive/issue | 2019-07-29T07:06:59 | 2025-04-01T06:37:54.172009 | {
"authors": [
"YorkShen",
"darkThanBlack"
],
"repo": "apache/incubator-weex",
"url": "https://github.com/apache/incubator-weex/issues/2758",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
142685794 | [ZEPPELIN-757] Ordering dropdown menu items alphabetically.
What is this PR for?
Fixing documentation.
What type of PR is it?
Documentation
Todos
N/A
What is the Jira issue?
https://issues.apache.org/jira/browse/ZEPPELIN-757
How should this be tested?
Follow the steps in https://github.com/apache/incubator-zeppelin/blob/master/docs/README.md to build the documentation.
Screenshots (if appropriate)
Questions:
Does the licenses files need update? No
Is there breaking changes for older versions? No
Does this needs documentation? No
Ready for review.
Thanks @jsimsa for the fix. LGTM and merge if there're no more discussions.
| gharchive/pull-request | 2016-03-22T15:29:03 | 2025-04-01T06:37:54.176179 | {
"authors": [
"Leemoonsoo",
"jsimsa"
],
"repo": "apache/incubator-zeppelin",
"url": "https://github.com/apache/incubator-zeppelin/pull/790",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
803999765 | Constant High CPU Usage
Describe the bug
I have a small IOTDB instance running on Ubuntu. Over time the CPU utilisation is very high even though there are a low number of clients and transactions. The CPU usage appears to climb over time. This system has been running for several weeks without a restart. If I run "ps -ef" I can see that it is IOTDB that is using the CPU constantly.
Below is the CPU usage before and after I ran stop-server/start-server. The CPU dropped from > 80% down to < 10%.
To Reproduce
Steps to reproduce the behavior:
Run IOTDB 11.0 with default settings
Expected behavior
The CPU usage should not be so high.
Screenshots
See above.
Desktop (please complete the following information):
OS: Ubuntu 20.04.1 LTS
Browser Not Applicable
Version 11.0
Additional context
None
I have upgraded to 11.2 so I will monitor and see if the issue persists.
Step 1: Could you upload the logs during the high cpu usage?
Step 2: If possible, could you please use JProfiler to record the CPU when the CPU usage is high, and save it as a .jps snapshot file, then we can see what happens.
Step 3: Some config that may solve the problem:
iotdb-engine.properties
enable_unseq_compaction=false
I have been monitoring the CPU over the past week or so. The image below shows a week's worth of CPU data. Notice that after a few days the CPU is back up high.
There are no errors in the logs - most logging is info level with query response time.
I am going to restart the database with "enable_unseq_compaction=false" and see how that goes.
I am still having the same problem even with "enable_unseq_compaction=false"
I have restarted to capture the logs.
| gharchive/issue | 2021-02-08T21:53:25 | 2025-04-01T06:37:54.183468 | {
"authors": [
"ope-nz",
"qiaojialin"
],
"repo": "apache/iotdb",
"url": "https://github.com/apache/iotdb/issues/2663",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
950774670 | KAFKA-13116: Fix message_format_change_test and compatibility_test_new_broker_test failures
These failures were caused by a46b82bea9abbd08e5. Details for each test:
message_format_change_test: use IBP 2.8 so that we can write in older message
formats.
compatibility_test_new_broker_test_failures: fix down-conversion path to handle
empty record batches correctly. The record scan in the old code ensured that
empty record batches were never down-converted, which hid this bug.
Verified with ducker that some variants of these tests failed without these changes
and passed with them.
Note that the upgrade_test is still failing. It looks like there are multiple causes,
so I left that for another PR.
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
@hachikuji I addressed your comment, this is ready for another review.
To double-check the local results, I started the branch builder here too https://jenkins.confluent.io/job/system-test-kafka-branch-builder/4620/
Failures are unrelated. Merging to master and cherry-picking to 3.0.
The branch builder system tests passed btw.
| gharchive/pull-request | 2021-07-22T15:11:46 | 2025-04-01T06:37:54.197102 | {
"authors": [
"ijuma"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/11108",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1267342788 | KAFKA-13971: Fix atomicity violations caused by improper usage of ConcurrentHashMap - part2
## Problem #1 in DelegatingClassLoader.java
Atomicity violation in example such as:
Consider that thread T1 reaches line 228, but before executing it the scheduler switches to thread T2, which also reaches line 228. Control then switches back to T1, which reaches line 232 and adds a value to the map. T2 then executes line 228 and creates a new map, which overwrites the value written by T1, so the change made by T1 is lost. This code change ensures that two threads cannot both initialize the TreeMap; only one of them will.
## Problem #2 in RocksDBMetricsRecordingTrigger.java
Atomicity violation in example such as:
Consider that thread T1 reaches line 40, but before executing it the scheduler switches to thread T2, which also reaches line 40. In a serialized execution order, thread T2 should have thrown the exception, but it won't in this case. The code change fixes that.
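To illustrate both patterns, here is a hedged Java sketch with hypothetical names (not the actual Kafka classes): the racy check-then-act sequences described above versus the atomic ConcurrentHashMap operations the fix relies on.

import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AtomicMapPatterns {
    private final ConcurrentMap<String, SortedMap<String, Object>> byName = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, Runnable> recorders = new ConcurrentHashMap<>();

    // Problem #1 pattern, racy: two threads can both observe null and both
    // create a TreeMap; the one stored last silently discards the other.
    SortedMap<String, Object> racyGet(String key) {
        SortedMap<String, Object> inner = byName.get(key);
        if (inner == null) {
            inner = new TreeMap<>();
            byName.put(key, inner);              // may overwrite another thread's map
        }
        return inner;
    }

    // Fixed: computeIfAbsent guarantees exactly one TreeMap per key.
    SortedMap<String, Object> safeGet(String key) {
        return byName.computeIfAbsent(key, k -> new TreeMap<>());
    }

    // Problem #2 pattern, fixed: putIfAbsent makes the check-then-throw atomic,
    // so one of two concurrent registrations is guaranteed to fail.
    void register(String name, Runnable recorder) {
        if (recorders.putIfAbsent(name, recorder) != null) {
            throw new IllegalStateException("Recorder for " + name + " already added");
        }
    }
}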
Note that some other problems associated with the use of ConcurrentHashMap have been fixed in https://github.com/apache/kafka/pull/12277
Is the relevant code specified as thread safe?
Is the relevant code specified as thread safe?
Thank you for your review @ijuma. I appreciate it. Though, I am afraid I don't understand your question.
Are you asking whether the existing code is supposed to be thread safe?
If yes, for DelegatingClassLoader.java the javadoc for the class mentioned that it is supposed to be thread safe (but it isn't due to the bug that is fixed in this review). For the RocksDBMetricsRecordingTrigger.java, we run a thread periodically from a metric trigger thread pool which reads from the map maintained in the class. At the same time it is possible that another thread is mutating the map during startup/shutdown of rocksDB which may leave the map in inconsistent state. Hence, it's important for this class to be thread safe as well.
Also, note that both the classes in this review use ConcurrentHashMap (albeit incorrectly) to ensure thread safe mutation over the map.
Are you asking whether the changed code is thread safe?
If yes, the change uses atomic operations provided by ConcurrentHashMap to ensure thread safety.
@C0urante please review when you get a chance.
| gharchive/pull-request | 2022-06-10T10:10:08 | 2025-04-01T06:37:54.202103 | {
"authors": [
"divijvaidya",
"ijuma"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/12281",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1460508948 | KAFKA-14260: add synchronized to prefixScan method
As a result of "14260: InMemoryKeyValueStore iterator still throws ConcurrentModificationException", I'm adding synchronized to prefixScan as an alternative to going back to the ConcurrentSkipList.
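For context, a hedged sketch (hypothetical store, not the actual InMemoryKeyValueStore code) of the pattern the PR follows: making prefixScan synchronized like the other accessors, so an iteration cannot interleave with a concurrent put on the underlying TreeMap.

import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class SimpleInMemoryStore {
    private final NavigableMap<String, byte[]> map = new TreeMap<>();

    public synchronized void put(String key, byte[] value) {
        map.put(key, value);
    }

    // Copy the matching entries while holding the lock, so callers never hold
    // an iterator over a map that another thread may still be mutating.
    public synchronized Map<String, byte[]> prefixScan(String prefix) {
        // Approximation for plain string keys: keys sharing the prefix fall in
        // [prefix, prefix + Character.MAX_VALUE).
        return new TreeMap<>(map.subMap(prefix, prefix + Character.MAX_VALUE));
    }
}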
I've read up on testing multi-threaded behavior and I believe it's best to leave the testing as it is for now as testing whether synchronized works doesn't always work. I did make sure ./gradlew test was green on my branch. Happy to be corrected here.
This is my first PR. As per the guidelines, I confirm that the contribution is my original work and that I license the work to the project under the project's open source license. I see that I also need to make a build trigger request, @ableegoldman I would appreciate one please :)
I do not believe this requires a documentation update as it is just bringing a method up to standard. Again, happy to help out if it turns out otherwise.
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
I see that I also need to make a build trigger request
By the way, this is thankfully no longer the case (it used to be really annoying and only worked like half the time) -- these days the build will run on any PR that's opened, and will rerun each time you push a new commit. I guess the contributing guidelines are out of date so thanks for bringing that up 🙂 I'll update them
Merged to trunk and cherrypicked to 3.4
| gharchive/pull-request | 2022-11-22T20:27:46 | 2025-04-01T06:37:54.206928 | {
"authors": [
"Cerchie",
"ableegoldman"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/12893",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1681193940 | KAFKA-14909: check zkMigrationReady tag before migration
add ZkMigrationReady in apiVersionsResponse
check that all nodes are ZkMigrationReady before moving to the next migration state
TODO: add more tests
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
Failed tests are unrelated and also failed in trunk build:
Build / JDK 11 and Scala 2.13 / org.apache.kafka.connect.mirror.integration.DedicatedMirrorIntegrationTest.testSingleNodeCluster()
Build / JDK 11 and Scala 2.13 / org.apache.kafka.connect.mirror.integration.DedicatedMirrorIntegrationTest.testMultiNodeCluster()
Build / JDK 11 and Scala 2.13 / kafka.server.KRaftClusterTest.testLegacyAlterConfigs()
Build / JDK 11 and Scala 2.13 / kafka.server.RaftClusterSnapshotTest.testSnapshotsGenerated()
Build / JDK 11 and Scala 2.13 / kafka.server.RaftClusterSnapshotTest.testSnapshotsGenerated()
Build / JDK 11 and Scala 2.13 / org.apache.kafka.controller.QuorumControllerTest.testConfigurationOperations()
Build / JDK 11 and Scala 2.13 / org.apache.kafka.trogdor.coordinator.CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated()
Build / JDK 17 and Scala 2.13 / org.apache.kafka.connect.mirror.integration.DedicatedMirrorIntegrationTest.testMultiNodeCluster()
Build / JDK 17 and Scala 2.13 / kafka.server.KRaftClusterTest.testCreateClusterAndPerformReassignment()
Build / JDK 17 and Scala 2.13 / kafka.server.KRaftClusterTest.testUnregisterBroker()
Build / JDK 17 and Scala 2.13 / kafka.server.KRaftClusterTest.testCreateClusterAndCreateAndManyTopics()
Build / JDK 17 and Scala 2.13 / org.apache.kafka.controller.QuorumControllerTest.testUpgradeMigrationStateFrom34()
Build / JDK 17 and Scala 2.13 / org.apache.kafka.trogdor.coordinator.CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated()
Build / JDK 8 and Scala 2.12 / org.apache.kafka.connect.mirror.integration.DedicatedMirrorIntegrationTest.testSingleNodeCluster()
Build / JDK 8 and Scala 2.12 / org.apache.kafka.connect.mirror.integration.DedicatedMirrorIntegrationTest.testMultiNodeCluster()
Build / JDK 8 and Scala 2.12 / org.apache.kafka.connect.mirror.integration.IdentityReplicationIntegrationTest.testSyncTopicConfigs()
Build / JDK 8 and Scala 2.12 / org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted
Build / JDK 8 and Scala 2.12 / kafka.server.KRaftClusterTest.testSetLog4jConfigurations()
Build / JDK 8 and Scala 2.12 / kafka.server.KRaftClusterTest.testCreateClusterAndPerformReassignment()
Build / JDK 8 and Scala 2.12 / kafka.server.KRaftClusterTest.testCreateClusterAndPerformReassignment()
Build / JDK 8 and Scala 2.12 / org.apache.kafka.controller.QuorumControllerTest.testCreateAndClose()
Build / JDK 8 and Scala 2.12 / org.apache.kafka.controller.QuorumControllerTest.testCreateAndClose()
I'd like to merge it after the test failure fix is completed in this PR: https://github.com/apache/kafka/pull/13647, to make sure we don't introduce more failed tests.
Failed tests are unrelated:
Build / JDK 11 and Scala 2.13 / org.apache.kafka.connect.runtime.distributed.DistributedHerderTest.testExternalZombieFencingRequestAsynchronousFailure
Build / JDK 17 and Scala 2.13 / integration.kafka.server.FetchFromFollowerIntegrationTest.testRackAwareRangeAssignor()
Build / JDK 17 and Scala 2.13 / kafka.server.CreateTopicsRequestTest.testErrorCreateTopicsRequests(String).quorum=kraft
Build / JDK 17 and Scala 2.13 / org.apache.kafka.trogdor.coordinator.CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated()
| gharchive/pull-request | 2023-04-24T12:45:36 | 2025-04-01T06:37:54.211187 | {
"authors": [
"showuon"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/13631",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2355896530 | KAFKA-16707: Kafka Kraft : using Principal Type in StandardACL in order to defined ACL with a notion of group without rewriting KafkaPrincipal of client by rules
The default StandardAuthorizer in KRaft mode defines a KafkaPrincipal with type=User and a name, plus an optional special wildcard.
The difficulty with this is that we can't define ACLs for a group of KafkaPrincipals.
At the moment there is a way to do so by defining RULEs that rewrite the KafkaPrincipal name field. But to introduce the notion of a group this way, you have to set rules that make you lose the unique part of the connected client's KafkaPrincipal name.
The concept here, in the StandardAuthorizer of Kafka KRaft, is to add support for these KafkaPrincipal types:
Regex
StartsWith
EndsWith
Contains
(User is still available and keeps working as before, to avoid any regression/issue with current configurations)
This would be done in the StandardAcl class of metadata/authorizer, and the findResult method of StandardAuthorizerData will delegate the match to the StandardAcl class (for performance reasons: the regex is precompiled in the ACL).
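A hedged sketch of the matching idea, with hypothetical names (not the actual StandardAcl code): the type decides how the ACL's principal value is compared against the connected client's principal name, and the regex is compiled once at construction time.

import java.util.regex.Pattern;

public class PrincipalMatcher {
    public enum MatchType { USER, REGEX, STARTS_WITH, ENDS_WITH, CONTAINS }

    private final MatchType type;
    private final String value;
    private final Pattern pattern;   // precompiled, only used for REGEX

    public PrincipalMatcher(MatchType type, String value) {
        this.type = type;
        this.value = value;
        this.pattern = (type == MatchType.REGEX) ? Pattern.compile(value) : null;
    }

    public boolean matches(String principalName) {
        switch (type) {
            case USER:        return "*".equals(value) || value.equals(principalName);
            case REGEX:       return pattern.matcher(principalName).matches();
            case STARTS_WITH: return principalName.startsWith(value);
            case ENDS_WITH:   return principalName.endsWith(value);
            case CONTAINS:    return principalName.contains(value);
            default:          return false;
        }
    }
}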
*I added tests in metadata, and ran ./gradlew test from kafka:trunk and my fork: no more failed tests on my branch than on kafka:trunk
Committer Checklist (excluded from commit message)
[ x ] Verify design and implementation => thanks to spell checker in gradle process
[ x ] Verify test coverage and CI build status => adding few tests in metadata, an run gradlew test without more failed test thant kafka:trunk
[ x ] Verify documentation (including upgrade notes) : added few lines in doc, no upgrade info as the previous behaviour should still work as before.
Link to the JIRA-16707
Hello, when I'm running "./gradlew test" on my side from an apache/kafka trunk clone, I have failed tests.
So is there an easy way to know which failed test in "continuous-integration/jenkins/pr-merge" I have to look at?
| gharchive/pull-request | 2024-06-16T17:52:49 | 2025-04-01T06:37:54.215742 | {
"authors": [
"handfreezer"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/16361",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2550600816 | MINOR: Cache topic resolution in TopicIds set
Looking up topics in a TopicsImage is relatively slow. Cache the results
in TopicIds to improve assignor performance. In benchmarks, we see a
noticeable improvement in performance in the heterogeneous case.
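A hedged sketch of the caching idea with hypothetical types (the real code resolves Uuid topic ids against a TopicsImage): the first lookup of each id pays the slow resolution, later lookups within the same assignment call hit a plain HashMap, and the cache is discarded with the object, matching the per-call lifetime discussed below.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CachingTopicResolver {
    private final Function<String, String> slowResolver;   // id -> name, assumed expensive
    private final Map<String, String> cache = new HashMap<>();

    public CachingTopicResolver(Function<String, String> slowResolver) {
        this.slowResolver = slowResolver;
    }

    public String topicName(String topicId) {
        // computeIfAbsent memoizes; no synchronization needed because the
        // object is confined to a single assignment computation.
        return cache.computeIfAbsent(topicId, slowResolver);
    }
}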
Before
Benchmark (assignmentType) (assignorType) (isRackAware) (memberCount) (partitionsToMemberRatio) (subscriptionType) (topicCount) Mode Cnt Score Error Units
ServerSideAssignorBenchmark.doAssignment INCREMENTAL RANGE false 10000 10 HOMOGENEOUS 1000 avgt 5 36.400 ± 3.004 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL RANGE false 10000 10 HETEROGENEOUS 1000 avgt 5 158.340 ± 0.825 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL UNIFORM false 10000 10 HOMOGENEOUS 1000 avgt 5 1.329 ± 0.041 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL UNIFORM false 10000 10 HETEROGENEOUS 1000 avgt 5 382.901 ± 6.203 ms/op
After
Benchmark (assignmentType) (assignorType) (isRackAware) (memberCount) (partitionsToMemberRatio) (subscriptionType) (topicCount) Mode Cnt Score Error Units
ServerSideAssignorBenchmark.doAssignment INCREMENTAL RANGE false 10000 10 HOMOGENEOUS 1000 avgt 5 36.465 ± 1.954 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL RANGE false 10000 10 HETEROGENEOUS 1000 avgt 5 114.043 ± 1.424 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL UNIFORM false 10000 10 HOMOGENEOUS 1000 avgt 5 1.454 ± 0.019 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL UNIFORM false 10000 10 HETEROGENEOUS 1000 avgt 5 342.840 ± 2.744 ms/op
Based heavily on https://github.com/apache/kafka/pull/16527.
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
@mumrah
Another thing to consider is the lifetime of the cache. Do we really need the ID + name mappings kept in memory forever?
The lifetime of the cache is bound to the call. It is not kept forever.
Does this come down to performance differences between HashMap and PCollectionsImmutableMap?
Yes.
If we decide we really need faster topic ID to name lookups, I would consider adding it to TopicsImage. Managing a cache outside of the image will be a bit difficult.
We could consider this separately. At the moment, we don't really have the time to do it. The current strategy seems to be a good tradeoff at the moment given that it is only bound to the call and not kept forever.
@dajac thanks for the explanation, makes sense. Can we include a javadoc on the class describing the expected lifetime of this class?
@mumrah
Seeing that the cache is not actually used outside of tests and benchmarks, I'm guessing this is still WIP.
It's used in TargetAssignmentBuilder.build(), which is used by the new group coordinator.
I've updated the javadoc to describe the lifetime of the cache.
@mumrah I will merge it. If you have further comments, @squah-confluent can address separately.
| gharchive/pull-request | 2024-09-26T13:29:59 | 2025-04-01T06:37:54.222215 | {
"authors": [
"dajac",
"mumrah",
"squah-confluent"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/17285",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
208769653 | kafka-4767: KafkaProducer is not joining its IO thread properly
KafkaProducer#close swallows the InterruptedException, which might be acceptable when it's invoked from within the main thread or when the user is extending Thread and therefore controls all the code higher up on the call stack. For other cases, it'd be better to restore the interrupted status after capturing the exception.
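A hedged sketch of the pattern under discussion (hypothetical method, not the actual KafkaProducer code): restore the interrupt flag and surface the unchecked InterruptException instead of swallowing the condition while joining the I/O thread.

import org.apache.kafka.common.errors.InterruptException;

public class CloseHelper {
    static void joinIoThread(Thread ioThread, long timeoutMs) {
        try {
            ioThread.join(timeoutMs);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // restore the interrupted status
            // InterruptException is unchecked (a KafkaException subclass) and
            // also re-sets the interrupt flag in its constructor.
            throw new InterruptException(e);
        }
    }
}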
@ijuma Please have a review. Thanks.
@ijuma Please have a review on this PR. Thanks.
LGTM
Thanks for the PR. I looked this in more detail and it looks like we eventually throw KafkaException for this case. In the consumer, we throw InterruptException (which is a non-checked version of InterruptedException that inherits from KafkaException). Seems like we should do the same here. That class sets the interrupt in the constructor.
@ijuma Followed the same pattern as how KafkaConsumer#close treats interruption, but also explicitly added an if clause to check InterruptedException, since firstException would be set to it explicitly in KafkaProducer#close. Please have a review on that. Thanks.
@ijuma Well, I already removed that dead code and also added the code to preserve the interruption status. Looks good now?
| gharchive/pull-request | 2017-02-20T02:29:27 | 2025-04-01T06:37:54.225638 | {
"authors": [
"amethystic",
"ijuma",
"omkreddy"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/2576",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
239626978 | KAFKA-5534: offsetForTimes result should include partitions with no offset
For topics that support timestamp search, if no offset is found for a partition, the partition should still be included in the result with a null offset value. This KafkaConsumer method currently excludes such partitions from the result.
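A hedged caller-side sketch of the new behavior (hypothetical handling, not code from this PR): every requested partition now appears in the returned map, and a null value means "no offset found for that timestamp" rather than the partition being silently dropped.

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class SeekByTime {
    static void seekToTimestamp(Consumer<?, ?> consumer, TopicPartition tp, long timestampMs) {
        Map<TopicPartition, OffsetAndTimestamp> result =
            consumer.offsetsForTimes(Collections.singletonMap(tp, timestampMs));
        OffsetAndTimestamp ot = result.get(tp);   // key is present, value may be null
        if (ot != null) {
            consumer.seek(tp, ot.offset());
        } else {
            // No record at or after the timestamp; fall back to the end of the log.
            consumer.seekToEnd(Collections.singletonList(tp));
        }
    }
}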
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5806/
Test PASSed (JDK 8 and Scala 2.12).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5820/
Test PASSed (JDK 7 and Scala 2.11).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5937/
Test PASSed (JDK 7 and Scala 2.11).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5922/
Test PASSed (JDK 8 and Scala 2.12).
@hachikuji, is this what you had in mind?
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/6154/
Test PASSed (JDK 7 and Scala 2.11).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/6138/
Test PASSed (JDK 8 and Scala 2.12).
We should add a note to the upgrade notes and I think we can only merge this in trunk as it does change the behaviour.
Thanks, @ijuma. I'll update the upgrade notes with this change. I assume it's this file that needs to be updated.
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/6180/
Test PASSed (JDK 7 and Scala 2.11).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/6164/
Test PASSed (JDK 8 and Scala 2.12).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/6210/
Test PASSed (JDK 8 and Scala 2.12).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/6226/
Test PASSed (JDK 7 and Scala 2.11).
| gharchive/pull-request | 2017-06-29T22:18:01 | 2025-04-01T06:37:54.235622 | {
"authors": [
"asfgit",
"ijuma",
"vahidhashemian"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/3460",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
315655169 | MINOR: add window store range query in simple benchmark
There are a couple minor additions in this PR:
add a new test for the window store, which issues a range query upon receiving each record.
in the non-windowed state store case, add a get call before the put call.
Enable caching by default to be consistent with other Join / Aggregate cases, where caching is enabled by default.
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
@mjsax @bbejeck @vvcephei
| gharchive/pull-request | 2018-04-18T21:51:39 | 2025-04-01T06:37:54.238252 | {
"authors": [
"guozhangwang"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/4894",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
318500968 | Upgrade ZooKeeper to 3.4.12 and Scala to 2.12.6
ZK 3.4.12 fixes the regression that forced us to go back to
3.4.10. Release notes:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310801&version=12342040
Scala 2.12.6 fixes the issue that prevented us from upgrading
to 2.12.5.
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
@junrao trying the ZK upgrade again.
| gharchive/pull-request | 2018-04-27T18:12:10 | 2025-04-01T06:37:54.241020 | {
"authors": [
"ijuma"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/4940",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
131500021 | MINOR: more info in error msg
@guozhangwang
got it.
LGTM
| gharchive/pull-request | 2016-02-04T22:59:42 | 2025-04-01T06:37:54.242016 | {
"authors": [
"ewencp",
"ymatsuda"
],
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/873",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
518024605 | [WIP] KNOX-2095 - Adding in DefaultDispatch code and tests to handle 504 errors
What changes were proposed in this pull request?
Currently, Knox masks all connection errors as 500 errors, when they may be more accurately described using other error codes, especially 504. A change has been made to display a 504 error in the event of a socket timeout..
How was this patch tested?
Ran ant verify under Knox 1.2.0. The patch was generated against the Knox 1.2.0 branch, and subsequently applied to Knox 1.4.0. Currently ant verify is failing, but these are most likely transient errors as the same errors appear in master. Still running tests.
Hmm. It looks like the check at https://api.travis-ci.org/v3/job/607855611/log.txt failed, but the logs suggest that it may be running plugins that aren't thread safe. Assuming that this is a threading issue, is there any way to rerun the tests, possibly without parallelism?
The error was
[ERROR] Failures:
[ERROR] GatewayCorrelationIdTest.testTestService:209
Expected: is <46>
but: was <45>
Not an error I've seen before. I retriggered the JDK 11 job. We have some flaky tests related to ZK but not that test.
Hi Kevin, sorry for the late followup; was out yesterday. Seems like it failed on gateway-service-remoteconfig again; was it still the testTestService test? I don't have JDK 11 installed on this machine and I'd like to confirm before trying to pursue the error.
(On that note, is there any good way to check the Surefire reports/view build artifacts?)
Grasping at straws here, but looking through the test case at https://github.com/apache/knox/blob/89caa5feeed706abc8d7ce1407830ae00d97d405/gateway-test/src/test/java/org/apache/knox/gateway/GatewayCorrelationIdTest.java, is it possible that the reduced timeout might be causing the issue? I'm not completely sure how the test works, but with the change in this PR, all connection attempts that experience a socket timeout are automatically given a 403, whereas without the change, there would at least be an attempt to contact the failover nodes.
...then again, I suppose this wouldn't explain the successes in JDK8. It's a bit difficult to tell without looking at the reports unfortunately.
@jameschen1519 I don't think the test failures are related to your change.
(On that note, is there any good way to check the Surefire reports/view build artifacts?)
The Travis build details are linked in the pr and then you go to the specific build and build log. That has the same output if you were to run locally.
This PR has not been touched for over a year; closing it.
| gharchive/pull-request | 2019-11-05T20:55:59 | 2025-04-01T06:37:54.248169 | {
"authors": [
"jameschen1519",
"risdenk",
"smolnar82"
],
"repo": "apache/knox",
"url": "https://github.com/apache/knox/pull/177",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2109613782 | [WIP][KYUUBI #6031] Add CollectMetricsPrettyDisplayListener
:mag: Description
Issue References 🔗
This pull request fixes #6031
Describe Your Solution 🔧
Types of changes :bookmark:
[ ] Bugfix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Test Plan 🧪
Behavior Without This Pull Request :coffin:
Behavior With This Pull Request :tada:
Related Unit Tests
Checklist 📝
[ ] This patch was not authored or co-authored using Generative Tooling
Be nice. Be informative.
cc @zhouyifan279
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (208354c) 61.17% compared to head (cd23093) 61.03%.
Additional details and impacted files
@@ Coverage Diff @@
## master #6035 +/- ##
============================================
- Coverage 61.17% 61.03% -0.14%
Complexity 23 23
============================================
Files 623 623
Lines 37144 37144
Branches 5032 5032
============================================
- Hits 22721 22669 -52
- Misses 11979 12018 +39
- Partials 2444 2457 +13
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
| gharchive/pull-request | 2024-01-31T09:04:35 | 2025-04-01T06:37:54.256113 | {
"authors": [
"codecov-commenter",
"ulysses-you",
"wForget"
],
"repo": "apache/kyuubi",
"url": "https://github.com/apache/kyuubi/pull/6035",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1552056991 | Upgrade kotlin and log4j versions
Hello,
It seems that the Log4j version of the Kotlin API is out of date.
On top of that, the Kotlin version hasn't been updated for the past 2 years.
I have done a code analysis scan and found no incompatibilities; tests are passing, and everything seems to be working as expected.
Reasons to upgrade to 1.8.0 that are relevant:
1.8.0 - Improved kotlin-reflect performance
1.6.0 - Further Improvements to type inference for recursive generic types
1.5.30 - Improvements to type inference for recursive generic types
1.5.0 - Inline classes are released as Stable
And lots of other performance improvements overall...
Changes
Upgrade log4j to 2.19.0
Upgrade Kotlin to 1.8.0
kotlinx.coroutines to 1.6.4
Trivial changes - Feel free to amend as needed. :)
@jvz Can you have a look please?
The Kotlin API is a minimum version requirement. I've been using this library with Kotlin 1.4.x, 1.7.x, and 1.8.x. The Log4j version update is good, though!
I see, that makes sense, thanks for letting me know, I am closing this one and opening a more appropriate PR for Log4j upgrade only.
| gharchive/pull-request | 2023-01-22T09:35:50 | 2025-04-01T06:37:54.261866 | {
"authors": [
"jvz",
"u-ways"
],
"repo": "apache/logging-log4j-kotlin",
"url": "https://github.com/apache/logging-log4j-kotlin/pull/27",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1925525700 | Make FST BytesStore grow smoothly
Description
Too bad we don't have a writer that uses tiny (like 8 byte) blocks at first, but doubles the size for each new block (16 bytes, 32 bytes next, etc.). Then we would naturally use log(size) number of blocks without over-allocating.
But then reading bytes is a bit tricky because we'd need to take discrete log (base 2) of the address. Maybe it wouldn't be so bad -- we could do this with Long.numberOfLeadingZeros maybe? But that's a bigger change ... we can do this separately/later.
From https://github.com/apache/lucene/pull/12604#discussion_r1344639608
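For illustration, a minimal sketch of that discrete-log addressing, assuming an 8-byte first block that doubles per block (the class and constants are made up for the example, not Lucene code):

```java
// Hypothetical sketch of the addressing math for blocks that start at 8 bytes and
// double in size: block i holds byte addresses [8*(2^i - 1), 8*(2^(i+1) - 1)), so
// the block index is a base-2 discrete log, which Long.numberOfLeadingZeros gives in O(1).
final class BlockAddressing {
  static final int FIRST_BLOCK_BITS = 3;                  // first block is 8 bytes
  static final long FIRST_BLOCK_SIZE = 1L << FIRST_BLOCK_BITS;

  static int blockIndex(long address) {
    // floor(log2(address + 8)) - 3
    return 63 - Long.numberOfLeadingZeros(address + FIRST_BLOCK_SIZE) - FIRST_BLOCK_BITS;
  }

  static long offsetInBlock(long address) {
    // subtract the total size of all earlier blocks, which is (8 << blockIndex) - 8
    return address + FIRST_BLOCK_SIZE - (FIRST_BLOCK_SIZE << blockIndex(address));
  }
}
```

Reads stay O(1) (one numberOfLeadingZeros plus a shift and subtraction), and the writer never over-allocates more than the size of its current, largest block.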
Note that oal.store.ByteBuffersDataOutput takes a different and neat approach to gracefully growing: it picks an initial block size, and appends new blocks as you write bytes, but then if it reaches 100 blocks, it "resizes" itself by doubling the block size and copying over, so that now you have 50 blocks.
So it's still an O(N) amortized cost for that doubling/copying over time, and at any given moment you will not be wasting too large a percentage of the bytes you've written, except at the start with the 1 KB block size.
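A simplified sketch of that consolidation step (not the actual ByteBuffersDataOutput code), assuming the old blocks are full and equally sized:

```java
import java.util.ArrayList;
import java.util.List;

final class BlockConsolidation {
  static final int MAX_BLOCKS = 100;

  // Caller: if (blocks.size() >= MAX_BLOCKS) blocks = consolidate(blocks, blockSize * 2);
  static List<byte[]> consolidate(List<byte[]> blocks, int newBlockSize) {
    List<byte[]> bigger = new ArrayList<>();
    byte[] current = new byte[newBlockSize];
    int pos = 0;
    for (byte[] block : blocks) {
      System.arraycopy(block, 0, current, pos, block.length);
      pos += block.length;
      if (pos == newBlockSize) {          // a pair of old blocks fills one new block
        bigger.add(current);
        current = new byte[newBlockSize];
        pos = 0;
      }
    }
    if (pos > 0) {
      bigger.add(current);                // leftover bytes if the old block count was odd
    }
    return bigger;                        // ~half as many blocks, each twice as large
  }
}
```

Each consolidation copies everything written so far, but because the block size doubles each time the threshold is hit, the total copying stays O(N) amortized, matching the description above.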
In https://github.com/apache/lucene/pull/12624, I moved the main FST body out of BytesStore into ByteBuffersDataOutput, and BytesStore becomes only a single byte[] for the currently written node so maybe we don't need to do this?
| gharchive/issue | 2023-10-04T06:53:36 | 2025-04-01T06:37:54.265649 | {
"authors": [
"dungba88",
"gf2121",
"mikemccand"
],
"repo": "apache/lucene",
"url": "https://github.com/apache/lucene/issues/12619",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
106634413 | Add new option "failOnWarning".
This option causes the maven-compiler-plugin to treat warnings as errors
and fail accordingly.
Simply adds "-Werror" to the compiler command line. It may be nice to
add this to the plexus-compiler-api proper but as the sonatype repo only
has tags up to 2.4 and the current plugin references 2.6 (and I have no
idea where that comes from), I went the easy route. Happy to refactor
it if wanted.
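To make the -Werror effect concrete, here is a small self-contained demo (not plugin code) of how the flag turns a warning-only compilation into a failure; the file Uses.java and its deprecation warning are assumptions for the example:

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class WerrorDemo {
  public static void main(String[] args) {
    JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
    // Uses.java is assumed to contain code that triggers a deprecation warning.
    int withoutWerror = javac.run(null, null, null, "-Xlint:deprecation", "Uses.java");
    int withWerror = javac.run(null, null, null, "-Xlint:deprecation", "-Werror", "Uses.java");
    // Without -Werror the warning is printed and the exit code is 0; with -Werror
    // the same warning makes javac return a non-zero exit code, failing the build.
    System.out.println("without -Werror: " + withoutWerror + ", with -Werror: " + withWerror);
  }
}
```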
Is this PR still relevant? If so, is there any reason not to pass this through compilerArgs?
Hm. No comment for almost a year and then one comment and closed within 8 days.
Yes, it is still relevant. As an option, I can expose this as a property with
...
<failOnWarning>${failWarningSwitch}</failOnWarning>
...
so this can be overridden from the command line. If it gets added to compilerArgs, there is no way to control this dynamically from the command line.
plexus compiler api 2.8.1 supports failOnWarning directly. I may simply redo this patch to leverage this.
Let's reopen this. Can you reopen it and provide a PR?
| gharchive/pull-request | 2015-09-15T20:03:21 | 2025-04-01T06:37:54.279972 | {
"authors": [
"hgschmie",
"michael-o"
],
"repo": "apache/maven-plugins",
"url": "https://github.com/apache/maven-plugins/pull/60",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
498922932 | P-NUCLEO-WB55
Slinky works, SPI/I2C/TIM under test...
@kasjer Could you take a look at this again? I think most review issues were tackled, apart from the int vs unsigned for I2C pins and the flash suggestions (FLASH_PAGE_SIZE and removing the _ prefix). I am not sure what your suggestion is for the first one, but the issue spans across families so I think it would make more sense to do it in another PR that covers them all. For the flash suggestions, I agree, but would rather send a new PR that changes it on every family. Is that OK?
@kasjer All issues addressed.
| gharchive/pull-request | 2019-09-26T14:28:45 | 2025-04-01T06:37:54.379290 | {
"authors": [
"utzig"
],
"repo": "apache/mynewt-core",
"url": "https://github.com/apache/mynewt-core/pull/2017",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
182301536 | NIFI-2887 NumberFormatException in HL7ExtractAttributes for repeating segments with order control codes
Fixes NIFI-2887 and adds more test cases
+1 LGTM, ran the unit tests and on a full NiFi, verified the NFE is not presented and the flow file is parsed successfully. Merging to master, thanks!
+1 LGTM, ran the unit tests and with a full NiFi, verified the NFE is not presented and messages are parsed successfully. Merging to master, thanks!
| gharchive/pull-request | 2016-10-11T15:48:01 | 2025-04-01T06:37:54.385259 | {
"authors": [
"jfrazee",
"mattyb149"
],
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/1123",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
256451451 | NIFI-4371 - add support for query timeout in Hive processors
Thank you for submitting a contribution to Apache NiFi.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
[x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
[x] Has your PR been rebased against the latest commit within the target branch (typically master)?
[x] Is your initial contribution a single, squashed commit?
For code changes:
[x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
[ ] Have you written or updated unit tests to verify your changes?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
[ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
[x] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?
For documentation related changes:
[ ] Have you ensured that format looks appropriate for the output in which it is rendered?
Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
The "unit tests" for TestSelectHiveQL use Derby as the database, only to test the functionality of getting the "HiveQL" statement to the database and parsing its results. In that vein, Derby supports setQueryTimeout (DERBY-31), so can we add a unit test that sets the value, to exercise that part of the code?
Thanks for the review @mattyb149 and @joewitt. I updated the property description based on your comments. Regarding the unit test, since I'm using a HiveStatement object in the custom validate method, I'm not sure I can easily test the property (it'll always fail in a default build even with the Derby backend). And, if adding a unit test where I expect the validation to raise an error, this test may not have the expected result if using different profiles in the maven build.
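Purely as an illustration of that constraint (this is not the PR's code; QUERY_TIMEOUT is the new descriptor and supportsQueryTimeout() is a made-up stand-in for the HiveStatement-based probe), the shape of the custom validation being described looks roughly like this, living inside the Hive processor class:

```java
@Override
protected Collection<ValidationResult> customValidate(final ValidationContext context) {
    final List<ValidationResult> results = new ArrayList<>();
    final Integer timeout = context.getProperty(QUERY_TIMEOUT).asInteger();
    if (timeout != null && timeout > 0) {
        results.add(new ValidationResult.Builder()
                .subject("Query Timeout")
                // supportsQueryTimeout() is a hypothetical helper standing in for the
                // HiveStatement check; with only Derby on the test classpath it fails,
                // which is why the processor never validates in a default build.
                .valid(supportsQueryTimeout())
                .explanation("the available Hive JDBC driver does not support query timeouts")
                .build());
    }
    return results;
}
```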
I'm getting NPEs in the unit tests: something weird with MockPropertyValue getting created without "expectExpressions" being set to anything, causing isExpressionLanguagePresent() to throw the NPE.
I already noticed this error while working on others PRs (I'm a bit surprised I didn't notice the NPE on this PR...). It's because we're checking if the processor is valid before enabling expression validation (https://github.com/apache/nifi/blob/master/nifi-mock/src/main/java/org/apache/nifi/util/StandardProcessorTestRunner.java#L169). We can't really do it the other way around without changing a lot of things.
I updated the PR to just check if expectExpressions is null and, if yes, return false. This way we can use isExpressionLanguagePresent() in a custom validate method.
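A rough sketch of the shape of that guard (the real MockPropertyValue code may differ; containsExpressionLanguage is just a stand-in for the existing detection logic that follows the check):

```java
@Override
public boolean isExpressionLanguagePresent() {
    if (expectExpressions == null) {
        // A processor's customValidate() can run before the test runner has enabled
        // expression-language validation; return false here instead of throwing an NPE.
        return false;
    }
    return containsExpressionLanguage(rawValue);  // stand-in for the existing detection logic
}
```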
I talked to @markap14 about it, perhaps this fix is fine or we can just change it to a boolean, but I'll let him take a look too.
@pvillard31 Mind doing a rebase here, and updating the QUERY_TIMEOUT property to use FlowFile Attribute scope? I pushed up a rebased branch with the additional commit (https://github.com/mattyb149/nifi/commit/40b9d1db89168fac08f343be772516132f1f67c0) but I don't know if you can cherry-pick from there or if you have to do your own rebase, then cherry-pick my additional commit (if you want to use it of course).
Done @mattyb149 - thanks!
Hey @mattyb149 - I believe we added this one for Hive 3 processors but forgot this PR. I know you're not available at the moment, but just a reminder for when you're back ;) (or if someone else wants to merge it in)
finally got time to get back on this one... if you want to have another look @mattyb149
| gharchive/pull-request | 2017-09-09T16:58:18 | 2025-04-01T06:37:54.395892 | {
"authors": [
"mattyb149",
"pvillard31"
],
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/2138",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |