Dataset schema:
id: string (lengths 4 to 10)
text: string (lengths 4 to 2.14M)
source: string (2 classes)
created: timestamp[s] (range 2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (range 2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
277159339
NIFI-4445 Add support for ListS3Version2 API Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: For all changes: [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. [x] Has your PR been rebased against the latest commit within the target branch (typically master)? [x] Is your initial contribution a single, squashed commit? For code changes: [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? [x] Have you written or updated unit tests to verify your changes? [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0? [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? [x] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? For documentation related changes: [ ] Have you ensured that format looks appropriate for the output in which it is rendered? Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. @aburkard can you confirm the JIRA you meant this for? NIFI-4445 looks like a typo. Thanks @joewitt yep my bad, NIFI-4628 is the right issue.
gharchive/pull-request
2017-11-27T20:32:47
2025-04-01T06:37:54.402794
{ "authors": [ "aburkard", "joewitt" ], "repo": "apache/nifi", "url": "https://github.com/apache/nifi/pull/2299", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
347163237
NIFI-5479 Upgraded Jetty. Moved where we unpack bundled deps to so we… … can avoid a new jetty bug with META-INF loading logic. WIP for testing/eval. Not ready for merge Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: For all changes: [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? [ ] Is your initial contribution a single, squashed commit? For code changes: [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? [ ] Have you written or updated unit tests to verify your changes? [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0? [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? For documentation related changes: [ ] Have you ensured that format looks appropriate for the output in which it is rendered? Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 2018-08-02 16:20:07,347 WARN [NiFi Web Server-21] o.e.jetty.annotations.AnnotationParser javax.inject.Inject scanned from multiple locations: jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/work/jetty/nifi-update-attribute-ui-1.8.0-SNAPSHOT.war/webapp/WEB-INF/lib/javax.inject-1.jar!/javax/inject/Inject.class, jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/work/jetty/nifi-update-attribute-ui-1.8.0-SNAPSHOT.war/webapp/WEB-INF/lib/javax.inject-2.5.0-b42.jar!/javax/inject/Inject.class 2018-08-02 16:20:07,348 WARN [NiFi Web Server-21] o.e.jetty.annotations.AnnotationParser javax.inject.Named scanned from multiple locations: jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/work/jetty/nifi-update-attribute-ui-1.8.0-SNAPSHOT.war/webapp/WEB-INF/lib/javax.inject-1.jar!/javax/inject/Named.class, jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/work/jetty/nifi-update-attribute-ui-1.8.0-SNAPSHOT.war/webapp/WEB-INF/lib/javax.inject-2.5.0-b42.jar!/javax/inject/Named.class 2018-08-02 16:20:07,348 WARN [NiFi Web Server-21] o.e.jetty.annotations.AnnotationParser javax.inject.Provider scanned from multiple locations: jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/work/jetty/nifi-update-attribute-ui-1.8.0-SNAPSHOT.war/webapp/WEB-INF/lib/javax.inject-1.jar!/javax/inject/Provider.class, jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-S 2018-08-02 16:20:16,335 WARN [main] o.e.j.webapp.StandardDescriptorProcessor 
Duplicate mapping from / to default 2018-08-02 16:20:16,429 WARN [main] o.e.jetty.annotations.AnnotationParser Unknown asm implementation version, assuming version 393216 Todo: identify source of and cleanup warnings that now show up on application startup test/what if one just copied the contents of a new lib dir on top of their old lib dir as an upgrade process...will our work dirs get cleaned and restart properly add docs in the unpack method to explain why we move META-INF/bundled-dependencies to NAR-INF/bundled-dependencies this is just a technique to work with older/current nar creation approach. this is done because jetty's code assumes that META-INF is only in a directory path once or else it fails to find some tlds. but we want to keep META-INF for things like META-INF/MANIFEST.mf and maven bits. we might want to just move META-INF/bundled-dependencies to bundled-dependencies. The 'NAR-INF' part is not value add since the nar metadata is in META-INF/MANIFEST.mf and not easily moved due to jar/manifest loading code file a JIRA to change where we write them in the nar plugin to NAR-INF/bundled-dependencies directly Test secure/non-secure clusters/etc.. big thanks to @mcgilman for finding the needed dep change in nifi-web-ui and identifying why we needed a workaround for how we extract working dir nar deps due to recent jetty change Would it help if you would load NAR's without unpacking them to disk? that would not enable us to work around this issue and does not bring the benefits that led to unpacking in the first place @joewitt Thanks for the PR! When starting up in secure mode using a configuration that works with current master branch, I received some stack traces regarding the initialization of the SSLContext. There appears to be a runtime difference introduced here that affects the loading of providers. 1305 Caused by: java.security.NoSuchAlgorithmException: no such algorithm: JKS for provider BC 1306 at sun.security.jca.GetInstance.getService(GetInstance.java:87) 1307 at sun.security.jca.GetInstance.getInstance(GetInstance.java:206) 1308 at java.security.Security.getImpl(Security.java:698) 1309 at java.security.KeyStore.getInstance(KeyStore.java:896) 1310 ... 21 common frames omitted
gharchive/pull-request
2018-08-02T20:30:23
2025-04-01T06:37:54.417609
{ "authors": [ "joewitt", "mcgilman", "ottobackwards" ], "repo": "apache/nifi", "url": "https://github.com/apache/nifi/pull/2933", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
449552886
NIFI-6323 Changed URLs in XML files to use https:// where possible Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR This PR changes existing URLs (project description, mailing lists, dependency repositories, and schema references) to use the https:// protocol when possible. It also standardizes the location of the Maven 4.0.0 XML schema descriptor. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: For all changes: [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. [x] Has your PR been rebased against the latest commit within the target branch (typically master)? [ ] Is your initial contribution a single, squashed commit? Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not squash or use --force when pushing to allow for clean monitoring of changes. For code changes: [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? [ ] Have you written or updated unit tests to verify your changes? [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0? [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? For documentation related changes: [ ] Have you ensured that format looks appropriate for the output in which it is rendered? Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. Will review... did full clean build w/contrib check. all looks good and nifi itself still seems good. +1 (assuming gilman also is) Also +1. Successful build with cleaned mvn repo. Verified standalone and clustered functionality. Will merge.
gharchive/pull-request
2019-05-29T01:12:33
2025-04-01T06:37:54.426511
{ "authors": [ "alopresto", "joewitt", "mcgilman" ], "repo": "apache/nifi", "url": "https://github.com/apache/nifi/pull/3497", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
593511945
NIFI-7240: Fixed out-of-order Table Map events in CaptureChangeMySQL Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR When using triggers to update tables in MySQL, the Table Map events may be out of order with the corresponding Write Rows events for those tables. This PR keeps a temporary map of table IDs to cache keys in order to retrieve the correct table information during Write Rows processing. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: For all changes: [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. [x] Has your PR been rebased against the latest commit within the target branch (typically master)? [x] Is your initial contribution a single, squashed commit? Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not squash or use --force when pushing to allow for clean monitoring of changes. For code changes: [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? [x] Have you written or updated unit tests to verify your changes? [ ] Have you verified that the full build is successful on both JDK 8 and JDK 11? [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0? [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? For documentation related changes: [ ] Have you ensured that format looks appropriate for the output in which it is rendered? Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. I've successfully compiled it on JDK 8 and I can confirm that it does fix the issue, it would be great if this change could make it into the next stable version. Hey @mattyb149 - can you rebase against the main branch? Happy to get this in. I recently reviewed another PR related to this processor. @mattyb149 Are u still following this thread? There are two more issues opened about this now: https://issues.apache.org/jira/browse/NIFI-6914 https://issues.apache.org/jira/browse/NIFI-7252 This issue is drastically impacting my organization's ability to utilize NiFi for synchronizing changes from an older legacy system into our new one. Any way this PR can be re-opened or reviewed? @mattyb149 @pvillard31
gharchive/pull-request
2020-04-03T16:49:06
2025-04-01T06:37:54.436449
{ "authors": [ "BAGELreflex", "Zhouhao12345", "fwolfsjaeger", "mattyb149", "pvillard31" ], "repo": "apache/nifi", "url": "https://github.com/apache/nifi/pull/4179", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
624704761
NIFI-7486 Make InvokeHttp authentication properties able to read from variables. Description of PR InvokeHTTP Basic HTTP credentials support variable registry In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: For all changes: [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. [X] Has your PR been rebased against the latest commit within the target branch (typically master)? [X] Is your initial contribution a single, squashed commit? Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not squash or use --force when pushing to allow for clean monitoring of changes. For code changes: [X] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? [X] Have you written or updated unit tests to verify your changes? [ ] Have you verified that the full build is successful on JDK 8? [ ] Have you verified that the full build is successful on JDK 11? [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0? [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? For documentation related changes: [ ] Have you ensured that format looks appropriate for the output in which it is rendered? Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. Hey @adarmiento - does it really makes sense now that we have the parameters concept in NiFi? Besides variables should not be used for sensitive properties so I'd definitely not recommend using a variable on the password property. Hey @adarmiento - does it really makes sense now that we have the parameters concept in NiFi? Besides variables should not be used for sensitive properties so I'd definitely not recommend using a variable on the password property. Hello @pvillard31, I did not think about it, probably because I was still using old 1.9.2 until now. Probably could be a security bad choice then (I noted that also the proxy credentials allow variables, maybe we should then update those two properties for consistency) Thanks for the tips anyway, I'll keep this in mind for the future I think it's hard to remove variable support after the fact because of backward compatibility concerns (even though I'd be a +1 for removing variable support on sensitive properties). I think that (to be discussed though) when the community will start working around Apache NiFi 2.x, we would possibly remove the variables to only support parameters. If you think that a change would still make sense, let me know and we can have a look. In my case, I´m running NIFI under Kubernets and the usage of variables in sensitive properties is needed to use environment variables. These type of properties are injected in the container during the execution and then can be used in NIFI. 
If it´s not supported, I need to write in each processor the user and password. Maybe, in version 2.x, NIFI can have a different expression language scopes. One for environment variable and other for variables. @axdmoraes - thanks for the feedback. How is the flow published in NiFi? As part of the Docker image? via a volume? or from a NiFi Registry instance? Using Nifi registry instance. We use the same registry for different environments. In that case, would it be an option to use the CLI or REST API to set the parameters values after the flow has been deployed in NiFi from the NiFi Registry. On my side, with my k8s deployments, I'm doing something looking like the below # add the Registry client in NiFi (to adapt for your secured NiFi instances) curl 'http://nifi:8080/nifi-api/controller/registry-clients' -H 'Content-Type: application/json' --data-binary '{"revision":{"version":0},"component":{"name":"NiFi Registry","uri":"http://nifi-registry:18080/nifi-registry"}}' # Deploy the flow in NiFi (add the logic to retrieve the bucket/flow/version) /opt/nifi/nifi-toolkit-1.11.4/bin/cli.sh nifi pg-import -u http://nifi:8080 --bucketIdentifier $bucketID --flowIdentifier $flow --flowVersion $version # Get the parameter context ID paramContextID=`/opt/nifi/nifi-toolkit-1.11.4/bin/cli.sh nifi list-param-contexts -u http://nifi:8080 -ot json | grep -v cli.sh | jq -r '.parameterContexts[].id'` # Set the parameters values (you can do something dynamic based on your needs) /opt/nifi/nifi-toolkit-1.11.4/bin/cli.sh nifi set-param -u http://nifi:8080 --paramContextId $paramContextID --paramName MY_PARAMETER --paramValue MY_VALUE # Start the controller services (add your logic to retrieve the PG ID) /opt/nifi/nifi-toolkit-1.11.4/bin/cli.sh nifi pg-enable-services -u http://nifi:8080 --processGroupId $pgid # Start the process group (add your logic to retrieve the PG ID) /opt/nifi/nifi-toolkit-1.11.4/bin/cli.sh nifi pg-start -u http://nifi:8080 --processGroupId $pgid Thanks for the suggestion. I will try. We were using version 1.9.2 and I didn't know about parameters context.
gharchive/pull-request
2020-05-26T08:40:43
2025-04-01T06:37:54.448955
{ "authors": [ "adarmiento", "axdmoraes", "pvillard31" ], "repo": "apache/nifi", "url": "https://github.com/apache/nifi/pull/4298", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1386254686
NIFI-10549: Remove group name wildcard from assembly for nifi-ranger-resources Summary NIFI-10549 Tracking Please complete the following tracking steps prior to pull request creation. Issue Tracking [x] Apache NiFi Jira issue created Pull Request Tracking [x] Pull Request title starts with Apache NiFi Jira issue number, such as NIFI-00000 [x] Pull Request commit message starts with Apache NiFi Jira issue number, as such NIFI-00000 Pull Request Formatting [x] Pull Request based on current revision of the main branch [x] Pull Request refers to a feature branch with one commit containing changes Verification Please indicate the verification steps performed prior to pull request creation. Build [ ] Build completed using mvn clean install -P contrib-check [ ] JDK 8 [x] JDK 11 [ ] JDK 17 Licensing [ ] New dependencies are compatible with the Apache License 2.0 according to the License Policy [ ] New dependencies are documented in applicable LICENSE and NOTICE files Documentation [ ] Documentation formatting appears as expected in rendered files LGTM Looks good, will merge
gharchive/pull-request
2022-09-26T15:10:11
2025-04-01T06:37:54.455802
{ "authors": [ "bbende", "mattyb149", "tamas-horvath" ], "repo": "apache/nifi", "url": "https://github.com/apache/nifi/pull/6448", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2583080286
nshlib: Add startup process buried points Summary nshlib: Add startup process buried points trace: fix trace dump crash Impact Testing sim [Experimental Bot, please feedback here] No, this PR does not fully meet the NuttX requirements. Missing Information: Summary: Why are these changes necessary? Bug fixes? Enhancements? How do the changes work? Provide specifics about the buried points added and the cause of the trace dump crash and how it was fixed. Issue References: Link any relevant NuttX or NuttX Apps issues. Impact: Address all impact categories (user, build, hardware, documentation, security, compatibility). Even if there's no impact, state "NO" explicitly for each. Testing: Insufficient Detail: "sim" isn't descriptive enough. Specify the simulator (e.g., qemu-rv32) and the configuration used. Missing Logs: Provide actual testing logs from before and after the changes. Recommendations: Expand the Summary: Explain the "why" and "how" for each change. Complete the Impact Assessment: Go through each category and provide a clear "YES" or "NO" along with a brief explanation if necessary. Provide Detailed Testing Information: Specify the exact simulator, configurations, and include the relevant testing logs. My only remark is this is very useful feature, probably will be developed further in future, but as for quite a big change better description in the PR and some additional comments in the commit message would help understand the change better by others that want to use it too :-) Is existing API impacted anyhow? Will old code work the same way or needs an update? Is documentation update required / necessary? Maybe it would be good to provide documentation on how to use new functionalities? Newcomers tend to start at documentation so share your inventions there too :-) If the buffer is too small, sure we can increase the buffer, but also overflow checks are necessary?
gharchive/pull-request
2024-10-12T14:15:59
2025-04-01T06:37:54.464182
{ "authors": [ "Gary-Hobson", "cederom", "nuttxpr" ], "repo": "apache/nuttx-apps", "url": "https://github.com/apache/nuttx-apps/pull/2708", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1503685014
LPC17_40 CAN driver SocketCAN enforce TX FIFO behaviour Summary The SocketCAN driver expects FIFO behaviour, yet LPC17_40 didn't enforce this. This change enables prioritization on the transmit function to enforce it. Impact Fixes transmit behaviour Testing Tested on an LPC1768 Could you please squash commits? Could you please squash commits? And now we have documentation to explain how to do it: https://nuttx.apache.org/docs/latest/contributing/making-changes.html#how-to-include-the-suggestions-on-your-pull-request Of course, @PetervdPerk is not a new kid on the block, the idea was to help new contributors Let's ignore the broken macOS CI.
gharchive/pull-request
2022-12-19T22:13:19
2025-04-01T06:37:54.467233
{ "authors": [ "PetervdPerk", "acassis", "pkarashchenko", "xiaoxiang781216" ], "repo": "apache/nuttx", "url": "https://github.com/apache/nuttx/pull/7933", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1020618876
ORC-1021: Add -fno-omit-frame-pointer in DEBUG and RELWITHDEBINFO builds What changes were proposed in this pull request? This PR adds -fno-omit-frame-pointer gcc option in DEBUG and RELWITHDEBINFO builds, which helps to generate stacktrace in debugging and profiling. Refs: https://www.brendangregg.com/perf.html#StackTraces https://issues.apache.org/jira/browse/IMPALA-4132 Why are the changes needed? Described as above. How was this patch tested? Built in ubuntu16.04 with gcc 8.4.0. +1 LGTM I backported this to branch-1.7.
gharchive/pull-request
2021-10-08T02:34:43
2025-04-01T06:37:54.474620
{ "authors": [ "dongjoon-hyun", "guiyanakuang", "stiga-huang" ], "repo": "apache/orc", "url": "https://github.com/apache/orc/pull/932", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
756744437
HDDS-4549. Fix typos in documents What changes were proposed in this pull request? Fix typos in documents What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-4549 How was this patch tested? No need @cku328 done, thanks Thanks @lamber-ken for working on this. I will merge it later.
gharchive/pull-request
2020-12-04T02:26:45
2025-04-01T06:37:54.476622
{ "authors": [ "cku328", "lamber-ken" ], "repo": "apache/ozone", "url": "https://github.com/apache/ozone/pull/1655", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
854526178
HDDS-5084. Include HISTORY.md/SECURITY.md/CONTRIBUTING.md in the release artifacts. JIRA: https://issues.apache.org/jira/browse/HDDS-5084 What changes were proposed in this pull request? During the ozone-1.1.0 vote I realized that HISTORY.md/SECURITY.md/CONTRIBUTING.md files are missing from the bin and src artifacts. I think they include very useful information and would be better to include them in the release artifacts. How was this patch tested? mvn clean install -Dmaven.javadoc.skip=true -DskipTests -Psign,dist,src -Dtar -Dgpg.keyname=$CODESIGNINGKEY cd hadoop-ozone/dist/target/ tar tzf hadoop-ozone-1.1.0-SNAPSHOT.tar.gz tar tzf hadoop-ozone-1.1.0-src-SNAPSHOT.tar.gz We should update the History.md, it is quite old right now. @mukul1987 This is a good suggestion, but I disagree with the other statement: Including it in the current form in the release doesn't makes sense. It is not going to be part of the 1.1.0 release, so there is plenty of time to update it until the next one. It's not only about HISTORY.md, but other two docs as well. Writing prose for the history doc is quite distinct from updating a script to copy some files. It may very well be updated by other people, not necessarily @elek. So I think this change is fine in its scope. Fair point @adoroszlai. Can we please create a followup jira and mark it as a blocker for 1.2.0 release? I feel if we will update the history by the next release, then we should be good. Thanks the suggestion @mukul1987, very good point. It seems to be a small update, so I created the patch itself (please see #2149). And agree: as we have PRs for both problems in our radar we can merge the two PRs in any order. Thanks Marton. +1 for this patch as well. I have already added +1 to the other patch. Thanks for updating the file. Thanks the review @mukul1987 @ayushtkn and @adoroszlai I am merging it after the green build.
gharchive/pull-request
2021-04-09T13:28:11
2025-04-01T06:37:54.483420
{ "authors": [ "adoroszlai", "elek", "mukul1987" ], "repo": "apache/ozone", "url": "https://github.com/apache/ozone/pull/2140", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2368057279
Bump Thrift to 0.16.0 Thrift 0.16.0 has been released https://github.com/apache/thrift/releases/tag/v0.16.0 Reporter: Vinoo Ganesh / @vinooganesh Assignee: Vinoo Ganesh / @vinooganesh Related issues: Release 1.12.3 (is depended upon by) Note: This issue was originally created as PARQUET-2128. Please see the migration documentation for further details. Vinoo Ganesh / @vinooganesh: Fixed in https://github.com/apache/parquet-mr/pull/948 Steve Loughran / @steveloughran: homebrew doesn't have anything < 0.18.0, which is Java 11+ only, so not something parquet can switch to. Which means that we have to stop using homebrew here and take control of our build dependencies ourselves. I've already done that with maven and openjdk as brew is too enthusiastic about breaking my workflow. None of us can rely on homebrew or use "homebrew doesn't have this" as a reason for reverting a change. All old thrift releases can be found at https://archive.apache.org/dist/thrift/
gharchive/issue
2022-02-20T20:59:17
2025-04-01T06:37:54.489825
{ "authors": [ "asfimport" ], "repo": "apache/parquet-java", "url": "https://github.com/apache/parquet-java/issues/2670", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1631089429
optimize queries where lhs and rhs of predicate are equal This is a minor performance bugfix. It fixes NullPointerExceptions in existing optimizers when performing WHERE 1=1 queries; these would fail because the filter expression had no function call. I noticed that WHERE 1=1 was not simplified, but WHERE col1>0 AND 1=1 was actually being simplified in the NumericalFilterOptimizer. So I put that part in a separate class to be used more generally for future cases like this; it does a little more work than expected once it sees an AND/OR/NOT expression. Something else is converting 1=1 to literal TRUE, but I'm not sure where that is. This adds an IdenticalPredicateFilterOptimizer class that converts WHERE 1=1 or WHERE "colA"!="colA" to TRUE/FALSE respectively. I've added a bunch more test cases, and I've tested manually in the Quickstart app. This is my first contribution to the query parsing part of the code base, so I don't have a great sense of what test coverage looks like. But I imagine between unit and integration tests, this should catch any glaring breaks?
gharchive/pull-request
2023-03-19T19:37:53
2025-04-01T06:37:54.506635
{ "authors": [ "codecov-commenter", "jadami10" ], "repo": "apache/pinot", "url": "https://github.com/apache/pinot/pull/10444", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1842980175
separate tags with commas as indicated in action doc @erichgess found that https://github.com/apache/pinot/pull/10528 broke the codecov coverate. In fact coverage was uploaded, but tags were incorrectly configured and therefore they are not uploaded with the expected metadata. As indicated in https://github.com/codecov/codecov-action, different tags should be separated by commas. Codecov Report Merging #11303 (7b23f93) into master (6fa4268) will increase coverage by 0.00%. The diff coverage is n/a. @@ Coverage Diff @@ ## master #11303 +/- ## ========================================= Coverage 0.11% 0.11% ========================================= Files 2231 2157 -74 Lines 120139 116982 -3157 Branches 18218 17772 -446 ========================================= Hits 137 137 + Misses 119982 116825 -3157 Partials 20 20 Flag Coverage Δ integration1temurin11 ? integration1temurin17 ? integration1temurin20 ? integration2temurin11 ? integration2temurin17 ? integration2temurin20 ? java-20 0.11% <ø> (?) temurin 0.11% <ø> (?) unittests1temurin11 ? unittests1temurin17 ? unittests1temurin20 ? unittests2 0.11% <ø> (?) unittests2temurin11 ? unittests2temurin17 ? unittests2temurin20 ? Flags with carried forward coverage won't be shown. Click here to find out more. see 76 files with indirect coverage changes :mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
gharchive/pull-request
2023-08-09T10:54:02
2025-04-01T06:37:54.518353
{ "authors": [ "codecov-commenter", "gortiz" ], "repo": "apache/pinot", "url": "https://github.com/apache/pinot/pull/11303", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1920854058
Added UTs for null handling in CaseTransform function. Added unit test cases for null handling in CaseTransformFunction Also, added test case for isNullLiteralTransformation function asked in the PR cc: @shenyu0127 @Jackie-Jiang Codecov Report Merging #11721 (6a96f1a) into master (ae16812) will decrease coverage by 48.69%. The diff coverage is n/a. @@ Coverage Diff @@ ## master #11721 +/- ## ============================================= - Coverage 63.11% 14.42% -48.69% + Complexity 1117 201 -916 ============================================= Files 2342 2342 Lines 125802 125800 -2 Branches 19336 19336 ============================================= - Hits 79395 18150 -61245 - Misses 40745 106116 +65371 + Partials 5662 1534 -4128 Flag Coverage Δ integration ? integration1 ? integration2 ? java-11 14.42% <ø> (-48.64%) :arrow_down: java-17 ? java-20 ? temurin 14.42% <ø> (-48.69%) :arrow_down: unittests 14.42% <ø> (-48.68%) :arrow_down: unittests1 ? unittests2 14.42% <ø> (-0.06%) :arrow_down: Flags with carried forward coverage won't be shown. Click here to find out more. Files Coverage Δ ...ator/transform/function/CaseTransformFunction.java 0.00% <ø> (-57.98%) :arrow_down: ... and 1521 files with indirect coverage changes :mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
gharchive/pull-request
2023-10-01T16:33:52
2025-04-01T06:37:54.533875
{ "authors": [ "abhioncbr", "codecov-commenter" ], "repo": "apache/pinot", "url": "https://github.com/apache/pinot/pull/11721", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2523582643
Enhance optimizeDictionary to optionally optimize var-width type cols Changes Add noDictionaryCardinalityRatioThreshold config. If populated, and optimizeDictionary is true, then Pinot will override dictionary encoding with raw encoding based on the condition cardinality / numDocs > noDictionaryCardinalityRatioThreshold. If the new config is omitted, optimizeDictionary behavior is unchanged Motivation When storing log data, often columns will contain many repeated values. It's useful to take advantage of Pinot's dictionary encoding which usually provides better storage/query performance for these columns. Dictionary encoding high cardinality columns is cost/storage prohibitive, so we'd like to avoid applying dictionary encoding unless it is safe. Since column cardinality/values can change rapidly we'd like to make these decisions within Pinot itself. In our experience, cardinality is a good indicator of whether to dictionary or raw encode a col. With a 0.10 threshold (10%), we see roughly 40-60% improvement in storage compared to raw encoding everything. Codecov Report Attention: Patch coverage is 0% with 26 lines in your changes missing coverage. Please review. Project coverage is 0.00%. Comparing base (59551e4) to head (8c368f9). Report is 1029 commits behind head on master. Files with missing lines Patch % Lines .../segment/index/dictionary/DictionaryIndexType.java 0.00% 12 Missing :warning: ...ocal/segment/index/loader/ForwardIndexHandler.java 0.00% 4 Missing :warning: ...ot/segment/spi/creator/SegmentGeneratorConfig.java 0.00% 4 Missing :warning: ...ment/creator/impl/SegmentColumnarIndexCreator.java 0.00% 3 Missing :warning: .../apache/pinot/spi/config/table/IndexingConfig.java 0.00% 3 Missing :warning: :exclamation: There is a different number of reports uploaded between BASE (59551e4) and HEAD (8c368f9). Click for more details. HEAD has 48 uploads less than BASE Flag BASE (59551e4) HEAD (8c368f9) integration 7 2 integration2 3 2 temurin 12 2 java-21 7 2 skip-bytebuffers-true 3 1 skip-bytebuffers-false 7 1 unittests 5 0 unittests1 2 0 java-11 5 0 unittests2 3 0 integration1 2 0 custom-integration1 2 0 Additional details and impacted files @@ Coverage Diff @@ ## master #13994 +/- ## ============================================= - Coverage 61.75% 0.00% -61.76% ============================================= Files 2436 2514 +78 Lines 133233 139046 +5813 Branches 20636 21371 +735 ============================================= - Hits 82274 0 -82274 - Misses 44911 139046 +94135 + Partials 6048 0 -6048 Flag Coverage Δ custom-integration1 ? integration 0.00% <0.00%> (-0.01%) :arrow_down: integration1 ? integration2 0.00% <0.00%> (ø) java-11 ? java-21 0.00% <0.00%> (-61.63%) :arrow_down: skip-bytebuffers-false 0.00% <0.00%> (-61.75%) :arrow_down: skip-bytebuffers-true 0.00% <0.00%> (-27.73%) :arrow_down: temurin 0.00% <0.00%> (-61.76%) :arrow_down: unittests ? unittests1 ? unittests2 ? Flags with carried forward coverage won't be shown. Click here to find out more. :umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here. Certain settings are impossible, e.g. only apply cardinality based optimization and skip optimization for fixed-length type. I'm not too worried about this limitation, but it would be good if we make it possible If I understand the concern correctly, I think users can set noDictionaryCardinalityRatioThreshold = 0 to effectively skip optimization for fixed-length type? 
I'd prefer to merge as is, since providing a way to use the cardinality ratio threshold instead of size ratio threshold means making the old size ratio threshold config optional, which is backwards compatible and could be done in the future if the need is found Basically we specify: Size based only for fixed length type Cardinality based only for var-length type Let's document this behavior so that user doesn't expect wrong type being applied
gharchive/pull-request
2024-09-13T00:28:12
2025-04-01T06:37:54.566056
{ "authors": [ "Jackie-Jiang", "codecov-commenter", "itschrispeck" ], "repo": "apache/pinot", "url": "https://github.com/apache/pinot/pull/13994", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1250721694
simplify segment pruning Segment pruning feels over-engineered: DataSchemaSegmentPruner and ValidSegmentPruner are just one liners which can be applied when necessary ColumnValueSegmentPruner and SelectionQuerySegmentPruner are mutually exclusive so never both need to run, and these cases can be identified easily by examining the QueryContext No new segment pruners have been in a very long time This leads to inefficiencies like None of the pruners inline Lots of lists are created unnecessarily We end up tracing one liner checks This PR removes the two trivial pruners and applies them inline within the two remaining pruners. It adds a new method to identify based on the query context whether the pruner should run at all. Codecov Report Merging #8790 (853cdc9) into master (c4549e2) will decrease coverage by 48.69%. The diff coverage is 0.00%. @@ Coverage Diff @@ ## master #8790 +/- ## ============================================= - Coverage 62.86% 14.17% -48.70% + Complexity 4601 168 -4433 ============================================= Files 1690 1688 -2 Lines 89212 89211 -1 Branches 13411 13415 +4 ============================================= - Hits 56082 12642 -43440 - Misses 29079 75623 +46544 + Partials 4051 946 -3105 Flag Coverage Δ unittests1 ? unittests2 14.17% <0.00%> (-0.03%) :arrow_down: Flags with carried forward coverage won't be shown. Click here to find out more. Impacted Files Coverage Δ ...ot/core/query/pruner/ColumnValueSegmentPruner.java 0.00% <0.00%> (-72.17%) :arrow_down: .../apache/pinot/core/query/pruner/SegmentPruner.java 0.00% <0.00%> (-88.89%) :arrow_down: ...pinot/core/query/pruner/SegmentPrunerProvider.java 0.00% <0.00%> (-66.67%) :arrow_down: .../pinot/core/query/pruner/SegmentPrunerService.java 0.00% <0.00%> (-100.00%) :arrow_down: ...core/query/pruner/SelectionQuerySegmentPruner.java 0.00% <0.00%> (-86.37%) :arrow_down: ...src/main/java/org/apache/pinot/sql/FilterKind.java 0.00% <0.00%> (-100.00%) :arrow_down: ...ain/java/org/apache/pinot/core/data/table/Key.java 0.00% <0.00%> (-100.00%) :arrow_down: ...in/java/org/apache/pinot/spi/utils/StringUtil.java 0.00% <0.00%> (-100.00%) :arrow_down: .../java/org/apache/pinot/spi/utils/BooleanUtils.java 0.00% <0.00%> (-100.00%) :arrow_down: .../java/org/apache/pinot/core/data/table/Record.java 0.00% <0.00%> (-100.00%) :arrow_down: ... and 1137 more Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update c4549e2...853cdc9. Read the comment docs. LGTM
gharchive/pull-request
2022-05-27T12:38:58
2025-04-01T06:37:54.587725
{ "authors": [ "codecov-commenter", "gortiz", "richardstartin" ], "repo": "apache/pinot", "url": "https://github.com/apache/pinot/pull/8790", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
372400465
[PIO-187] Livedoc with Docker Installation Update After #462, we have Docker support in the repo. This PR will modify the corresponding updates in Livedoc. @marevol, please help to check if I make any mistake. Thanks for writing this up! One small request: we have to declare that any Docker container image published to Docker Hub be not official ASF releases. Those can only be referred to as convenience binaries. We can say that the Docker build files in our Git repo as official though. For more information please refer to http://www.apache.org/legal/release-policy.html. @dszeto, I will modify it for sure. thanks! LGTM. @marevol do you want to take a second look? Going to merge this now. Thanks @Wei-1 ! @marevol if you see issues please open a separate ticket.
gharchive/pull-request
2018-10-22T05:47:03
2025-04-01T06:37:54.591281
{ "authors": [ "Wei-1", "dszeto" ], "repo": "apache/predictionio", "url": "https://github.com/apache/predictionio/pull/486", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1150019402
[Broker] Fix producerFuture not completed in ServerCnx#handleProducer Motivation producerFuture should be completed and removed from producers when exception occurs. Modifications Add producerFuture.completeExceptionally Verifying this change [ ] Make sure that the change passes the CI checks. This change is a trivial rework / code cleanup without any test coverage. Does this pull request potentially affect one of the following parts: If yes was chosen, please highlight the changes Dependencies (does it add or upgrade a dependency): (no) The public API: (no) The schema: (no) The default values of configurations: (no) The wire protocol: (no) The rest endpoints: (no) The admin cli options: (no) Anything that affects deployment: (no) Documentation Check the box below and label this PR (if you have committer privilege). Need to update docs? [x] no-need-doc bug fix. @codelipenghui - it'd be great to include this in 2.10.0 rc 2, if possible. @michaeljmarshall Yes, I have cherry-picked to branch-2.10
gharchive/pull-request
2022-02-25T04:08:25
2025-04-01T06:37:54.599891
{ "authors": [ "Jason918", "codelipenghui", "michaeljmarshall" ], "repo": "apache/pulsar", "url": "https://github.com/apache/pulsar/pull/14467", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2435688227
[improve][build] Move docker-push profile to submodule Motivation The profile refactoring breaks the pulsar release in the #23091. Modifications Move docker-push profile to docker/pulsar and docker/pulsar-all modules Documentation [ ] doc [ ] doc-required [x] doc-not-needed [ ] doc-complete Codecov Report All modified and coverable lines are covered by tests :white_check_mark: Project coverage is 73.44%. Comparing base (bbc6224) to head (6022c9e). Report is 479 commits behind head on master. Additional details and impacted files @@ Coverage Diff @@ ## master #23093 +/- ## ============================================ - Coverage 73.57% 73.44% -0.14% - Complexity 32624 33524 +900 ============================================ Files 1877 1919 +42 Lines 139502 144087 +4585 Branches 15299 15745 +446 ============================================ + Hits 102638 105824 +3186 - Misses 28908 30145 +1237 - Partials 7956 8118 +162 Flag Coverage Δ inttests 27.58% <ø> (+2.99%) :arrow_up: systests 24.76% <ø> (+0.43%) :arrow_up: unittests 72.51% <ø> (-0.34%) :arrow_down: Flags with carried forward coverage won't be shown. Click here to find out more. see 516 files with indirect coverage changes
gharchive/pull-request
2024-07-29T15:10:35
2025-04-01T06:37:54.610539
{ "authors": [ "codecov-commenter", "nodece" ], "repo": "apache/pulsar", "url": "https://github.com/apache/pulsar/pull/23093", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1323472602
[Conf] Add controller start config file Add the configuration file for starting the controller to the configuration file directory, for the case where the controller is deployed independently. Well, thanks for your attention, could you please submit a PR to add it? Well, thanks for your attention, could you please submit a PR to add it? Yes, I will submit a PR for this
gharchive/issue
2022-07-31T14:09:54
2025-04-01T06:37:54.620822
{ "authors": [ "hzh0425", "mxsm" ], "repo": "apache/rocketmq", "url": "https://github.com/apache/rocketmq/issues/4746", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
576334626
[ISSUE #1770]Add a query message trace command in mqadmin. What is the purpose of the change Add a query message trace command in mqadmin. Brief changelog Add a query message trace command in mqadmin. Coverage decreased (-0.09%) to 50.829% when pulling 4fca91fa6f3c91ba0b6c8f13f4fdfcfffb2401d7 on zhangjidi2016:add_query_trace_command into 3974677f04815609951c17059d85d3795eb51247 on apache:develop. The command result in console,@zongtanghu @duhenglucky ,please help to review it,thanks!
gharchive/pull-request
2020-03-05T15:26:27
2025-04-01T06:37:54.623600
{ "authors": [ "coveralls", "zhangjidi2016" ], "repo": "apache/rocketmq", "url": "https://github.com/apache/rocketmq/pull/1824", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
626140279
Fix 2046 fix selectOneMessageQueue must select lastFailBroker As you say, if lastBrokerName is null, it will select one MessageQueue, say MessageQueue-a. If some exception occurs while producing to MessageQueue-a, it will keep selecting MessageQueue-a from then on. Coverage increased (+0.2%) to 51.023% when pulling aea101b6e26f05d3677e5842702d42312537d921 on HaoTianZhao:fix-2046 into 8ef01a6c635f6972847c40d5540b1945180d7cbd on apache:master.
gharchive/pull-request
2020-05-28T01:14:29
2025-04-01T06:37:54.625675
{ "authors": [ "HaoTianZhao", "coveralls" ], "repo": "apache/rocketmq", "url": "https://github.com/apache/rocketmq/pull/2047", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1662878073
[ISSUE #6576] Fix pop lmq message Make sure set the target branch to develop What is the purpose of the change fix #6576 Brief changelog If use pop to consume a LMQ message, recode message, set LMQ's POP_CK, topic, queueId and queueOffset Verifying this change Follow this checklist to help us incorporate your contribution quickly and easily. Notice, it would be helpful if you could finish the following 5 checklist(the last one is not necessary)before request the community to review your PR. [x] Make sure there is a Github issue filed for the change (usually before you start working on it). Trivial changes like typos do not require a Github issue. Your pull request should address just this issue, without pulling in other changes - one PR resolves one issue. [x] Format the pull request title like [ISSUE #123] Fix UnknownException when host config not exist. Each commit in the pull request should have a meaningful subject line and body. [x] Write a pull request description that is detailed enough to understand what the pull request does, how, and why. [x] Write necessary unit-test(over 80% coverage) to verify your logic correction, more mock a little better when cross module dependency exist. If the new feature or significant change is committed, please remember to add integration-test in test module. [x] Run mvn -B clean apache-rat:check findbugs:findbugs checkstyle:checkstyle to make sure basic checks pass. Run mvn clean install -DskipITs to make sure unit-test pass. Run mvn clean test-compile failsafe:integration-test to make sure integration-test pass. [ ] If this contribution is large, please file an Apache Individual Contributor License Agreement. Codecov Report Merging #6577 (b4d9cdf) into develop (f44a1c3) will increase coverage by 0.03%. The diff coverage is 77.77%. @@ Coverage Diff @@ ## develop #6577 +/- ## ============================================= + Coverage 43.08% 43.11% +0.03% - Complexity 8994 8998 +4 ============================================= Files 1107 1107 Lines 78257 78278 +21 Branches 10201 10203 +2 ============================================= + Hits 33716 33750 +34 + Misses 40318 40305 -13 Partials 4223 4223 Impacted Files Coverage Δ ...rocketmq/broker/processor/PopMessageProcessor.java 39.67% <77.77%> (+2.06%) :arrow_up: ... and 16 files with indirect coverage changes :mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
gharchive/pull-request
2023-04-11T16:43:26
2025-04-01T06:37:54.636326
{ "authors": [ "HScarb", "codecov-commenter" ], "repo": "apache/rocketmq", "url": "https://github.com/apache/rocketmq/pull/6577", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2554415918
[Feature][Transformer] Supporting fake data generation in transformer for sensitive data masking options Search before asking [X] I had searched in the feature and found no similar feature requirement. Description Hi All Following to the short discussion I create this issue. https://github.com/apache/seatunnel/discussions/7746 So there is an idea and goal to source and sink completely full postgres (or etc) to another postgres (source) with data masking or generation fake data for sensitive attributes. Good to know that there are a lot of fakesource available with random generators but at this moment I don't know is it working in transformer or not. Also some good news that there is dynamic compilation available for some completely custom cases. What do you think? Usage Scenario Some maybe will try to use Transformer in case of masking and fake generation. The real case is to make data synchronization from prod to test environment with some predefined option by user request [ ] Support fake data generation in transformer for sensitive data masking options in full DB sync case or partial Related issues Supporting fake data generation in transformer Are you willing to submit a PR? [ ] Yes I am willing to submit a PR! Code of Conduct [X] I agree to follow this project's Code of Conduct How about support join with dimension table (fake source is one type of dimension table)? I think we can extend this requirement to any source. eg: join with jdbc transform { JoinWithSource { join_on = "source.id = type_bin.item_id" source = [ Jdbc { url = "jdbc:mysql://localhost/test?serverTimezone=GMT%2b8" driver = "com.mysql.cj.jdbc.Driver" connection_check_timeout_sec = 100 user = "root" password = "123456" query = "select * from type_bin" } ] } } or join with fake source transform { JoinWithSource { join_on = "source.id = fake.c_int" source = [ FakeSource { row.num = 5 schema { fields { c_string = string c_tinyint = tinyint c_smallint = smallint c_int = int c_bigint = bigint c_float = float c_double = double } } } ] } } Then we can use SQL transform to filter data you want. Or join with sql transform env { parallelism = 10 job.mode = "BATCH" } source { Jdbc { url = "jdbc:mysql://localhost/test?serverTimezone=GMT%2b8" driver = "com.mysql.cj.jdbc.Driver" connection_check_timeout_sec = 100 user = "root" password = "123456" table_path = "testdb.table1" query = "select * from testdb.table1" split.size = 10000 } FakeSource { row.num = 5 schema { fields { c_string = string c_tinyint = tinyint c_smallint = smallint c_int = int c_bigint = bigint c_float = float c_double = double } } } } transform { sql { query = "select * from table1 join table2 on table1.id = table2.id" } } sink { Console {} }
gharchive/issue
2024-09-28T17:22:59
2025-04-01T06:37:54.644020
{ "authors": [ "Hisoka-X", "YuriyGavrilov" ], "repo": "apache/seatunnel", "url": "https://github.com/apache/seatunnel/issues/7766", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1405650260
[BUG] Shenyu-Admin > BasicConfig > Plugin list page 2 cannot display Is there an existing issue for this? [X] I have searched the existing issues Current Behavior I do not know when this started, but the second page of BasicConfig > Plugin cannot be displayed (empty white page), while the first and third pages are OK (page size is 12). The F12 console of Chrome shows this: Expected Behavior No response Steps To Reproduce No response Environment ShenYu version(s): 2.5.0 Debug logs No response Anything else? No response When I enable the Request plugin, no page of Shenyu-Admin can be displayed, so I have to disable the Request plugin by modifying the database table (plugin.enable => 0). The Request plugin is on page 2 of the plugin list; maybe it causes the problem? The stacktrace of Shenyu-Admin is below: Can you check that you executed the right SQL for those plugins?
gharchive/issue
2022-10-12T06:34:09
2025-04-01T06:37:54.687255
{ "authors": [ "Once2012", "yu199195" ], "repo": "apache/shenyu", "url": "https://github.com/apache/shenyu/issues/4072", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1877052634
[type:refactor] Put the creation logic of HttpClient in a separate class The creation logic of HttpClient is too complicated. A better way than putting it in the HttpClientPluginConfiguration is to write it into a separate class. Codecov Report Merging #5107 (d9a6391) into master (74cc301) will decrease coverage by 0.16%. Report is 1 commits behind head on master. The diff coverage is 48.05%. :exclamation: Current head d9a6391 differs from pull request most recent head cb4230e. Consider uploading reports for the commit cb4230e to get more accurate results @@ Coverage Diff @@ ## master #5107 +/- ## ============================================ - Coverage 61.81% 61.65% -0.16% + Complexity 8497 8482 -15 ============================================ Files 1227 1228 +1 Lines 36963 36958 -5 Branches 3514 3511 -3 ============================================ - Hits 22849 22787 -62 - Misses 12156 12216 +60 + Partials 1958 1955 -3 Files Changed Coverage Δ ...t/starter/plugin/httpclient/HttpClientFactory.java 47.36% <47.36%> (ø) ...ugin/httpclient/HttpClientPluginConfiguration.java 81.25% <100.00%> (+30.67%) :arrow_up: ... and 33 files with indirect coverage changes :mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
gharchive/pull-request
2023-09-01T09:11:41
2025-04-01T06:37:54.694622
{ "authors": [ "codecov-commenter", "xuziyang" ], "repo": "apache/shenyu", "url": "https://github.com/apache/shenyu/pull/5107", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1246221402
Fix/sql temporal @desruisseaux I'd like your review please (I still cannot assign the pull request to you). PR content: added tests for date/time conversion from SQL to Java types. Changed the output API of date conversion from java.sql.Date to java.time.LocalDate. Fixed timezone management for the TIME WITH TIMEZONE and TIMESTAMP WITH TIMEZONE SQL types. Will apply this pull request with one amendment. This pull request changes the TIMESTAMP_WITH_TIMEZONE mapping from java.time.OffsetDateTime to java.time.Instant. I propose to keep the previous OffsetDateTime. Some searches on the internet suggest that this mapping is part of the JDBC 4.2 specification: JDBC Maintenance Release 4.2 Using Java 8 Date and Time classes in PostgreSQL Mapping between PostgreSQL and Java date/time types
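For illustration, a minimal Scala sketch (not taken from the SIS code base) of reading DATE and TIMESTAMP WITH TIME ZONE columns through the JDBC 4.2 `getObject` overload, which is the mapping discussed above; the connection URL, table, and column names are made up:

```scala
import java.time.{LocalDate, OffsetDateTime}
import java.sql.DriverManager

// Hypothetical connection and schema, purely for illustration.
val conn = DriverManager.getConnection("jdbc:postgresql://localhost/demo")
val rs = conn.createStatement().executeQuery("SELECT event_date, event_ts FROM events")
while (rs.next()) {
  // JDBC 4.2 lets drivers expose java.time types directly:
  val date = rs.getObject("event_date", classOf[LocalDate])     // DATE -> LocalDate
  val ts   = rs.getObject("event_ts", classOf[OffsetDateTime])  // TIMESTAMP WITH TIME ZONE -> OffsetDateTime
  println(s"$date / $ts")
}
rs.close(); conn.close()
```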
gharchive/pull-request
2022-05-24T08:55:52
2025-04-01T06:37:54.698685
{ "authors": [ "alexismanin", "desruisseaux" ], "repo": "apache/sis", "url": "https://github.com/apache/sis/pull/27", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
783043086
On the log page of the SkyWalking UI, the error page field and call stack reported by client-js cannot be viewed Please answer these questions before submitting your issue. Why do you submit this issue? [ ] Question or discussion [x] Bug [ ] Requirement [ ] Feature or performance improvement Question What do you want to know? Bug Which version of SkyWalking, OS, and JRE? SkyWalking client js SkyWalking v8.3.0 for H2/MySQL/TiDB/InfluxDB/ElasticSearch 7 Which company or project? What happened? If possible, provide a way to reproduce the error. e.g. demo application, component version. The Vue-based front end reported an error in the browser console, but the UI management page does not seem to display the fields and call stack correctly. Could more fields be shown, along with the call stack? Requirement or improvement Please describe your requirements or improvement suggestions. Please use English on Github.
gharchive/issue
2021-01-11T03:47:23
2025-04-01T06:37:54.703917
{ "authors": [ "withyanni", "wu-sheng" ], "repo": "apache/skywalking", "url": "https://github.com/apache/skywalking/issues/6166", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
802056734
add debug info when type not match in instrumentation process [x] If this is non-trivial feature, paste the links/URLs to the design doc. [x] Update the documentation to include this new feature. [x] Tests(including UT, IT, E2E) are added to verify the new feature. [x] If it's UI related, attach the screenshots below. [x] If this pull request closes/resolves/fixes an existing issue, replace the issue number. Closes #. [x] Update the CHANGES log. Principle, we should not add log for helping developing. The logs are for product env debug only. Principle, we should not add log for helping developing. The logs are for product env debug only. I still do not think the log is just for product env. It's a worth as long as it can help any people resolve their problems and save their time. :) Principle, we should not add log for helping developing. The logs are for product env debug only. I still do not think the log is just for product env. It's a worth as long as it can help any people resolve their problems and save their time. :) And I think adding logs here is cheap. Then a project has to face countless PR to add logs, because any line of codes could have potential risk, people could ask to log out anything. Then your system breaks. All internal systems are being added logs randomly, one way, it is a bad thing, and also, you only face limited developers, so damages are controllable. But in the open source, especially like SkyWalking, we have 400+ code contributors, and more in potential. We cant afford to argue with every one, this log is acceptable, and others are not. The easy, clear, and affordable principle could be only, you will need this in the runtime, or are facing the requirements. Then a project has to face countless PR to add logs, because any line of codes could have potential risk, people could ask to log out anything. Then your system breaks. All internal systems are being added logs randomly, one way, it is a bad thing, and also, you only face limited developers, so damages are controllable. But in the open source, especially like SkyWalking, we have 400+ code contributors, and more in potential. We cant afford to argue with every one, this log is acceptable, and others are not. The easy, clear, and affordable principle could be only, you will need this in the runtime, or are facing the requirements. How many contributors is agent contributors of the 400+ code contributors? I think this log is important because I feel it. Actually I merely log anything in my project. I am not a crazy logger. The easy, clear, and affordable principle could be only, you will need this in the runtime, or are facing the requirements. This is a acceptable reason but still good for me. :) Anyway, thank you for spending you energy on this pr. :) How many contributors is agent contributors of the 400+ code contributors? Over 70% focused on or worked on agent, AFAIK. Agent side clearly has more plugins than the server side, and easier. I think this log is important because I feel it. Actually I merely log anything in my project. I am not a crazy logger. You may not, but how the community should answer the question, when another people want to add logs and quote this PR? How could I prove this is useful than another log? :) It is hard for the community, and hard to understand to the new contributors. I know where I was wrong. I should requset a more valuable pr, add this log in passing. Should we add a troubleshoutting at the plugin develop doc? Should we add a troubleshoutting at the plugin develop doc? 
That depends on how it could be written. It is not easy to provide that kind of documentation. Usually it takes the form of a presentation of showcases, but documentation would require something more like a book.
gharchive/pull-request
2021-02-05T10:42:04
2025-04-01T06:37:54.714236
{ "authors": [ "libinglong", "wu-sheng" ], "repo": "apache/skywalking", "url": "https://github.com/apache/skywalking/pull/6330", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2014221476
Adding support to dynamic value in values.yaml First of all this project is amazing! I encountered an issue when i tried to use this chart as an dependency to my chart. in my use case i want to be able to give a dynamic value in values yaml. for an example i want that the configmap name to be dynamic because i want to deploy multiple solr cluters in the same k8s envirement. my chart values.yaml global: configmap: idolaman solr: podOptions: volumes: - name: my-configmap source: configMap: name: "{{ .Values.global.configmap }}" ... my Chart.yaml name: idolaman-solr dependencies: - name: solr .... an solution might be change the solr chart to evaluate using tpl every value it gets Is this common for other Helm charts? I understand how it could be useful, but it would add a lot of complexity to the already fairly complex helm chart. @HoustonPutman I also think that it would be nice and actually I think it's quite common. I've found this article that helps with this issue. In this example, the tpl function is used in the Helm chart templates to allow for dynamic referencing of values. Useful for scenarios where you want to deploy multiple instances of an application (like multiple Solr clusters). This example aligns with this scenario where we want to dynamically set the configmap name for deploying multiple Solr clusters. To show the need for the feature in general I'm referring you to a SO question the requests the same.
gharchive/issue
2023-11-28T11:25:03
2025-04-01T06:37:54.718630
{ "authors": [ "HoustonPutman", "almogtavor", "idolaman" ], "repo": "apache/solr-operator", "url": "https://github.com/apache/solr-operator/issues/661", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1225879066
SOLR-16179 : Updated the documentation as reported in the ticket , along with the examples https://issues.apache.org/jira/browse/SOLR-16179 Description terms.fl can be specified in the query multiple times, which is not clear from the documentation. Solution Added the respective documentation with examples in JSON and XML Tests Reviewed the created doc with asciidoctor and verified the changes. Checklist Please review the following and check all that apply: [x] I have reviewed the guidelines for How to Contribute and my code conforms to the standards described there to the best of my ability. [x] I have created a Jira issue and added the issue ID to my pull request title. [x] I have given Solr maintainers access to contribute to my PR branch. (optional but recommended) [x] I have developed this patch against the main branch. [x] I have run ./gradlew check. [ ] I have added tests for my changes. [x ] I have added documentation for the Reference Guide The rest of the Terms page ONLY has xml... And since the xml output and the json output are really jsut the same, I don't think having both formats helps the readability. I could see a case for just using JSON everywhere????
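As an illustrative request (against the stock techproducts example collection, not copied from the updated docs), repeating terms.fl returns terms for each listed field:

```
http://localhost:8983/solr/techproducts/terms?terms=true&terms.fl=name&terms.fl=manu&wt=json
```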
gharchive/pull-request
2022-05-04T20:11:40
2025-04-01T06:37:54.723700
{ "authors": [ "atarora", "epugh" ], "repo": "apache/solr", "url": "https://github.com/apache/solr/pull/835", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
124613653
[SPARK-10359][PROJECT-INFRA] Use more random number in dev/test-dependencies.sh; fix version switching This patch aims to fix another potential source of flakiness in the dev/test-dependencies.sh script. @pwendell's original patch and my version used $(date +%s | tail -c6) to generate a suffix to use when installing temporary Spark versions into the local Maven cache, but this value only changes once per second and thus is highly collision-prone when concurrent builds launch on AMPLab Jenkins. In order to reduce the potential for conflicts, this patch updates the script to call Python's random number generator instead. I also fixed a bug in how we captured the original project version; the bug was causing the exit handler code to fail. /cc @rxin Test build #48589 has started for PR 10558 at commit 8e86e9c. Test build #48589 has finished for PR 10558 at commit 8e86e9c. This patch fails build dependency tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48589/ Test FAILed. Test failure was due to Python 3. Test build #48591 has started for PR 10558 at commit 77a23bf. Test build #48591 has finished for PR 10558 at commit 77a23bf. This patch fails build dependency tests. This patch merges cleanly. This patch adds no public classes. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48591/ Test FAILed. Merged build finished. Test FAILed. Test build #48594 has started for PR 10558 at commit 0a6b120. Test build #2298 has started for PR 10558 at commit 0a6b120. Test build #48594 has finished for PR 10558 at commit 0a6b120. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48594/ Test PASSed. Test build #2298 has finished for PR 10558 at commit 0a6b120. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. lgtm On Saturday, January 2, 2016, Apache Spark QA notifications@github.com wrote: Test build #2298 has finished https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2298/consoleFull for PR 10558 at commit 0a6b120 https://github.com/apache/spark/commit/0a6b120b13cca8b4c4264bbda6ceb7c3ec5b7135 . This patch passes all tests. This patch merges cleanly. This patch adds no public classes. — Reply to this email directly or view it on GitHub https://github.com/apache/spark/pull/10558#issuecomment-168449633. Jenkins, retest this please. Test build #48595 has started for PR 10558 at commit 0a6b120. Test build #48595 has finished for PR 10558 at commit 0a6b120. This patch fails build dependency tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48595/ Test FAILed. Aha! 
It looks like the code for resetting the version has a problem: + build/mvn --force -q versions:set '-DnewVersion= `` `OLD_VERSION` isn't being set properly: OLD_VERSION=' [WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available [WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Could not find artifact org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in central (https://repo1.maven.org/maven2) 2.0.0-SNAPSHOT' Pushed a fix for the version issue, so I'm going to run this a few more times then will merge if it's passing. Test build #48596 has started for PR 10558 at commit ae3d7a3. Test build #48596 has finished for PR 10558 at commit ae3d7a3. This patch fails MiMa tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48596/ Test FAILed. Test build #48599 has started for PR 10558 at commit a2d59e5. Test build #48599 has finished for PR 10558 at commit a2d59e5. This patch fails from timeout after a configured wait of `250m`. This patch merges cleanly. This patch adds no public classes. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48599/ Test FAILed. Merged build finished. Test FAILed. Jenkins, retest this please. Test build #48615 has started for PR 10558 at commit a2d59e5. Test build #48615 has finished for PR 10558 at commit a2d59e5. This patch fails PySpark unit tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48615/ Test FAILed. Jenkins, retest this please. Test build #48640 has started for PR 10558 at commit a2d59e5. Test build #48640 has finished for PR 10558 at commit a2d59e5. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48640/ Test PASSed. Merged build finished. Test PASSed. Merging now.
gharchive/pull-request
2016-01-02T22:02:07
2025-04-01T06:37:54.762843
{ "authors": [ "AmplabJenkins", "JoshRosen", "SparkQA", "rxin" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/10558", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
152023669
[SPARK-15030] [ML] [SparkR] Support formula in spark.kmeans in SparkR What changes were proposed in this pull request? RFormula supports empty response variable like ~ x + y. Support formula in spark.kmeans in SparkR. Fix some outdated docs for SparkR. How was this patch tested? Unit tests. Test build #57439 has started for PR 12813 at commit f1ba442. Test build #57439 has finished for PR 12813 at commit f1ba442. This patch fails Spark unit tests. This patch merges cleanly. This patch adds no public classes. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/57439/ Test FAILed. Merged build finished. Test FAILed. Jenkins, test this please. Test build #57442 has started for PR 12813 at commit f1ba442. Test build #57442 has finished for PR 12813 at commit f1ba442. This patch fails Spark unit tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/57442/ Test FAILed. Test build #57445 has started for PR 12813 at commit 79d1be4. Test build #57446 has started for PR 12813 at commit 5bdce92. Test build #57445 has finished for PR 12813 at commit 79d1be4. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/57445/ Test PASSed. Merged build finished. Test PASSed. Test build #57446 has finished for PR 12813 at commit 5bdce92. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/57446/ Test PASSed. LGTM. Merged into master. Thanks!
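For reference, a minimal Scala sketch of the same idea on the Scala/ML side — an RFormula with an empty response, as described above. This is purely illustrative, assuming a DataFrame `df` with numeric columns x and y:

```scala
import org.apache.spark.ml.feature.RFormula
import org.apache.spark.ml.clustering.KMeans

// "~ x + y": no response variable, only features — the form this PR exposes in SparkR.
val formula  = new RFormula().setFormula("~ x + y").setFeaturesCol("features")
val prepared = formula.fit(df).transform(df)
val model    = new KMeans().setK(2).setFeaturesCol("features").fit(prepared)
```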
gharchive/pull-request
2016-04-30T10:46:27
2025-04-01T06:37:54.779321
{ "authors": [ "AmplabJenkins", "SparkQA", "mengxr", "yanboliang" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/12813", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
223975734
[SPARK-20453] Bump master branch version to 2.3.0-SNAPSHOT This patch bumps the master branch version to 2.3.0-SNAPSHOT. Test build #76122 has started for PR 17753 at commit 983f746. Test build #76122 has finished for PR 17753 at commit 983f746. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76122/ Test PASSed. Merging in master.
gharchive/pull-request
2017-04-24T23:04:57
2025-04-01T06:37:54.784069
{ "authors": [ "AmplabJenkins", "JoshRosen", "SparkQA", "rxin" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/17753", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
240289166
[SPARK-19507][SPARK-21296][PYTHON] Avoid per-record type dispatch in schema verification and improve exception message What changes were proposed in this pull request? Context While reviewing https://github.com/apache/spark/pull/17227, I realised here we type-dispatch per record. The PR itself is fine in terms of performance as is but this prints a prefix, "obj" in exception message as below: from pyspark.sql.types import * schema = StructType([StructField('s', IntegerType(), nullable=False)]) spark.createDataFrame([["1"]], schema) ... TypeError: obj.s: IntegerType can not accept object '1' in type <type 'str'> I suggested to get rid of this but during investigating this, I realised my approach might bring a performance regression as it is a hot path. Only for SPARK-19507 and https://github.com/apache/spark/pull/17227, It needs more changes to cleanly get rid of the prefix and I rather decided to fix both issues together. Propersal This PR tried to get rid of per-record type dispatch as we do in many code paths in Scala so that it improves the performance (roughly ~25% improvement) - SPARK-21296 This was tested with a simple code spark.createDataFrame(range(1000000), "int"). However, I am quite sure the actual improvement in practice is larger than this, in particular, when the schema is complicated. improve error message in exception describing field information as prose - SPARK-19507 How was this patch tested? Manually tested and unit tests were added in python/pyspark/sql/tests.py. Benchmark - codes: https://gist.github.com/HyukjinKwon/c3397469c56cb26c2d7dd521ed0bc5a3 Error message - codes: https://gist.github.com/HyukjinKwon/b1b2c7f65865444c4a8836435100e398 Before Benchmark: Results: https://gist.github.com/HyukjinKwon/4a291dab45542106301a0c1abcdca924 Error message Results: https://gist.github.com/HyukjinKwon/57b1916395794ce924faa32b14a3fe19 After Benchmark Results: https://gist.github.com/HyukjinKwon/21496feecc4a920e50c4e455f836266e Error message Results: https://gist.github.com/HyukjinKwon/7a494e4557fe32a652ce1236e504a395 Closes #17227 Test build #79116 has started for PR 18521 at commit d7f6778. Test build #79116 has finished for PR 18521 at commit d7f6778. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79116/ Test PASSed. cc @ueshin and @holdenk who were reviewing it and, @dgingrich the author of that PR. cc @cloud-fan who I believe reviewed my related few PRs before and @davies who I believe is used to this code path. Test build #79128 has started for PR 18521 at commit 5b80a8b. Test build #79128 has finished for PR 18521 at commit 5b80a8b. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79128/ Test PASSed. Merged build finished. Test PASSed. Test build #79131 has started for PR 18521 at commit 9ee8d03. Test build #79134 has started for PR 18521 at commit 420b4bf. Test build #79131 has finished for PR 18521 at commit 9ee8d03. This patch passes all tests. This patch merges cleanly. This patch adds the following public classes (experimental): class DataTypeVerificationTests(unittest.TestCase): Merged build finished. Test PASSed. Test PASSed. 
Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79131/ Test PASSed. Test build #79134 has finished for PR 18521 at commit 420b4bf. This patch fails PySpark unit tests. This patch merges cleanly. This patch adds no public classes. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79134/ Test FAILed. Merged build finished. Test FAILed. Test build #79140 has started for PR 18521 at commit 15c575f. Test build #79141 has started for PR 18521 at commit 826dcfd. Test build #79140 has finished for PR 18521 at commit 15c575f. This patch fails PySpark pip packaging tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79140/ Test FAILed. Test build #79141 has finished for PR 18521 at commit 826dcfd. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79141/ Test PASSed. @cloud-fan, I believe it is ready for another look. LGTM, merging to master!
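The core idea — resolve the verifier once per field instead of re-dispatching on the type for every record — in a small illustrative Scala sketch; the actual change lives in PySpark's types.py, and all names below are made up:

```scala
sealed trait FieldType
case object IntType    extends FieldType
case object StringType extends FieldType

// Dispatch on the type once, returning a closure that is reused for every record.
def makeVerifier(name: String, tpe: FieldType): Any => Unit = tpe match {
  case IntType    => v => require(v.isInstanceOf[Int],    s"field $name: IntType can not accept $v")
  case StringType => v => require(v.isInstanceOf[String], s"field $name: StringType can not accept $v")
}

val schema    = Seq("s" -> IntType)
val verifiers = schema.map { case (n, t) => makeVerifier(n, t) }
val rows      = Seq(Seq(1), Seq("1"))  // the second row fails with a field-aware message
rows.foreach(row => verifiers.zip(row).foreach { case (verify, value) => verify(value) })
```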
gharchive/pull-request
2017-07-04T01:12:19
2025-04-01T06:37:54.813818
{ "authors": [ "AmplabJenkins", "HyukjinKwon", "SparkQA", "cloud-fan" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/18521", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
257501816
[SPARK-21513][SQL][FOLLOWUP] Allow UDF to_json support converting MapType to json for PySpark and SparkR What changes were proposed in this pull request? In previous work SPARK-21513, we has allowed MapType and ArrayType of MapTypes convert to a json string but only for Scala API. In this follow-up PR, we will make SparkSQL support it for PySpark and SparkR, too. We also fix some little bugs and comments of the previous work in this follow-up PR. For PySpark >>> data = [(1, {"name": "Alice"})] >>> df = spark.createDataFrame(data, ("key", "value")) >>> df.select(to_json(df.value).alias("json")).collect() [Row(json=u'{"name":"Alice")'] >>> data = [(1, [{"name": "Alice"}, {"name": "Bob"}])] >>> df = spark.createDataFrame(data, ("key", "value")) >>> df.select(to_json(df.value).alias("json")).collect() [Row(json=u'[{"name":"Alice"},{"name":"Bob"}]')] For SparkR # Converts a map into a JSON object df2 <- sql("SELECT map('name', 'Bob')) as people") df2 <- mutate(df2, people_json = to_json(df2$people)) # Converts an array of maps into a JSON array df2 <- sql("SELECT array(map('name', 'Bob'), map('name', 'Alice')) as people") df2 <- mutate(df2, people_json = to_json(df2$people)) How was this patch tested? Add unit test cases. cc @viirya @HyukjinKwon Can one of the admins verify this patch? ok to test Test build #81739 has started for PR 19223 at commit 29e7323. Test build #81739 has finished for PR 19223 at commit 29e7323. This patch fails some tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81739/ Test FAILed. Test build #81761 has started for PR 19223 at commit 158140e. LGTM except for one comment left. Test build #81766 has started for PR 19223 at commit af8d941. Test build #81766 has finished for PR 19223 at commit af8d941. This patch fails due to an unknown error code, -9. This patch merges cleanly. This patch adds no public classes. Test build #81761 has finished for PR 19223 at commit 158140e. This patch fails due to an unknown error code, -9. This patch merges cleanly. This patch adds no public classes. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81766/ Test FAILed. Merged build finished. Test FAILed. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81761/ Test FAILed. retest this please. Test build #81769 has started for PR 19223 at commit af8d941. Test build #81769 has finished for PR 19223 at commit af8d941. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81769/ Test PASSed. @HyukjinKwon @felixcheung @viirya I has finished those change at your suggestions for this PR and it also passed all tests. Please take a look when you are available. Thanks :) Test build #81780 has started for PR 19223 at commit 8a3a068. Test build #81780 has finished for PR 19223 at commit 8a3a068. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. 
Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81780/ Test FAILed. Test build #81781 has started for PR 19223 at commit 66bc5b7. Test build #81781 has finished for PR 19223 at commit 66bc5b7. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81781/ Test PASSed. LGTM AppVeyor didn't run on this? D'oh, yes. I wonder why it was not triggered. I manually triggered via my account: Build started: [SparkR] ALL Diff: https://github.com/apache/spark/compare/master...spark-test:8981A5F1-E2DC-4015-8266-12E9ADEE189B @HyukjinKwon Thanks for triggering AppVeyor. In normal case, will AppVeyor be triggered automatically? Yes, when there are some changes in: https://github.com/apache/spark/blob/828fab03567ecc245a65c4d295a677ce0ba26c19/appveyor.yml#L29-L35 It should run the R tests on Windows via AppVeyor. ok. I got it. Thanks :) Looks passed fine. Let me merge this one. Thanks @felixcheung @HyukjinKwon Merged to master. Thanks @HyukjinKwon @felixcheung @viirya
gharchive/pull-request
2017-09-13T19:55:43
2025-04-01T06:37:54.843526
{ "authors": [ "AmplabJenkins", "HyukjinKwon", "SparkQA", "felixcheung", "goldmedal", "viirya" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/19223", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
277421801
[SPARK-22585][Core] Path in addJar is not url encoded What changes were proposed in this pull request? This updates a behavior of addJar method of sparkContext class. If path without any scheme is passed as input it is used literally without url encoding/decoding it. How was this patch tested? A unit test is added for this. Test build #84262 has started for PR 19834 at commit 1fc5db3. @srowen Let's continue our discussion here. So I have removed those three commented lines. Is there anything else to do before merge? Test build #84266 has started for PR 19834 at commit bd667d9. Test build #84262 has finished for PR 19834 at commit 1fc5db3. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/84262/ Test PASSed. Test build #84266 has finished for PR 19834 at commit bd667d9. This patch fails Spark unit tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/84266/ Test FAILed. LGTM. Another LGTM retest this please Test build #84295 has started for PR 19834 at commit bd667d9. Test build #84295 has finished for PR 19834 at commit bd667d9. This patch fails Spark unit tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/84295/ Test FAILed. retest this please Test build #84302 has started for PR 19834 at commit bd667d9. Test build #84302 has finished for PR 19834 at commit bd667d9. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/84302/ Test PASSed. Merged to master. Thanks!
gharchive/pull-request
2017-11-28T15:17:07
2025-04-01T06:37:54.860768
{ "authors": [ "AmplabJenkins", "HyukjinKwon", "SparkQA", "james64", "jerryshao", "jiangxb1987" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/19834", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
348211126
[SPARK-25041][build] upgrade genJavaDoc-plugin from 0.10 to 0.11 What changes were proposed in this pull request? This PR fixes a build error with sbt using Scala-2.12. Since [genJavaDoc-plugin] (https://mvnrepository.com/artifact/com.typesafe.genjavadoc/genjavadoc-plugin) 0.10 is not prepared for Scala-2.12.6, the recent version of genJavaDoc-plugin is necessary. The version 0.11 of genJavaDoc-plugin is also prepared for Scala-2.11.12. genJavaDoc-0.10 genJavaDoc-0.11 How was this patch tested? Manually tested for Scala-2.12. cc @ueshin @HyukjinKwon @srowen Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/1899/ Test PASSed. Test build #94356 has started for PR 22020 at commit 1b41ce4. Test build #94356 has finished for PR 22020 at commit 1b41ce4. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/94356/ Test PASSed. Merged to master
gharchive/pull-request
2018-08-07T08:04:16
2025-04-01T06:37:54.868994
{ "authors": [ "AmplabJenkins", "SparkQA", "kiszk", "srowen" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/22020", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
394876854
[SPARK-26449][PYTHON] add a transform method to the Dataframe class What changes were proposed in this pull request? added a transform method to the Dataframe class, see https://issues.apache.org/jira/browse/SPARK-26449 How was this patch tested? Tested manually by injecting the proposed method to the current spark version dataframe class. I've tried to compile spark from scratch and test using ./build/mvn test. However, unrelated tests fails before my change. Please review http://spark.apache.org/contributing.html before opening a pull request. Can one of the admins verify this patch? Can one of the admins verify this patch? Can one of the admins verify this patch? ok to test Test build #100560 has started for PR 23414 at commit def5b2c. Test build #100560 has finished for PR 23414 at commit def5b2c. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100560/ Test FAILed. Test build #100562 has started for PR 23414 at commit def5b2c. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6494/ Test PASSed. Test build #100562 has finished for PR 23414 at commit def5b2c. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100562/ Test FAILed. Merged build finished. Test FAILed. Test build #100592 has finished for PR 23414 at commit b370363. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100592/ Test FAILed. @HyukjinKwon I get the following errors: [error] running /home/jenkins/workspace/SparkPullRequestBuilder@2/dev/lint-python ; received return code 1 Attempting to post to Github... > Post successful. Build step 'Execute shell' marked build as failure Archiving artifacts Recording test results ERROR: Step ?Publish JUnit test result report? failed: No test report files were found. Configuration error? Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100592/ Test FAILed. Finished: FAILURE Can you please help? Looks it's failed for below reasons. pycodestyle checks failed: ./python/pyspark/sql/dataframe.py:2048:1: W293 blank line contains whitespace ./python/pyspark/sql/dataframe.py:2064:1: W293 blank line contains whitespace added doctest and removed more empty line with spaces. please re-test Test build #100594 has started for PR 23414 at commit f5aaa1a. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6523/ Test PASSed. Test build #100594 has finished for PR 23414 at commit f5aaa1a. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. 
Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100594/ Test FAILed. removed *args **kwargs (albeit I think they're useful). Please re-test Test build #100595 has started for PR 23414 at commit 0b1f562. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6524/ Test FAILed. Test build #100595 has finished for PR 23414 at commit 0b1f562. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100595/ Test FAILed. @HyukjinKwon I am sorry for being newbie but I don't understand the fail reason: Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/apache/spark.git +refs/pull/23414/*:refs/remotes/origin/pr/23414/*" returned status code 128: stdout: stderr: error: RPC failed; curl 18 transfer closed with outstanding read data remaining fatal: The remote end hung up unexpectedly Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6525/ Test PASSed. @HyukjinKwon what do you mean when you say the Scala impl has this? I'm missing it. I don't see the value in this. From the blog post at https://medium.com/@mrpowers/chaining-custom-pyspark-transformations-4f38a8c7ae55 why is ... actual_df = (source_df .transform(lambda df: with_greeting(df)) .transform(lambda df: with_something(df, "crazy"))) better than just actual_df = with_greeting(source_df) actual_df = with_something(actual_df, "crazy") The idea is to be able to chain function easily when you have 10 stages. no need for keeping temporary variables. You can also... actual_df = source_df for f in [...]: actual_df = f(actual_df) Unless I'm really missing something this doesn't exist for Scala (?) and I can't see adding an API method for this. The small additional maintenance and user cognitive load just doesn't seem to buy much at all. @srowen the motivation is from this blogpost https://medium.com/@mrpowers/chaining-custom-pyspark-transformations-4f38a8c7ae55 I was referring: https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala#L2497 If it were new API, I won't encourage to add but it's existing. I think we should rather deprecate Scala side one if we don't see some values on that. Otherwise, I thought matching it is fine. Oh hm I had never seen that! Yah seems fine for consistency then. @chanansh, also please fix the PR title to [SPARK-26449][PYTHON] ... so that it automatically links your PR to the JIRA. Test build #100610 has started for PR 23414 at commit 9919e28. Test build #100610 has finished for PR 23414 at commit 9919e28. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100610/ Test FAILed. 
Test build #100611 has started for PR 23414 at commit e54d2f7. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6535/ Test PASSed. Test build #100611 has finished for PR 23414 at commit e54d2f7. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100611/ Test FAILed. Test build #100612 has started for PR 23414 at commit 3d9a751. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6536/ Test PASSed. Test build #100612 has finished for PR 23414 at commit 3d9a751. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100612/ Test PASSed. Looks fine except https://github.com/apache/spark/pull/23414/files#r244654162 Closing this due to author's inactivity. sorry, please reopen I will do it. HS On Mon, Feb 11, 2019 at 12:10 PM Hyukjin Kwon notifications@github.com wrote: Closed #23414 https://github.com/apache/spark/pull/23414. — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/apache/spark/pull/23414#event-2130116400, or mute the thread https://github.com/notifications/unsubscribe-auth/AFtFzGK1zXfQq-EhAgPgm1uYJcDhZCMjks5vMUGsgaJpZM4Zk9No . Just push more commits; I think that reopens it. is this one still open? I would want to PR basically the same thing. Should I commit here or create a new PR? You can pick up commits and create new PR. Looks the author is inactive.
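For reference, the chaining style discussed above relies on the Scala Dataset.transform linked in the thread, which simply applies a user function to the Dataset and returns the result; the PySpark method proposed in this PR mirrors that. A simplified Scala usage sketch, assuming a DataFrame `sourceDf` already exists:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.lit

// Each step is just a DataFrame => DataFrame function, so steps compose by chaining.
def withGreeting(df: DataFrame): DataFrame            = df.withColumn("greeting", lit("hello"))
def withSomething(df: DataFrame, s: String): DataFrame = df.withColumn("something", lit(s))

val actual = sourceDf
  .transform(withGreeting)
  .transform(withSomething(_, "crazy"))
```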
gharchive/pull-request
2018-12-30T14:37:37
2025-04-01T06:37:54.916588
{ "authors": [ "AmplabJenkins", "Hellsen83", "HyukjinKwon", "SparkQA", "chanansh", "srowen" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/23414", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
416443895
[MINOR][DOCS] Clarify that Spark apps should mark Spark as a 'provided' dependency, not package it What changes were proposed in this pull request? Spark apps do not need to package Spark. In fact it can cause problems in some cases. Our examples should show depending on Spark as a 'provided' dependency. How was this patch tested? Doc build Test build #102943 has started for PR 23938 at commit f8fcc52. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8412/ Test PASSed. Test build #102943 has finished for PR 23938 at commit f8fcc52. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/102943/ Test PASSed. @dongjoon-hyun yeah, that one I wasn't sure about as it's some support code that sounded like it was meant to be bundled in an app. @steveloughran is that correct -- hadoop-cloud should be a compile scope dependency, not provided by the cluster? you should compile with hadoop-cloud and add those JARs it pulls in to the spark tarball placed on the shared cluster FS for YARN to pick up. Don't know about other deployment engines I'm afraid. The build also adds it to the SPARK_HOME/lib, which gives it to you for spark-standalone during spark submit, either for anything related to JAR upload, or for any store which implements delegation tokens (HADOOP-14456, HADOOP-16068, etc), so it collects the tokens for all stores listed in spark.yarn.hadoopFilesystems. @steveloughran to be clear do you compile your app, or Spark, with this dependency? it sounds like "Spark" not the app. If so I'll update this further. sorry, yeah, spark. Even if the spark team doesn't redist those JARs, it'd be really useful if the release process published the POM. that way, if you want your build to pick up the exact set of dependencies which are in sync with spark, excluding all the stuff which will cause grief, you'd just add it as a dependency. Ah OK on further review @steveloughran , the docs here are saying to include the dependency in your app, which would be the right thing if not bundled by Spark, and that's the current state of things for a default cluster. I think that much of the doc is then OK, and shouldn't change to mentioned provided. Merged to master/2.4/2.3
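For illustration, the sbt form of the recommendation (a sketch, not the exact snippet added to the docs; the version number is just an example):

```scala
// build.sbt: compile against Spark, but do not package it — the cluster provides it at runtime.
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.0" % "provided"
```

The Maven equivalent is a `provided` scope on the Spark dependencies; the hadoop-cloud discussion above is about what Spark itself bundles, not about the application's packaging.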
gharchive/pull-request
2019-03-02T21:25:33
2025-04-01T06:37:54.926246
{ "authors": [ "AmplabJenkins", "SparkQA", "srowen", "steveloughran" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/23938", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
478317513
[SPARK-28656][SQL] Support millennium, century and decade at extract() What changes were proposed in this pull request? In the PR, I propose new expressions Millennium, Century and Decade, and support additional parameters of extract() for feature parity with PostgreSQL (https://www.postgresql.org/docs/11/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT): millennium - the current millennium for given date (or a timestamp implicitly casted to a date). For example, years in the 1900s are in the second millennium. The third millennium started January 1, 2001. century - the current millennium for given date (or timestamp). The first century starts at 0001-01-01 AD. decade - the current decade for given date (or timestamp). Actually, this is the year field divided by 10. Here are examples: spark-sql> SELECT EXTRACT(MILLENNIUM FROM DATE '1981-01-19'); 2 spark-sql> SELECT EXTRACT(CENTURY FROM DATE '1981-01-19'); 20 spark-sql> SELECT EXTRACT(DECADE FROM DATE '1981-01-19'); 198 Also the expressions are registered as functions - millennium, century and decade. For example: spark-sql> SELECT MILLENNIUM('2019-08-08'); 3 spark-sql> SELECT CENTURY('2019-08-08'); 21 spark-sql> SELECT DECADE('2019-08-08'); 201 How was this patch tested? Added new tests to DateExpressionsSuite, DateFunctionsSuite, and uncommented existing tests in pgSQL/date.sql. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/13887/ Test PASSed. Test build #108806 has started for PR 25388 at commit 6755bce. Can one of the admins verify this patch? Test build #108806 has finished for PR 25388 at commit 6755bce. This patch fails Spark unit tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/108806/ Test FAILed. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/13906/ Test PASSed. Test build #108829 has started for PR 25388 at commit 381f214. Test build #108829 has finished for PR 25388 at commit 381f214. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/108829/ Test PASSed. Hi, @MaxGekk . Supporting Extract seems to be enough for PostgreSQL feature parity. Supporting the followings are easy, but I'm not sure about these. In general, I'd recommend not to register as functions. PMC may have different opinions. spark-sql> SELECT MILLENNIUM('2019-08-08'); 3 spark-sql> SELECT CENTURY('2019-08-08'); 21 spark-sql> SELECT DECADE('2019-08-08'); 201 How do you think about (2) which adds these new functions , @gatorsmile and @cloud-fan ? Let's not add builtin functions that only exist in Spark. Thank you for the decision, @cloud-fan ! Test build #108867 has started for PR 25388 at commit 9d9a0ad. Merged build finished. Test PASSed. Test PASSed. 
Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/13940/ Test PASSed. Test build #108867 has finished for PR 25388 at commit 9d9a0ad. This patch fails due to an unknown error code, -9. This patch merges cleanly. This patch adds the following public classes (experimental): sealed trait RewritableTransform extends Transform case class ArrayForAll( case class DescribeTable(table: NamedRelation, isExtended: Boolean) extends Command trait V2CreateTablePlan extends LogicalPlan case class DescribeColumnStatement( case class DescribeTableStatement( case class InsertAdaptiveSparkPlan( case class DescribeTableExec(table: Table, isExtended: Boolean) extends LeafExecNode Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/108867/ Test FAILed. jenkins, retest this, please Test build #108868 has started for PR 25388 at commit 9d9a0ad. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/13941/ Test PASSed. Test build #108868 has finished for PR 25388 at commit 9d9a0ad. This patch passes all tests. This patch merges cleanly. This patch adds the following public classes (experimental): sealed trait RewritableTransform extends Transform case class ArrayForAll( case class DescribeTable(table: NamedRelation, isExtended: Boolean) extends Command trait V2CreateTablePlan extends LogicalPlan case class DescribeColumnStatement( case class DescribeTableStatement( case class InsertAdaptiveSparkPlan( case class DescribeTableExec(table: Table, isExtended: Boolean) extends LeafExecNode Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/108868/ Test PASSed.
gharchive/pull-request
2019-08-08T08:14:10
2025-04-01T06:37:54.954889
{ "authors": [ "AmplabJenkins", "MaxGekk", "SparkQA", "cloud-fan", "dongjoon-hyun" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/25388", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
675510094
[SPARK-32564][SQL][TEST][3.0] Inject data statistics to simulate plan generation on actual TPCDS data What changes were proposed in this pull request? TPCDSQuerySuite currently computes plans with empty TPCDS tables, then checks if plans can be generated correctly. But, the generated plans can be different from actual ones because the input tables are empty (e.g., the plans always use broadcast-hash joins, but actual ones use sort-merge joins for larger tables). To mitigate the issue, this PR defines data statistics constants extracted from generated TPCDS data in TPCDSTableStats, then injects the statistics via spark.sessionState.catalog.alterTableStats when defining TPCDS tables in TPCDSQuerySuite. Please see a link below about how to extract the table statistics: https://gist.github.com/maropu/f553d32c323ee803d39e2f7fa0b5a8c3 For example, the generated plans of TPCDS q2 are different with/without this fix: ==== w/ this fix: q2 ==== == Physical Plan == * Sort (43) +- Exchange (42) +- * Project (41) +- * SortMergeJoin Inner (40) :- * Sort (28) : +- Exchange (27) : +- * Project (26) : +- * BroadcastHashJoin Inner BuildRight (25) : :- * HashAggregate (19) : : +- Exchange (18) : : +- * HashAggregate (17) : : +- * Project (16) : : +- * BroadcastHashJoin Inner BuildRight (15) : : :- Union (9) : : : :- * Project (4) : : : : +- * Filter (3) : : : : +- * ColumnarToRow (2) : : : : +- Scan parquet default.web_sales (1) : : : +- * Project (8) : : : +- * Filter (7) : : : +- * ColumnarToRow (6) : : : +- Scan parquet default.catalog_sales (5) : : +- BroadcastExchange (14) : : +- * Project (13) : : +- * Filter (12) : : +- * ColumnarToRow (11) : : +- Scan parquet default.date_dim (10) : +- BroadcastExchange (24) : +- * Project (23) : +- * Filter (22) : +- * ColumnarToRow (21) : +- Scan parquet default.date_dim (20) +- * Sort (39) +- Exchange (38) +- * Project (37) +- * BroadcastHashJoin Inner BuildRight (36) :- * HashAggregate (30) : +- ReusedExchange (29) +- BroadcastExchange (35) +- * Project (34) +- * Filter (33) +- * ColumnarToRow (32) +- Scan parquet default.date_dim (31) ==== w/o this fix: q2 ==== == Physical Plan == * Sort (40) +- Exchange (39) +- * Project (38) +- * BroadcastHashJoin Inner BuildRight (37) :- * Project (26) : +- * BroadcastHashJoin Inner BuildRight (25) : :- * HashAggregate (19) : : +- Exchange (18) : : +- * HashAggregate (17) : : +- * Project (16) : : +- * BroadcastHashJoin Inner BuildRight (15) : : :- Union (9) : : : :- * Project (4) : : : : +- * Filter (3) : : : : +- * ColumnarToRow (2) : : : : +- Scan parquet default.web_sales (1) : : : +- * Project (8) : : : +- * Filter (7) : : : +- * ColumnarToRow (6) : : : +- Scan parquet default.catalog_sales (5) : : +- BroadcastExchange (14) : : +- * Project (13) : : +- * Filter (12) : : +- * ColumnarToRow (11) : : +- Scan parquet default.date_dim (10) : +- BroadcastExchange (24) : +- * Project (23) : +- * Filter (22) : +- * ColumnarToRow (21) : +- Scan parquet default.date_dim (20) +- BroadcastExchange (36) +- * Project (35) +- * BroadcastHashJoin Inner BuildRight (34) :- * HashAggregate (28) : +- ReusedExchange (27) +- BroadcastExchange (33) +- * Project (32) +- * Filter (31) +- * ColumnarToRow (30) +- Scan parquet default.date_dim (29) This comes from the @cloud-fan comment: https://github.com/apache/spark/pull/29270#issuecomment-666098964 This is the backport of #29384. Why are the changes needed? For better test coverage. Does this PR introduce any user-facing change? No. How was this patch tested? Existing tests. 
Test build #127221 has started for PR 29390 at commit 750a632. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/31842/ Test PASSed. Test build #127221 has finished for PR 29390 at commit 750a632. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/127221/ Test PASSed. Thanks, @maropu . Merged to branch-3.0. Thanks a lot, @dongjoon-hyun !
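A minimal sketch (not the actual TPCDSTableStats code; the table name and numbers are made up) of injecting statistics the way the description outlines, assuming `spark` is the active SparkSession, the table is already registered, and these internal APIs keep the signatures used here:

```scala
import org.apache.spark.sql.catalyst.TableIdentifier
import org.apache.spark.sql.catalyst.catalog.CatalogStatistics

// Pretend "store_sales" is registered as an empty table; attach fake stats so the
// planner sizes it as if real TPC-DS data were present (e.g. picks sort-merge joins).
spark.sessionState.catalog.alterTableStats(
  TableIdentifier("store_sales"),
  Some(CatalogStatistics(sizeInBytes = BigInt(400000000L), rowCount = Some(BigInt(2879987L)))))
```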
gharchive/pull-request
2020-08-08T11:22:53
2025-04-01T06:37:54.966386
{ "authors": [ "AmplabJenkins", "SparkQA", "dongjoon-hyun", "maropu" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/29390", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1233381140
[SPARK-39156][SQL] Clean up the usage of ParquetLogRedirector in ParquetFileFormat. What changes were proposed in this pull request? SPARK-17993 introduce ParquetLogRedirector for Parquet version < 1.9, PARQUET-305 change to use slf4j instead of jul in Parquet 1.9, Spark uses Parquet 1.12.2 now and no longer relies on Parquet version 1.6 now , the ParquetLogRedirector is no longer needed, so this pr clean up the usage of ParquetLogRedirector in ParquetFileFormat. Why are the changes needed? Clean up the usage of ParquetLogRedirector in ParquetFileFormat. Does this PR introduce any user-facing change? No How was this patch tested? Pass GA Manual test: Build Spark client manually before and after this pr Change parquet log4j level to debug: logger.parquet1.name = org.apache.parquet logger.parquet1.level = debug logger.parquet2.name = parquet logger.parquet2.level = debug Try to read Parquet file write with 1.6 , for example sql/core/src/test/resources/test-data/dec-in-i32.parquet. java -jar parquet-tools-1.10.1.jar meta /${basedir}/dec-in-i32.parquet file: file:/${basedir}/dec-in-i32.parquet creator: parquet-mr version 1.6.0 extra: org.apache.spark.sql.parquet.row.metadata = {"type":"struct","fields":[{"name":"i32_dec","type":"decimal(5,2)","nullable":true,"metadata":{}}]} file schema: spark_schema -------------------------------------------------------------------------------- i32_dec: OPTIONAL INT32 O:DECIMAL R:0 D:1 row group 1: RC:16 TS:102 OFFSET:4 -------------------------------------------------------------------------------- i32_dec: INT32 GZIP DO:0 FPO:4 SZ:131/102/0.78 VC:16 ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED ST:[no stats for this column] spark.read.parquet("file://${basedir}/ptable/dec-in-i32.parquet").show() The log contents before and after this pr are consistent, and there is no error log mentioned in SPARK-17993 Looks OK. Could you cross link the fix (JIRA) from Parquet side? Should be PARQUET-305 in Parquet 1.9 hmm... @sunchao any other need changes? Thanks! Merged to master. thanks @huaxingao @sunchao
gharchive/pull-request
2022-05-12T02:58:12
2025-04-01T06:37:54.973385
{ "authors": [ "LuciferYang", "huaxingao" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/36515", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1249472516
[SPARK-36681][CORE][TESTS][FOLLOW-UP] Handle LinkageError when Snappy native library is not available in low Hadoop versions What changes were proposed in this pull request? This is a follow-up to https://github.com/apache/spark/pull/36136 to fix LinkageError handling in FileSuite to avoid test suite abort when Snappy native library is not available in low Hadoop versions: 23:16:22 FileSuite: 23:16:22 org.apache.spark.FileSuite *** ABORTED *** 23:16:22 java.lang.RuntimeException: Unable to load a Suite class that was discovered in the runpath: org.apache.spark.FileSuite 23:16:22 at org.scalatest.tools.DiscoverySuite$.getSuiteInstance(DiscoverySuite.scala:81) 23:16:22 at org.scalatest.tools.DiscoverySuite.$anonfun$nestedSuites$1(DiscoverySuite.scala:38) 23:16:22 at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238) 23:16:22 at scala.collection.Iterator.foreach(Iterator.scala:941) 23:16:22 at scala.collection.Iterator.foreach$(Iterator.scala:941) 23:16:22 at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) 23:16:22 at scala.collection.IterableLike.foreach(IterableLike.scala:74) 23:16:22 at scala.collection.IterableLike.foreach$(IterableLike.scala:73) 23:16:22 at scala.collection.AbstractIterable.foreach(Iterable.scala:56) 23:16:22 at scala.collection.TraversableLike.map(TraversableLike.scala:238) 23:16:22 ... 23:16:22 Cause: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z 23:16:22 at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method) 23:16:22 at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63) 23:16:22 at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136) 23:16:22 at org.apache.spark.FileSuite.$anonfun$new$12(FileSuite.scala:145) 23:16:22 at scala.util.Try$.apply(Try.scala:213) 23:16:22 at org.apache.spark.FileSuite.<init>(FileSuite.scala:141) 23:16:22 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 23:16:22 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 23:16:22 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 23:16:22 at java.lang.reflect.Constructor.newInstance(Constructor.java:423) Scala's Try can handle only NonFatal throwables. Why are the changes needed? To make the tests robust. Does this PR introduce any user-facing change? Nope, this is test-only. How was this patch tested? Manual test. cc @HyukjinKwon, @viirya, @dongjoon-hyun Thanks all for the review.
gharchive/pull-request
2022-05-26T11:51:13
2025-04-01T06:37:54.977513
{ "authors": [ "peter-toth" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/36687", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1302729165
[SPARK-39748][SQL][FOLLOWUP] Add missing origin logical plan on DataFrame.checkpoint on building LogicalRDD What changes were proposed in this pull request? This PR adds missing origin logical plan on building LogicalRDD in DataFrame.checkpoint, via review comment https://github.com/apache/spark/pull/37161#discussion_r919204026. Why are the changes needed? This is missing spot on previous PR and @viirya helped to find out. Does this PR introduce any user-facing change? No. How was this patch tested? N/A cc. @viirya lgtm Thanks! Merging to master.
gharchive/pull-request
2022-07-12T23:46:31
2025-04-01T06:37:54.980725
{ "authors": [ "HeartSaVioR", "viirya" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/37167", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1514539132
[SPARK-41790][SQL] Set TRANSFORM reader and writer's format correctly What changes were proposed in this pull request? We'll get wrong data when transform only specify reader or writer 's row format delimited, the reason is using the wrong format to feed/fetch data to/from running script now. we should set the format correctly. Currently in Spark: spark-sql> CREATE TABLE t1 (a string, b string); spark-sql> INSERT OVERWRITE t1 VALUES("1", "2"), ("3", "4"); spark-sql> SELECT TRANSFORM(a, b) > ROW FORMAT DELIMITED > FIELDS TERMINATED BY ',' > USING 'cat' > AS (c) > FROM t1; c spark-sql> SELECT TRANSFORM(a, b) > USING 'cat' > AS (c) > ROW FORMAT DELIMITED > FIELDS TERMINATED BY ',' > FROM t1; c 1 23 4 The same sql in hive: hive> SELECT TRANSFORM(a, b) > ROW FORMAT DELIMITED > FIELDS TERMINATED BY ',' > USING 'cat' > AS (c) > FROM t1; c 1,2 3,4 hive> SELECT TRANSFORM(a, b) > USING 'cat' > AS (c) > ROW FORMAT DELIMITED > FIELDS TERMINATED BY ',' > FROM t1; c 1 2 3 4 Why are the changes needed? Fix transform writer format and reader format. Does this PR introduce any user-facing change? When we set transform's row format delimited in the sql, we may get the wrong data. How was this patch tested? New tests. Can one of the admins verify this patch? cc @AngersZhuuuu Merged to master.
gharchive/pull-request
2022-12-30T13:52:41
2025-04-01T06:37:54.984338
{ "authors": [ "AmplabJenkins", "HyukjinKwon", "mattshma" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/39315", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1586786247
[SPARK-41591][PYTHON][FOLLOW-UP] Remove gRPC version check for Distributor What changes were proposed in this pull request? Removing redundant check for whether GPUs exist on the driver node. Why are the changes needed? For slightly cleaner code. Could close PR if we don't need to merge it in. Does this PR introduce any user-facing change? No. How was this patch tested? As long as normal tests work, we don't expect any other failures. Yeah, let's close. I don't think this is an issue. Thank you, @HyukjinKwon and @rithwik-db .
gharchive/pull-request
2023-02-16T00:11:28
2025-04-01T06:37:54.986889
{ "authors": [ "HyukjinKwon", "dongjoon-hyun", "rithwik-db" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/40045", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1631090329
[SPARK-42779][SQL][FOLLOWUP] Allow V2 writes to indicate advisory shuffle partition size What changes were proposed in this pull request? This PR addresses non-blocking comments for PR #40421. Why are the changes needed? These changes are needed to make sure the new logic only applies in expected cases. Does this PR introduce any user-facing change? No. How was this patch tested? Existing tests. cc @cloud-fan @dongjoon-hyun thanks, merging to master! Thank you, @cloud-fan ! Thanks, @dongjoon-hyun @cloud-fan!
gharchive/pull-request
2023-03-19T19:40:44
2025-04-01T06:37:54.990816
{ "authors": [ "aokolnychyi", "cloud-fan", "dongjoon-hyun" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/40478", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1678140768
[SPARK-43228][SQL] Join keys also match PartitioningCollection in CoalesceBucketsInJoin What changes were proposed in this pull request? This PR updates CoalesceBucketsInJoin.satisfiesOutputPartitioning to support matching PartitioningCollection. A common case is that we add an alias on the join key. For example: SELECT * FROM (SELECT /*+ BROADCAST(t3) */ t1.i AS t1i, t1.j AS t1j, t3.* FROM t1 JOIN t3 ON t1.i = t3.i AND t1.j = t3.j) t JOIN t2 ON t.t1i = t2.i AND t.t1j = t2.j The left side outputPartitioning is: (hashpartitioning(t1i#41, t1j#42, 8) or hashpartitioning(i#46, t1j#42, 8) or hashpartitioning(t1i#41, j#47, 8) or hashpartitioning(i#46, j#47, 8)) Why are the changes needed? Enhance CoalesceBucketsInJoin to support more cases. Does this PR introduce any user-facing change? No. How was this patch tested? Unit test. cc @cloud-fan
gharchive/pull-request
2023-04-21T08:47:47
2025-04-01T06:37:54.993608
{ "authors": [ "wangyum" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/40897", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1706686592
[SPARK-42945][CONNECT][FOLLOWUP] Disable JVM stack trace by default What changes were proposed in this pull request? This is a follow-up of #40575. Disables JVM stack trace by default. % ./bin/pyspark --remote local ... >>> spark.conf.set("spark.sql.ansi.enabled", True) >>> spark.sql('select 1/0').show() ... Traceback (most recent call last): ... pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error. == SQL(line 1, position 8) == select 1/0 ^^^ >>> >>> spark.conf.set("spark.sql.pyspark.jvmStacktrace.enabled", True) >>> spark.sql('select 1/0').show() ... Traceback (most recent call last): ... pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error. == SQL(line 1, position 8) == select 1/0 ^^^ JVM stacktrace: org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error. == SQL(line 1, position 8) == select 1/0 ^^^ at org.apache.spark.sql.errors.QueryExecutionErrors$.divideByZeroError(QueryExecutionErrors.scala:226) at org.apache.spark.sql.catalyst.expressions.DivModLike.eval(arithmetic.scala:674) ... Why are the changes needed? Currently JVM stack trace is enabled by default. % ./bin/pyspark --remote local ... >>> spark.conf.set("spark.sql.ansi.enabled", True) >>> spark.sql('select 1/0').show() ... Traceback (most recent call last): ... pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error. == SQL(line 1, position 8) == select 1/0 ^^^ JVM stacktrace: org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error. == SQL(line 1, position 8) == select 1/0 ^^^ at org.apache.spark.sql.errors.QueryExecutionErrors$.divideByZeroError(QueryExecutionErrors.scala:226) at org.apache.spark.sql.catalyst.expressions.DivModLike.eval(arithmetic.scala:674) ... Does this PR introduce any user-facing change? Users won't see the JVM stack trace by default. How was this patch tested? Existing tests. Merged to master. Thank you, @ueshin and @allisonwang-db .
gharchive/pull-request
2023-05-11T23:11:27
2025-04-01T06:37:54.997029
{ "authors": [ "dongjoon-hyun", "ueshin" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/41148", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2317741445
[SPARK-48394][3.5][CORE] Cleanup mapIdToMapIndex on mapoutput unregister This PR backports https://github.com/apache/spark/pull/46706 to branch 3.5. What changes were proposed in this pull request? This PR cleans up mapIdToMapIndex when the corresponding mapstatus is unregistered in three places: removeMapOutput removeOutputsByFilter addMapOutput (old mapstatus overwritten) Why are the changes needed? There is only one valid mapstatus for the same mapIndex at the same time in Spark. mapIdToMapIndex should also follows the same rule to avoid chaos. Does this PR introduce any user-facing change? No. How was this patch tested? Unit tests. Was this patch authored or co-authored using generative AI tooling? No. https://github.com/apache/spark/pull/46749 should fix the issue in the build in this case. For now, you could rebase/force push and that should fix up the build It seems I mistakenly pushed the branch to apache repo rather than my own repo. Will remove that branch after the PR merged. @Ngone51 the build won't trigger if the branch is in apache repo. Let's just open a new PR with your forked repository. @HyukjinKwon Oh, I see. Thanks for the reminder. FYI created a new PR (https://github.com/apache/spark/pull/46768) to replace this one.
gharchive/pull-request
2024-05-26T14:31:51
2025-04-01T06:37:55.003734
{ "authors": [ "HyukjinKwon", "Ngone51" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/46747", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2467580654
[MINOR][SQL][TESTS] Changes the test:runMain in the code comments to Test/runMain What changes were proposed in this pull request? This PR only changes the test:runMain description related to run command in the code comments to Test/runMain. Why are the changes needed? When we use the execution command in the code comments, we will see the following compilation warning: build/sbt "sql/test:runMain org.apache.spark.sql.execution.benchmark.TopKBenchmark" [warn] sbt 0.13 shell syntax is deprecated; use slash syntax instead: sql / Test / runMain The relevant comments should be updated to eliminate the compilation warnings when run the command. Does this PR introduce any user-facing change? No How was this patch tested? Manually run the test using the updated command and check that the corresponding compilation warning is no longer present. Was this patch authored or co-authored using generative AI tooling? No Please add [TEST] to the PR title Please add [TEST] to the PR title done Merged into master. Thanks @yaooqinn
gharchive/pull-request
2024-08-15T07:42:50
2025-04-01T06:37:55.007699
{ "authors": [ "LuciferYang", "yaooqinn" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/47767", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2755613743
[SPARK-50649] Fix inconsistencies with casting between different collations What changes were proposed in this pull request? Fixing the inconsistent behavior of casts between different collations. Currently, we are allowed to do casts between them in dataframe API but not in the SQL API. I propose allowing casts in SQL as well (we are already allowing them for complex types anyways). Also, this means changing the behavior or CAST(x AS STRING) which was previously not altering the collation of x, and will now change it to the default collation. Why are the changes needed? To make collation casts between the dataframe and SQL api consistent. Does this PR introduce any user-facing change? No. How was this patch tested? Added new unit tests for the dataframe API which we didn't have before and also updated the existing tests for the SQL API to match the new behavior. Was this patch authored or co-authored using generative AI tooling? No. @cloud-fan please take a look when you can The Spark Connect test failure is unrelated and flaky, I'm merging it to master, thanks!
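A heavily hedged PySpark illustration of the kind of query this change affects. The COLLATE expression syntax, the collation name UTF8_LCASE, and the sample table are assumptions about Spark 4.x collation support and are not taken from the PR itself.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Hypothetical data with a single string column.
spark.createDataFrame([("Ada",), ("ada",)], ["name"]).createOrReplaceTempView("people")

df = spark.sql("""
    SELECT
      name COLLATE UTF8_LCASE                 AS collated_name,
      CAST(name COLLATE UTF8_LCASE AS STRING) AS recast_name
    FROM people
""")
# After this change, recast_name should carry the default collation instead of
# silently keeping UTF8_LCASE, matching the DataFrame API behavior.
df.printSchema()
```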
gharchive/pull-request
2024-12-23T09:29:32
2025-04-01T06:37:55.011377
{ "authors": [ "cloud-fan", "stefankandic" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/49269", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
70329246
[SPARK-7069][SQL] Rename NativeType -> AtomicType. Also renamed JvmType to InternalType. Test build #30817 has started for PR 5651 at commit cbd4028. LGTM pending Jenkins. Test build #30817 has finished for PR 5651 at commit cbd4028. This patch passes all tests. This patch merges cleanly. This patch adds the following public classes (experimental): protected[sql] abstract class AtomicType extends DataType abstract class NumericType extends AtomicType class Encoder[T <: AtomicType](columnType: NativeColumnType[T]) extends compression.Encoder[T] class Decoder[T <: AtomicType](buffer: ByteBuffer, columnType: NativeColumnType[T]) class Encoder[T <: AtomicType](columnType: NativeColumnType[T]) extends compression.Encoder[T] class Decoder[T <: AtomicType](buffer: ByteBuffer, columnType: NativeColumnType[T]) class Encoder[T <: AtomicType](columnType: NativeColumnType[T]) extends compression.Encoder[T] class Decoder[T <: AtomicType](buffer: ByteBuffer, columnType: NativeColumnType[T]) This patch does not change any dependencies. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/30817/ Test PASSed.
gharchive/pull-request
2015-04-23T06:59:34
2025-04-01T06:37:55.018452
{ "authors": [ "AmplabJenkins", "SparkQA", "liancheng", "rxin" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/5651", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
105287792
[SPARK-10482] [ML] Add Python interface for ml.CountVectorizer jira: https://issues.apache.org/jira/browse/SPARK-10482 Add Python interface for feature transformer: ml.CountVectorizer Merged build triggered. Merged build started. Test build #42112 has started for PR 8650 at commit 0f1fa34. Test build #42112 has finished for PR 8650 at commit 0f1fa34. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/42112/ Test FAILed. Merged build finished. Test FAILed. Merged build triggered. Merged build started. Test build #42122 has started for PR 8650 at commit d22ba5a. Test build #42122 has finished for PR 8650 at commit d22ba5a. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/42122/ Test FAILed. Merged build finished. Test FAILed. Merged build triggered. Merged build started. Test build #42125 has started for PR 8650 at commit dd0e933. Test build #42125 has finished for PR 8650 at commit dd0e933. This patch passes all tests. This patch merges cleanly. This patch adds the following public classes (experimental): class CountVectorizer(JavaEstimator, HasInputCol, HasOutputCol): class CountVectorizerModel(JavaModel): Merged build finished. Test PASSed. Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/42125/ Test PASSed. LGTM except some minor issues This seems to do the same work as the outstanding PR https://github.com/apache/spark/pull/8561 @holdenk Yes, I just noticed it. Could you merge some changes in this PR into yours? I think the doctest from @hhbyyh is better and the default values are specified correctly in this PR. I will make a pass after. @hhbyyh Since this duplicates #8561, do you mind closing this PR? You can check opening PRs at https://spark-prs.appspot.com/#mllib. Ok, I'll merge in the doc tests. @mengxr Sorry for the extra effort during review.
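A minimal usage sketch of the Python CountVectorizer API this PR adds; the data, column names, and parameter values are illustrative rather than taken from the PR.

```python
from pyspark.ml.feature import CountVectorizer
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(0, ["a", "b", "c"]), (1, ["a", "b", "b", "c", "a"])],
    ["id", "words"],
)

cv = CountVectorizer(inputCol="words", outputCol="features", vocabSize=3, minDF=1.0)
model = cv.fit(df)            # learns the vocabulary from the corpus
model.transform(df).show(truncate=False)
print(model.vocabulary)       # e.g. ['a', 'b', 'c']
```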
gharchive/pull-request
2015-09-08T02:18:30
2025-04-01T06:37:55.033776
{ "authors": [ "AmplabJenkins", "SparkQA", "hhbyyh", "holdenk", "mengxr" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/8650", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
117975649
[SPARK-11877] Prevent agg. fallback conf. from leaking across test suites This patch fixes an issue where the spark.sql.TungstenAggregate.testFallbackStartsAt SQLConf setting was not properly reset / cleared at the end of TungstenAggregationQueryWithControlledFallbackSuite. This ended up causing test failures in HiveCompatibilitySuite in Maven builds by causing spilling to occur way too frequently. This configuration leak was inadvertently introduced during test cleanup in #9618. Test build #46402 has started for PR 9857 at commit ffe29f7. The failing HiveCompatibilitySuite test, mapjoin_mapjoin, has passed in the Maven pull request builder, and the modified TungstenAggregationQueryWithControlledFallbackSuite also passed tests, so I'm going to merge this now so that the overnight Maven builds have the opportunity to exhibit new test failures now that this one has been fixed. Test build #46402 has finished for PR 9857 at commit ffe29f7. This patch fails Spark unit tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/46402/ Test FAILed.
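A hedged sketch of the pattern that prevents this class of leak: set the conf only for the duration of the test and always restore the previous state, even when the key was previously unset. The helper and the key in the usage note are stand-ins, not the actual TungstenAggregate fallback setting.

```python
def with_sql_conf(spark, key, value, body):
    """Run `body()` with `key` set to `value`, then restore the previous state."""
    try:
        old = spark.conf.get(key)
        had_old = True
    except Exception:              # key was not set before the test
        old, had_old = None, False
    spark.conf.set(key, value)
    try:
        return body()
    finally:
        if had_old:
            spark.conf.set(key, old)
        else:
            spark.conf.unset(key)  # leave no trace for later suites

# Usage (hypothetical key/value):
# with_sql_conf(spark, "spark.sql.shuffle.partitions", "5", run_my_queries)
```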
gharchive/pull-request
2015-11-20T06:43:55
2025-04-01T06:37:55.039107
{ "authors": [ "AmplabJenkins", "JoshRosen", "SparkQA" ], "repo": "apache/spark", "url": "https://github.com/apache/spark/pull/9857", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
85184657
[STORM-848] Shade external dependencies Shading following dependencies: org.apache.httpcomponents:http* org.apache.zookeeper:zookeeper org.apache.curator:* com.twitter:carbonite com.twitter:chill-java org.objenesis:objenesis org.tukaani:xz org.yaml:snakeyaml org.jgrapht:jgrapht-core commons-httpclient:commons-httpclient org.apache.commons:commons-compress org.apache.commons:commons-exec commons-io:commons-io commons-codec:commons-codec commons-fileupload:commons-fileupload commons-lang:commons-lang com.googlecode.json-simple:json-simple org.clojure:math.numeric-tower org.clojure:tools.cli org.clojure:tools.macro joda-time:joda-time Can we also shade jetty? Also why there are new dependency on some connectors. @Parth-Brahmbhatt Primarily, anything that's getting directly called from withing clojure is hard to shade. Hence (disruptor, jetty..) etc are not shaded. As such newer version of jetty are self-shaded ( the packages are different.). But it would be a much bigger change to accomplish. As the storm-core is shading these dependencies, they need to be called out explicitly by connectors now if they implicitly depended on storm-core so far. I agree with @Parth-Brahmbhatt. I think for components in "/external" we should use the use the shaded classes in storm-core instead of manually adding the dependency. Doing so is just a matter of changing import statements. @kishorvpatil what issues did you run into when you shaded Jetty? @ptgoetz my problem with that is the distribution mechanism is different for external from what it is for storm proper. Say I create a topology jar with storm-hdfs in it. At some point in the future storm upgrades the version of snake.yaml and it is a non-backwards compatible change. Now my topology will not run on a newer version of storm. @revans2 Good point. 
@ptgoetz I get following exceptions when I try to shade Jetty Compiling storm.starter.clj.word-count to /home/kpatil/commun/incubator-storm/examples/storm-starter/target/classes Exception in thread "main" java.lang.SecurityException: Invalid signature file digest for Manifest main attributes, compiling:(word_count.clj:16:1) at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3558) at clojure.lang.Compiler.compile1(Compiler.java:7226) at clojure.lang.Compiler.compile1(Compiler.java:7216) at clojure.lang.Compiler.compile(Compiler.java:7292) at clojure.lang.RT.compile(RT.java:398) at clojure.lang.RT.load(RT.java:438) at clojure.lang.RT.load(RT.java:411) at clojure.core$load$fn__5066.invoke(core.clj:5641) at clojure.core$load.doInvoke(core.clj:5640) at clojure.lang.RestFn.invoke(RestFn.java:408) at clojure.core$load_one.invoke(core.clj:5446) at clojure.core$compile$fn__5071.invoke(core.clj:5652) at clojure.core$compile.invoke(core.clj:5651) at clojure.lang.Var.invoke(Var.java:379) at clojure.lang.Compile.main(Compile.java:81) Caused by: java.lang.SecurityException: Invalid signature file digest for Manifest main attributes at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:286) at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:239) at java.util.jar.JarVerifier.processEntry(JarVerifier.java:317) at java.util.jar.JarVerifier.update(JarVerifier.java:228) at java.util.jar.JarFile.initializeVerifier(JarFile.java:348) at java.util.jar.JarFile.getInputStream(JarFile.java:415) at sun.misc.URLClassPath$JarLoader$2.getInputStream(URLClassPath.java:775) at sun.misc.Resource.cachedInputStream(Resource.java:77) at sun.misc.Resource.getByteBuffer(Resource.java:160) at java.net.URLClassLoader.defineClass(URLClassLoader.java:436) at java.net.URLClassLoader.access$100(URLClassLoader.java:71) at java.net.URLClassLoader$1.run(URLClassLoader.java:361) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:425) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at java.lang.ClassLoader.loadClass(ClassLoader.java:412) at java.lang.ClassLoader.loadClass(ClassLoader.java:412) at java.lang.ClassLoader.loadClass(ClassLoader.java:358) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:190) at storm.starter.clj.word_count$loading__4958__auto__.invoke(word_count.clj:16) at clojure.lang.AFn.applyToHelper(AFn.java:152) at clojure.lang.AFn.applyTo(AFn.java:144) at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3553) can you try adding following to the shade-plugin part. See http://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html for a full example. *:* META-INF/*.SF META-INF/*.DSA META-INF/*.RSA @kishorvpatil I can't reproduce it. What does your shade config for jetty look like? Here are the sections I added to test: The include: <include>org.eclipse.jetty:*</include> The relocation: <pattern>org.eclipse.jetty</pattern> <shadedPattern>org.apache.storm.jetty</shadedPattern> </relocation> I left out the filter @Parth-Brahmbhatt mentioned, expecting to get the error you did, but everything succeeded. I tried What @Parth-Brahmbhatt suggested to successfully build it. Testing it now. Thank you @ptgoetz. The changes worked. And Jetty is shaded now. Thanks @kishorvpatil! +1 +1. 
+1 With this patch, there are a lot of ClassNotFoundException when nimbus or ui load Plugins such as AutoHdfs, what can we do to fix this. @tedxia I responded on your pull request #600 @revans2 thanks for your replay.
gharchive/pull-request
2015-06-04T16:57:09
2025-04-01T06:37:55.050729
{ "authors": [ "Parth-Brahmbhatt", "kishorvpatil", "ptgoetz", "revans2", "tedxia" ], "repo": "apache/storm", "url": "https://github.com/apache/storm/pull/577", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
100412398
[STORM-949] On the topology summary UI page, added Elapsed time since error column. Currently, the topology summary UI page tells you the last error and highlights it in red if it happened within the last 30 min. However, I think it's useful to have a column that tells you the time that has elapsed since the most recent error occurred. I found this useful for monitoring the well-being of my Storm cluster. Not sure why Travis is failing again. I just built and ran all tests on my local machine and everything was fine. Hi, there's a storm-hive compilation issue. Your patch modifies HTML pages, so you don't need to worry about the build. :) I'll take a look when I have some time. Thanks! @jerrypeng I'd like to see a screenshot with the change applied before taking a detailed look. Could you post it as a comment? Fixed the formatting issues. I'm more in favor of putting a time/date here than an elapsed time. The actual description for STORM-949 is "On the topology summary UI page, last shown error should have the time and date". Yes, but the exact time and date can be found if you drill down to the component. Having the elapsed time, I feel, will tell administrators at a finer grain when the error happened. @HeartSaVioR can you take a look at my pull request again? @jerrypeng I'm with @knusbaum. The PR is a bit different from the JIRA title, and I'm also more in favor of putting a time/date. How about gathering consensus about this feature from the dev mailing list and reflecting the feedback? Just modified the UI to show the error time as a time and date; if you hover over that time, a tooltip pops up displaying the elapsed time. The modified version of the UI is what I, @knusbaum, and @d2r suggested, and it seems that there are no other opinions now. The build failure is not related, so I'm +1.
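For illustration only, a small Python sketch of how "elapsed time since error" can be derived from the error's epoch timestamp that the UI already has; the actual Storm UI does this in its JavaScript/HTML templates, so this is not the code that was merged.

```python
import time

def elapsed_since(error_epoch_s, now_s=None):
    """Return a human-readable 'time since error' string."""
    now_s = time.time() if now_s is None else now_s
    delta = max(0, int(now_s - error_epoch_s))
    if delta < 60:
        return "%ds ago" % delta
    if delta < 3600:
        return "%dm %ds ago" % (delta // 60, delta % 60)
    return "%dh %dm ago" % (delta // 3600, (delta % 3600) // 60)

# e.g. an error that happened 90 minutes ago
print(elapsed_since(time.time() - 5400))   # "1h 30m ago"
```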
gharchive/pull-request
2015-08-11T21:06:15
2025-04-01T06:37:55.056910
{ "authors": [ "HeartSaVioR", "jerrypeng", "knusbaum" ], "repo": "apache/storm", "url": "https://github.com/apache/storm/pull/675", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
898200806
[Explore] 'Is temporal' checkbox should take effect Current behavior: Only columns that have the 'Is Temporal' box checked in Edit dataset show in the Time Column select dropdown, which is the right behavior ✅ What needs to be done: ICON CHANGE - When a user unchecks 'Is Temporal' on a column that was detected as Time, the column icon should change to "?" or "#", and vice versa use the 🕑 icon. If the column was previously selected as the time column, the time range WHERE clause should be removed automatically from the query. Related project: [explore]Can't remove unnecessary Datetime column from time filter https://user-images.githubusercontent.com/67837651/119166999-2a7ed700-ba14-11eb-9e26-a75d668ef8ec.mov Other related projects: [Explore]Search data panel column by key words; Drag and Drop. Not being able to sort columns in the Edit dataset modal is super annoying.. 🤣 @geido another related issue: when clicking "SYNC COLUMNS FROM SOURCE", Is temporal is not detected
gharchive/issue
2021-05-21T16:08:41
2025-04-01T06:37:55.061522
{ "authors": [ "junlincc" ], "repo": "apache/superset", "url": "https://github.com/apache/superset/issues/14754", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
927362849
Custom Color Theming Not working for embedded chart While adding custom color theme for dashboard using below (where testpart23456789,Info,due,beforeTime,High are label names), the colors are getting applied only when charts are seen in the dashboard or during dashboard embed. Issue we are facing is that when chart is explored or embedded, these label colors are getting overridden by default theming. Also, while exploring chart, these label colors appear when "Superset Colors" theme is chosen but after saving, again the label colors get overridden by default theming. "label_colors": { "testpart23456789": "#FFFF00", "Info": "#FFFF00", "due": "#8B0000", "beforeTime": "#008000", "High": "#8B0000" } We got reference for above from FAQ- https://superset.apache.org/docs/frequently-asked-questions ( Is there a way to force the use specific colors?) Expected results While chart embed or explore, the label colors should appear as provided i.e should follow below: "label_colors": { "testpart23456789": "#FFFF00", "Info": "#FFFF00", "due": "#8B0000", "beforeTime": "#008000", "High": "#8B0000" } Actual results While adding custom color theme for dashboard using below (where testpart23456789,Info,due,beforeTime,High are label names), the colors are getting applied only when charts are seen in the dashboard or during dashboard embed. Issue we are facing is that when chart is explored or embedded, these label colors are getting overridden by default theming. Also, while exploring chart, these label colors appear when "Superset Colors" theme is chosen but after saving, again the label colors get overridden by default theming. Screenshots How to reproduce the bug Edit dashboard to add above provided config in Advanced Dashboard properties and save. Open dashboard and explore any chart under it. You will see that the colors provided are not being followed for labels. Environment (please complete the following information): superset version: 1.1.0 python version: 3.8.5 node.js version: v14.17.0 Checklist Make sure to follow these steps before submitting your issue - thank you! [ ] I have checked the superset logs for python stacktraces and included it here as text if there are any. [x] I have reproduced the issue with at least the latest released version of superset. [x] I have checked the issue tracker for the same issue and I haven't found one similar. This should be solved in 1.4
gharchive/issue
2021-06-22T15:28:55
2025-04-01T06:37:55.069517
{ "authors": [ "amitmiran137", "laveenamurjani789" ], "repo": "apache/superset", "url": "https://github.com/apache/superset/issues/15299", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
495165654
Unable to display data table in sqlite? I have successfully installed superset in docker mode, and the example data is also loaded successfully. When I add local sqlite for testing, it can't display the specific table name in sql lab. superset version: 0.28.0 python version: 2.7.5 node.js version: 10.16.3 npm version: 6.9.0 Checklist [ X] I have checked the superset logs for python stacktraces and included it here as text if there are any. [ X] I have reproduced the issue with at least the latest released version of superset. [ X] I have checked the issue tracker for the same issue and I haven't found one similar. Additional context no any table display in main schema, when i exec select sql: sqlite error: no such table: xxx add datasource: Test Connection: OK expose in SQL Lab: selected No data table displayed in bottom ! View In SQL Lab Editor: schema: main(only) see table schema(0 in main) View database by sqlite3 in Centos7: display table This issue refers to an old version of Superset (also deprecated version of Python); please try the most recent official release (0.34) and reopen if the issue persists. I installed the superset as per the tutorial via Docker Compose on Windows 11 and added a SQLite database using the PREVENT_UNSAFE_DB_CONNECTIONS = False directive The connection test is successful. The database is fully functional and accessible. The superset version is 0.0.0dev (as stated in the About section of the Settings menu). But the same thing is happening to me: View In SQL Lab Editor: schema: main (only) see table schema (0 in main) I installed the superset as per the tutorial via Docker Compose on Windows 11 and added a SQLite database using the PREVENT_UNSAFE_DB_CONNECTIONS = False directive The connection test is successful. The database is fully functional and accessible. The superset version is 0.0.0dev (as stated in the About section of the Settings menu). But the same thing is happening to me: View In SQL Lab Editor: schema: main (only) see table schema (0 in main) Can we repoen this? I cloned the repo just now and got it running with docker compose and followed this comment https://github.com/apache/superset/issues/9748#issuecomment-1124323169 to get the sqlite db added but I have the same experience, adding the sqlite db file via the path copied into superset_home works with the driver connection string properly formatted, but adding a dataset from the sqlite db results in a single schema main and no tables found. exact same experience just now as capttrousers. Brand new install, managed to get sqlite db to connect, but main only and no data :( I just installed superset using manual steps and have it running. I added a sqlite3 db file under database connections , where it showed the connection to be ok with unsafe flag set to FALSE. I also only see main as schema under the newly attached database and don't see the table which exists when I query the db file via sqlite3 commain line. Infra: SQLite 3.41.2 2023-03-22 11:56:21 0d1fc92f94cb6b76bffe3ec34d69cffde2924203304e8ffc4155597af0c191da zlib version 1.2.13 gcc-11.2.0 Loaded your LOCAL configuration at [/home/vibhu/src/talkAItive/superset/superset_config.py] Python 3.10.13 Flask 2.2.5 Werkzeug 2.3.8 Help please. the same problem persists even as of May 2024. A solution please?
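A hedged sketch of the two things people in this thread typically have to line up: the superset_config.py flag (quoted earlier in the thread) that allows a SQLite connection at all, and making sure the path in the SQLAlchemy URI points at the same file, inside the container, that actually holds the tables. The paths below are assumptions, not taken from any specific reply.

```python
# superset_config.py
PREVENT_UNSAFE_DB_CONNECTIONS = False

# SQLAlchemy URI entered in the UI; note the four slashes for an absolute path:
#   sqlite:////app/superset_home/mydata.db

# Quick sanity check that the file Superset points at really contains tables:
import sqlite3

con = sqlite3.connect("/app/superset_home/mydata.db")   # assumed path
tables = con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables)   # an empty list here would explain why 'main' shows zero tables
```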
gharchive/issue
2019-09-18T11:26:55
2025-04-01T06:37:55.082055
{ "authors": [ "LoveMyBaby", "capttrousers", "everton3x", "hiwaveSupport", "mechgt", "sammigachuhi", "villebro" ], "repo": "apache/superset", "url": "https://github.com/apache/superset/issues/8249", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
499341600
How to change Big Number CSS to be like that? How to configure to make big number like that? Example: @syazwan0913 you'll probably have to add your own component for this. Superset abstracts the common components to their plugins and exposes them as individual packages. https://github.com/apache-superset/superset-ui-plugins/blob/master/packages/. If you take a look at the big number chart imported via the MainPresets.js file. You will find it does use a mainColor variable (ie the colorPicker) in the linear gradient. The source code is pretty straightforward if you want to read it here: https://github.com/apache-superset/superset-ui-plugins/blob/484d63993b81d593183f1f1a2b8f9d91aeef310f/packages/superset-ui-legacy-preset-chart-big-number/src/BigNumber/BigNumber.jsx#L207 As you can see it's tightly coupled with the AreaSeries chart from @data-ui/xy-chart. You might want a new component or a new render section here that just takes a bg image. Probably making an issue request / PR and seeing if you can add it to the ui-plugins would be best. I'd like that feature too :) @timurista i see. thanks for your guide. @syazwan0913 You could, in theory, do this with CSS. When you "Edit Dashboard" there's an option to add CSS. If the number and sequence of these Big Number components on your dashboard is static, you can use CSS nth-of-type trickery. I made a quick example to illustrate the case: Syles applied here are as follows. Set the background color of each instance. .superset-legacy-chart-big-number:nth-of-type(1){ background: orange; } Put the subheader where you want it (you can set the width so it wraps, etc.) .superset-legacy-chart-big-number .subheader-line { text-align: left; position: absolute; bottom: 10px; left: 10px; } Add a CSS pseudo element, and give it the icon you want as a background. .superset-legacy-chart-big-number:nth-of-type(1)::after { content: ''; display: block; height: 60px; width: 60px; background: url(https://image.flaticon.com/icons/png/512/121/121901.png); background-size: contain; position: absolute; bottom: 20px; right: 20px; } That's super hacky, but might get you the result you want without having to make new components. @rusackas That is great. I will try it out. Thanks @rusackas What version of superset do you have installed?, I have tried this hack on v.0.28.1 and did not work. Thanks @rusackas What version of superset do you have installed?, I have tried this hack on v.0.28.1 and did not work. Thanks I'm not sure what I was running at the time, but I usually run the latest code on master. Not sure where you added the CSS, but just for clarity, do the following (which I did on the example Baby Names dashboard): click "Edit Dashboard" click the dropdown arrow at the far right next to "Switch to view mode", and select "Edit CSS" from the dropdown menu. Paste in this block of CSS: .superset-legacy-chart-big-number:nth-of-type(1){ background: orange; } .superset-legacy-chart-big-number .subheader-line { text-align: left; position: absolute; bottom: 10px; left: 10px; } .superset-legacy-chart-big-number:nth-of-type(1)::after { content: ''; display: block; height: 60px; width: 60px; background: url(https://image.flaticon.com/icons/png/512/121/121901.png); background-size: contain; position: absolute; bottom: 20px; right: 20px; } You should see the result instantly, but you can close the modal/overlay, and click "Switch to view mode" to finish editing. 
The result should look like so: @muneneg Another hacky way I found out is using the chart-id #chart-id-225{background: green;} #chart-id-226{background: orange;} #chart-id-227{background: red;} @B-Cheye How did you manage to make the entire box coloured. With the above CSS you have shared only the chart area will get coloured but not the dashboard-component. Using your code above results in the image shown. @stevensuting If you want the whole dashboard-component color to change then you will need to target the whole dashboard div @B-Cheye How do you do that when this is how the CSS is ordered? Could you share your CSS snippet? @stevensuting it looks like @B-Cheye 's solution references the id attribute on the same line I'd annotated with "But it's here..." in the screenshot. So I don't think it is coloring the entire chart wrapper on the dash, but just the chart area itself. You could get hacky with nth-child/nth-of-type CSS selectors on the dashboard to color the whole wrapper, if your layout is fairly stable. So I am asking in 2024 with v 3.1.1. how do we do this with CSS? I particularly want my value to be centered
gharchive/issue
2019-09-27T09:35:24
2025-04-01T06:37:55.096509
{ "authors": [ "B-Cheye", "Nikomahal", "muneneg", "rusackas", "stevensuting", "syazshafei", "syazwan0913", "timurista" ], "repo": "apache/superset", "url": "https://github.com/apache/superset/issues/8314", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
934083147
feat: more SIP-40 errors SUMMARY Make more SQL Lab error messages SIP-40 compliant. BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF TESTING INSTRUCTIONS ADDITIONAL INFORMATION [ ] Has associated issue: [ ] Changes UI [ ] Includes DB Migration (follow approval process in SIP-59) [ ] Migration is atomic, supports rollback & is backwards-compatible [ ] Confirm DB migration upgrade and downgrade tested [ ] Runtime estimates and downtime expectations provided [ ] Introduces new feature or API [ ] Removes existing feature or API Hi @betodealmeida, this PR causes SQL Lab to mishandle some error messages: I have reverted this from airbnb's release branch, but you will probably have to fix it in the open-source master branch.
gharchive/pull-request
2021-06-30T20:04:53
2025-04-01T06:37:55.100818
{ "authors": [ "betodealmeida", "graceguo-supercat" ], "repo": "apache/superset", "url": "https://github.com/apache/superset/pull/15482", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1084729679
fix(deck.gl): update view state on property changes (#17720) SUMMARY Updates the deck.gl viewport state on filter updates. BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF Before (screenshots 1 & 2): After: https://user-images.githubusercontent.com/2187389/146766468-a031dad9-b4d1-48ff-b307-dbfa2dd79417.mov TESTING INSTRUCTIONS Create a Dashboard containing a deck.gl polygon chart (auto zoom = True) with a column included in a filter. Open the dashboard => the map is zoomed to fit the shown features (screenshot 1). Change the native filter value. The map is now zoomed to the new bounding box (see video). ADDITIONAL INFORMATION [x] Has associated issue: fixes #17720 [ ] Required feature flags: [ ] Changes UI [ ] Includes DB Migration (follow approval process in SIP-59) [ ] Migration is atomic, supports rollback & is backwards-compatible [ ] Confirm DB migration upgrade and downgrade tested [ ] Runtime estimates and downtime expectations provided [ ] Introduces new feature or API [ ] Removes existing feature or API @hbruch can you rebase this? I believe there may have been some CI issues when you pushed these last changes. @hbruch sorry about this, but there's yet more flakiness in our CI pipeline: #17918 . Bear with us while we get it sorted.. @villebro Is there anything I could do to have this merged? Thx
gharchive/pull-request
2021-12-20T12:23:08
2025-04-01T06:37:55.108066
{ "authors": [ "hbruch", "villebro" ], "repo": "apache/superset", "url": "https://github.com/apache/superset/pull/17826", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
454037038
Code to add support for TOMEE-542 Signed-off-by: Doychin Bondzhev doychin@dsoft-bg.com @doychin Thanks for the PR It looks good to me. Even if it's a small change, I think it should be visible on release notes. Do you mind creating a JIRA ticket and updating the ticket title with it? This looks great, and thank you for the PR! I'm just going to run a build with this on a Windows box and get it merged in for you.
gharchive/pull-request
2019-06-10T07:35:10
2025-04-01T06:37:55.115537
{ "authors": [ "doychin", "jeanouii", "jgallimore" ], "repo": "apache/tomee", "url": "https://github.com/apache/tomee/pull/482", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
640765380
Can mark an empty IP address as a service address when POSTing to the servers API I'm submitting a ... [x] bug report [ ] new feature / enhancement request [ ] improvement request (usability, performance, tech debt, etc.) [ ] other Traffic Control components affected ... [ ] CDN in a Box [ ] Documentation [ ] Grove [ ] Traffic Control Client [ ] Traffic Monitor [x] Traffic Ops [ ] Traffic Ops ORT [ ] Traffic Portal [ ] Traffic Router [ ] Traffic Stats [ ] Traffic Vault [ ] unknown Current behavior: A user can create a server with an empty IP address and mark it as a service address, for example using this payload: { "cachegroupId": 3, "cachegroup": "infrastructure", "cdnId": 2, "cdnName": "testCDN", "domainName": "test.net", "hostName": "testingServer", "interfaceMtu": 1500, "interfaceName": "eth0", "ipAddress": "", "ipGateway": "0.0.0.2", "ipNetmask": "255.255.255.0", "ip6Address": "0:0:0:0:0:0:0:1", "ip6Gateway": "::1", "physLocationId": 1, "physLocation": "Augusta-ME", "profileId": 34, "profile": "ATS_Edge_MKGA", "statusId": 2, "typeId": 11, "updPending": false, "ipIsService": true, "ip6IsService": false } The server will return 200 OK. Expected / new behavior: Should return 400 with a message like "an empty IP or IPv6 address cannot be marked as a service address". Minimal reproduction of the problem with instructions: Anything else: There's also another problem with that. You said it returns 200 OK, but ignoring the fact that you made an empty IP a service address, it also, therefore, doesn't have any service addresses. It should have at least rejected it on that basis. @ocket8888 Yeah, you're right about that. I just checked and the server is still created without an IP and an IPv6 address. @ocket8888 Also, even if the user does enter the IPv6 and IP addresses, Traffic Portal still clears out the server's IP.
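A standalone Python sketch of the validation rule being requested, not the actual Traffic Ops implementation (which is in Go); the field names follow the payload above and the error wording follows the expected behavior and the follow-up comment.

```python
def validate_service_addresses(server):
    """Return a list of validation errors for a server payload."""
    errors = []
    if server.get("ipIsService") and not server.get("ipAddress"):
        errors.append("an empty IP address cannot be marked as a service address")
    if server.get("ip6IsService") and not server.get("ip6Address"):
        errors.append("an empty IPv6 address cannot be marked as a service address")
    # A server must end up with at least one non-empty, service-marked address.
    has_service = (server.get("ipIsService") and server.get("ipAddress")) or \
                  (server.get("ip6IsService") and server.get("ip6Address"))
    if not has_service:
        errors.append("a server must have at least one non-empty service address")
    return errors

payload = {"ipAddress": "", "ipIsService": True, "ip6Address": "::1", "ip6IsService": False}
print(validate_service_addresses(payload))
```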
gharchive/issue
2020-06-17T21:52:24
2025-04-01T06:37:55.122902
{ "authors": [ "dpham692", "ocket8888" ], "repo": "apache/trafficcontrol", "url": "https://github.com/apache/trafficcontrol/issues/4804", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
425930036
BACKPORT: fixes #3223. updated type of SteeringTargetNullable.Value to JSONIntStr (cherry picked from commit 3e11da43d31ce8e4469e033d633a47976a3f783b) Which issue is fixed by this PR? If not related to an existing issue, what does this PR do? Fixes #3223 Can't update or create steering target in 3.0.0 or 3.0.1 Which TC components are affected by this PR? [ ] Documentation [ ] Grove [ ] Traffic Analytics [ ] Traffic Monitor [x] Traffic Ops [ ] Traffic Ops ORT [x] Traffic Portal [ ] Traffic Router [ ] Traffic Stats [ ] Traffic Vault [ ] Other _________ What is the best way to verify this PR? Please include manual steps or automated tests. (If no tests are part of this PR, please provide explanation as to why no tests are included.) Check all that apply [ ] This PR includes tests [ ] This PR includes documentation updates [ ] This PR includes an update to CHANGELOG.md [ ] This PR includes all required license headers [ ] This PR includes a database migration (ensure that migration sequence is correct) [ ] This PR fixes a serious security flaw. Read more: www.apache.org/security Refer to this link for build results (access rights to CI server needed): https://builds.apache.org/job/trafficcontrol-PR/3402/ Test PASSed.
gharchive/pull-request
2019-03-27T12:25:08
2025-04-01T06:37:55.129701
{ "authors": [ "asfgit", "smalenfant" ], "repo": "apache/trafficcontrol", "url": "https://github.com/apache/trafficcontrol/pull/3440", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
522978361
Rewrite current stats from Perl to Golang What does this PR (Pull Request) do? [x] This PR fixes #3853 This PR rewrites current_stats from Perl to Golang and adds API documentation for it. Currently no tests as we do not have Traffic Stats available for TO Api tests. Which Traffic Control components are affected by this PR? Documentation Traffic Golang Control Client Traffic Ops What is the best way to verify this PR? Build traffic ops with PR code and then hit /current_stats, it should return the same data as perl implementation If this is a bug fix, what versions of Traffic Control are affected? The following criteria are ALL met by this PR [x] This PR includes tests OR I have explained why tests are unnecessary [x] This PR includes documentation OR I have explained why documentation is unnecessary [x] This PR includes an update to CHANGELOG.md OR such an update is not necessary [x] This PR includes any and all required license headers [x] This PR ensures that database migration sequence is correct OR this PR does not include a database migration [x] This PR DOES NOT FIX A SERIOUS SECURITY VULNERABILITY (see the Apache Software Foundation's security guidelines for details) Additional Information Still need to look into why Perl implementation did https://github.com/apache/trafficcontrol/blob/master/traffic_ops/app/lib/Utils/Helper.pm#L39 on each cdn name for the query and if that escaping is handled for us in the Golang client Refer to this link for build results (access rights to CI server needed): https://builds.apache.org/job/trafficcontrol-PR/4780/ So then is a work in progress or ready for review? @ocket8888 this is ready for review. The perl code did replace on ,',",\n in the CDN names prior to going to influxdb but all are blocked from even being apart of a CDN name in the first place -> https://github.com/apache/trafficcontrol/blob/master/traffic_ops/traffic_ops_golang/cdn/cdns.go#L112 Refer to this link for build results (access rights to CI server needed): https://builds.apache.org/job/trafficcontrol-PR/4808/ Refer to this link for build results (access rights to CI server needed): https://builds.apache.org/job/trafficcontrol-PR/4847/ Refer to this link for build results (access rights to CI server needed): https://builds.apache.org/job/trafficcontrol-PR/4848/
gharchive/pull-request
2019-11-14T16:44:54
2025-04-01T06:37:55.140255
{ "authors": [ "asf-ci", "mhoppa", "ocket8888" ], "repo": "apache/trafficcontrol", "url": "https://github.com/apache/trafficcontrol/pull/4114", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
743212671
Video streaming issue on traffic server Hi , We are using traffic server as video streaming CDN. We have 2 server each 1,4TB SSD and 10G network interface. During peak hour video stream hard to watch and buffering in client side. Server's load average and Memory are higher than normal state. But streaming bandwidth is around 3Gbps on all 3 server total. We wondering that we have potential only 3Gbps bandwidth on streaming but it has high load on traffic servers. Does those server can handle this load? load average 106.94, 93.30, 88.32 as 32 core CPU memory usage total 62GB used 62GB established tcp connection is around 1200 on each server traffic server's error log is increasing as following 20201115.16h42m02s CONNECT:[0] could not connect [CONNECTION_CLOSED] to 127.0.0.1 for 'http://localhost/vod/encrypt/prod/8a01918b72167aad01722a8007db243e/8a01918b72167aad01722a8007db243e_1500_2/media-88320000.mp4' 20201115.16h42m02s CONNECT:[0] could not connect [CONNECTION_CLOSED] to 127.0.0.1 for 'http://localhost/vod/encrypt/prod/8a01918b752cbbb7017577a9a1fc05ba/8a01918b752cbbb7017577a9a1fc05ba_1500/media-46260040.mp4' 20201115.16h42m02s CONNECT:[0] could not connect [CONNECTION_CLOSED] to 127.0.0.1 for 'http://localhost/vod/encrypt/prod/8a01918b752cbbb7017549e7cc523b37/8a01918b752cbbb7017549e7cc523b37_1000_2/media-220416000.mp4' 20201115.16h42m02s CONNECT:[0] could not connect [CONNECTION_CLOSED] to 127.0.0.1 for 'http://localhost/vod/encrypt/prod/8a01918b69817ebf016a951962ab7ccc/8a01918b69817ebf016a951962ab7ccc_1500/media-4513608.mp4' Best Regards, how about disk util,and how much MB every disk,if each sever around 1200 on each server,it is abort 1.2G,the disk may be over used loadavg is a very broad metric, which includes CPU, memory wait, disk wait, disk usage, and potentially other things. Can you look at specific metrics on your system, and see what specific things have high load? Is it just CPU? Memory? Disk io_wait? All of the above? ATS will use as much memory as you tell it to. You can allocate ramdisks and give them to ATS as block devices. Each disk given to ATS also has a memory cache in front of it, the size of which is configurable. See: https://docs.trafficserver.apache.org/en/8.0.x/admin-guide/files/records.config.en.html#ram-cache https://docs.trafficserver.apache.org/en/8.0.x/admin-guide/files/storage.config.en.html https://docs.trafficserver.apache.org/en/8.0.x/admin-guide/files/volume.config.en.html ATS does have some known memory leaks, but they're generally pretty small. It shouldn't use much more memory than what you allocated for storage and ram_cache, and the memory shouldn't grow much over time. Many people run ATS in production with bandwidth much higher than 3Gbps. My company has caches doing in excess of 20Gbps. If you're having trouble achieving those speeds, another possibility is Linux Kernel Parameters. It's common to have to do a lot of tuning of Linux Kernel Parameters to achieve high performance. Though I wouldn't expect a great deal of tuning to be necessary under 10Gbps. I assume this is somewhat recent hardware, with decent CPUs? We do have some Prod servers that struggle to exceed 10Gbps, from underpowered CPUs with few PCI lanes. Platforms with too few PCI lanes can also cause network bottlenecks like that. If your high load average is being caused by disk io_wait, are you certain your SSDs are fast? Some SSD brands have poor performance. It may be worth testing their sequential and random speeds, just to be sure that isn't the problem. 
traffic server's error log is increasing as following 20201115.16h42m02s CONNECT:[0] could not connect [CONNECTION_CLOSED] to 127.0.0.1 for 'http://localhost/vod/encrypt/prod/8a01918b72167aad01722a8007db243e/8a01918b72167aad01722a8007db243e_1500_2/media-88320000.mp4' I'm not sure I understand. Your initial question is about bandwidth bottlenecks and high loadavg, but this looks like an error? This looks like an origin (on localhost?) is misconfigured, or unable to handle the requests or load? Are you saying you see a lot of these errors as you approach 3Gbps? That sounds like the Origin server isn't able to handle the load, that the problem is with the Origin, not ATS. Can you verify your Origin itself is capable of the request load? Are these requests mostly Cache Hits or Misses? For a CDN, ATS should be caching I assume. Is the full traffic going to the Origin? Could that be causing the problem? Maybe the Origin can't handle the full 3Gbps, because everything is a Cache Miss, and you need to set Cache-Control to make ATS cache the content so the Origin can handle it. Certain SSL Certificates can also cause high CPU usage, especially RSA. Are you using HTTPS? In short, there are a huge number of factors that can cause bottlenecks like you're seeing. You'll have to narrow it down further, and inspect your hardware usage to figure out what the bottleneck is, and how to fix it. But ATS can definitely do +20Gbps, potentially even 100Gbps, and many large corporations are doing so in Production. Please try to reproduce this in ATS 9.1, and reopen this issue if you still see it.
gharchive/issue
2020-11-15T08:52:13
2025-04-01T06:37:55.152302
{ "authors": [ "Dorjpalam", "rob05c", "ywkaras", "zds05" ], "repo": "apache/trafficserver", "url": "https://github.com/apache/trafficserver/issues/7324", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2211476254
swoc: install swoc_ip_util.h Adding swoc_ip_util.h to the list of header sources so that it will be installed in the expected swoc include location. [approve ci autest] Cherry-picked to v10.0.x
gharchive/pull-request
2024-03-27T18:10:52
2025-04-01T06:37:55.154490
{ "authors": [ "bneradt", "cmcfarlen" ], "repo": "apache/trafficserver", "url": "https://github.com/apache/trafficserver/pull/11190", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
273623281
Remove multiprocessing.Queue.qsize() from traffic_replay Because this raises NotImplementedError on Unix platforms like Mac OS X. Details below: http://python.readthedocs.io/en/stable/library/multiprocessing.html#multiprocessing.Queue.qsize [ci approve autest] [approve ci autest]
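For context, qsize() relies on sem_getvalue(), which macOS does not implement, hence the NotImplementedError. A minimal sketch of one portable workaround, keeping an approximate count in shared memory instead of calling qsize() (illustrative only, not the actual traffic_replay change):
import multiprocessing

queue = multiprocessing.Queue()
size = multiprocessing.Value("i", 0)  # shared, approximate item count

def put_item(item):
    queue.put(item)
    with size.get_lock():
        size.value += 1

def get_item():
    item = queue.get()
    with size.get_lock():
        size.value -= 1
    return item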
gharchive/pull-request
2017-11-14T00:09:34
2025-04-01T06:37:55.156130
{ "authors": [ "bryancall", "masaori335" ], "repo": "apache/trafficserver", "url": "https://github.com/apache/trafficserver/pull/2809", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
163514600
TS-4396: fix number_of_redirections off-by-one please test FreeBSD build successful! See https://ci.trafficserver.apache.org/job/Github-FreeBSD/406/ for details. Linux build successful! See https://ci.trafficserver.apache.org/job/Github-Linux/300/ for details. FreeBSD build failed! See https://ci.trafficserver.apache.org/job/Github-FreeBSD/412/ for details. Linux build successful! See https://ci.trafficserver.apache.org/job/Github-Linux/306/ for details. [approve ci] Closing since there is a new PR for this #2092
gharchive/pull-request
2016-07-02T11:34:02
2025-04-01T06:37:55.160274
{ "authors": [ "PSUdaemon", "atsci", "bryancall", "mingzym" ], "repo": "apache/trafficserver", "url": "https://github.com/apache/trafficserver/pull/786", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
162752800
[ZEPPELIN-1076] Set hbase.client.retries.number for JDBC
What is this PR for?
If a user has the "org.apache.phoenix:phoenix-core:4.x.x" jar added as a dependency in the JDBC interpreter, and for some reason Phoenix is not accessible or not properly configured, then Phoenix retries 35 times (the default for hbase.client.retries.number), with each retry 8 seconds apart, before it finally fails.
What type of PR is it?
[Bug Fix]
Todos
[x] - Set phoenix.hbase.client.retries.number for JDBC
What is the Jira issue?
ZEPPELIN-1076
How should this be tested?
In the JDBC interpreter add org.apache.phoenix:phoenix-core:4.4.0-HBase-1.0 as a dependency, but don't configure the phoenix setting. Then try to run any sql query with any of the configured JDBC drivers (like show tables).
Without this it will take slightly more than 5 mins.
With this it should fetch the result sooner (in less than a minute).
Screenshots (if appropriate)
Questions:
Does the licenses files need update? n/a
Is there breaking changes for older versions? n/a
Does this needs documentation? n/a
LGTM. That property hbase.client.retries.number will be passed to phoenix jdbc, right?
Yes, phoenix.hbase.client.retries.number, and I have tested with and without this string. Merging this if no more discussion.
gharchive/pull-request
2016-06-28T18:23:00
2025-04-01T06:37:55.166988
{ "authors": [ "jongyoul", "prabhjyotsingh" ], "repo": "apache/zeppelin", "url": "https://github.com/apache/zeppelin/pull/1103", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
359433249
[ZEPPELIN-3773] - add check permission on write.
What is this PR for?
Sometimes when drawing the result of a paragraph, a commit-paragraph call is made. If the user does not have write permission on the paragraph, a window appears with a warning. This PR fixes it.
What type of PR is it?
Bug Fix
What is the Jira issue?
ZEPPELIN-3773
Questions:
Does the licenses files need update? no
Is there breaking changes for older versions? no
Does this needs documentation? no
@Savalek I see an infinite number of GET requests like http://localhost:8080/api/helium/suggest/2DMKVSPYC/20180807-154514_2063688624 with response "401 Unauthorized" in case the user doesn't have write permission. Could you fix that?
@mebelousov, Perhaps you have incorrectly configured shiro.ini.
@felixcheung, there is already a check on the server. Due to some errors, a commit of the paragraph went to the server and an error occurred.
@Savalek You're right. I fixed my shiro.ini. But I still see a huge amount of background queries (up to 100 per second) connected with Helium. I can reproduce the issue in development mode. Tested on Ubuntu 16.04, Chromium (Version 69.0.3497.92) and Firefox Quantum 62.0. Please check.
This seems to be the bug that we are also facing currently. I have a notebook with some users that only have read permissions. These users receive an error message that they do not have the update permission on the notebook and therefore cannot open it. Any estimate when this will be merged and released?
Need someone else to check this PR, because I could not reproduce this error.
gharchive/pull-request
2018-09-12T11:16:09
2025-04-01T06:37:55.172940
{ "authors": [ "Savalek", "deradam", "mebelousov" ], "repo": "apache/zeppelin", "url": "https://github.com/apache/zeppelin/pull/3179", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2311801347
🛑 Element is down In 264ed85, Element (https://element.interhop.org) was down: HTTP code: 0 Response time: 0 ms Resolved: Element is back up in b8d5422 after 17 minutes.
gharchive/issue
2024-05-23T02:20:53
2025-04-01T06:37:55.181176
{ "authors": [ "aparrot89" ], "repo": "aparrot89/interhop-status", "url": "https://github.com/aparrot89/interhop-status/issues/131", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2554816822
[Features] support creating Kibana for an Elasticsearch cluster
Kibana is an essential analysis and visualization platform for Elasticsearch.
Hi @shuoshadow, it is a good suggestion. We plan to support it by the end of October.
gharchive/issue
2024-09-29T08:49:38
2025-04-01T06:37:55.183662
{ "authors": [ "shanshanying", "shuoshadow" ], "repo": "apecloud/kubeblocks-addons", "url": "https://github.com/apecloud/kubeblocks-addons/issues/1070", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
775905252
Support one way hash for OAuth2 client secrets Add one way hash function support for OAuth2 clientSecret storage mechanism as recommended by OWASP Thank you for the pull request. It does not look like this would be something we'd want to accept just yet. If need does come up and there is time, we can review this again.
gharchive/pull-request
2020-12-29T13:57:14
2025-04-01T06:37:55.186791
{ "authors": [ "antoine777", "mmoayyed" ], "repo": "apereo/cas", "url": "https://github.com/apereo/cas/pull/5017", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
252030529
Error ill-formed ticket found in the URL when ticket is encrypted If the cas server is encrypting the ticket value (cas.ticket.security.cipherEnabled=true) the php client fails with the error: Error ill-formed ticket found in the URL You have to set this property to false for the client to work. Related to #180 I don't believe it is related to #180 because php CAS doesn't need to decrypt the ticket instead the criteria for what is a valid ticket needs to accept that the encrypted ticket is valid and just pass it back. Can you please supply a debug log? 3424 .START (2017-08-22 12:11:13) phpCAS-1.3.5 ****************** [CAS.php:468] 3424 .=> phpCAS::client('3.0', 'castst.conncoll.edu', 443, 'cas') [index.php:12] 3424 .| => CAS_Client::__construct('3.0', false, 'castst.conncoll.edu', 443, 'cas', true) [CAS.php:360] 3424 .| | Starting a new session cq25i8qv8bbq8uqoiud8hh2g75 [Client.php:932] 3424 .| | Session is not authenticated [Client.php:938] 3424 .| | => phpCAS::error('ill-formed ticket found in the URL (ticket=eyJhbGciOiJIUzUxMiJ9.WlhsS05tRllRV2xQYVVwRlVsVlphVXhEU21oaVIyTnBUMmxLYTJGWVNXbE1RMHBzWW0xTmFVOXBTa0pOVkVrMFVUQktSRXhWYUZSTmFsVXlTVzR3TGk1NFYxaDNTbWhOY25KMFoxQm9aM1ExVkc4eGFGVjNMakJhZHpRdE0zcFRVV2hFUlhsaGNYVTVibWw1VlUxT05EazFURkpRWDNkRWVHcERXR0l3YUVGcldXSkdPWGxMTmpWRFIyazBiV2M1VUdocldWbHVVa0l1VVUxWmFtZFVUWGRLWm1VM2NVNVhabVk1VjBsclp3PT0.QCt2Ma0yxcfigVaNE5DYlwog1Vz8bIRB_EzoJjs85wWnXKCEwaxlvQKoIMU7C4HdxFbJya-Pj6URByRfpMwbsg\')') [Client.php:1028] 3424 .| | | ill-formed ticket found in the URL (ticket=eyJhbGciOiJIUzUxMiJ9.WlhsS05tRllRV2xQYVVwRlVsVlphVXhEU21oaVIyTnBUMmxLYTJGWVNXbE1RMHBzWW0xTmFVOXBTa0pOVkVrMFVUQktSRXhWYUZSTmFsVXlTVzR3TGk1NFYxaDNTbWhOY25KMFoxQm9aM1ExVkc4eGFGVjNMakJhZHpRdE0zcFRVV2hFUlhsaGNYVTVibWw1VlUxT05EazFURkpRWDNkRWVHcERXR0l3YUVGcldXSkdPWGxMTmpWRFIyazBiV2M1VUdocldWbHVVa0l1VVUxWmFtZFVUWGRLWm1VM2NVNVhabVk1VjBsclp3PT0.QCt2Ma0yxcfigVaNE5DYlwog1Vz8bIRB_EzoJjs85wWnXKCEwaxlvQKoIMU7C4HdxFbJya-Pj6URByRfpMwbsg') in /cwd/cwassets/httpd/alias/tp/cas/cas5-php-test/index.phpon line 12 [CAS.php:566] 3424 .| | <= '' Thanks, looks like we simply need to adjust the regexp security filter for the ticket so that it allows all formats including the new encryption. Has there been any movement on this? I'm running against master, with all of the fixes for ticket/session length (#248, #257, and #224), but enabling cas.ticket.security.cipherEnabled still throws an error.
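For anyone hitting this before the fix: the failure comes from phpCAS's ticket format check, which predates encrypted JWT-style tickets containing dots, dashes and base64url characters. A rough, hypothetical illustration of the shape of a more permissive check (this is not the actual phpCAS code, just a sketch):
<?php
// Accept classic ST-/PT- tickets as well as JWT-style encrypted tickets.
function looksLikeValidTicket($ticket) {
    return preg_match('/^[A-Za-z0-9._\-+=\/]+$/', $ticket) === 1;
}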
gharchive/issue
2017-08-22T17:22:03
2025-04-01T06:37:55.192595
{ "authors": [ "MrDys", "atilling", "jfritschi" ], "repo": "apereo/phpCAS", "url": "https://github.com/apereo/phpCAS/issues/240", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
249080454
Add UP_STAGE environment variable
Technically I think it's more correct to set env vars for the environment, not based on the env such as NODE_ENV. However we need it for logging anyway. This may have to be a hack, setting the env on first request, since stageVariables is the only way to get at the stage (AFAIK).
It would be handy for UP_STAGE to be made available to the hooks too, allowing particularly the build script to adjust its behaviour accordingly. I'm not sure if this issue refers to the application environment (in AWS) only, or making UP_STAGE available everywhere including the local environment when building.
Hmm yeah it's a little tricky, since the "ideal" way of deploying would be to stage the application, then promote the staged version to production. UP_STAGE would be "staging" at that point.
Promoting the stage build to production doesn't work for statically built files like next.js does.
@cgarvis yea not if you differentiate config in those two stages. It depends how you structure things I suppose, since you can pass UP_STAGE to your client JS for example and choose config that way. It would be a more ideal way to deploy since you know exactly what you're getting.
@tj I use webpack's DefinePlugin to replace process.env in my builds. That way I don't leak environment vars by accident. Also makes it more difficult for someone on the team to hit the wrong endpoints.
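A minimal sketch of the DefinePlugin approach from the last comment, baking the stage into the client bundle at build time (the variable name mirrors the proposed UP_STAGE; adapt it to your own setup):
// webpack.config.js
const webpack = require('webpack');

module.exports = {
  // ...entry, output, loaders...
  plugins: [
    new webpack.DefinePlugin({
      'process.env.UP_STAGE': JSON.stringify(process.env.UP_STAGE || 'development'),
    }),
  ],
};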
gharchive/issue
2017-08-09T16:00:52
2025-04-01T06:37:55.199784
{ "authors": [ "cgarvis", "jamesramsay", "tj" ], "repo": "apex/up", "url": "https://github.com/apex/up/issues/200", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
831067091
GLIBC_2.32 not found / CGO_ENABLED=0
Prerequisites
[X] I am running the latest version. (up upgrade)
[X] I searched to see if the issue already exists.
[X] I inspected the verbose debug output with the -v, --verbose flag.
[X] Are you an Up Pro subscriber?
Description
On Arch, glibc is not compatible with AWS's, so the workaround is to use CGO_ENABLED=0. However, CGO_ENABLED=0 often means poor support for sqlite drivers, for example. CGO_ENABLED=0 versions of sqlite drivers exist, but they are significantly slower. Perhaps the answer is some sort of cross-compile option, as suggested here https://twitter.com/benbjohnson/status/1370955506471165956 ?
Steps to Reproduce
Try deploying https://github.com/kaihendry/dfts
Hmm I'm not sure I'd want a dependency on Docker, it could get really complicated quick. I think that sort of thing is probably best left to CI like GH Actions where you're building in the target environment already, but something to think about for sure
I'm told this is what I need to do: https://gist.github.com/egonelbre/01bbf7ca97d6b5588438da36a2578e7b I do really want a smooth way to deploy locally without CI.
I have no previous experience with apex/up, but as far as I'm able to deduce it seems that hooks.build is a way to override with an arbitrary build command? So it should be possible to pretty much use the same invocation, maybe modified to output "server" as the binary. Although, I'm not sure in which computer/server context it will run. The only concern is invoking subcommands such as $(pwd) and $(go env GOPATH) within the command. Maybe it can invoke a separate script? Based on https://github.com/apex/up/blob/9770c5062e3a39563d183f84ce51cec52bf683ec/up.go#L67, I'm guessing it will work?
@egonelbre yep it runs as a shell command, so you should be fine to use stuff like that
@kaihendry yeah I definitely hear you, but the number of variables explodes once you start introducing platform specifics. Using Docker manually locally and putting Up inside would more or less have the same UX as Up trying to use Docker, so I'm not sure there would be much of a benefit, since Up would have to support all the various OSes and shared libraries etc
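To make the hooks.build idea concrete, here is a rough sketch of an up.json that builds inside a Linux container so the binary links against a Lambda-compatible glibc (the image tag, paths and flags are illustrative assumptions, not a verified recipe):
{
  "name": "dfts",
  "hooks": {
    "build": "docker run --rm -v $(pwd):/src -w /src golang:1.16 go build -o server .",
    "clean": "rm -f server"
  }
}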
gharchive/issue
2021-03-14T05:10:33
2025-04-01T06:37:55.206824
{ "authors": [ "egonelbre", "kaihendry", "tj" ], "repo": "apex/up", "url": "https://github.com/apex/up/issues/825", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
595107157
Always have the fill below the curve in area charts
I'm creating trend line widgets, showing the angle of the current trend line (another request btw) for a given time period. Some chart values are however negative and that makes the chart "flip" the area or even worse, cross the line. Like in this codepen: https://codepen.io/jkohlin/pen/BaNgbOy
What I want is an option for fill that might look like this:
fill: {
  colors: '#0000ff',
  opacity: 0.9,
  type: 'gradient',
  fillTo: -20,
}
Where fillTo is a numeric value that, if set, forces the fill to flow all the way down to that value on the y-axis. Then you could get this: [screenshot] instead of this: [screenshot]
Describe alternatives you've considered
At the moment I have to recalculate the y values and add a positive value big enough to force the curve above zero. This works on sparklines, where the y-axis is hidden, but not otherwise of course.
This would be an awesome feature! I would, too, like to request this. As of now it looks rather strange for negative values, if the area has a gradient: [screenshot]. It makes it quite difficult to see the lowest values.
Sorry for the late response. I have added a new option that will extend the area beneath zero-line and fill it till the end.
plotOptions: {
  area: {
    fillTo: 'end'
  }
}
This will be available from v3.19.3
gharchive/issue
2020-04-06T13:16:28
2025-04-01T06:37:55.211942
{ "authors": [ "jkohlin", "junedchhipa", "oherik" ], "repo": "apexcharts/apexcharts.js", "url": "https://github.com/apexcharts/apexcharts.js/issues/1472", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
435598895
x-axis labels are not appearing
Codepen
https://codepen.io/anon/pen/YMjPEa added to codepen
Explanation
What is the behaviour you expect? The x-axis labels are too long.
What is happening instead? So they are not appearing correctly.
What error message are you getting? Half of each label is replaced with "..." and I need it all to be visible, thanks.
Increase the yaxis labels maxWidth by:
yaxis: {
  labels: {
    maxWidth: 200
  }
}
gharchive/issue
2019-04-22T04:55:28
2025-04-01T06:37:55.214824
{ "authors": [ "bashairm", "junedchhipa" ], "repo": "apexcharts/apexcharts.js", "url": "https://github.com/apexcharts/apexcharts.js/issues/533", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1173532557
Airnode functions refactor This updates wallet functions, loadConfig, logger, getGasPrice, callApi to be imported from airnode-node v0.5 Note: to get this to build locally, you will need to copy the built airnode-node and airnode-utilities /dist folders from Airnode PR 944 into your Airkeeper node_modules/@api3 This probably shouldn't be merged until we can update to v0.5 and then we can temporarily update the package.json to use airnode-node for PR944 as a dependency with: 'git+https://gitpkg.now.sh/api3dao/airnode/packages/airnode-node?ec0340aabc3be174e62f315b9cc0bbe62b43d0da' using this which allows us to use a monorepo subdirectory as a package @acenolaza Do the example configs work for you in the main branch? I had to change the airnodeAddress in airkeeper.json (these changes) to make them work. I'm actually using 0xA30CA71Ba54E83127214D3271aEA8F5D6bD4Dace because in secrets.env I have the test mnemonic we are using everywhere else
gharchive/pull-request
2022-03-18T12:22:00
2025-04-01T06:37:55.252365
{ "authors": [ "acenolaza", "vponline" ], "repo": "api3dao/airkeeper", "url": "https://github.com/api3dao/airkeeper/pull/44", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1196376089
BEC258: Terraform ECS
I'm creating this as a WIP because I wanted to get feedback as soon as possible and also because the Airseeker docker image is not yet in Docker Hub. Some notes:
- Created an ecs cluster, task definition and service to tie them together.
- Since I'm using FARGATE I had to add a vpc resource, but since we are not exposing a frontend I didn't think it was necessary to deal with internet gateways, load balancers and security groups. The simplest way I found to add a vpc is to just define a default vpc resource with default subnets.
- Since I wanted to add logs to the task (awslogs set in the Task Definition) I had to also attach the AmazonECSTaskExecutionRolePolicy.
- I've hardcoded the airnode-client docker image url just for testing until the airseeker image is ready. Once that happens I guess I'll replace it with a variable.
- I wasn't sure if we needed to deploy to different stages so I decided to keep it simple and have all terraform files in the same directory. Also I have not created any modules until I see the need for them.
I kept it as a Draft PR because I wasn't sure if airseeker-dev was the right image to use. Also, running docker pull api3/airseeker-dev:latest doesn't seem to work for me. I need to use a specific tag in order to be able to pull the image from Docker Hub. Same thing happens when I use latest to pull the airkeeper image. Maybe @aquarat has any idea what might be happening? :thinking: I did find something about having to set some config to true in Docker Hub in order to be able to pull using latest.
I think the problem is that there is no @api3/airseeker-dev tagged as latest :smile: Set the default to @api3/airseeker:0.1.0 (even though it doesn't exist yet). You can pass a dev one (with commit hash) as a variable to test that it's actually working.
@amarthadan I feel like this is much better now and it's ready for final review.
Yes, I'll review it later today :slightly_smiling_face:
I've rebased this branch on top of main and I started getting an issue with the use of console.log in scripts/terraform-fmt.ts, which I fixed by just putting the whole scripts folder in the eslintignore file because those scripts are only used for development.
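A rough HCL sketch of the pieces described above (cluster, Fargate task definition with awslogs, and a service tying them together); resource names, sizes, roles and the image are placeholders, not the actual PR code:
resource "aws_ecs_cluster" "airseeker" {
  name = "airseeker"
}

resource "aws_ecs_task_definition" "airseeker" {
  family                   = "airseeker"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = aws_iam_role.ecs_execution.arn # needs AmazonECSTaskExecutionRolePolicy for awslogs

  container_definitions = jsonencode([{
    name  = "airseeker"
    image = var.airseeker_image # e.g. "api3/airseeker:0.1.0"
    logConfiguration = {
      logDriver = "awslogs"
      options = {
        "awslogs-group"         = "/ecs/airseeker"
        "awslogs-region"        = var.region
        "awslogs-stream-prefix" = "airseeker"
      }
    }
  }])
}

resource "aws_ecs_service" "airseeker" {
  name            = "airseeker"
  cluster         = aws_ecs_cluster.airseeker.id
  task_definition = aws_ecs_task_definition.airseeker.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_default_subnet.default[*].id
    assign_public_ip = true # needed on Fargate to pull images from Docker Hub without a NAT gateway
  }
}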
gharchive/pull-request
2022-04-07T17:49:39
2025-04-01T06:37:55.257928
{ "authors": [ "acenolaza", "amarthadan" ], "repo": "api3dao/airseeker", "url": "https://github.com/api3dao/airseeker/pull/14", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
83687274
Incorrect highlighting when MSON attribute description contains underscore
See apiaryio/api-blueprint#202. Sample at line 126.
Same problem here with wrong rendering:
+ Request Client credentials (application/json)
    + Attributes
        + grant_type: `client_credentials` (string, required)
        + client_id: `88888888-4444-4444-4444-cccccccccccc` (string, required)
        + client_secret: `clientsecret` (string, required)
    + Body

            {
                "grant_type": "client_credentials",
                "client_id": "88888888-4444-4444-4444-cccccccccccc",
                "client_secret": "clientsecret"
            }

@danielgtaylor Now that I look at this with fresh eyes, this might not be an issue at all, since underscores in MSON are not allowed and they need to be escaped. If that rule is followed, the highlighter would not make any mistake.
"underscores in MSON are not allowed and they need to be escaped": this is true for property names and values, not for descriptions.
Same highlighting issue happens if you have a JSON payload that contains a key that starts with an underscore. For instance:
## Videos Collection [/videos]
Provides access to all videos.
+ Model (application/json)
    JSON representation of videos
    + Body

            {
                "_pagination": {
                    "next": "/videos?offset=7",
                    "total_count": 100
                }
            }

Highlighting is incorrect starting from the underscore.
any update?
gharchive/issue
2015-06-01T20:50:21
2025-04-01T06:37:55.262669
{ "authors": [ "danielgtaylor", "edwardaa", "freezy-sk", "jeanregisser", "pksunkara", "zdne" ], "repo": "apiaryio/api-blueprint-sublime-plugin", "url": "https://github.com/apiaryio/api-blueprint-sublime-plugin/issues/23", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
348395511
Add Wercker and more Summer cleanup 😎 Preparing for CircleCI 2.0 support in Dredd's dredd init Adding Wercker Using simpler naming for several files Being more explicit in the README about where all the files are, to which service they belong, etc. Adding Jenkins to the README even though it's not live-tested Swagger is dead, long live the OpenAPI 2 (preparing for OpenAPI 3 support in Dredd and elsewhere) @nadade @kylef Call for help 🙋‍♂️ I'm trying to make Wercker to npm install --no-optional, but the no optional flag seems to be ignored - it still installs protagonist: ... export WERCKER_NPM_INSTALL_OPTIONS="--no-optional" source "/pipeline/npm-install-83ebc85d-4d44-47f3-a21f-75237025acae/run.sh" < /dev/null Using wercker cache Creating $WERCKER_CACHE_DIR/wercker/npm Configuring npm to use wercker cache Starting npm install, try: 1 npm WARN deprecated json-schema-faker@0.5.0-rc13: Broken not support npm WARN deprecated nomnom@1.5.2: Package no longer supported. Contact support@npmjs.com for more info. > protagonist@1.6.8 install /pipeline/source/node_modules/protagonist > node-gyp rebuild ... Even if I tried to create my own step, it was still ignored and Wercker just installed the project as without --no-optional, compiling protagonist. Any ideas? @honzajavorek I think this may be NPM bug: https://npm.community/t/npm-install-no-optional-not-actually-filtering-optionals-in-cli-6-0-1-or-6-1-0/257 / https://github.com/npm/npm/issues/17633#issuecomment-403938408 Perhaps you can work around the issue with having a package-lock.json generated without protagonist. Probably something you should do regardless due to transient dependency licensing approval. Oh, package-lock.json is a different can of worms I'll need to resolve later. Thanks for looking into it, I think I'm fine with Wercker installing protagonist now.
gharchive/pull-request
2018-08-07T16:25:28
2025-04-01T06:37:55.267869
{ "authors": [ "honzajavorek", "kylef" ], "repo": "apiaryio/dredd-example", "url": "https://github.com/apiaryio/dredd-example/pull/29", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2421580707
httpx.InvalidURL: Invalid non-printable ASCII character in URL
Hey, I'm trying to scrape music, but it seems that the crawler with await context.enqueue_links(strategy="all") adds an invalid URL. When I run my code I get this error:
[crawlee.autoscaling.autoscaled_pool] INFO  Waiting for remaining tasks to finish
Traceback (most recent call last):
  File "/home/jourdelune/dev/Crawler/src/main.py", line 21, in <module>
    asyncio.run(main())
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/jourdelune/dev/Crawler/src/main.py", line 14, in main
    await crawler.run(
  File "/home/jourdelune/dev/Crawler/env/lib/python3.10/site-packages/crawlee/basic_crawler/basic_crawler.py", line 359, in run
    await run_task
  File "/home/jourdelune/dev/Crawler/env/lib/python3.10/site-packages/crawlee/basic_crawler/basic_crawler.py", line 398, in _run_crawler
    await self._pool.run()
  File "/home/jourdelune/dev/Crawler/env/lib/python3.10/site-packages/crawlee/autoscaling/autoscaled_pool.py", line 185, in run
    await run.result
  File "/home/jourdelune/dev/Crawler/env/lib/python3.10/site-packages/crawlee/autoscaling/autoscaled_pool.py", line 336, in _worker_task
    await asyncio.wait_for(
  File "/usr/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
    return await fut
  File "/home/jourdelune/dev/Crawler/env/lib/python3.10/site-packages/crawlee/basic_crawler/basic_crawler.py", line 734, in __run_task_function
    await self._commit_request_handler_result(crawling_context, result)
  File "/home/jourdelune/dev/Crawler/env/lib/python3.10/site-packages/crawlee/basic_crawler/basic_crawler.py", line 653, in _commit_request_handler_result
    destination = httpx.URL(request_model.url)
  File "/home/jourdelune/dev/Crawler/env/lib/python3.10/site-packages/httpx/_urls.py", line 115, in __init__
    self._uri_reference = urlparse(url, **kwargs)
  File "/home/jourdelune/dev/Crawler/env/lib/python3.10/site-packages/httpx/_urlparse.py", line 163, in urlparse
    raise InvalidURL("Invalid non-printable ASCII character in URL")
httpx.InvalidURL: Invalid non-printable ASCII character in URL
The invalid URL is: https://www.linkedin.com/company/nic-br/
Code:
import re
import urllib.parse

from crawlee.basic_crawler import Router
from crawlee.beautifulsoup_crawler import BeautifulSoupCrawlingContext
from crawlee.playwright_crawler import PlaywrightCrawlingContext

router = Router[PlaywrightCrawlingContext]()

regex = r"https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()!@:%_\+.~#?&\/\/=]*)\.(mp3|wav|ogg)"

@router.default_handler
async def default_handler(context: BeautifulSoupCrawlingContext) -> None:
    url = context.request.url
    html_page = str(context.soup).replace("\/", "/")

    matches = re.finditer(regex, html_page)
    audio_links = [html_page[match.start() : match.end()] for match in matches]

    for link in audio_links:
        link = urllib.parse.urljoin(url, link)
        data = {
            "url": link,
            "label": "audio",
        }
        await context.push_data(data)

    await context.enqueue_links(strategy="all")

Hey, thank you for the answer. The code that imports the router:
"""main script for the crawler"""
import asyncio

from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler
from routes import router
from utils import process

async def main() -> None:
    """Function to launch the crawler"""
    crawler = BeautifulSoupCrawler(
        request_handler=router,
    )

    await crawler.run(
        ["https://www.cgi.br/publicacao/revista-br-ano-07-2016-edicao-09/"]
    )

    await crawler.export_data("results.json")
    process("results.json")

if __name__ == "__main__":
    asyncio.run(main())

I want to crawl the full web to create a dataset of song URLs (to create an AI music generation model); that's why I use strategy="all". If you run the code, you should get the error. The URL where it gets the invalid link is: https://www.cgi.br/publicacao/revista-br-ano-07-2016-edicao-09/ and the invalid URL is: https://www.linkedin.com/company/nic-br/
Huh, this is getting interesting. I added this to the request handler:
links = "\n".join(repr(link.attrs.get("href")) for link in context.soup.select("a"))
context.log.info(f"links found: {links}")
...and it showed me that the linkedin link in fact contains a line break:
<a class="btn-floating btn-lg btn-li" type="button" role="button" href="https://www.linkedin.com/company/nic-br/
" target="_blank"> <i class="fab fa-linkedin-in"></i> </a>
However unusual this is, I'll add a .strip() to the enqueue_links implementation.
gharchive/issue
2024-07-21T19:59:47
2025-04-01T06:37:55.293395
{ "authors": [ "Jourdelune", "janbuchar" ], "repo": "apify/crawlee-python", "url": "https://github.com/apify/crawlee-python/issues/337", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1296481662
Styleguides: improvements in styleguide/conformance protos
With the aim of displaying the lint results in the UI, the conformance report protos need the following updates:
- The spec field should include the revision ID of the spec.
- RuleReport should include the metadata of the rule defined in the styleguide. The metadata fields which should be included in the rule are as follows:
  - display_name
  - description
  - doc_uri
cc @michaelyara
gharchive/issue
2022-07-06T21:14:03
2025-04-01T06:37:55.297635
{ "authors": [ "shrutiparabgoogle" ], "repo": "apigee/registry", "url": "https://github.com/apigee/registry/issues/643", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
130342263
Feature/api metadata i18n. Closes #779
Added i18n strings to the view metadata page. This does not include AutoForm label translations, as that currently seems non-trivial.
@elnzv will you please review this PR?
It may not be necessary to add the translation strings to the fi.i18n.json. Did you read somewhere that this is a requirement?
@brylie, no, that is not a requirement, but it is just more comfortable to translate; no need to search for new lines in en.json. Anyway, I added those myself, resolved conflicts. Merging.
gharchive/pull-request
2016-02-01T11:56:02
2025-04-01T06:37:55.301609
{ "authors": [ "brylie", "elnzv" ], "repo": "apinf/api-umbrella-dashboard", "url": "https://github.com/apinf/api-umbrella-dashboard/pull/825", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1611897422
Requires Tatin to initialise before opening project
Don't know whether this is documented and I missed it, but:
]CIDER.OpenProject \g\ParquetDotNet
* Command Execution Failed: VALUE ERROR: Undefined name: Tatin
The error goes away if I first do:
]TATIN.Version
┌─────┬───────────┬──────────┐
│Tatin│0.90.0+1485│2023-02-27│
└─────┴───────────┴──────────┘
I would expect not to have to call a different UCMD before being able to use Cider.
Tatin will be part of a standard 19.0 installation. In older versions it's up to the user to make sure that it's loaded into ⎕SE. However, the error message should be more explicit about what's required.
Changed my mind. Solved in version 0.23.4
gharchive/issue
2023-03-06T17:28:45
2025-04-01T06:37:55.329689
{ "authors": [ "aplteam", "rikedyp" ], "repo": "aplteam/Cider", "url": "https://github.com/aplteam/Cider/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2438309
proxy support A production server I use has outgoing connections firewalled, and we have to use a proxy. This plugin needs to be able to connect via the proxy for geocoding requests, similar to https://rails.lighthouseapp.com/projects/8994/tickets/2133-activeresource-http-proxy-support #126 has code attached
gharchive/issue
2011-12-03T17:50:31
2025-04-01T06:37:55.331178
{ "authors": [ "dgm" ], "repo": "apneadiving/Google-Maps-for-Rails", "url": "https://github.com/apneadiving/Google-Maps-for-Rails/issues/124", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
186654251
Using exec.start() Hi again, So I read the examples you referred me to in issue #309. I've even "sort of" run your example. Specifically, rather than create an instance of an Ubuntu container, I did a docker.getContainer(ID) The container in question is running MongoDB and I'd like to be able to present to its shell (bash) several commands and get their responses. But I am uncertain about how to accomplish this. As matters stand, based on your example, I can see the usual console messages from MongoDB as its shell starts. These were raised in response to the original Cmd, 'mongo', presented by container.exec() But now I need to present a few more commands to the MongoDB shell & parse the responses. These commands don't reside in a file. They are strings defined in the program itself. Would you be so kind as to provide an example of how to do this with dockerode's exec facility? Thanks. Cordially, Paul Hmmm...maybe hold off for a bit on any kind of answer. I may have gotten it working. The "trick" appears to be the use of 'hijack: true', 'stdin: true' in the exec.start options as well as 'AttachStdin: true' in the container.exec options. 👍
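For anyone landing here later, a condensed sketch of that pattern: AttachStdin on exec creation plus hijack/stdin on start (error handling trimmed; the mongo commands and parseResponse are just illustrative placeholders):
const container = docker.getContainer(containerId);
container.exec(
  { Cmd: ['mongo'], AttachStdin: true, AttachStdout: true, AttachStderr: true, Tty: true },
  (err, exec) => {
    exec.start({ hijack: true, stdin: true }, (err2, stream) => {
      stream.on('data', (chunk) => parseResponse(chunk.toString())); // collect shell output
      stream.write('db.stats()\n'); // send a command to the mongo shell
      stream.write('exit\n');       // close the shell when done
    });
  }
);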
gharchive/issue
2016-11-01T21:35:46
2025-04-01T06:37:55.334788
{ "authors": [ "apocas", "moonlitSpider" ], "repo": "apocas/dockerode", "url": "https://github.com/apocas/dockerode/issues/310", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
213578998
Fix inconsistent behaviour of Container.inspect container.inspect() is supposed to return a Promise not a stringified object Yup legacy pre-promise code left behind. Could you please do the same for the other object's inspects? (image, network, etc) So everything is consistent. @apocas done! Published v2.4.0
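With the fix in place, promise-style usage looks like this (minimal sketch):
const data = await docker.getContainer(id).inspect(); // resolves with a plain object, not a string
console.log(data.State.Status, data.Config.Image);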
gharchive/pull-request
2017-03-12T03:39:51
2025-04-01T06:37:55.336534
{ "authors": [ "apocas", "knight42" ], "repo": "apocas/dockerode", "url": "https://github.com/apocas/dockerode/pull/344", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
239231375
TypeScript optional inputs Similar to #155, but for TypeScript, fixes #81 @tgriesser @lewisf: I'm sorry I'm only getting to this now, but I wanted to publish a new release. It turns out some of the TypeScript snapshot tests are failing however, and it seems the results do not match the expected behavior in this PR. Could you have a look at this? @martijnwalraven will take a look now @martijnwalraven this actually looks like a small oversight. Some changes weren't pulled in. I opened up #162 to get the tests passing
gharchive/pull-request
2017-06-28T17:03:43
2025-04-01T06:37:55.407059
{ "authors": [ "lewisf", "martijnwalraven", "tgriesser" ], "repo": "apollographql/apollo-codegen", "url": "https://github.com/apollographql/apollo-codegen/pull/156", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1235963987
OkHttpExecutionContext replacement
Question. OkHttpExecutionContext was available in version 2; after migrating to 3.3.0 this context key was removed and I can't find another way to get the OkHttp response from ApolloResponse.
Hi 👋 Thanks for reaching out. In 3.x you can use response.executionContext[HttpInfo]. That will give you the status code and headers from your HTTP call. If you need access to the body, I'd recommend using an HttpInterceptor so that you can read the body as it is streamed. If you need something else, let us know and we can investigate together how to get it.
thanks @martinbonnin for the quick reply! The reason why I need the OkHttp Response is because we have backend-driven logic that automatically refreshes the page. This is an example of how we did it previously:
apolloClient.query(...).rx().singleOrError().map {
    val okHttpResponse = it.executionContext[OkHttpExecutionContext]?.response
    val maxAgeInSeconds = okHttpResponse?.cacheControl?.maxAgeSeconds ?: 0
}
I see, thanks for providing the details! Looking at the OkHttp code, it parses that value out of the Cache-Control header, so you could do something like:
val maxAgeInSeconds = response.executionContext[HttpInfo]
    ?.headers
    ?.firstOrNull { it.name.lowercase() == "cache-control" }
    ?.value
    ?.substringAfter("max-age=", "")
    ?.takeWhile { it.isDigit() }
    ?.toIntOrNull()
Could that work?
That's so neat! Thank you for the help and have a nice weekend 😊
Thank you, you too :) !
gharchive/issue
2022-05-14T12:37:56
2025-04-01T06:37:55.448483
{ "authors": [ "denys-meloshyn", "martinbonnin" ], "repo": "apollographql/apollo-kotlin", "url": "https://github.com/apollographql/apollo-kotlin/issues/4108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2065815228
Firing queries from Android Studio plugin with @nonnull directive
Question
Hey, I am using the Apollo plugin with Android Studio and I used to test my graphql files directly from the IDE. I noticed that adding @nonnull stops the plugin from working and returns the error message "message": "directive 'nonnull' is not defined in the schema". The question is whether that can somehow be worked around?
Can you try enabling the "Apollo Kotlin" framework in the GraphQL plugin?
Hey @martinbonnin I had this enabled
IDE support is still being worked on (see https://github.com/JetBrains/js-graphql-intellij-plugin/issues/697) so it'll come, but the red underlines for @semanticNonNull are still expected at this stage. I would still expect @nonnull to be detected though (see https://github.com/JetBrains/js-graphql-intellij-plugin/blob/21d5800921d07176992e3d006728a89a1c1eb242/resources/definitions/ApolloKotlin.graphql#L23)
@nonnull works just fine when editing a graphql file; there is support for that. It just fails when executing graphql queries from the IDE.
Thanks. They work just fine from the app. I've been using it for some time now in prod. It's just very convenient sometimes to verify something right from the IDE.
I created https://github.com/apollographql/apollo-kotlin/issues/5507 as a follow-up before I forget.
gharchive/issue
2024-01-04T15:02:27
2025-04-01T06:37:55.453460
{ "authors": [ "damianpetla", "martinbonnin" ], "repo": "apollographql/apollo-kotlin", "url": "https://github.com/apollographql/apollo-kotlin/issues/5506", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2298341080
Crash when using policy
Version
3.8.2
Summary
override suspend fun getMerchantsByKeyWord(keyword: String): Flow<Response> {
    return apolloClient.query(MerchantsByKeyWordQuery(keyword)).fetchPolicy(FetchPolicy.CacheAndNetwork)
        .toFlow()
        .map { response ->
            try {
                // response.data.merchants
                Response.Success(response.data?.toRestaurantsSearch() as T)
            } catch (e: Exception) {
                Response.Failure(e.message ?: "unknown error")
            }
        }
}
Steps to reproduce the behavior
No response
Logs
(Your logs here)
Hi! It looks like you're having a network issue. This can happen if your device has a connectivity issue, or e.g. if the backend is not responding.
I enabled working offline from cache policy.
CacheAndNetwork will both go to the cache and the network, which will throw if the network fails.
I disabled wifi to test the app working offline.
In that case what you're seeing is definitely expected. You should probably add a .catch {} call to the Flow chain, to handle the exception.
gharchive/issue
2024-05-15T16:18:34
2025-04-01T06:37:55.458503
{ "authors": [ "BoD", "enggazzar" ], "repo": "apollographql/apollo-kotlin", "url": "https://github.com/apollographql/apollo-kotlin/issues/5891", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
208523916
Not possible to control the types of items in a list
Given a field that returns an array composed of Union or Interface types, it is not possible to control the type of each item in the list. For example, given the following schema:
union AdvertOrPost = Advert | Post

enum SupportedTypes {
  Advert
  Post
}

type Advert {
  id: ID!
  productName: String!
}

type Post {
  id: ID!
  text: String!
}

type RootQuery {
  getAll(type: SupportedTypes): [AdvertOrPost]
}

schema {
  query: RootQuery
}
It should be possible to apply the following mocks:
const mockMap = {
  RootQuery: () => ({
    getAll: (o: any, a: { [key: string]: any }) => new MockList(2, () => ({typename: a['type']})),
  }),
  Advert: () => ({
    productName: 'supercoolproduct',
  }),
  Post: () => ({
    text: 'superlongpost',
  }),
};
However, using the above code would result in objects that do not have the type-specific mocks applied; for example, a result of querying getAll(type: Advert) would look like this:
{
  "getAll": [
    {"__typename": "Advert", "productName": "Hello World"},
    {"__typename": "Advert", "productName": "Hello World"}
  ]
}
@ajs139 Sorry for the late reply. I think in order for the server to know which mock to apply, your schema has to define the __resolveType or __ofType functions. I might also be wrong however, in which case a failing test case would be much appreciated.
Sure, please see https://github.com/apollographql/graphql-tools/pull/282/files#diff-b9774ff344f81ba2175bf0fb4973ac7a for a test (this PR also contains a fix, but I closed it because it didn't deal with a nested array return type, e.g. someField: [[Foo]], and haven't had the chance to revisit).
I think this should work in 1.1.0.
gharchive/issue
2017-02-17T18:46:05
2025-04-01T06:37:55.471275
{ "authors": [ "ajs139", "helfer" ], "repo": "apollographql/graphql-tools", "url": "https://github.com/apollographql/graphql-tools/issues/281", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
272119610
New: Added Unit Tests for Graph Generators **Merge PR https://github.com/aporeto-inc/trireme-statistics/pull/31 before merging this PR ** --> Added unittests for server functions --> Added mock for influxdb Thanks for addressing those changes. I will let you merge this PR (whenever you feel it is in a ready state)
gharchive/pull-request
2017-11-08T08:54:32
2025-04-01T06:37:55.506877
{ "authors": [ "bvandewalle", "sibicramesh" ], "repo": "aporeto-inc/trireme-statistics", "url": "https://github.com/aporeto-inc/trireme-statistics/pull/33", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }