id (string) | text (string) | source (string, 2 values) | created (timestamp[s]) | added (string date) | metadata (dict)
---|---|---|---|---|---
231165061
|
Manager::finderByNullable shall not return an anonymous lambda as a result
Anonymous lambdas prevent the optimizer from acting on fields in the future.
A new interface, FindFromNullable, has now been created. We still need to generate a number of implementing classes such as FindFromNullableInt and FindFromNullableLong, though, and we also need to write AbstractFindFromNullable.
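A minimal sketch of the shape this describes (hypothetical signatures; Speedment's real generics and field metadata are richer), just to illustrate why named classes beat anonymous lambdas here: the optimizer can inspect a concrete type.

```java
import java.util.function.Function;

// Hypothetical shape of the new interface; field bookkeeping is elided.
interface FindFromNullable<ENTITY, V> {
	V apply(ENTITY entity);
}

// Shared null handling would live in the abstract base.
abstract class AbstractFindFromNullable<ENTITY, V> implements FindFromNullable<ENTITY, V> {
	private final Function<ENTITY, V> getter;

	protected AbstractFindFromNullable(Function<ENTITY, V> getter) {
		this.getter = getter;
	}

	@Override
	public V apply(ENTITY entity) {
		return getter.apply(entity);
	}
}

// One named implementation per wrapper type, as the issue proposes.
final class FindFromNullableInt<ENTITY> extends AbstractFindFromNullable<ENTITY, Integer> {
	FindFromNullableInt(Function<ENTITY, Integer> getter) {
		super(getter);
	}
}

final class FindFromNullableLong<ENTITY> extends AbstractFindFromNullable<ENTITY, Long> {
	FindFromNullableLong(Function<ENTITY, Long> getter) {
		super(getter);
	}
}
```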
|
gharchive/issue
| 2017-05-24T20:41:53 |
2025-04-01T04:35:55.715736
|
{
"authors": [
"minborg"
],
"repo": "speedment/speedment",
"url": "https://github.com/speedment/speedment/issues/461",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1852985176
|
init scripts will be deprecated
Is spetlr aligned with this?
I will close this issue - since we now fully deploy with Terraform and have fewer custom PowerShell legacy scripts.
|
gharchive/issue
| 2023-08-16T10:49:14 |
2025-04-01T04:35:55.731433
|
{
"authors": [
"LauJohansson"
],
"repo": "spetlr-org/spetlr",
"url": "https://github.com/spetlr-org/spetlr/issues/80",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2486560374
|
Add the "case_sensitive" parameter to the EhJsonToDeltaOrchestrator
What type of PR is this?
Feature
Overview
The EhJsonToDeltaOrchestrator class should have a new optional parameter to disable case-sensitivity
when parsing JSON property names.
What is the current behavior?
There is no parameter.
What is the new behavior?
The EhJsonToDeltaOrchestrator class constructor has a new boolean parameter named "case_sensitive".
Does this PR introduce a breaking change?
No
Very nice code.
|
gharchive/pull-request
| 2024-08-26T10:49:15 |
2025-04-01T04:35:55.733594
|
{
"authors": [
"RadekBuczkowski",
"mrmasterplan"
],
"repo": "spetlr-org/spetlr",
"url": "https://github.com/spetlr-org/spetlr/pull/175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
101721821
|
Small redesign of the main page
As you can read in this issue, @alireza-ahmadi began to redesign parts of the Hugo website, as he teased in a screencast. But he had limited time to complete the redesign in the past. Therefore, I gave it a try and began to recreate the header and navigation as shown in the video. The current state of my work can be seen here. It should get very close to the original.
Since my CSS experience is limited I would encourage someone to check the browser compatibility. My development environment is Ubuntu Linux 15.04 and Chrome 43 / Firefox 40.
If you're interested I would create a PR.
Currently I haven't implemented a responsive menu for the navigation. @alireza-ahmadi - did you have a specific idea for this, something like an off-canvas menu with a hamburger icon?
I haven't thought about how the navigation should look on small screens. Maybe, as you say, adding a hamburger menu is the solution to this problem.
@digitalcraftsman I have started implementing a new idea for Hugo's home page. Please take a look at my sample image and videos and code. For the next two weeks, I am completely free, so if you (and other members of the Hugo team) prefer the new design, I can complete its implementation, and if you prefer the old design, I can add responsive navigation to your work.
@alireza-ahmadi that looks very, very good. As to the "zoom variants", I liked the second best (light color version).
@alireza-ahmadi that is great news. By now we have three design prototypes. Personally I like the original one and the second version (light color version), as does @bep. My implementation of your original design can be found here.
Furthermore, we should talk about a better way to communicate than this issue. The implementation details could cause long discussions and would mess up this issue. On the other hand, the discussions shouldn't be closed to the public, since everybody should be able to track the changes, give feedback or even participate.
Since the redesign breaks out in a new direction in comparison to the current 'design language' of Hugo, we could extend the redesign to the docs too. But this is maybe out of scope for now and I don't know how long the redesign of the frontpage will take (which definitely should be in the spotlight for now).
Finally, it would be interesting what @spf13 thinks about this topic.
Well, I will complete the implementation of the home page and until that time we can discuss how can we redesign all pages of the website.
This sounds more or less like creating a whole new theme for the Hugo site(s). The current setup uses Bootstrap and your implementation is built with the more minimalistic Skeleton framework. Therefore, do we switch entirely?
@digitalcraftsman & @alireza-ahmadi I absolutely love the new designs you guys are coming up with. I really like the cleaner look of @digitalcraftsman's homepage. I love the updated effects and layout @alireza-ahmadi has for the homepage.
I agree a completely new design for the docs should be a longer term goal. I think we should focus on the homepage first and get that out the door.
We definitely should have a responsive design. A good amount of visitors are from mobile.
I agree hamburger works well for this.
My screencast just showed the implementation of one of @alireza-ahmadi's earlier designs. Furthermore, it seems to be the case that we will use some sort of taskrunner like Gulp or Grunt to minify and concatenate files (among other things) or preprocess Sass or Less code. This would require the installation of node.js and the taskrunner as dependencies in order to build and deploy the site.
This would benefit us with more flexible stylesheets and smaller (minified) assets, as examples. The disadvantage would be that users who want to access the docs offline would need the dependencies if they want to read the docs.
Another great aspect would be the use of Amber or Ace templates once we've finished the design.
I'm fine with using a taskrunner to prepare the site when working on it. It shouldn't be a requirement to just run hugo server -s docs on it and be able to view the docs.
I've been looking into adding support for Sass directly into Hugo, but not sure if it's a good idea or not yet.
The integration of Sass would be part of the long discussion of whether and how plugins should be integrated. But this would be off topic.
If it shouldn't be a requirement, we would need to have a pre-built version of the docs in the repo, which wouldn't be pragmatic. Furthermore, you can't know about future changes to the dependencies that are required to build the docs.
@digitalcraftsman using Skeleton as a framework is temporary and we should use a more popular framework like Bootstrap or Foundation in the final version.
@spf13 I can apply the current design. We can also fix minor design issues in the current version of Hugo's website. Then we can plan for a complete overhaul of the homepage and docs of Hugo.
@digitalcraftsman as Hugo's website (homepage + docs) is not a complex web application, using a task runner is a plus but it's not necessary, so we can remove it.
@alireza-ahmadi you showed in a screencast the current website redesigned. Do you use some sort of sketch from Photoshop or do you code right away?
Because this is a redesign, will we use the old codebase or do we make a complete rewrite?
@digitalcraftsman It was a super simple prototype and the code wasn't suitable for further development. Currently, we can apply minor improvements to the current code base and plan for a complete rewrite in the future.
#1725 is related to the thoughts about a redesign.
This issue has been automatically marked as stale because it has not been commented on for at least four months.
The resources of the Hugo team are limited, and so we are asking for your help.
If this is a bug and you can still reproduce this error on the master branch, please reply with all of the information you have about it in order to keep the issue open.
If this is a feature request, and you feel that it is still valuable, please open a proposal at https://discuss.gohugo.io/.
This issue will automatically be closed in four months if no further activity occurs. Thank you for all your contributions.
Note/Update: This issue is marked as stale, and I may have said something earlier about "opening a thread on the discussion forum". Please don't.
If this is a bug and you can still reproduce this error on the latest release or the master branch, please reply with all of the information you have about it in order to keep the issue open.
If this is a feature request, and you feel that it is still relevant and valuable, please tell us why.
@alireza-ahmadi thanks for your initial idea to redesign the homepage of Hugo. This issue has been stale for a while, and a few others pushed this idea further to a redesign that is likely to be deployed soon. Those are namely @rdwatters for the editorial part and @budparr who's finalizing the design.
Cheers,
Digitalcraftsman
|
gharchive/issue
| 2015-08-18T18:35:00 |
2025-04-01T04:35:55.841993
|
{
"authors": [
"alireza-ahmadi",
"bep",
"digitalcraftsman",
"spf13"
],
"repo": "spf13/hugo",
"url": "https://github.com/spf13/hugo/issues/1362",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
174228328
|
Took me a long time to find this
It took me a long time to realize that this could also be done by adding a line to the config file. I always expected it to be mentioned on this page, as it is the first thing you really have to do when testing out Hugo.
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
Looks good to me, but maybe might be worthwhile linking to https://gohugo.io/overview/configuration/ as well? It mentions the theme, and themesdir configuration variables and shows how they're used.
If you're able, please amend your commit message to be something like docs: Mention theme config option on Usage page.
I'm not sure I did this correctly. I tried changing according to the previous comment on spelling.
Thank you @Nichlas for the additions. What @moorereason meant is that you should merge both commits and edit the commit message to follow this convention:
<package/folder>: Description of changes/additions
However, I merged your pull request as bb1812b6af0ecd8fd064eb6f69289ddc8eb2c8e1. Maybe it sounds confusing, but it isn't :wink:
|
gharchive/pull-request
| 2016-08-31T09:39:07 |
2025-04-01T04:35:55.847444
|
{
"authors": [
"CLAassistant",
"Nichlas",
"alenbasic",
"digitalcraftsman",
"moorereason"
],
"repo": "spf13/hugo",
"url": "https://github.com/spf13/hugo/pull/2402",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
107420657
|
How About the Detox Theme
I'd love to see the Detox theme added to the master list.
https://github.com/allnightgrocery/hugo-theme-blueberry-detox
Cheers!
-andrew
Merged in bd893ce
|
gharchive/issue
| 2015-09-20T21:58:04 |
2025-04-01T04:35:55.849016
|
{
"authors": [
"allnightgrocery",
"bep"
],
"repo": "spf13/hugoThemes",
"url": "https://github.com/spf13/hugoThemes/issues/74",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1789000732
|
egressgateway ippools IP repeat
Describe the version
egress v0.2.0-rc1
Describe the bug
when egressgateway ippools IPs repeat, the egressgateway can still be created
How To Reproduce
create egressgateways using the same ippools
Expected behavior
we expect that when ippools IPs repeat, creation is not allowed
Screenshots and log
Additional context
any update on this ?
There is no PR fix for this issue yet
|
gharchive/issue
| 2023-07-05T07:58:32 |
2025-04-01T04:35:55.964648
|
{
"authors": [
"bzsuni",
"weizhoublue"
],
"repo": "spidernet-io/egressgateway",
"url": "https://github.com/spidernet-io/egressgateway/issues/538",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1581641870
|
Update deploy-spire-and-csi-driver.sh
Moved the namespace creation to before the spiffe-csi-driver creation, because the csi driver service account requires the spire namespace to exist.
Thanks, @abe-hpe . Happy to merge after you have fixed DCO signoff on your commit.
In this case it's probably easiest to do a git commit --amend --signoff to add the sign off to the original commit and then force-push over the PR branch.
I'll close this one and create a new one with correct DCO.
|
gharchive/pull-request
| 2023-02-13T04:33:31 |
2025-04-01T04:35:55.966578
|
{
"authors": [
"abe-hpe",
"azdagron"
],
"repo": "spiffe/spiffe-csi",
"url": "https://github.com/spiffe/spiffe-csi/pull/81",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
530791597
|
quickhack: fix compile errors
force to use -std=c++2a
did you ever compile?
multiple times, of course.
You've encountered this repository while I'm in the middle of rebasing on the latest ninja master, so this specific branch, "16", is the 16th patch on top of ninja. Earlier items in the patch series compile, or don't compile, based on my time availability.
I believe I addressed all the items that your PR addressed.
If you have no objection, I'll be closing this PR in a few days.
Note: There are unit test failures happening in patch 12, 13, 14, 15, and 16. So please be aware of those problems until I have more time to address the issue.
|
gharchive/pull-request
| 2019-12-01T20:37:47 |
2025-04-01T04:35:56.077388
|
{
"authors": [
"ClausKlein",
"jonesmz"
],
"repo": "splinter-build/splinter",
"url": "https://github.com/splinter-build/splinter/pull/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1185382665
|
Darktrace
Name of the app
Darktrace
Integration
The app will integrate with https://www.darktrace.com/en/
repo created at https://github.com/splunk-soar-connectors/darktrace
|
gharchive/issue
| 2022-03-29T20:34:14 |
2025-04-01T04:35:56.090673
|
{
"authors": [
"dfederschmidt",
"pzhou-splunk"
],
"repo": "splunk-soar-connectors/.github",
"url": "https://github.com/splunk-soar-connectors/.github/issues/40",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1447956523
|
chore(deps): ADDON-57161 Version bump by dependabot
PR Links:
https://github.com/splunk/addonfactory-ucc-base-ui/pull/92
https://github.com/splunk/addonfactory-ucc-base-ui/pull/95
https://github.com/splunk/addonfactory-ucc-base-ui/pull/122
https://github.com/splunk/addonfactory-ucc-base-ui/pull/96
https://github.com/splunk/addonfactory-ucc-base-ui/pull/99
https://github.com/splunk/addonfactory-ucc-base-ui/pull/106
https://github.com/splunk/addonfactory-ucc-base-ui/pull/102
:tada: This PR is included in version 1.13.0 :tada:
The release is available on:
v1.13.0
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-11-14T12:15:31 |
2025-04-01T04:35:56.095282
|
{
"authors": [
"srv-rr-github-token",
"tbalar-splunk"
],
"repo": "splunk/addonfactory-ucc-base-ui",
"url": "https://github.com/splunk/addonfactory-ucc-base-ui/pull/126",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2202511093
|
feat(code): add custom validators for account configs
Added logic to add custom validators for configuration tab.
This PR fixes #520 issue.
Updated the docs and smoke tests as well.
I tried the change on one of the recent add-ons that I worked on.
|
gharchive/pull-request
| 2024-03-22T13:35:46 |
2025-04-01T04:35:56.096702
|
{
"authors": [
"hetangmodi-crest"
],
"repo": "splunk/addonfactory-ucc-generator",
"url": "https://github.com/splunk/addonfactory-ucc-generator/pull/1115",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
950378194
|
Update Condition on detect_processes_used_for_system_network_configuration
Except when user is NULL or unknown. It will decrease false detections.
Hello,
Thank you for your contribution. Since there are SPL changes in your PR, we would like to get them tested via our detection testing service. I created a duplicate PR https://github.com/splunk/security_content/pull/1545
|
gharchive/pull-request
| 2021-07-22T07:28:01 |
2025-04-01T04:35:56.205848
|
{
"authors": [
"BlackB0lt",
"patel-bhavin"
],
"repo": "splunk/security_content",
"url": "https://github.com/splunk/security_content/pull/1539",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1401382057
|
Helm package
My dev team will be performing an upgrade to SCK v1.5 and would like to know where to locate the Helm package and how to install
From dev:
“I got this downloaded from github but we need helm package of v1.5 downloaded from splunk account since we use helm package to install from Rafay which I need help since I don't have an account so that i can use this to install splunk connect in Q1 and plan to get this to prod next weekend.”
It is documented here: https://github.com/splunk/splunk-connect-for-kubernetes#deploy-with-helm
|
gharchive/issue
| 2022-10-07T15:50:50 |
2025-04-01T04:35:56.207508
|
{
"authors": [
"NBRAZ22",
"harshit-splunk"
],
"repo": "splunk/splunk-connect-for-kubernetes",
"url": "https://github.com/splunk/splunk-connect-for-kubernetes/issues/815",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1168057588
|
test: add check for rabbitmq init in CI
Description
This PR adds a check for RabbitMQ initialization in the CI part for integration tests.
Fixes # (issue)
Type of change
Please delete options that are not relevant.
[x] Refactor/improvement
How Has This Been Tested?
Running script locally on local env.
Checklist
[x] My commit message is conventional
[x] I have run pre-commit on all files before creating the PR
[x] I have commented my code, particularly in hard-to-understand areas
[x] I have made corresponding changes to the documentation
[x] I have added tests that prove my fix is effective or that my feature works
[x] New and existing unit tests pass locally with my changes
[x] I have checked my code and corrected any misspellings
Codecov Report
Merging #428 (7cf4981) into develop (f54d9c4) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## develop #428 +/- ##
========================================
Coverage 87.00% 87.00%
========================================
Files 23 23
Lines 1400 1400
========================================
Hits 1218 1218
Misses 182 182
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f54d9c4...7cf4981. Read the comment docs.
|
gharchive/pull-request
| 2022-03-14T08:43:00 |
2025-04-01T04:35:56.216086
|
{
"authors": [
"codecov-commenter",
"omrozowicz-splunk"
],
"repo": "splunk/splunk-connect-for-snmp",
"url": "https://github.com/splunk/splunk-connect-for-snmp/pull/428",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1866552718
|
docs: inventory from csv
Description
Adding documentation about updating inventory from csv file.
Fixes # (issue)
Type of change
Please delete options that are not relevant.
N/A
How Has This Been Tested?
N/A
Checklist
[x] My commit message is conventional
[x] I have run pre-commit on all files before creating the PR
[x] I have made corresponding changes to the documentation
[x] I have checked my code and corrected any misspellings
Codecov Report
Merging #852 (62c7689) into develop (4de3381) will not change coverage.
Report is 2 commits behind head on develop.
The diff coverage is n/a.
:exclamation: Current head 62c7689 differs from pull request most recent head 62e8341. Consider uploading reports for the commit 62e8341 to get more accurate results
:exclamation: Your organization is not using the GitHub App Integration. As a result you may experience degraded service beginning May 15th. Please install the Github App Integration for your organization. Read more.
@@ Coverage Diff @@
## develop #852 +/- ##
========================================
Coverage 87.65% 87.65%
========================================
Files 27 27
Lines 1847 1847
========================================
Hits 1619 1619
Misses 228 228
|
gharchive/pull-request
| 2023-08-25T08:00:56 |
2025-04-01T04:35:56.221991
|
{
"authors": [
"ajasnosz",
"codecov-commenter"
],
"repo": "splunk/splunk-connect-for-snmp",
"url": "https://github.com/splunk/splunk-connect-for-snmp/pull/852",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1333566583
|
SC4S "sc4s_recv_time" setting creates lots of small buckets due to strings.data filling with epoch timestamps
Logging this on behalf of a customer.
cisco_asa logs being sent by SC4S to Splunk result in lots of small buckets being generated for the destination index. The warning below is reported by Splunk in the logs:
"The percentage of small buckets (100%) created over the last hour is high and exceeded the red thresholds (50%) for index=cisco_asa, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=39, small buckets=39"
Further investigation discovered that the strings.data file within each bucket was filling up with epoch timestamps due to the use of "sc4s_recv_time". Un-setting this in the conf file below resolved the issue:
#/opt/sc4s/local/config/filters/app-postfilter-drop_metadata.conf
block parser app-postfilter-cisco_asa_metadata() {
channel {
rewrite {
unset(value('fields.sc4s_recv_time'));
};
};
};
application app-postfilter-cisco_asa_metadata[sc4s-postfilter] {
filter {
'cisco' eq "${fields.sc4s_vendor}"
and 'asa' eq "${fields.sc4s_product}"
};
parser { app-postfilter-cisco_asa_metadata(); };
};
This needs a permanent update/change to make this the default.
We will check the feasibility in version 3, as we have already provided a workaround for this.
Just curious, what is the workaround for this issue?
It's mentioned in the issue itself @mattweber78
Ok thanks. Thought there was another workaround without having to disable that field.
No, this field comes out of the box, so to stop creating small buckets based on this field the only way I foresee is dropping the field.
|
gharchive/issue
| 2022-08-09T17:30:25 |
2025-04-01T04:35:56.227165
|
{
"authors": [
"RKH-splunk",
"mattweber78",
"rjha-splunk"
],
"repo": "splunk/splunk-connect-for-syslog",
"url": "https://github.com/splunk/splunk-connect-for-syslog/issues/1779",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
965839526
|
Add failsafe to get duration during timeupdate
We noticed that sometimes, unpredictably, the 25-50-75 percentile events would not fire. We could not figure out a reliable way to reproduce the error, but if you add a breakpoint around line 55 here:
https://github.com/spodlecki/videojs-event-tracking/blob/204dba2ac76da072cd6bf8745c5bb2d8658068e5/src/tracking/percentile.js#L47-L71
...and repeatedly refresh the page while trying to play a video.js player with these events bound, eventually, you may run into a situation where duration is equal to 0.
It seems that either the durationchange event never fired, or it fired before the watcher got bound. In either case, for whatever reason, the durationchange callback here:
https://github.com/spodlecki/videojs-event-tracking/blob/204dba2ac76da072cd6bf8745c5bb2d8658068e5/src/tracking/percentile.js#L83-L93
...never ran in those bugged-out instances, so the first, second, and third variables never got set.
(I'm guessing this could be a browser bug? We are using Chrome 92 on macOS.)
This pull request aims to fix that issue by adding an additional check for duration during the timeupdate event.
I confirmed that this fix works by adding a breakpoint on the getDuration() call inside timeupdate. By doing the same refresh-page-and-hit-play, I was able to confirm that the code at the breakpoint does occasionally get called (i.e. when it would have broken before), and the percentile events do fire as expected.
awesome catch! I'm not sure how this actually happens, but I love the solution
Sweet! Thank you for the merge! Would you be down to publish a new version to the npm registry?
|
gharchive/pull-request
| 2021-08-11T03:52:44 |
2025-04-01T04:35:56.231420
|
{
"authors": [
"IllyaMoskvin",
"spodlecki"
],
"repo": "spodlecki/videojs-event-tracking",
"url": "https://github.com/spodlecki/videojs-event-tracking/pull/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
52060906
|
Bind store methods before init() method executes.
Fixes #167.
I needed to update one unrelated test because the methods are bound before this.listenToMany runs. The alternative was moving the listenables handling earlier in the Store() constructor, but I was concerned that would break init methods that mutate this.listenables. As a note, PublisherMethods.listen is now getting passed bound store methods, so the callback binding it does is redundant for this case.
Sweet. Thanks for merging!
|
gharchive/pull-request
| 2014-12-16T01:06:36 |
2025-04-01T04:35:56.233290
|
{
"authors": [
"chromakode"
],
"repo": "spoike/refluxjs",
"url": "https://github.com/spoike/refluxjs/pull/168",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
218745848
|
Refund payment.
I was wondering if a refund function exists?
Thanks
Closed for reasons described in the README.
|
gharchive/issue
| 2017-04-02T06:39:28 |
2025-04-01T04:35:56.236039
|
{
"authors": [
"s2krish",
"spookylukey"
],
"repo": "spookylukey/django-paypal",
"url": "https://github.com/spookylukey/django-paypal/issues/173",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
53799464
|
Updates ruby-saml gem dependency to 0.8.1
The Ruby SAML gem has breaking changes between 0.7.x and 0.8.x.
In addition, version 0.8.0 is marked as broken.
Updates dependencies to Ruby SAML gem 0.8.1
I'm in favor of the version bump, but I don't want to make those methods public unless there's a good reason.
Note that #41 will supersede this and bump ruby-saml to 1.2.0.
#41 updated to ~> 1.2 so closing this
|
gharchive/pull-request
| 2015-01-08T20:51:36 |
2025-04-01T04:35:56.240871
|
{
"authors": [
"amoose",
"jphenow",
"pkarman"
],
"repo": "sportngin/saml_idp",
"url": "https://github.com/sportngin/saml_idp/pull/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
160451500
|
Failed job doesn't give non-zero exit code
A failing job doesn't quit with a non-zero exit code
This progress looks :( because there were failed tasks
===== Luigi Execution Summary =====
real 0m35.030s
user 0m46.968s
sys 0m2.036s
((conda)) [hadoop@ip-123-45-67-890 current]$ echo $?
0
Could you provide more detail on this job failure? What module are you using, does it happen all the time, etc.
This is intended behaviour. See http://luigi.readthedocs.io/en/stable/configuration.html#retcode
Ah thanks, I must have missed that part of the docs :)
|
gharchive/issue
| 2016-06-15T15:27:17 |
2025-04-01T04:35:56.298468
|
{
"authors": [
"Tarrasch",
"arnov",
"dlstadther"
],
"repo": "spotify/luigi",
"url": "https://github.com/spotify/luigi/issues/1721",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1472953087
|
add url to form
Fixing issue described here: https://github.com/spree/spree_auth_devise/pull/570#issuecomment-1333614975
It's actually a problem with legacy_frontend, not auth_devise. The form was missing the url to redirect to after submitting, and the default one was causing loss of the locale.
Awesome, works great 🎉
|
gharchive/pull-request
| 2022-12-02T14:28:25 |
2025-04-01T04:35:56.318737
|
{
"authors": [
"nciemniak",
"wjwitek"
],
"repo": "spree/spree_legacy_frontend",
"url": "https://github.com/spree/spree_legacy_frontend/pull/40",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
131781353
|
Taxonomies
In the docs, the taxonomy instructions for listing out categories at the end of the post don't work. But if we use the original approach which was still used in Sculpin, then it does work. So this works:
{% if page.categories %}
<p class="categories text">
<span>Categories:</span>
{% for category in page.categories %}
<a href="{{ site.url }}/blog/categories/{{ category|url_encode(true) }}">{{ category }}</a>
{% endfor %}
</p>
{% endif %}
{% if page.tags %}
<p class="tags text">
<span>Tags:</span>
{% for tag in page.tags %}
<a href="{{ site.url }}/blog/tags/{{ tag|url_encode(true) }}">{{ tag }}</a>
{% endfor %}
</p>
{% endif %}
Recommend updating the taxonomies section significantly to make it 100% bulletproof - because even with this, and with setting up taxonomy permalinks to /blog/categories/:name for cats and similar for tags, I got 404 errors.
I'm working on this issue
Where are you using the prior snippet of code? site.tags and site.categories are special variables that contain post items organized by tags and categories, regardless of the taxonomy generator.
This works fine. Keep in mind that page represents the current item:
{% set categoryList = [] %}
<ul>
{% for category in site.categories %}
{% for item in category %}
{% for categoryName, url in item.terms_url.categories %}
{% if categoryName not in categoryList | keys %}
{% set categoryList = categoryList | merge({ (categoryName) : url }) %}
<li><a href="{{ url }}">{{ categoryName }}</a></li>
{% endif %}
{% endfor %}
{% endfor %}
{% endfor %}
</ul>
|
gharchive/issue
| 2016-02-05T22:51:49 |
2025-04-01T04:35:56.322813
|
{
"authors": [
"Swader",
"yosymfony"
],
"repo": "spress/Spress",
"url": "https://github.com/spress/Spress/issues/68",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
146687094
|
SNS Notification Question
is solved!
solved
|
gharchive/issue
| 2016-04-07T17:11:28 |
2025-04-01T04:35:56.323783
|
{
"authors": [
"borehack"
],
"repo": "spring-cloud/spring-cloud-aws",
"url": "https://github.com/spring-cloud/spring-cloud-aws/issues/142",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
618445354
|
LoadBalancer: support instance selection based on specific actuator metrics
Provide more fine-grained load balancing rules based on actuator metrics like system.cpu.usage, jvm.memory.usage etc. Could be something configurable based on a specific metric and a corresponding threshold value. If this value is exceeded the balancer will prefer instances below this value. Otherwise default behavior will be applied.
Alternatively I assume this can be achieved by implementing a custom HealthIndicator but a more out-of-the-box configuration-only capability could be very useful.
#601 is along those lines. It's fairly hard to do since you would want those metrics from other instances; how do you report that to the load balancer?
@spencergibb so the only solution would be to implement a custom HealthIndicator?
that's one option, yes.
@spencergibb I think that a custom HealthIndicator should be avoided though, since it would show a status [DOWN] if some instance is under load exceeding some threshold, and this will not be accurate. I guess implementing a custom LoadBalancerClientFilter or ReactiveLoadBalancerClientFilter could be the only alternative at the moment.
@spencergibb I was working on a custom ReactorServiceInstanceLoadBalancer at the moment because I want to achieve a "least_conn" behavior (similar to NGINX or HAProxy). This is useful in many cases, one of which is when you need to load balance WebSocket connections from the Spring Cloud Gateway to multiple instances of WebSocket servers. Do you think it would be of any interest to prepare a PR?
@kmandalas Definitely, if you come up with something, do submit a PR. You might also want to keep an eye on what is happening with the following issues: https://github.com/spring-cloud/spring-cloud-commons/issues/675 (currently a PR in review - introduces possibilities to propagate load-balanced call data and to run a callback method after a load-balanced call has been completed; probably best to base your changes on that) and https://github.com/spring-cloud/spring-cloud-commons/issues/674 (planning to work on adding in micrometer here).
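For readers arriving via search: a least-connections balancer of the kind proposed above could be sketched roughly as follows against the spring-cloud-loadbalancer API (the class name and the in-memory connection counter are illustrative, not code from any actual PR):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

import reactor.core.publisher.Mono;

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.loadbalancer.DefaultResponse;
import org.springframework.cloud.client.loadbalancer.EmptyResponse;
import org.springframework.cloud.client.loadbalancer.Request;
import org.springframework.cloud.client.loadbalancer.Response;
import org.springframework.cloud.loadbalancer.core.ReactorServiceInstanceLoadBalancer;
import org.springframework.cloud.loadbalancer.core.ServiceInstanceListSupplier;

// Illustrative "least_conn" balancer: pick the instance with the fewest
// connections this balancer has handed out itself.
public class LeastConnLoadBalancer implements ReactorServiceInstanceLoadBalancer {

	private final Map<String, AtomicInteger> active = new ConcurrentHashMap<>();

	private final ObjectProvider<ServiceInstanceListSupplier> supplierProvider;

	public LeastConnLoadBalancer(ObjectProvider<ServiceInstanceListSupplier> supplierProvider) {
		this.supplierProvider = supplierProvider;
	}

	@Override
	public Mono<Response<ServiceInstance>> choose(Request request) {
		ServiceInstanceListSupplier supplier = supplierProvider.getIfAvailable();
		return supplier.get(request).next().map(this::leastConnected);
	}

	private Response<ServiceInstance> leastConnected(List<ServiceInstance> instances) {
		if (instances.isEmpty()) {
			return new EmptyResponse();
		}
		ServiceInstance best = instances.get(0);
		for (ServiceInstance candidate : instances) {
			if (count(candidate) < count(best)) {
				best = candidate;
			}
		}
		active.get(best.getInstanceId()).incrementAndGet();
		return new DefaultResponse(best);
	}

	private int count(ServiceInstance instance) {
		return active.computeIfAbsent(instance.getInstanceId(), id -> new AtomicInteger()).get();
	}
}
```

A production version would also decrement the counter when a call completes, e.g. via the load-balanced call lifecycle callbacks mentioned above.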
|
gharchive/issue
| 2020-05-14T18:25:12 |
2025-04-01T04:35:56.328722
|
{
"authors": [
"OlgaMaciaszek",
"kmandalas",
"spencergibb"
],
"repo": "spring-cloud/spring-cloud-commons",
"url": "https://github.com/spring-cloud/spring-cloud-commons/issues/756",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1062033348
|
Adds: RegistrationLifecycle
This PR is used to solve issue #999.
Please add tests
Related use cases have been added
@OlgaMaciaszek The related operations have been completed
@huifer Thanks, but I still see some pending comments. Please address them.
If you can, please mark the unaddressed comments for me.
@OlgaMaciaszek I think the fixes have been completed. If anything is still unaddressed, please help point it out. Thank you.
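For context, the lifecycle hook this PR introduces has roughly the following shape (a paraphrased sketch, not the exact merged source; method names may differ between versions):

```java
import org.springframework.cloud.client.serviceregistry.Registration;

// Callbacks invoked around service (de)registration, so integrators can run
// custom logic without replacing the auto service registration itself.
public interface RegistrationLifecycle<R extends Registration> {

	void postProcessBeforeStartRegister(R registration);

	void postProcessAfterStartRegister(R registration);

	void postProcessBeforeStopRegister(R registration);

	void postProcessAfterStopRegister(R registration);
}
```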
|
gharchive/pull-request
| 2021-11-24T05:58:55 |
2025-04-01T04:35:56.335020
|
{
"authors": [
"OlgaMaciaszek",
"huifer"
],
"repo": "spring-cloud/spring-cloud-commons",
"url": "https://github.com/spring-cloud/spring-cloud-commons/pull/1044",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
299027153
|
Spring boot 2.0.0.RC2 + Finchley M6 results in java.lang.NoSuchFieldError: BINDER_BEAN_NAME
java.lang.IllegalStateException: Failed to load ApplicationContext
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:125)
at org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:107)
at org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:190)
at org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:132)
at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:242)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:227)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:289)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:291)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:246)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy1.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:146)
at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:128)
at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:404)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'configurationPropertiesBeans' defined in org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cloud.context.properties.ConfigurationPropertiesBeans]: Factory method 'configurationPropertiesBeans' threw exception; nested exception is java.lang.NoSuchFieldError: BINDER_BEAN_NAME
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:587)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:205)
at org.springframework.context.support.PostProcessorRegistrationDelegate.registerBeanPostProcessors(PostProcessorRegistrationDelegate.java:238)
at org.springframework.context.support.AbstractApplicationContext.registerBeanPostProcessors(AbstractApplicationContext.java:709)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:534)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:752)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:388)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:327)
at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:136)
at org.springframework.cloud.bootstrap.BootstrapApplicationListener.bootstrapServiceContext(BootstrapApplicationListener.java:197)
at org.springframework.cloud.bootstrap.BootstrapApplicationListener.onApplicationEvent(BootstrapApplicationListener.java:104)
at org.springframework.cloud.bootstrap.BootstrapApplicationListener.onApplicationEvent(BootstrapApplicationListener.java:70)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:127)
at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:74)
at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:54)
at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:351)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:317)
at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:138)
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99)
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:117)
... 48 more
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cloud.context.properties.ConfigurationPropertiesBeans]: Factory method 'configurationPropertiesBeans' threw exception; nested exception is java.lang.NoSuchFieldError: BINDER_BEAN_NAME
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:579)
... 77 more
Caused by: java.lang.NoSuchFieldError: BINDER_BEAN_NAME
at org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration.configurationPropertiesBeans(ConfigurationPropertiesRebinderAutoConfiguration.java:52)
at org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$4bfb1694.CGLIB$configurationPropertiesBeans$1(<generated>)
at org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$4bfb1694$$FastClassBySpringCGLIB$$1f781d8b.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:361)
at org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$4bfb1694.configurationPropertiesBeans(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
... 78 more
Already fixed. On mobile or I'd reference issue. Please search closed issues before submitting
@spencergibb Could you point me to the issue that is fixed? I searched for the specific error in this repository and nothing turned up. At the moment, I cannot test Spring Boot RC2 with Finchley M6 unless I use a snapshot, which I'm not keen on.
Would you know when a new release will be made available with the fix?
I believe this is the fix https://github.com/spring-cloud/spring-cloud-commons/commit/43ea0461ee16e1ee6f250ece8c82f323f2aef2ae.
A compatible Finchley release is coming soon.
|
gharchive/issue
| 2018-02-21T16:01:14 |
2025-04-01T04:35:56.340012
|
{
"authors": [
"mrcasablr",
"ryanjbaxter",
"spencergibb"
],
"repo": "spring-cloud/spring-cloud-config",
"url": "https://github.com/spring-cloud/spring-cloud-config/issues/919",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
983784458
|
The problem with child node's XML namespaces
Describe the bug
I want to write tests in Kotlin for a SOAP service and there is a problem with child namespaces. Child nodes' namespaces break the application during build.
Sample
response {
body = body ("""
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<ns2:Res xmlns:ns2="http://*******/****/****/******/schema">
<ns2:ID>1</ns2:ID>
</ns2:Res>
</soap:Body>
</soap:Envelope>""")
}
Possible problem
Class:
org.springframework.cloud.contract.verifier.util.xml.DOMNamespaceContext
method:
private void addNamespaces(Node element) {
	if (element.getParentNode() != null) {
		addNamespaces(element.getParentNode());
	}
	if (element instanceof Element) {
		Element el = (Element) element;
		NamedNodeMap map = el.getAttributes();
		for (int x = 0; x < map.getLength(); x++) {
			Attr attr = (Attr) map.item(x);
			if ("xmlns".equals(attr.getPrefix())) {
				namespaceMap.put(attr.getLocalName(), attr.getValue());
			}
		}
	}
}
As you can see, it only collects namespaces from the current node and its ancestors (up to soap:Envelope). All namespaces declared in child nodes are ignored.
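One possible direction for a fix, shown here as an untested sketch rather than an actual patch from this thread, is to complement the ancestor walk with a pass over the subtree, so that prefixes declared on child nodes (like ns2 above) are registered too:

```java
import java.util.HashMap;
import java.util.Map;

import org.w3c.dom.Attr;
import org.w3c.dom.Element;
import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

// Standalone illustration of the idea; in DOMNamespaceContext the map would be
// the existing namespaceMap field.
class SubtreeNamespaceCollector {

	private final Map<String, String> namespaceMap = new HashMap<>();

	void addNamespacesFromSubtree(Node node) {
		if (node instanceof Element) {
			NamedNodeMap map = node.getAttributes();
			for (int x = 0; x < map.getLength(); x++) {
				Attr attr = (Attr) map.item(x);
				if ("xmlns".equals(attr.getPrefix())) {
					namespaceMap.put(attr.getLocalName(), attr.getValue());
				}
			}
		}
		// Recurse into children so declarations below the context node are found.
		NodeList children = node.getChildNodes();
		for (int i = 0; i < children.getLength(); i++) {
			addNamespacesFromSubtree(children.item(i));
		}
	}
}
```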
Hi! Are you willing to file a fix for this issue?
Hello, sorry, but I'm afraid I can't. Hope you will fix it when you have time
Any updates on this? If not Can I give it a try?
|
gharchive/issue
| 2021-08-31T12:05:58 |
2025-04-01T04:35:56.343899
|
{
"authors": [
"AliceBlue08",
"bensonbenny021",
"marcingrzejszczak"
],
"repo": "spring-cloud/spring-cloud-contract",
"url": "https://github.com/spring-cloud/spring-cloud-contract/issues/1709",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
200961075
|
Setup initial project infra
Initial work for spring-cloud/spring-cloud-dataflow#1122 and spring-cloud/spring-cloud-dataflow#1123 to bring over needed modules, do package rename and setup CI builds.
Initial import done and CI building https://build.spring.io/browse/SCD-DASHMASTER.
I believe we need to preserve the commit history for the modules moved here
This is the one I used before to keep the commit history in one of our projects: http://www.pixelite.co.nz/article/extracting-file-folder-from-git-repository-with-full-git-history/
I don't think that works that well. AFAIK, git totally loses history when you change the path of a file.
12:32 $ git log yyy/1
commit dbe4db2244ec3a6d68b8003b7341c02e6abcbd54
Author: Janne Valkealahti <janne.valkealahti@gmail.com>
Date: Wed Jan 18 12:32:31 2017 +0000
commit1
12:32 $ mv yyy xxx
✔ /tmp/hhh [master L|✚ 10…10]
12:32 $ git add .
✔ /tmp/hhh [master L|●20]
12:32 $ git commit -m commit2
[master c09c806] commit2
12:32 $ git log xxx/1
commit c09c8067452746a4bbccf3e133166e6af3a98a19
Author: Janne Valkealahti <janne.valkealahti@gmail.com>
Date: Wed Jan 18 12:32:59 2017 +0000
commit2
With this, when we change package/dir names from dataflow to dashboard, any history is kinda useless.
|
gharchive/issue
| 2017-01-16T08:54:37 |
2025-04-01T04:35:56.347322
|
{
"authors": [
"ilayaperumalg",
"jvalkeal"
],
"repo": "spring-cloud/spring-cloud-dashboard",
"url": "https://github.com/spring-cloud/spring-cloud-dashboard/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
122065375
|
Add integration-tests for K8S SPI
As a developer, I'd like to continuously test and evaluate K8S SPI capabilities using the local SPI as part of the CI plan.
Acceptance:
K8S' Integration tests run successfully on each code commit
Installation steps and gotchas are documented in the README / ref. guide
I think it may be enough for us to rely on the Google container service for integration testing vs. standing up k8s on AWS. Maybe we can change this issue to be CI integration tests that use a local install of k8s?
Sure, makes sense. Updated.
This is a duplicate of #9.
|
gharchive/issue
| 2015-12-14T15:28:19 |
2025-04-01T04:35:56.349969
|
{
"authors": [
"markpollack",
"sabbyanandan"
],
"repo": "spring-cloud/spring-cloud-dataflow-admin-kubernetes",
"url": "https://github.com/spring-cloud/spring-cloud-dataflow-admin-kubernetes/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
388196391
|
Prepare for 1.7.3.RELEASE
As part of the 1.7.3.RELEASE, prepare the following.
[x] Bump SCDF core and SCDF parent to 1.7.3.RELEASE
[x] Update Spring Boot test starter test dependency
Pushed the changes as 6d57f0356256c334ebce8420069cb0eaddcf6b99
|
gharchive/issue
| 2018-12-06T12:40:19 |
2025-04-01T04:35:56.351462
|
{
"authors": [
"ilayaperumalg"
],
"repo": "spring-cloud/spring-cloud-dataflow-server-cloudfoundry",
"url": "https://github.com/spring-cloud/spring-cloud-dataflow-server-cloudfoundry/issues/456",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
326065926
|
Consul services always return 404?
Hi, there,
I'm using the latest version of Spring Cloud and Consul; http requests from spring-cloud-gateway always return 404. It seems it cannot find the right path for the lb services?
spring cloud version: Finchley.RC1
spring boot version: 2.0.2.RELEASE
main project: https://github.com/kopstill/peach
spring cloud config: https://github.com/kopstill/peach-config
Got it, it's a path problem; using "filters -> SetPath" solves it.
nice! Solves the problem that has troubled me for 2 days.
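For anyone landing here later, the same fix expressed in the Java route DSL looks roughly like this (route id, path, and target service are made up for illustration; the YAML filters -> SetPath form is equivalent):

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

	@Bean
	public RouteLocator routes(RouteLocatorBuilder builder) {
		return builder.routes()
			// Without SetPath the incoming path is forwarded as-is, which can 404
			// on the downstream service registered in Consul.
			.route("peach-route", r -> r.path("/peach/**")
				.filters(f -> f.setPath("/api"))
				.uri("lb://peach-service"))
			.build();
	}
}
```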
|
gharchive/issue
| 2018-05-24T10:47:40 |
2025-04-01T04:35:56.356020
|
{
"authors": [
"kopstill",
"zx88cvb"
],
"repo": "spring-cloud/spring-cloud-gateway",
"url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/333",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
381772657
|
@RefreshScope on Custom Route Locator
I have built my route locators with @Bean in my configuration class. The reason for this is that, in production, I don't want to restart my gateway if there are changes in the routing. I used @RefreshScope but the bean does not get reinitialized by calling the actuator endpoint. Is this expected because the gateway is using WebFlux?
That shouldn't be the case. Without more information, I can't diagnose the problem.
Can you provide a complete, minimal, verifiable sample that reproduces the problem? It should be available as a GitHub (or similar) project or attached to this issue as a zip file.
@spencergibb any custom route locator is not rebuilt on refresh. I am figuring out how custom locators are registered in the flux and why they are not reloaded.
Can you provide a complete, minimal, verifiable sample that reproduces the problem? It should be available as a GitHub (or similar) project or attached to this issue as a zip file.
If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.
Sorry for the delay. Not an issue anymore. Looks like it's working now from my end. I cleaned up my local repo to update to Finchley SR2.
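As a side note for readers with the same symptom: Spring Cloud Gateway rebuilds its cached routes when a RefreshRoutesEvent is published, so a custom RouteLocator can be re-read without restarting. A minimal sketch (class and method names are illustrative):

```java
import org.springframework.cloud.gateway.event.RefreshRoutesEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;

@Component
public class RouteRefresher {

	private final ApplicationEventPublisher publisher;

	public RouteRefresher(ApplicationEventPublisher publisher) {
		this.publisher = publisher;
	}

	// Call this after the routing source changes; the gateway's caching route
	// locator listens for the event and re-reads the underlying RouteLocator.
	public void refresh() {
		publisher.publishEvent(new RefreshRoutesEvent(this));
	}
}
```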
|
gharchive/issue
| 2018-11-16T21:49:35 |
2025-04-01T04:35:56.359182
|
{
"authors": [
"sincang",
"spencergibb",
"spring-issuemaster"
],
"repo": "spring-cloud/spring-cloud-gateway",
"url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/662",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1478537930
|
Add Labels And Annotations From Pods To ServiceInstance Metadata
Is there a good way to get the pod's labels if I want to? I might want to go through different pod labels to do rule-based routing.
I see that the labels in the metadata of the ServiceInstance I get seem to be the labels of the kubernetes service.
I want to get the labels of each pod, but I found that the method
```java
// KubernetesInformerDiscoveryClient
public List<ServiceInstance> getInstances(String serviceId)
```
does not return these labels; the labels in its underlying data are the labels of the service in kubernetes and not the labels of the pods.
it starts to make sense, but what is the point in you having the labels of the pods? this is what I don't get
Because I might want to do a grayscale release, selecting the pods with grayscale labels from the multiple pods in a service by their labels.
Seems like you want to get also the pods and their metadata from an underlying ServiceInstance. This is doable, I guess, but I need also @ryanjbaxter thoughts here.
I can possibly see the use case. If we do this though, I wonder what the impact would be on the existing labels and annotations (could they conflict?). Also we would have to correlate the endpoints to the pods, which I am not particularly clear how to do, not to say it's impossible.
We now want to use spring cloud kubernetes to implement grayscale or swimlane features. Is there any good advice from the community?
I can submit a PR if needed; I think it is achievable through V1PodList.
A PR would be welcome
DefaultKubernetesServiceInstance has a field Map<String, Map<String, String>> podMetadata.
All 3 clients return such metadata, if it is requested. This can be closed @ryanjbaxter
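For the grayscale use case discussed above, consuming that metadata could look roughly like this (a sketch only; the podMetadata accessor and the "labels" key follow the comment above and may differ by version, and the release-track label is made up):

```java
import java.util.List;
import java.util.Map;

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.kubernetes.commons.discovery.DefaultKubernetesServiceInstance;

public class GrayInstanceSelector {

	private final DiscoveryClient discoveryClient;

	public GrayInstanceSelector(DiscoveryClient discoveryClient) {
		this.discoveryClient = discoveryClient;
	}

	// Keep only instances whose backing pod carries the (hypothetical)
	// "release-track: gray" label.
	public List<ServiceInstance> grayInstances(String serviceId) {
		return discoveryClient.getInstances(serviceId).stream()
			.filter(DefaultKubernetesServiceInstance.class::isInstance)
			.filter(instance -> {
				Map<String, String> labels = ((DefaultKubernetesServiceInstance) instance)
					.podMetadata()
					.getOrDefault("labels", Map.of());
				return "gray".equals(labels.get("release-track"));
			})
			.toList();
	}
}
```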
|
gharchive/issue
| 2022-12-06T08:28:09 |
2025-04-01T04:35:56.363217
|
{
"authors": [
"ryanjbaxter",
"weihubeats",
"wind57"
],
"repo": "spring-cloud/spring-cloud-kubernetes",
"url": "https://github.com/spring-cloud/spring-cloud-kubernetes/issues/1162",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2678539659
|
eureka server latency high (larger than 30 seconds) due to all http threads blocked in EurekaInstanceMonitor (when eureka.server.metrics.enabled=true is set)
version: spring-cloud-netflix-eureka-server-4.1.3
https://github.com/spring-cloud/spring-cloud-netflix/blob/eac21f837eea703af565b4b2606a831fa0b27848/spring-cloud-netflix-eureka-server/src/main/java/org/springframework/cloud/netflix/eureka/server/metrics/EurekaInstanceMonitor.java#L68
all of the eureka Tomcat http threads are blocked in EurekaInstanceMonitor (thread dump follows after the sketch below)
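Until a proper fix lands, one generic mitigation (a sketch built on standard Spring APIs, not a fix confirmed in this thread) is to dispatch application events on a separate executor so renewal threads don't serialize on the MeterRegistry lock:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.SimpleApplicationEventMulticaster;
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Configuration
public class AsyncEventConfig {

	// Overriding the default multicaster makes listener invocation (including
	// EurekaInstanceMonitor.onApplicationEvent) run off the Tomcat threads.
	// Caveat: this affects ALL application event listeners, not just this one.
	@Bean(AbstractApplicationContext.APPLICATION_EVENT_MULTICASTER_BEAN_NAME)
	public SimpleApplicationEventMulticaster applicationEventMulticaster() {
		SimpleApplicationEventMulticaster multicaster = new SimpleApplicationEventMulticaster();
		multicaster.setTaskExecutor(new SimpleAsyncTaskExecutor("event-"));
		return multicaster;
	}
}
```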
"http-nio-0.0.0.0-8761-exec-123" #295 [29837] daemon prio=5 os_prio=0 cpu=373.45ms elapsed=50.25s tid=0x0000ffff1809b470 nid=29837 waiting for monitor entry [0x0000ffff066ab000]
java.lang.Thread.State: BLOCKED (on object monitor)
at io.micrometer.core.instrument.MeterRegistry.remove(MeterRegistry.java:756)
- waiting to lock <0x00000000cee8c920> (a java.lang.Object)
at io.micrometer.core.instrument.MeterRegistry.remove(MeterRegistry.java:723)
at io.micrometer.core.instrument.MeterRegistry.removeByPreFilterId(MeterRegistry.java:740)
at io.micrometer.core.instrument.MultiGauge.lambda$register$0(MultiGauge.java:72)
at io.micrometer.core.instrument.MultiGauge$$Lambda/0x000000e001921e78.apply(Unknown Source)
at java.util.stream.ReferencePipeline$3$1.accept(java.base@21.0.3/ReferencePipeline.java:197)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(java.base@21.0.3/ArrayList.java:1708)
at java.util.stream.AbstractPipeline.copyInto(java.base@21.0.3/AbstractPipeline.java:509)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(java.base@21.0.3/AbstractPipeline.java:499)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(java.base@21.0.3/ReduceOps.java:921)
at java.util.stream.AbstractPipeline.evaluate(java.base@21.0.3/AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(java.base@21.0.3/ReferencePipeline.java:682)
at io.micrometer.core.instrument.MultiGauge.lambda$register$1(MultiGauge.java:82)
at io.micrometer.core.instrument.MultiGauge$$Lambda/0x000000e0019213e8.apply(Unknown Source)
at java.util.concurrent.atomic.AtomicReference.getAndUpdate(java.base@21.0.3/AtomicReference.java:188)
at io.micrometer.core.instrument.MultiGauge.register(MultiGauge.java:63)
at org.springframework.cloud.netflix.eureka.server.metrics.EurekaInstanceMonitor.onApplicationEvent(EurekaInstanceMonitor.java:72)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:185)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:178)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:156)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:452)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:385)
at org.springframework.cloud.netflix.eureka.server.InstanceRegistry.publishEvent(InstanceRegistry.java:148)
at org.springframework.cloud.netflix.eureka.server.InstanceRegistry.handleRenewal(InstanceRegistry.java:136)
at org.springframework.cloud.netflix.eureka.server.InstanceRegistry.renew(InstanceRegistry.java:105)
at com.netflix.eureka.resources.InstanceResource.renewLease(InstanceResource.java:112)
at java.lang.invoke.LambdaForm$DMH/0x000000e0014bc000.invokeVirtual(java.base@21.0.3/LambdaForm$DMH)
at java.lang.invoke.LambdaForm$MH/0x000000e001a3a800.invoke(java.base@21.0.3/LambdaForm$MH)
at java.lang.invoke.Invokers$Holder.invokeExact_MT(java.base@21.0.3/Invokers$Holder)
at jdk.internal.reflect.DirectMethodHandleAccessor.invokeImpl(java.base@21.0.3/DirectMethodHandleAccessor.java:157)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(java.base@21.0.3/DirectMethodHandleAccessor.java:103)
at java.lang.reflect.Method.invoke(java.base@21.0.3/Method.java:580)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$$Lambda/0x000000e00170d4a8.invoke(Unknown Source)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:146)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:189)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:176)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:93)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:478)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:400)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:274)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:266)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:253)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:696)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
at org.glassfish.jersey.servlet.ServletContainer.serviceImpl(ServletContainer.java:378)
at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:554)
at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:494)
at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:431)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at dev.akkinoc.spring.boot.logback.access.security.LogbackAccessSecurityServletFilter.doFilter(LogbackAccessSecurityServletFilter.kt:17)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at org.springframework.cloud.netflix.eureka.server.EurekaServerAutoConfiguration$1.doFilterInternal(EurekaServerAutoConfiguration.java:337)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108)
at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231)
at org.springframework.security.web.FilterChainProxy$$Lambda/0x000000e00196aac0.doFilter(Unknown Source)
at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479)
at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation$$Lambda/0x000000e001980268.doFilter(Unknown Source)
at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340)
at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation$$Lambda/0x000000e001980488.doFilter(Unknown Source)
at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82)
at org.springframework.security.web.ObservationFilterChainDecorator$$Lambda/0x000000e00196af00.doFilter(Unknown Source)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128)
at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:100)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:131)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:85)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82)
at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter$$Lambda/0x000000e00196ed80.doFilter(Unknown Source)
at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323)
at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation$$Lambda/0x000000e00196efc0.doFilter(Unknown Source)
at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224)
at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191)
at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:195)
at org.springframework.web.servlet.handler.HandlerMappingIntrospector$$Lambda/0x000000e00178d5f0.doFilter(Unknown Source)
at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74)
at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:230)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:352)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:268)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:113)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at dev.akkinoc.spring.boot.logback.access.tomcat.LogbackAccessTomcatValve.invoke(LogbackAccessTomcatValve.kt:55)
at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:731)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:389)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:904)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1741)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1190)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
at java.lang.Thread.runWith(java.base@21.0.3/Thread.java:1596)
at java.lang.Thread.run(java.base@21.0.3/Thread.java:1583)
```
with only several hundred registered instances. When latency is high, the other Eureka nodes retry registry replication to the problematic server, which can cause an avalanche across the Eureka cluster.
@OlgaMaciaszek Does using a task executor avoid the blocking? I stopped all the other Eureka servers (leaving only the problematic one) and waited about 1 hour, but its threads are still blocked (all HTTP NIO threads plus the Eureka-EvictionTimer thread).
Most threads block at:
https://github.com/micrometer-metrics/micrometer/blob/083f12e2a09e081567807f1be6273acf4da39dc5/micrometer-core/src/main/java/io/micrometer/core/instrument/MeterRegistry.java#L756
A few threads block at:
https://github.com/micrometer-metrics/micrometer/blob/083f12e2a09e081567807f1be6273acf4da39dc5/micrometer-core/src/main/java/io/micrometer/core/instrument/MeterRegistry.java#L647
Here is the thread dump:
eureka-threaddump.zip
Is there a deadlock? If not, why does it run so slowly? (Or might we be using some non-thread-safe method?)
@huntkalio The change should cause an AsyncTaskExecutor to be injected (unless you're using virtual threads; are you?) and it should trigger the execution asynchronously in some other thread (javadoc). Have you tried again after the change? If yes, could you please provide the thread dump in some kind of text format, not images?
@OlgaMaciaszek What I mean is that even if the work is executed on other threads, slow execution will eventually block the main thread or overflow memory (unless the work is discarded).
This may be because registering meters in MeterRegistry from many threads performs poorly.
@huntkalio Have pushed another change.
The problem is not in collectAggregatedCounts() but in registerMetrics().
All threads block in registerMetrics, so the AsyncTaskExecutor will also be blocked.
This may be because registering meters in MeterRegistry from many threads performs poorly. If so, do we need to update the metrics on every Eureka registration or heartbeat event? Could we update EurekaInstanceMonitor at fixed intervals instead, to reduce lock contention?
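For illustration, a minimal sketch of the fixed-interval idea; the class and the 30-second period are hypothetical, not the actual spring-cloud-netflix code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Registration/heartbeat events only touch a lock-free map; the meter
// registry is updated by a single scheduled thread, so request threads
// never contend on the MeterRegistry lock.
class IntervalInstanceMetrics {

    private final Map<String, Long> counts = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    IntervalInstanceMetrics() {
        scheduler.scheduleAtFixedRate(this::registerMetrics, 30, 30, TimeUnit.SECONDS);
    }

    // Called from the event listener: cheap and non-blocking for callers.
    void onInstanceEvent(String appName) {
        counts.merge(appName, 1L, Long::sum);
    }

    // The only code path that takes the registry lock.
    private void registerMetrics() {
        counts.forEach((app, count) -> {
            // multiGauge.register(...) / registry updates would go here
        });
    }
}
```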
Possibly, we might consider switching to this kind of behaviour. Could you please try it out against the current snapshots first and provide a fresh thread dump if the issue persists?
@OlgaMaciaszek here is the dump for 4.1.4-SNAPSHOT:
eureka-server-dump.txt
All task threads block on lock contention (with only 8 threads, the contention is milder than with the Tomcat HTTP threads).
Because the task threads may not keep up with production, the AsyncTaskExecutor's queue may keep growing until OOM, and the EurekaInstanceMonitor updates may also lag.
Thanks for the thread dump @huntkalio. Was it done on the latest code version? (It's now available in the 4.1.4 release in Maven.) Also, it seems you're using virtual threads, is that right?
@OlgaMaciaszek It was done on the latest code. We are not using virtual threads.
Thanks @huntkalio, could you please also share the sample code of the app that you instantiate as a link to a separate repo with an executable app?
|
gharchive/issue
| 2024-11-21T08:42:22 |
2025-04-01T04:35:56.379490
|
{
"authors": [
"OlgaMaciaszek",
"huntkalio"
],
"repo": "spring-cloud/spring-cloud-netflix",
"url": "https://github.com/spring-cloud/spring-cloud-netflix/issues/4374",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
174377035
|
Fixes gh-1308
Deleting the "RibbonCommand" suffix on master
@garciafl Please sign the Contributor License Agreement!
Click here to manually synchronize the status of this Pull Request.
See the FAQ for frequently asked questions.
@garciafl Thank you for signing the Contributor License Agreement!
@garciafl thanks for the pull request! Can you look at the test failures? Also I believe @spencergibb said this should be fixed on the 1.1.x branch (I could be wrong though)
Oh i see the other pull request for the 1.1.x branch now :)
Closed via 048d401309287928f6843a10626fe20aeada6645
|
gharchive/pull-request
| 2016-08-31T20:58:24 |
2025-04-01T04:35:56.384369
|
{
"authors": [
"garciafl",
"pivotal-issuemaster",
"ryanjbaxter",
"spencergibb"
],
"repo": "spring-cloud/spring-cloud-netflix",
"url": "https://github.com/spring-cloud/spring-cloud-netflix/pull/1310",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
309641589
|
Add a property to enable / disable Zipkin Discovery Client support
Hey,
I've spent some minutes thinking about this:
maybe it would be nice to have some kind of "ConfigurationBasedZipkinLoadBalancer" where spring.zipkin.baseUrl can be a comma-separated list of FQDNs.
Cheers,
A
When we're using LoadBalancerClient for Zipkin, it follows all the standard setup of load balancing that we have in Spring Cloud. That means that if you have Ribbon on the classpath, LoadBalancerClient will delegate work to Ribbon, meaning that you can use the standard Ribbon configuration to provide the list of URLs. We don't want to invest too much effort in some additional customization of Zipkin LoadBalancer, since the mechanism to tweak that communication is already there. WDYT?
you mean something like:
```yaml
zipkin:
  ribbon:
    ListOfServers: host1,host2
```
sure, works for me. But this should be well documented, because I would not expect to have this configured in a completely different place, you know?
@doernbrackandre, wait a couple of minutes for the snapshots to be built and you can check if things are working fine for you. I've also updated the docs.
looks good to me 👍
|
gharchive/issue
| 2018-03-29T07:02:48 |
2025-04-01T04:35:56.387759
|
{
"authors": [
"doernbrackandre",
"marcingrzejszczak"
],
"repo": "spring-cloud/spring-cloud-sleuth",
"url": "https://github.com/spring-cloud/spring-cloud-sleuth/issues/919",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
339559287
|
GH-1412 Ensured SpelExpressionConverterConfiguration present in multi…
…-binder
Ensured that SpelExpressionConverterConfiguration is always present in the AC regardless of single or multi-binder application configuration
Resolves #1412
Looks good - polished and squashed the commits. Merged upstream.
|
gharchive/pull-request
| 2018-07-09T18:29:53 |
2025-04-01T04:35:56.389076
|
{
"authors": [
"olegz",
"sobychacko"
],
"repo": "spring-cloud/spring-cloud-stream",
"url": "https://github.com/spring-cloud/spring-cloud-stream/pull/1413",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1187794839
|
Spring boot 2.6.4 and mongo - reflection configuration missing for Query
Hello, I'm getting:
Native reflection configuration for org.springframework.data.mongodb.core.query.Query is missing.
my build.gradle:
```kotlin
plugins {
id("org.springframework.boot") version "2.6.4"
id("io.spring.dependency-management") version "1.0.11.RELEASE"
kotlin("jvm") version "1.6.10"
kotlin("plugin.spring") version "1.6.10"
id("org.springframework.experimental.aot") version "0.11.3"
}
group = "com.services.playit"
java.sourceCompatibility = JavaVersion.VERSION_17
repositories {
maven { url = uri("https://repo.spring.io/release") }
mavenCentral()
}
extra["springCloudVersion"] = "2021.0.1"
dependencies {
implementation("org.springframework.boot:spring-boot-starter-security")
implementation("org.springframework.boot:spring-boot-starter-actuator")
implementation("org.springframework.boot:spring-boot-starter-data-mongodb-reactive")
implementation("org.springframework.boot:spring-boot-starter-webflux")
implementation("com.fasterxml.jackson.module:jackson-module-kotlin")
implementation("io.projectreactor.kotlin:reactor-kotlin-extensions")
implementation("io.github.microutils:kotlin-logging:2.1.21")
implementation("com.auth0:java-jwt:3.19.0")
implementation("com.auth0:jwks-rsa:0.21.0")
implementation("org.bouncycastle:bcprov-jdk15on:1.70")
implementation("org.jetbrains.kotlin:kotlin-reflect")
implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
implementation("org.jetbrains.kotlinx:kotlinx-coroutines-reactor")
implementation("org.springframework.cloud:spring-cloud-starter-sleuth")
testImplementation("org.springframework.boot:spring-boot-starter-test")
testImplementation("io.projectreactor:reactor-test")
}
dependencyManagement {
imports {
mavenBom("org.springframework.cloud:spring-cloud-dependencies:${property("springCloudVersion")}")
}
}
tasks.withType<KotlinCompile> {
kotlinOptions {
freeCompilerArgs = listOf("-Xjsr305=strict")
jvmTarget = "17"
}
}
tasks.withType<Test> {
useJUnitPlatform()
}
tasks.getByName<org.springframework.boot.gradle.tasks.bundling.BootBuildImage>("bootBuildImage") {
runImage = System.getenv("CUSTOM_REGISTRY_URL") + "/paketobuildpacks/run:tiny-cnb"
builder = System.getenv("CUSTOM_REGISTRY_URL") + "/paketobuildpacks/builder:tiny"
environment = mapOf(
"BP_NATIVE_IMAGE" to "true",
"BP_NATIVE_IMAGE_BUILD_ARGUMENTS" to "--enable-all-security-services"
)
isPublish = true
docker {
host = "unix:///var/run/docker.sock"
builderRegistry {
username = System.getenv("CUSTOM_REGISTRY_USER")
password = System.getenv("CUSTOM_REGISTRY_PASS")
url = System.getenv("CUSTOM_REGISTRY_URL")
}
publishRegistry {
username = "gitlab-ci-token"
password = System.getenv("CI_JOB_TOKEN")
url = System.getenv("CI_REGISTRY")
}
}
}
```
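For what it's worth, a common workaround for this class of error is declaring the reflection hint yourself; a minimal sketch in Java, assuming spring-native's @TypeHint annotation and illustrative access flags (the application class name is hypothetical):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.nativex.hint.TypeAccess;
import org.springframework.nativex.hint.TypeHint;

// Registers reflection metadata for Query so the native image keeps it.
@TypeHint(types = Query.class, access = { TypeAccess.DECLARED_CONSTRUCTORS, TypeAccess.DECLARED_METHODS })
@SpringBootApplication
public class PlayitApplication {

    public static void main(String[] args) {
        SpringApplication.run(PlayitApplication.class, args);
    }
}
```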
If you'd like us to spend some time investigating, please take the time to provide a complete minimal sample (something that we can unzip or git clone, build, and deploy) that reproduces the problem.
@mhalbritter I got it working by changing the Spring repository interface into a concrete class with my own implementation.
But now I have a problem with the DB model class: "Unresolved class"
Nice. Please provide a minimal sample (something that we can unzip or git clone, build, and deploy), then i can look into it.
If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.
Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.
|
gharchive/issue
| 2022-03-31T09:36:41 |
2025-04-01T04:35:56.405680
|
{
"authors": [
"lukaszpy",
"mhalbritter",
"spring-projects-issues"
],
"repo": "spring-projects-experimental/spring-native",
"url": "https://github.com/spring-projects-experimental/spring-native/issues/1564",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
868730059
|
Can something be added to ensure queue consumption ordering while consuming from hundreds of queues?
Affects Version(s): <=2.2.x (2.3.x not tested)
Question
Bug report
Enhancement
If you setMaxConcurrentConsumers on the ListenerContainer, each queue may scale up to more than one consumer, which breaks the queue ordering.
If you don't set it, all queues are consumed on one thread over one channel, and if one queue's consumer blocks, everything blocks.
So maybe the concurrentConsumers semantics should cover two things:
how many consumers per queue
how many threads the container can use
Then, with prefetch (fetch size) = 1, one consumer per queue, and 50 (max) threads on the container, queue consumption could be both ordered and concurrent.
Have you tried using a DirectMessageListenerContainer instead?
See https://docs.spring.io/spring-amqp/docs/current/reference/html/#choose-container
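A minimal sketch of that suggestion, assuming hypothetical queue names and an injected ConnectionFactory:

```java
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.DirectMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OrderedConsumersConfig {

    // One dedicated consumer (and thread) per queue keeps per-queue ordering,
    // while different queues are still consumed concurrently.
    @Bean
    public DirectMessageListenerContainer container(ConnectionFactory connectionFactory) {
        DirectMessageListenerContainer container = new DirectMessageListenerContainer(connectionFactory);
        container.setQueueNames("queue1", "queue2", "queue3"); // hypothetical queue names
        container.setConsumersPerQueue(1); // exactly one consumer per queue -> ordering preserved
        container.setPrefetchCount(1);     // fetch one message at a time
        container.setMessageListener(message -> {
            // handle the message
        });
        return container;
    }
}
```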
That solves it, thanks a lot.
|
gharchive/issue
| 2021-04-27T11:00:45 |
2025-04-01T04:35:56.410139
|
{
"authors": [
"bthulu",
"garyrussell"
],
"repo": "spring-projects/spring-amqp",
"url": "https://github.com/spring-projects/spring-amqp/issues/1328",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
54239963
|
Failed resolution of HttpClientContext
Hi guys,
I'm working on integrating an API in an Android application and I want to use the framework provided by Spring for Android/Spring Social to do so.
I used the spring-android-twitter-client as an example to get things up and running. The problem is that my application must be able to get an access token based on the consumer key and consumer secret with grant_type=client_credentials (http://tools.ietf.org/html/rfc6749#section-4.4).
This feature is available in the newest version of spring-social-core (1.0.0.RELEASE) but it's not in the version used in the Twitter guide. So I did what I had to do and updated my application dependencies, but I ran into a problem.
The newest version of spring-social-core uses new features from the org.apache.httpclient libraries. I reproduced the error in your Twitter sample simply by updating the dependencies. When I run the application and try to connect to my account, I get the following runtime error:
```
01-13 14:33:27.941 24112-24178/org.springframework.android.twitterclient E/AndroidRuntime﹕ FATAL EXCEPTION: AsyncTask #1
Process: org.springframework.android.twitterclient, PID: 24112
java.lang.RuntimeException: An error occured while executing doInBackground()
at android.os.AsyncTask$3.done(AsyncTask.java:300)
at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:355)
at java.util.concurrent.FutureTask.setException(FutureTask.java:222)
at java.util.concurrent.FutureTask.run(FutureTask.java:242)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:231)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
Caused by: java.lang.NoClassDefFoundError: Failed resolution of: Lorg/apache/http/client/protocol/HttpClientContext;
at org.springframework.social.support.ClientHttpRequestFactorySelector$HttpComponentsClientRequestFactoryCreator$1.createHttpContext(ClientHttpRequestFactorySelector.java:80)
at org.springframework.http.client.HttpComponentsClientHttpRequestFactory.createRequest(HttpComponentsClientHttpRequestFactory.java:133)
at org.springframework.http.client.support.HttpAccessor.createRequest(HttpAccessor.java:84)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:472)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:453)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:429)
at org.springframework.social.oauth1.OAuth1Template.exchangeForToken(OAuth1Template.java:187)
at org.springframework.social.oauth1.OAuth1Template.fetchRequestToken(OAuth1Template.java:115)
at org.springframework.android.twitterclient.TwitterWebOAuthActivity$TwitterPreConnectTask.doInBackground(TwitterWebOAuthActivity.java:136)
at org.springframework.android.twitterclient.TwitterWebOAuthActivity$TwitterPreConnectTask.doInBackground(TwitterWebOAuthActivity.java:126)
at android.os.AsyncTask$2.call(AsyncTask.java:288)
at java.util.concurrent.FutureTask.run(FutureTask.java:237)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:231)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
Caused by: java.lang.ClassNotFoundException: Didn't find class "org.apache.http.client.protocol.HttpClientContext" on path: DexPathList[[zip file "/data/app/org.springframework.android.twitterclient-2/base.apk"],nativeLibraryDirectories=[/vendor/lib, /system/lib]]
at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:56)
at java.lang.ClassLoader.loadClass(ClassLoader.java:511)
at java.lang.ClassLoader.loadClass(ClassLoader.java:469)
at org.springframework.social.support.ClientHttpRequestFactorySelector$HttpComponentsClientRequestFactoryCreator$1.createHttpContext(ClientHttpRequestFactorySelector.java:80)
at org.springframework.http.client.HttpComponentsClientHttpRequestFactory.createRequest(HttpComponentsClientHttpRequestFactory.java:133)
at org.springframework.http.client.support.HttpAccessor.createRequest(HttpAccessor.java:84)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:472)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:453)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:429)
at org.springframework.social.oauth1.OAuth1Template.exchangeForToken(OAuth1Template.java:187)
at org.springframework.social.oauth1.OAuth1Template.fetchRequestToken(OAuth1Template.java:115)
at org.springframework.android.twitterclient.TwitterWebOAuthActivity$TwitterPreConnectTask.doInBackground(TwitterWebOAuthActivity.java:136)
at org.springframework.android.twitterclient.TwitterWebOAuthActivity$TwitterPreConnectTask.doInBackground(TwitterWebOAuthActivity.java:126)
at android.os.AsyncTask$2.call(AsyncTask.java:288)
at java.util.concurrent.FutureTask.run(FutureTask.java:237)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:231)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
Suppressed: java.lang.ClassNotFoundException: org.apache.http.client.protocol.HttpClientContext
at java.lang.Class.classForName(Native Method)
at java.lang.BootClassLoader.findClass(ClassLoader.java:781)
at java.lang.BootClassLoader.loadClass(ClassLoader.java:841)
at java.lang.ClassLoader.loadClass(ClassLoader.java:504)
... 17 more
Caused by: java.lang.NoClassDefFoundError: Class not found using the boot class loader; no stack available
```
The problem seems to be caused by the httpclient version that is shipped with Android, which is older than the one required. If I try to add the dependency on the HttpClient port for Android to my build, Gradle seems to exclude it to avoid a conflict.
I'm on Linux, I use Android Studio 1.0.1 and I open Android projects with Gradle. What can be done?
Closed this, and opened : https://github.com/spring-projects/spring-social-twitter/issues/81
|
gharchive/issue
| 2015-01-13T19:56:29 |
2025-04-01T04:35:56.415821
|
{
"authors": [
"daddykotex"
],
"repo": "spring-projects/spring-android-samples",
"url": "https://github.com/spring-projects/spring-android-samples/issues/13",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1204130281
|
OpenID Provider Configuration endpoint should include the revocation token endpoint
Expected Behavior
The OpenID Connect Discovery endpoint should provide the revocation token endpoint.
i.e. adding this:
```json
"revocation_endpoint": "http://{host_issuer}:{port_issuer}/oauth2/revoke"
```
Current Behavior
The current response for /.well-known/openid-configuration does not contain the revocation endpoint.
Context
Some Angular libs, such as angular-auth-oidc-client, expect the revocation endpoint to be present in the response from the OpenID Connect Discovery endpoint.
Thanks for the report @mogTheDev. We'll look at adding this soon.
|
gharchive/issue
| 2022-04-14T07:14:55 |
2025-04-01T04:35:56.423301
|
{
"authors": [
"jgrandja",
"mogTheDev"
],
"repo": "spring-projects/spring-authorization-server",
"url": "https://github.com/spring-projects/spring-authorization-server/issues/687",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
312721006
|
SpEL not supported for containerFactory on KafkaListener annotation ?
I previously had an expression that works fine when setting topics; however, when I recently tried to update the containerFactory to use an expression as well it fails.
@KafkaListener(topics = "#{appArguments.getTopic()}", containerFactory = "#{appArguments.getConsumerTypeFactory()}")
Here topics is resolved but for containerFactory I get:
no KafkaListenerContainerFactory with id '#{appArguments.getConsumerTypeFactory()}' was found in the application context;
where hardcoding works.
Using
org.springframework.boot:spring-boot-gradle-plugin:2.0.0.RELEASE
org.springframework.kafka:spring-kafka:2.1.4.RELEASE
That's correct: it doesn't support SpEL (or even just the property placeholder resolver), simply because it is a bean name, not something resolved at runtime. Not saying that there simply hasn't been such a request, but for consistency with other similar approaches throughout the Spring portfolio, these kinds of attributes take static values: exact bean names.
you can expose your appArguments.getConsumerTypeFactory() as a bean and use its name for the @KafkaListener.
Otherwise consider to use KafkaMessageListenerContainer directly instead of annotation configuration. It really has restricted abilities.
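A minimal sketch of that workaround; the bean name consumerTypeFactory and any runtime selection logic are assumptions carried over from the question:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Configuration
public class ListenerConfig {

    // Expose the factory under a fixed bean name; any runtime choice
    // (String vs. JSON deserializer) happens inside this method.
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> consumerTypeFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        return factory;
    }
}
```

The listener then references the static bean name, while topics can still use SpEL:
@KafkaListener(topics = "#{appArguments.getTopic()}", containerFactory = "consumerTypeFactory")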
I'm fully against support SpEL in bean names attributes in annotations.
@artembilan Great, thanks for the detailed response and suggestion. I wanted a listener on the same topic that could possibly vary the factory, since one case may require a StringDeserializer and another a JSON deserializer.
@artembilan On a side note, is it possible to use @KafkaListener in a way that makes the listener an AcknowledgingMessageListener?
|
gharchive/issue
| 2018-04-09T23:18:41 |
2025-04-01T04:35:56.519918
|
{
"authors": [
"SamD",
"artembilan"
],
"repo": "spring-projects/spring-kafka",
"url": "https://github.com/spring-projects/spring-kafka/issues/644",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2060349669
|
Refactor: Use Environment Variables for URLs and @Autowired for Dependency Injection
Replaced hardcoded URLs "owners/createOrUpdateOwnerForm" and "pets/createOrUpdatePetForm" with environment variables, enhancing flexibility.
Removed the static final type for variables to enable injection, allowing for easier configuration changes.
Replaced constructor injection for OwnerRepository with the @Autowired annotation, reducing lines of code and improving readability.
@gabemagioli Please sign the Contributor License Agreement!
Click here to manually synchronize the status of this Pull Request.
See the FAQ for frequently asked questions.
@gabemagioli Thank you for signing the Contributor License Agreement!
Thanks for the PR.
Replaced hardcoded URLs "owners/createOrUpdateOwnerForm" and "pets/createOrUpdatePetForm" with environment variables, enhancing flexibility.
There's actually no requirement for being able to customize views like this.
Replaced constructor injection for OwnerRepository with the @Autowired annotation, reducing lines of code and improving readability.
Actually, that isn't a best practice. First of all, the field is not final as it should be and it makes testing of the component harder.
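For illustration, a minimal sketch of the constructor-injection style that comment refers to; the controller name is hypothetical:

```java
import org.springframework.samples.petclinic.owner.OwnerRepository;
import org.springframework.stereotype.Controller;

@Controller
class SampleOwnerController {

    // final: the dependency is mandatory and immutable, and a plain unit
    // test can supply a stub without starting a Spring context
    private final OwnerRepository owners;

    // With a single constructor, Spring injects it; no @Autowired needed
    SampleOwnerController(OwnerRepository owners) {
        this.owners = owners;
    }
}
```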
|
gharchive/pull-request
| 2023-12-29T19:23:34 |
2025-04-01T04:35:56.524277
|
{
"authors": [
"gabemagioli",
"pivotal-cla",
"snicoll"
],
"repo": "spring-projects/spring-petclinic",
"url": "https://github.com/spring-projects/spring-petclinic/pull/1428",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
112580341
|
Spring XD java config with jasypt encrypt lib
Hello,
I am planning to use the jasypt encryption lib and read application.properties from an external location outside XD_HOME. Would that be possible?
I have tried some POCs that load properties using a normal Spring PropertyPlaceholderConfigurer, but the placeholder variable is still not set.
In my module config class
```java
@Configuration
@EnableIntegration
public class ModuleConfiguration {

    @Bean
    MessageChannel input() { return new DirectChannel(); }

    @Bean
    MessageChannel output() { return new DirectChannel(); }

    @Autowired
    TweetProcessor tweetProcessor;

    @Bean
    PropertyPlaceholderConfigurer propertyConfigurer() throws Exception {
        PropertyPlaceholderConfigurer propertyConfigurer = new PropertyPlaceholderConfigurer();
        propertyConfigurer.setLocation(new UrlResource("file:/apps/conf/application.properties"));
        return propertyConfigurer;
    }
}
```
And in my processor class
```java
@MessageEndpoint
public class TweetProcessor {

    @Value("${filestorage.location}")
    private String tempFolder;

    @ServiceActivator(inputChannel = "input", outputChannel = "output")
    public Message<?> process(Message<?> message) {
        System.out.println(tempFolder);
        return message;
    }
}
```
I got this error in XD after deploying my stream:
Could not resolve placeholder 'filestorage.location' in string value "${filestorage.location}"
Thanks.
Thank you @garyrussell for the answer in http://stackoverflow.com/questions/33250684/spring-xd-java-config-doesnt-load-xml-resource
@PropertySource works, but I am planning to use the jasypt encryption lib and load the properties using:
```java
@Bean
public EnvironmentStringPBEConfig environmentStringPBEConfig() {
    final EnvironmentStringPBEConfig environmentStringPBEConfig = new EnvironmentStringPBEConfig();
    environmentStringPBEConfig.setAlgorithm("PBEWithMD5AndDES");
    environmentStringPBEConfig.setPasswordEnvName("ENV_PWD");
    return environmentStringPBEConfig;
}

@Bean
public StandardPBEStringEncryptor configurationEncryptor() {
    final StandardPBEStringEncryptor standardPBEStringEncryptor = new StandardPBEStringEncryptor();
    standardPBEStringEncryptor.setConfig(environmentStringPBEConfig());
    return standardPBEStringEncryptor;
}

@Bean
public EncryptablePropertyPlaceholderConfigurer propertyConfigurer() throws Exception {
    final EncryptablePropertyPlaceholderConfigurer propertyConfigurer =
            new EncryptablePropertyPlaceholderConfigurer(configurationEncryptor());
    propertyConfigurer.setLocation(new UrlResource("file:/apps/conf/application.properties"));
    return propertyConfigurer;
}
```
So far I am still getting: Could not resolve placeholder 'filestorage.location' in string value "${filestorage.location}"
Thanks.
Not sure if this will help you, but PropertyPlaceholderConfigurer is BeanFactoryPostProcessor, hence his @Bean method must be specific with static modifier. See this SO question and sample in its answer: http://stackoverflow.com/questions/33149198/spring-integration-spel-issues-with-annotation
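A minimal sketch of that point, reusing the configuration from above; making the BeanFactoryPostProcessor's @Bean method static lets it be created before the rest of the configuration class is processed:

```java
import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.UrlResource;

@Configuration
public class ModuleConfiguration {

    // static: BeanFactoryPostProcessors are instantiated very early,
    // before @Autowired members of this @Configuration are available
    @Bean
    public static PropertyPlaceholderConfigurer propertyConfigurer() throws Exception {
        PropertyPlaceholderConfigurer configurer = new PropertyPlaceholderConfigurer();
        configurer.setLocation(new UrlResource("file:/apps/conf/application.properties"));
        return configurer;
    }
}
```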
Found the solution by using this: https://github.com/ulisesbocchio/jasypt-spring-boot
Using @EnableEncryptableProperties and @PropertySource inside my Java config resolves the property placeholder in my processor with the decrypted value.
|
gharchive/issue
| 2015-10-21T12:47:10 |
2025-04-01T04:35:56.556754
|
{
"authors": [
"artembilan",
"helloworld-2013"
],
"repo": "spring-projects/spring-xd",
"url": "https://github.com/spring-projects/spring-xd/issues/1812",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
256519562
|
Develop an authentication API for the SPA
https://spring.io/blog/2015/01/12/spring-and-angular-js-a-secure-single-page-application
https://spring.io/blog/2015/01/12/the-login-page-angular-js-and-spring-security-part-ii
https://geowarin.github.io/social-login-with-spring.html
@outsideris I've done a bit of work on this.
For the API, GET /api/session will return the currently logged-in user. If the response is {user: null} there is no user; otherwise it will contain the user's name.
Login will go through Facebook: if you send POST /signup/facebook to the server, the server will redirect to Facebook.
After logging in on Facebook, you'll be sent back to the root /. If you then call GET /api/session again, it should contain the name.
For the actual implementation, see the facebook-login branch, or take a look at the posts linked just above this comment; that should make it clear.
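A minimal sketch of the described endpoint, for illustration only (the controller name is hypothetical):

```java
import java.security.Principal;
import java.util.Collections;
import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class SessionController {

    // Returns {"user": null} when nobody is logged in,
    // or the current user's name after the Facebook login.
    @GetMapping("/api/session")
    public Map<String, Object> session(Principal principal) {
        return Collections.singletonMap("user", principal == null ? null : principal.getName());
    }
}
```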
Not sure I'll understand it just by looking... Progress is a bit rough since we have neither the frontend nor the design yet. haha
You're adding Facebook first, right?
Yes, just Facebook login for now. Let me know what you need from the backend.
The meetup creation and detail-view APIs that are already filed as issues.
|
gharchive/issue
| 2017-09-10T15:55:56 |
2025-04-01T04:35:56.560699
|
{
"authors": [
"keesun",
"outsideris"
],
"repo": "spring-sprout/moilago",
"url": "https://github.com/spring-sprout/moilago/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
337294552
|
Unable to infer base url when spring boot application deployed as war and has servlet mapping.
Good day.
I'm trying to make work swagger-ui.html in deployed war and has problem with detecting base url.
Application has one extra property:
server.servlet-path=/api/v4
Here is an example demo repo
When the application is started with the embedded servlet container, everything works fine:
swagger-ui.html is accessible on url http://localhost:8080/springfox/api/v4/swagger-ui.html
and it determines base url as well.
json is accessible on url http://localhost:8080/springfox/api/v4/v2/api-docs?group=category-api
When the application is deployed as a war file into Tomcat 8.5.24 with context 'springfox':
swagger-ui.html is accessible at a different URL, http://localhost:8080/springfox/swagger-ui.html; for some reason it ignores the servlet path from application.properties and says it cannot infer the base URL when it loads.
When I enter the base URL manually, http://localhost:8080/springfox/api/v4, it loads the data fine and shows the API docs.
The question is: am I missing something in the configuration, or is it a bug?
Forgot to mention: Spring Boot version 1.5.14.RELEASE and Springfox version 2.9.2
Thanks for the repro-repo. I'll take a look and let you know
I don't think it's related, but I added providedRuntime('org.springframework.boot:spring-boot-starter-tomcat') to the gradle dependencies section.
After that I set up the application context root as /springfox in Tomcat, and I see the swagger-ui served up at http://localhost:8080/springfox/api/v4/swagger-ui.html just as if you ran it in the embedded container.
Hm. Who could have imagined that!
I don't know what the tomcat-starter is doing there, but it works.
Thanks!
@amelnikoff It's included by default in the spring-boot-starter-web. Setting it as provided runtime makes sure it doesn't have the embedded Tomcat in the dependencies list of libraries.
Seems to me it's time to reread the Spring Boot reference. Last time I read it was 3 years ago )
|
gharchive/issue
| 2018-07-01T14:19:42 |
2025-04-01T04:35:56.584633
|
{
"authors": [
"amelnikoff",
"dilipkrish"
],
"repo": "springfox/springfox",
"url": "https://github.com/springfox/springfox/issues/2526",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
823808387
|
Where is "DefaultTagsProvider" in 3.0?
Thanks for the lib! I wonder where "DefaultTagsProvider" is in 3.0? It exists in 2.x but is gone in 3.0, and I cannot find it in any other package.
not stale
not stale
not stale
not stale
bump
|
gharchive/issue
| 2021-03-07T02:24:51 |
2025-04-01T04:35:56.586387
|
{
"authors": [
"fzyzcjy"
],
"repo": "springfox/springfox",
"url": "https://github.com/springfox/springfox/issues/3756",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
58530134
|
Per class setup and teardown
This is basically an enhancement idea to add per class setup and teardown capabilities to the framework.
I missed it earlier, but it looks like this has already been requested in #40. Closing this one to keep the conversation in one place.
Hi Phil,
So to start modifying the framework, I guess I can use your idea of modifying the @DatabaseSetup annotation so that it takes a parameter and does either per-class or per-method setup.
Do you think this change will affect many classes / have a ripple effect across the framework?
Kind regards,
Bruce.
I think it should be fine as long as the default value is "per method"
Ok, I'll do that. Also, a question: I haven't used GitHub to collaborate on an open source project before. I am using IntelliJ; I selected "checkout from VCS" and then GitHub, and it asked for my account. Should I be able to push an alternate branch to the repository itself, or should I push somewhere else and request a merge?
Regarding the change, I am thinking of adding a boolean isPerClass field that defaults to false to the @DatabaseSetup annotation, then adding afterTestClass and beforeTestClass methods to DbUnitRunner that only act when isPerClass is set to true, and having beforeTestMethod and afterTestMethod not act when the flag is true. Do you think this is a good approach, or might there be a more elegant one?
Thank you very much and kind regards,
Bruce
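A minimal sketch of the proposed attribute, for illustration only (simplified; not the actual library code):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.METHOD })
public @interface DatabaseSetup {

    String[] value();

    // Proposed: when true, the setup runs once in beforeTestClass
    // instead of before every test method; false keeps current behavior.
    boolean isPerClass() default false;
}
```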
Hi Phil,
I have an update. I have started to move things around and added:
```java
@Override
public void beforeTestClass(TestContext testContext) throws Exception {
    runner.beforeTestClass(new DbUnitTestContextAdapter(testContext));
}
```
However, I noticed that the testContext variable at this point has a null connection, which causes the database setup to fail. I am trying to think of a workaround for this issue.
Thanks and kind regards,
Bruce
Hello Phil,
Yet another update: I've managed to do it, and it's working now, but I had to make some changes. I noticed the project is using Java 8, and even though I have JDK 8 installed some parts of the project wouldn't compile for me, so I had to temporarily disable the @Repeatable annotations and switch the compile version to 7 locally. I also broke some unit tests that I'll have to fix. (I also noticed that there are 3 failing unit tests in master.)
Once again, amazing job with the framework; the tests are a lot faster with the initial per-class implementation I have added to them.
I would like to show you my progress nevertheless, as I think it can be improved upon by your observations. Should I push it to my GitHub account, or how can I do this?
Kind regards,
Bruce
Hello Phil,
I hope you are doing OK; I guess you have been very busy with other projects. I am really interested in pushing this feature collaboratively with you or someone in charge of Spring Test DBUnit. Please let me know what the next steps are.
Best,
Bruce
Hi @bruce264. Are you able to push the code that you've written so far to GitHub? You need to fork the repository (the button at the top of the page) and then add your fork as a remote repository and push it.
If you use the git command line and you have a local copy checked out from this repository the steps will be something like:
$ git remote add bruce264 https://github.com/bruce264/spring-test-dbunit.git
$ git checkout -b perclass
$ git push bruce264 perclass
There are some good guides about forking a repo here.
Once you have some code pushed I can pull it locally and take a look.
Also, we should use issue #40 to discuss the issue.
|
gharchive/issue
| 2015-02-23T00:11:33 |
2025-04-01T04:35:56.618562
|
{
"authors": [
"bruce264",
"philwebb"
],
"repo": "springtestdbunit/spring-test-dbunit",
"url": "https://github.com/springtestdbunit/spring-test-dbunit/issues/70",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1814944408
|
v2.0.0
Updates SSX to v2.
[x] Release
Codecov Report
Patch and project coverage have no change.
Comparison is base (c93118a) 79.70% compared to head (4855e48) 79.70%.
Additional details and impacted files
@@ Coverage Diff @@
## main #146 +/- ##
=======================================
Coverage 79.70% 79.70%
=======================================
Files 34 34
Lines 4365 4365
Branches 252 252
=======================================
Hits 3479 3479
Misses 886 886
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
|
gharchive/pull-request
| 2023-07-20T23:37:41 |
2025-04-01T04:35:56.629947
|
{
"authors": [
"codecov-commenter",
"w4ll3"
],
"repo": "spruceid/ssx",
"url": "https://github.com/spruceid/ssx/pull/146",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1364640438
|
REFRESH_TOKEN_AUTH is not working
When calling initiate_auth with REFRESH_TOKEN_AUTH I am getting the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/werkzeug/serving.py", line 335, in run_wsgi
execute(self.server.app)
File "/usr/local/lib/python3.8/site-packages/werkzeug/serving.py", line 322, in execute
application_iter = app(environ, start_response)
File "/usr/local/lib/python3.8/site-packages/moto/moto_server/werkzeug_app.py", line 241, in __call__
return backend_app(environ, start_response)
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2091, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2076, in wsgi_app
response = self.handle_exception(e)
File "/usr/local/lib/python3.8/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1519, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1517, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1503, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/usr/local/lib/python3.8/site-packages/moto/core/utils.py", line 129, in __call__
result = self.callback(request, request.url, dict(request.headers))
File "/usr/local/lib/python3.8/site-packages/moto/core/responses.py", line 217, in dispatch
return cls()._dispatch(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/moto/core/responses.py", line 356, in _dispatch
return self.call_action()
File "/usr/local/lib/python3.8/site-packages/moto/core/responses.py", line 443, in call_action
response = method()
File "/usr/local/lib/python3.8/site-packages/moto/cognitoidp/responses.py", line 558, in initiate_auth
auth_result = region_agnostic_backend.initiate_auth(
File "/usr/local/lib/python3.8/site-packages/moto/cognitoidp/models.py", line 1861, in initiate_auth
return backend.initiate_auth(client_id, auth_flow, auth_parameters)
File "/usr/local/lib/python3.8/site-packages/moto/cognitoidp/models.py", line 1702, in initiate_auth
if client.generate_secret:
AttributeError: 'NoneType' object has no attribute 'generate_secret'
moto==4.0.2
Hi @daka83, can you share a reproducible test case that triggers this error?
I added a pull request, fix is simple https://github.com/spulec/moto/pull/5453
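For reference, a minimal reproduction along the lines the maintainer asked for might look like this (an untested sketch; pool, client and token values are illustrative):
import boto3
from moto import mock_cognitoidp

@mock_cognitoidp
def test_refresh_token_auth():
    client = boto3.client("cognito-idp", region_name="us-east-1")
    pool_id = client.create_user_pool(PoolName="pool")["UserPool"]["Id"]
    client_id = client.create_user_pool_client(
        UserPoolId=pool_id, ClientName="client"
    )["UserPoolClient"]["ClientId"]
    # The crash happened during the client lookup for this flow, before the
    # refresh token itself was ever validated.
    client.initiate_auth(
        ClientId=client_id,
        AuthFlow="REFRESH_TOKEN_AUTH",
        AuthParameters={"REFRESH_TOKEN": "dummy-refresh-token"},
    )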
|
gharchive/issue
| 2022-09-07T13:08:56 |
2025-04-01T04:35:56.638370
|
{
"authors": [
"bblommers",
"daka83"
],
"repo": "spulec/moto",
"url": "https://github.com/spulec/moto/issues/5451",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
391526820
|
Rename nix.plugin.zsh → nix-zsh-completions.plugin.zsh
This fixes the plugin to work with Oh My Zsh's custom plugin discovery
method, i.e. plugins=(…).
Thanks.
I don't use Oh My Zsh so I'm not familiar with the plugin system, so I have a few questions:
What would the install instructions look like after this change?
How would the change affect existing users?
So I took a look at Oh My Zsh, and it looks to me like the only thing this does is change to a more descriptive plugin name; the plugin discovery isn't actually broken as it is. I don't feel that's worth annoying existing users with, tbh (at least not on every shell startup).
For reference, the relevant code from oh my zsh that loads plugins (which should have no problem with the current setup):
# Add all defined plugins to fpath. This must be done
# before running compinit.
for plugin ($plugins); do
if is_plugin $ZSH_CUSTOM $plugin; then
fpath=($ZSH_CUSTOM/plugins/$plugin $fpath)
elif is_plugin $ZSH $plugin; then
fpath=($ZSH/plugins/$plugin $fpath)
fi
done
If there is a way to alert existing users to the change in plugin name when upgrading nix-zsh-completions that might make this more palatable, as I readily agree the plugin name isn't great. Doing it on every shell startup is far too spammy however.
It's not broken per se. But the convention is to have the plugin name the same as the repository name afaict.
It's really just interfering with my ability to do this:
{ lib, fetchgit }:
with builtins;
let
pluginSpecs = map lib.importJSON [
./nix-zsh-completions.json
./zsh-completions.json
./zsh-autosuggestions.json
./zsh-syntax-highlighting.json
];
fetchPlugin = plugin: rec {
name = baseNameOf plugin.url;
value = fetchgit {
inherit name;
inherit (plugin) url rev sha256;
};
};
plugins = map fetchPlugin pluginSpecs;
in
listToAttrs plugins
(I know about config.programs.zsh, but this is for non-NixOS.)
Would you rather we add a symlink the opposite way instead?
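(If the symlink route were taken, it would presumably amount to something like this in the repository root - an illustrative command, not a committed change:
$ ln -s nix.plugin.zsh nix-zsh-completions.plugin.zsh
so that both the old and the new plugin name keep working.)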
|
gharchive/pull-request
| 2018-12-17T01:18:15 |
2025-04-01T04:35:56.642933
|
{
"authors": [
"hedning",
"ma9e"
],
"repo": "spwhitt/nix-zsh-completions",
"url": "https://github.com/spwhitt/nix-zsh-completions/pull/20",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
522763791
|
spyder crashed
What steps will reproduce the problem?
The issue occured when I was about to mark a line in the variable/dataframe inspector.
Please provide more information next time as requested in the issue template.
This was the automatic response generated by Spyder, as sending the report through Spyder didn't work. That was the generic text it created on its own, so I just submitted it as-is; I figured that since the system asks for this kind of information, the devs know what they want ;)
However, I had this problem with a dataframe (125 columns and 64 rows) which I converted by a certain formula into another dataframe (125 columns and 64 rows). While each processing step worked, I couldn't inspect the resulting dataframe in Spyder: at some column (I don't know, let's say 20+) it crashed, or rather it froze. But after trying to slide back to the first columns several times, it worked again - until I reached that certain column again.
The error is reproducible, so restarting and so on didn't change anything either.
|
gharchive/issue
| 2019-11-14T10:24:23 |
2025-04-01T04:35:56.645145
|
{
"authors": [
"DeLaRiva",
"goanpeca"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/10707",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1040264121
|
Auto-suggestions not coming: no suggestions for general functions and local variables
Here are the screenshots
This is because you don't have the right dependencies of Jedi and Parso. So I'm going to close this issue as a duplicate of the one you just opened (#16679).
|
gharchive/issue
| 2021-10-30T18:10:19 |
2025-04-01T04:35:56.646920
|
{
"authors": [
"ccordoba12",
"gmmkmtgk"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/16680",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1241111195
|
ctrl-Q is a very annoying combination
What steps will reproduce the problem?
I have been caught out many times accidentally pressing ctrl-Q, and Spyder exits without asking.
I wonder who ever uses this combination, but if you do insist on having it, would you at least pop up a warning: are you sure you want to exit?
You can change/remove the shortcut yourself, under Preferences -> Keyboard shortcuts and search for "quit".
You can also enable prompt at exit under Preferences -> Application -> Advanced settings and "Prompt when exiting"
I kind of disagree with the attitude. Why are you in such a hurry to close the request without even trying to understand the problem???
Ctrl-Q is very close to ctrl-1, which is used very often. The problem is that the default settings get reset every couple of months with a new version of the editor.
An improvement would be to enable the exit confirmation by default, or to remove ctrl-Q as redundant with alt-F4.
As to Firefox, yes, it is true. But they do care to ask the user by default whether he really wants to close the program.
|
gharchive/issue
| 2022-05-19T03:12:17 |
2025-04-01T04:35:56.649372
|
{
"authors": [
"mikhailo",
"rhkarls"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/17926",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2185188927
|
match not recognized as valid Python
Description
What steps will reproduce the problem?
use the match ... case construction in a script; it has been valid Python since 3.10
Note the red x icon in the left sidebar of the script editor window reporting 'invalid syntax' even though I am running Python 3.11.
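For instance, a minimal file along these lines presumably triggers the warning, since any match statement is a syntax error to the Python 3.9 parser bundled with the standalone app:
def describe(value):
    # valid in Python >= 3.10, flagged as "invalid syntax" by a 3.9-based linter
    match value:
        case 0:
            return "zero"
        case _:
            return "something else"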
Versions
Spyder version: 5.5.2 1754f9ac9 (standalone)
Python version: 3.9.14 64-bit
Qt version: 5.15.11
PyQt5 version: 5.15.10
Operating System: macOS-14.3.1-x86_64-i386-64bit
Dependencies
# Mandatory:
atomicwrites >=1.2.0 : 1.4.1 (OK)
chardet >=2.0.0 : 5.2.0 (OK)
cloudpickle >=0.5.0 : 3.0.0 (OK)
cookiecutter >=1.6.0 : 2.6.0 (OK)
diff_match_patch >=20181111 : 20230430 (OK)
intervaltree >=3.0.2 : 3.1.0 (OK)
IPython >=8.13.0,<9.0.0,!=8.17.1 : 8.18.1 (OK)
jedi >=0.17.2,<0.20.0 : 0.19.1 (OK)
jellyfish >=0.7 : 1.0.3 (OK)
jsonschema >=3.2.0 : 4.21.1 (OK)
keyring >=17.0.0 : 24.3.1 (OK)
nbconvert >=4.0 : 7.16.2 (OK)
numpydoc >=0.6.0 : 1.6.0 (OK)
parso >=0.7.0,<0.9.0 : 0.8.3 (OK)
pexpect >=4.4.0 : 4.9.0 (OK)
pickleshare >=0.4 : 0.7.5 (OK)
psutil >=5.3 : 5.9.8 (OK)
pygments >=2.0 : 2.17.2 (OK)
pylint >=2.5.0,<3.1 : 3.0.4 (OK)
pylint_venv >=3.0.2 : None (OK)
pyls_spyder >=0.4.0 : 0.4.0 (OK)
pylsp >=1.10.0,<1.11.0 : 1.10.0 (OK)
pylsp_black >=2.0.0,<3.0.0 : 2.0.0 (OK)
qdarkstyle >=3.2.0,<3.3.0 : 3.2.3 (OK)
qstylizer >=0.2.2 : 0.2.2 (OK)
qtawesome >=1.2.1,<1.3.0 : 1.2.3 (OK)
qtconsole >=5.5.1,<5.6.0 : 5.5.1 (OK)
qtpy >=2.1.0 : 2.4.1 (OK)
rtree >=0.9.7 : 1.2.0 (OK)
setuptools >=49.6.0 : 69.1.1 (OK)
sphinx >=0.6.6 : 5.1.1 (OK)
spyder_kernels >=2.5.1,<2.6.0 : 2.5.1 (OK)
textdistance >=4.2.0 : 4.6.1 (OK)
three_merge >=0.1.1 : 0.1.1 (OK)
watchdog >=0.10.3 : 4.0.0 (OK)
zmq >=24.0.0 : 25.1.2 (OK)
# Optional:
cython >=0.21 : 3.0.9 (OK)
matplotlib >=3.0.0 : 3.8.3 (OK)
numpy >=1.7 : 1.26.4 (OK)
pandas >=1.1.1 : 2.2.1 (OK)
scipy >=0.17.0 : 1.12.0 (OK)
sympy >=0.7.3 : 1.12 (OK)
Environment
Environment
I did not find any Spyder user settings to specify which minimum Python version to use for Spyder's code analysis feature. That might be an approach to consider?
Hey @alanngnet, thanks for reporting. This is basically a duplicate of issue #21872, which will be fixed in Spyder 6 (to be released in a couple of months). Although that issue was about our Windows installer, the same problem applies to our Mac one as well because it uses Python 3.9 at the moment.
Please read that issue to learn how to work around this problem in the meantime.
|
gharchive/issue
| 2024-03-14T01:15:52 |
2025-04-01T04:35:56.654811
|
{
"authors": [
"alanngnet",
"ccordoba12"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/21890",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
120064058
|
Spyder won't connect to kernel / iPython console after switching interpreter
So I just freshly installed the Spyder dmg from this repository. When I switch to my standard Python interpreter (/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7), however, the console keeps showing the message: Connecting to kernel.
Things I have checked & run:
I can run ipython from terminal
I can use the iPython console in PyCharm using the SAME interpreter
pip install jupyter -U
I can import IPython from the regular 'console' in spyder.
I am currently on OS X el capitan.
Backlink: I had this issue plus another one in #2846, in which I provide a workaround, if that helps.
Thanks, will look into it!
Right now you need to have IPython and PyQt installed in your external interpreter for this to work.
In 3.0 this will be reduced to just having ipykernel installed, and we'll inform our users about it :-)
|
gharchive/issue
| 2015-12-03T00:04:15 |
2025-04-01T04:35:56.657804
|
{
"authors": [
"ccordoba12",
"ewjoachim",
"icam0"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/2844",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
672760760
|
PR: Fix parent of project type
Description of Changes
[ ] Wrote at least one-line docstrings (for any new functions)
[ ] Added unit test(s) covering the changes (if testable)
[ ] Included a screenshot or animation (if affecting the UI, see Licecap)
Issue(s) Resolved
Fixes #13471
Affirmation
By submitting this Pull Request or typing my (user)name below,
I affirm the Developer Certificate of Origin
with respect to all commits and content included in this PR,
and understand I am releasing the same under Spyder's MIT (Expat) license.
I certify the above statement is true and correct: @goanpeca
@nerohmot this should fix the problem. Thanks for the find!
Hello @goanpeca! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
In the file spyder/plugins/projects/plugin.py:
Line 784:80: E501 line too long (80 > 79 characters)
|
gharchive/pull-request
| 2020-08-04T12:39:19 |
2025-04-01T04:35:56.663239
|
{
"authors": [
"goanpeca",
"pep8speaks"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/pull/13472",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2212400049
|
🛑 Client Website - letthechildrensing.org is down
In 6706d12, Client Website - letthechildrensing.org (https://letthechildrensing.org) was down:
HTTP code: 404
Response time: 429 ms
Resolved: Client Website - letthechildrensing.org is back up in 2ec5f06 after 11 minutes.
|
gharchive/issue
| 2024-03-28T05:42:03 |
2025-04-01T04:35:56.676848
|
{
"authors": [
"matthewthowells"
],
"repo": "sqbxmediagroup/status",
"url": "https://github.com/sqbxmediagroup/status/issues/546",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1855555754
|
Missing import for Time types when query_parameter_limit > 0 and wrong null Time type for pgx/v5
Version
1.21.0
What happened?
When query_parameter_limit > 0 and emit_pointers_for_null_types is set to true, sqlc generates a DB-engine-specific time type for the parameter instead of *time.Time:
pgx/v5: func (q *Queries) UpdateLoginTime(ctx context.Context, uid uuid.UUID, lastLogin pgtype.Timestamptz) (User, error) {
I expect type for lastLogin to be *time.Time.
Database schema
CREATE TABLE IF NOT EXISTS users
(
id SERIAL PRIMARY KEY,
first_login timestamp with time zone,
last_login timestamp with time zone
);
SQL queries
-- name: UpdateLoginTime :one
UPDATE users
SET first_login = CASE WHEN first_login IS NULL THEN $2 END,
last_login = $2
WHERE uid = $1
RETURNING *;
Configuration
version: "2"
sql:
- engine: "postgresql"
queries: "storage"
schema: "storage/migrations"
gen:
go:
sql_package: "pgx/v5"
package: "db"
out: "db"
emit_json_tags: true
emit_pointers_for_null_types: true
json_tags_case_style: camel
query_parameter_limit: 5
overrides:
- db_type: "uuid"
go_type: "github.com/gofrs/uuid.UUID"
- go_struct_tag: "json:\"-\""
column: "*.password"
Playground URL
No response
What operating system are you using?
macOS
What database engines are you using?
PostgreSQL
What type of code are you generating?
Go
Same as https://github.com/sqlc-dev/sqlc/issues/2459
This issue will be resolved at https://github.com/sqlc-dev/sqlc/pull/2597.
Oh, I'm sorry, I tried to find an existing issue for that but I couldn't. Is it also going to address the problem with the incorrect pgtype.Timestamptz instead of *time.Time for emit_pointers_for_null_types: true?
Same as #2459
This issue will be resolved at #2597.
I've edited the description to remove issue fixed in the 1.21.0 release and kept the one that still exists.
I don't think query_parameter_limit is related, as I see the same behavior with it set to 0.
I also don't think the value of emit_pointers_for_null_types has any effect on how sqlc determines input parameter types, but I might be wrong about that. And arguably it should I suppose. That could be raised as a new enhancement issue.
I think you can get the behavior you want with an override though. See the playground link below. Note that for the example to work I had to define column types as timestamptz because sqlc apparently doesn't recognize timestamp with time zone as type timestamptz when considering overrides.
https://play.sqlc.dev/p/46304e09afdea98104a6b7c2e4ec0aae63cfb17104714e3fceda8a4fa1e0c457
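For reference, the override in question would be along these lines (a sketch based on the playground above, using sqlc's documented override fields; adjust the db_type matching to your schema):
version: "2"
sql:
  - engine: "postgresql"
    # ...
    gen:
      go:
        overrides:
          # The column must be declared as "timestamptz" for this to match.
          - db_type: "timestamptz"
            go_type:
              import: "time"
              type: "Time"
              pointer: true
            nullable: true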
Just for reference, here are all the types whose SQLDriverPGXV5 logic precedes (and thus overrides) the emit_pointers_for_null_types one:
date
pg_catalog.time
pg_catalog.timestamp
pg_catalog.timestamptz
timestamptz
uuid
interval
pg_catalog.interval
|
gharchive/issue
| 2023-08-17T19:08:42 |
2025-04-01T04:35:56.727376
|
{
"authors": [
"Flamefork",
"andrewmbenton",
"dandee",
"orisano"
],
"repo": "sqlc-dev/sqlc",
"url": "https://github.com/sqlc-dev/sqlc/issues/2630",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
304844124
|
Allow ValidJobOwner to check for multiple job owners
Feature Request
Please allow the ValidJobOwner check to accept a collection of valid values via the agent.validjobowner.name configuration.
Each client in my multi-client setup is given their own distinct job owner account to run their jobs under. This prevents privilege escalation or cross-client access via the executing stored procedure, which is controlled by the client. Therefore I'd like to be able to provide a list of acceptable logins which can be used as job owners, and for the test to fail if a job is owned by someone not in the list.
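For reference, with such a change in place the configuration could presumably be set to an array along these lines (login names are illustrative):
Set-DbcConfig -Name agent.validjobowner.name -Value 'sa', 'ClientA_JobOwner', 'ClientB_JobOwner'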
Please can you test with this branch - https://github.com/sqlcollaborative/dbachecks/tree/agent-job-owners
It has a couple of fixes; Test-DbaJobOwner doesn't report the successes!
@frank687 Let me know by @-ing me here and I will merge if you are happy
@SQLDBAWithABeard It looks like this is still using Test-DbaJobOwner.
PS> invoke-dbccheck -SqlInstance XXX -Check ValidJobOwner
Executing all tests in '.' with Tags ValidJobOwner
Executing script C:\Users\frank.henninger\Documents\GitHub\dbachecks\checks\Agent.Tests.ps1
Describing Valid Job Owner
Context Testing job owners on BuckeyeDev
[-] Error occurred in Context block 1.12s
PSInvalidCastException: Cannot convert value to type System.String.
ArgumentTransformationMetadataException: Cannot convert value to type System.String.
ParameterBindingArgumentTransformationException: Cannot process argument transformation on parameter 'Login'. Cannot convert value to type System.String.
at <ScriptBlock>, C:\Users\frank.henninger\Documents\GitHub\dbachecks\checks\Agent.Tests.ps1: line 85
at DescribeImpl, C:\Program Files\WindowsPowerShell\Modules\Pester\4.3.1\Functions\Describe.ps1: line 161
Here's the relevant section from the test.
Describe "Valid Job Owner" -Tags ValidJobOwner, $filename {
$targetowner = Get-DbcConfigValue agent.validjobowner.name
@(Get-SqlInstance).ForEach{
Context "Testing job owners on $psitem" {
@(Test-DbaJobOwner -SqlInstance $psitem -Login $targetowner -EnableException:$false).ForEach{
It "$($psitem.Job) owner should be in this list $targetowner on $($psitem.Server)" {
$psitem.CurrentOwner | Should -BeIn $psitem.TargetOwner -Because "The account that is the job owner is not what was expected"
}
}
}
}
}
@SQLDBAWithABeard This version works: (Sorry, I'm not on this project so I can't just update the code.)
Describe "Valid Job Owner" -Tags ValidJobOwner, $filename {
$targetowner = Get-DbcConfigValue agent.validjobowner.name
@(Get-SqlInstance).ForEach{
Context "Testing job owners on $psitem" {
@(Get-DbaAgentJob -SqlInstance $psitem -EnableException:$false).ForEach{
It "$($psitem.Name) owner $($psitem.OwnerLoginName) should be in this list $targetowner on $($psitem.Server)" {
$psitem.OwnerLoginName | Should -BeIn $TargetOwner -Because "The account that is the job owner is not what was expected"
}
}
}
}
}
Hmmm, what have I done here ? :-)
I will have to check when I get back; that code is almost what I wrote, but you need to specify that $targetowner is an array.
I will update later and ask you to test, apologies.
Brilliant, it was this commit here: c1e2319a119dae48999af4251071a89293d923f8
It reverted my changes, but you can see the code.
If that works I will add it later (with the right code!)
@SQLDBAWithABeard Yes that worked
[string[]]$targetowner = Get-DbcConfigValue agent.validjobowner.name
I have updated the branch again so now it shows the correct code!! (also enabled some GitHub training :-)
Validated
|
gharchive/issue
| 2018-03-13T16:40:41 |
2025-04-01T04:35:56.733794
|
{
"authors": [
"SQLDBAWithABeard",
"frank687"
],
"repo": "sqlcollaborative/dbachecks",
"url": "https://github.com/sqlcollaborative/dbachecks/issues/385",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
393556432
|
Get-DbaBuildReference fix piping issue #4797
Type of Change
[X] Bug fix (non-breaking change, fixes #)
[ ] New feature (non-breaking change, adds functionality)
[ ] Breaking change (effects multiple commands or functionality)
[X] Ran manual Pester test and it has passed (`.\tests\manual.pester.ps1`)
[ ] Adding code coverage to existing functionality
[ ] Pester test is included
[ ] If a new file reference is added for a test, has it been added to github.com/sqlcollaborative/appveyor-lab ?
[ ] Nunit test is included
[ ] Documentation
[ ] Build system
Purpose
Parameters don't seem to be bound within the begin section of a function when piping. Moved the whole 'region verifying parameters' section from begin to process and it works.
Approach
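A quick illustration of the underlying PowerShell behavior (a hypothetical function, not dbatools code): parameters bound from the pipeline are only populated in process, never in begin, so any validation placed in begin sees empty values when input is piped.
function Test-Binding {
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipeline)]
        [string]$Name
    )
    begin   { Write-Host "begin:   Name = '$Name'" }   # empty when input is piped
    process { Write-Host "process: Name = '$Name'" }   # bound once per pipeline item
}
'2008R2', '2017' | Test-Binding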
Screenshots
great move, thank you! 💯
|
gharchive/pull-request
| 2018-12-21T18:48:48 |
2025-04-01T04:35:56.738718
|
{
"authors": [
"jpomfret",
"potatoqualitee"
],
"repo": "sqlcollaborative/dbatools",
"url": "https://github.com/sqlcollaborative/dbatools/pull/4869",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
774949343
|
Export-DbaDacPackage - add the database name to the generated filename
Updated the Get-ExportFilePath to support -DatabaseName
Updated Export-DbaDacPackage to include the db name in the filename
Added integration tests to cover the code changes
Type of Change
[x] Bug fix (non-breaking change, fixes #7038 )
[ ] New feature (non-breaking change, adds functionality, fixes # )
[ ] Breaking change (effects multiple commands or functionality, fixes # )
[x] Ran manual Pester test and it has passed (`.\tests\manual.pester.ps1`)
[x] Adding code coverage to existing functionality
[x] Pester test is included
[ ] If a new file reference is added for a test, has it been added to github.com/sqlcollaborative/appveyor-lab ?
[ ] Nunit test is included
[ ] Documentation
[ ] Build system
Purpose
Ensure that the generated filenames will contain the db name (when -FilePath is not used).
Approach
Added -DatabaseName to Get-ExportFilePath to solve the issue here and also to make it available for other export commands.
Commands to test
See the examples and integration tests.
@wsmelton , thanks for the feedback. I'll make those changes.
@wsmelton, let me know if any additional changes are needed.
|
gharchive/pull-request
| 2020-12-26T23:42:49 |
2025-04-01T04:35:56.744351
|
{
"authors": [
"lancasteradam"
],
"repo": "sqlcollaborative/dbatools",
"url": "https://github.com/sqlcollaborative/dbatools/pull/7054",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
132340709
|
Robolectric tests fail - Unable to extract the trust manager v3.1.1
Robolectric tests fail with v3.1.1
java.lang.IllegalStateException: Unable to extract the trust manager on okhttp3.internal.Platform$Android@56bc3fac, sslSocketFactory is class sun.security.ssl.SSLSocketFactoryImpl
at okhttp3.OkHttpClient.<init>(OkHttpClient.java:186)
at okhttp3.OkHttpClient.<init>(OkHttpClient.java:151)
Related: #2323
I got this on an actual Android 5.1.1 device with OkHttp v3.1.1
java.lang.IllegalStateException: Unable to extract the trust manager on okhttp3.internal.Platform$Android@181dca9b, sslSocketFactory is class my.packagename.MySSLSocketFactory
at okhttp3.OkHttpClient.(OkHttpClient.java:186)
at okhttp3.OkHttpClient.(OkHttpClient.java:60)
@Jeff11 frustratingly, the Java SocketFactory API doesn't give us a sane mechanism to do proper certificate pinning, and so we hack in tons of nasty reflection. I'm not sure how to proceed with MySSLSocketFactory . . . I'm curious - why do you have a custom SSL socket factory?
We have our own certificate. The custom ssl socket factory uses a KeyStore with that certificate and falls back to the default KeyStore for other connections.
@Jeff11 just curious - what happens if you rename the field that contains the fallback SSL socket factory to delegate ? We have a special, disgusting hack that’ll make that work possibly, even if that field is private.
@swankjesse It's working. :D
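(For anyone landing here later, the rename amounts to wrapping the real factory in something like the sketch below - an illustrative class, not the exact code from this thread; the reflective hack looks for a field literally named delegate.)
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import javax.net.ssl.SSLSocketFactory;

public class DelegatingSSLSocketFactory extends SSLSocketFactory {
    private final SSLSocketFactory delegate; // the field name the reflection expects

    public DelegatingSSLSocketFactory(SSLSocketFactory delegate) {
        this.delegate = delegate;
    }

    @Override public String[] getDefaultCipherSuites() {
        return delegate.getDefaultCipherSuites();
    }
    @Override public String[] getSupportedCipherSuites() {
        return delegate.getSupportedCipherSuites();
    }
    @Override public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException {
        return delegate.createSocket(s, host, port, autoClose);
    }
    @Override public Socket createSocket(String host, int port) throws IOException {
        return delegate.createSocket(host, port);
    }
    @Override public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException {
        return delegate.createSocket(host, port, localHost, localPort);
    }
    @Override public Socket createSocket(InetAddress host, int port) throws IOException {
        return delegate.createSocket(host, port);
    }
    @Override public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
        return delegate.createSocket(address, port, localAddress, localPort);
    }
}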
Just to clarify - fixed in v3.1.2
I am getting this same error now running a test on an emulator, using OkHttp 3.1.1 and Robolectric 3.0. What are my options to solve this?
|
gharchive/issue
| 2016-02-09T06:16:53 |
2025-04-01T04:35:56.787731
|
{
"authors": [
"IgorGanapolsky",
"Jeff11",
"shubhamchaudhary",
"swankjesse"
],
"repo": "square/okhttp",
"url": "https://github.com/square/okhttp/issues/2327",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
212358332
|
upload image ,content-size 0
The image file is approximately 17 MB.
I used MultipartBody to build the request.
Here is the raw HTTP request info:
--04e8b019-ae62-4044-97ff-d8c13c8ae5f6
Content-Disposition: form-data; name="uploadFile"; filename="upload.jpg"
Content-Type: image/jpeg
Content-Length: 0
--04e8b019-ae62-4044-97ff-d8c13c8ae5f6--
The Content-Length is 0; how can I fix this?
Test case please.
Sorry, it's my fault.
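(For reference, a file-backed part that carries a real Content-Length looks something like this in OkHttp 3.x; class and file names are illustrative:)
import java.io.File;
import okhttp3.MediaType;
import okhttp3.MultipartBody;
import okhttp3.RequestBody;

public class UploadExample {
    static MultipartBody buildBody(File image) {
        // Content-Length is taken from the file, so it is only 0 if the file is empty
        RequestBody fileBody = RequestBody.create(MediaType.parse("image/jpeg"), image);
        return new MultipartBody.Builder()
                .setType(MultipartBody.FORM)
                .addFormDataPart("uploadFile", "upload.jpg", fileBody)
                .build();
    }
}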
|
gharchive/issue
| 2017-03-07T08:06:24 |
2025-04-01T04:35:56.789946
|
{
"authors": [
"JokerHYC",
"swankjesse"
],
"repo": "square/okhttp",
"url": "https://github.com/square/okhttp/issues/3209",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
806282256
|
Attempt to invoke virtual method 'okhttp3.HttpUrl okhttp3.Request.url()' on a null object reference
OkHttp is giving me an "Attempt to invoke virtual method 'okhttp3.HttpUrl okhttp3.Request.url()' on a null object reference" error after upgrading from Fabric to Firebase Crashlytics. Please find the changes below.
Note: OkHttp works fine without the Firebase Crashlytics changes.
Android studio 4.1
Java 8
Android Gradle plugin 4.1.1
Gradle Version 6.5.1
okhttp : "3.14.7"
retrofit : "2.4.0"
glide : "4.1.1"
gson : "2.8.5"
In project / build.gradle
dependencies {
classpath 'com.android.tools.build:gradle:4.1.1'
classpath 'com.google.gms:google-services:4.3.4'
classpath 'me.tatarka:gradle-retrolambda:3.7.0'
classpath 'com.google.firebase:firebase-plugins:1.1.1'
classpath 'com.google.firebase:perf-plugin:1.3.1'
classpath 'com.google.firebase:firebase-crashlytics-gradle:2.4.1'
}
In app/build.gradle
apply plugin: 'com.google.gms.google-services'
apply plugin: 'com.google.firebase.crashlytics'
implementation platform('com.google.firebase:firebase-bom:26.4.0')
implementation 'com.google.firebase:firebase-crashlytics'
implementation 'com.google.firebase:firebase-analytics'
Please advise on how to fix the issue.
Thanks for your question. This issue tracker is only for bug reports with test cases and feature requests. Please ask usage questions on Stack Overflow.
https://stackoverflow.com/questions/tagged/okhttp
|
gharchive/issue
| 2021-02-11T10:59:21 |
2025-04-01T04:35:56.795153
|
{
"authors": [
"HemaMadangopal",
"swankjesse"
],
"repo": "square/okhttp",
"url": "https://github.com/square/okhttp/issues/6559",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
157802829
|
Add phantomjs to gemfile (used in acceptance tests)
@sheepgoesbaa
looks good to me
|
gharchive/pull-request
| 2016-06-01T01:05:22 |
2025-04-01T04:35:56.800759
|
{
"authors": [
"SheepGoesBaa",
"michaelfinch"
],
"repo": "square/shift",
"url": "https://github.com/square/shift/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
451102127
|
sqldelight for Kotlin multiplatform JS target
Are there any plans for a JS target for Kotlin multiplatform? I see there is a JS runtime available but no Driver.
There's no canonical implementation for JS. There's some WASM and emscripten compilations of SQLite and some pure-JS implementations. Do you have a target database implementation you're looking for?
Hmm, I'm an Android dev just getting started with Kotlin multiplatform. The bulk of my experience so far is with Room. Speaking with our web devs, it looks like one of these would be a good fit: https://github.com/kripken/sql.js or http://jsstore.net/
Writing a driver for either of those should be pretty straightforward so I'll give that a try.
|
gharchive/issue
| 2019-06-01T16:29:27 |
2025-04-01T04:35:56.802792
|
{
"authors": [
"JakeWharton",
"chrisestesaa01"
],
"repo": "square/sqldelight",
"url": "https://github.com/square/sqldelight/issues/1350",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1013341722
|
feat: Beeing able to enable autostart
Allows enabling autostart. May need testing, as this is my first attempt at working on a Terraform module.
👍 LGTM, nice addition
|
gharchive/pull-request
| 2021-10-01T13:21:25 |
2025-04-01T04:35:56.826304
|
{
"authors": [
"srb3",
"tuxpeople"
],
"repo": "srb3/terraform-libvirt-domain",
"url": "https://github.com/srb3/terraform-libvirt-domain/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
373499263
|
HOTFIX: fix typo in deployment provider name
https://github.com/travis-ci/travis.rb#lint did not catch this one 🤦♂️
Merging as a hotfix
|
gharchive/pull-request
| 2018-10-24T14:01:26 |
2025-04-01T04:35:56.827364
|
{
"authors": [
"bzz"
],
"repo": "src-d/enry",
"url": "https://github.com/src-d/enry/pull/172",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
423708691
|
Regression: panic on query using REGEXP
time="2019-03-21T12:34:39Z" level=info msg="audit trail" action=authorization address="127.0.0.1:60054" connection_id=1 permission=read pid=1 query="SELECT f.repository_id, f.blob_hash, f.commit_hash, f.file_path FROM (\n SELECT blob_hash, repository_id\n FROM blobs\n WHERE NOT IS_BINARY(blob_content) AND (\n blob_content REGEXP '(?i)facebook.*[\\'\\\\\"][0-9a-f]{32}[\\'\\\\\"]'\n OR blob_content REGEXP '(?i)twitter.*[\\'\\\\\"][0-9a-zA-Z]{35,44}[\\'\\\\\"]'\n OR blob_content REGEXP '(?i)github.*[\\'\\\\\"][0-9a-zA-Z]{35,40}[\\'\\\\\"]'\n OR blob_content REGEXP 'AKIA[0-9A-Z]{16}'\n OR blob_content REGEXP '(?i)heroku.*[0-9A-F]{8}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{12}'\n OR blob_content REGEXP '.*-----BEGIN ((RSA|DSA|OPENSSH|SSH2|EC) )?PRIVATE KEY-----.*'\n )\n) h INNER JOIN commit_files f ON h.blob_hash = f.blob_hash\n AND h.repository_id = f.repository_id\n AND f.file_path NOT REGEXP '^vendor.*'\n" success=true system=audit user=root
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x2 addr=0x7fe348ffa000 pc=0x7fe3eeaa6e2f]
runtime stack:
runtime.throw(0x127fd8c, 0x2a)
/usr/local/go/src/runtime/panic.go:616 +0x81
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:372 +0x28e
goroutine 53 [syscall]:
runtime.cgocall(0xeaada0, 0xc429d69770, 0x127d59c)
/usr/local/go/src/runtime/cgocall.go:128 +0x64 fp=0xc429d69728 sp=0xc429d696f0 pc=0x403a94
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma._Cfunc_SearchOnigRegex(0xc42cff0600, 0x12d0, 0x0, 0x7fe3b0000df0, 0x7fe3b0001770, 0x7fe3b0001730, 0x0, 0x0, 0x0, 0x0)
_cgo_gotypes.go:285 +0x50 fp=0xc429d69770 sp=0xc429d69728 pc=0x958bc0
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma.(*Regexp).match.func1(0xc42cff0600, 0x12d0, 0xc400000000, 0x7fe3b0000df0, 0x7fe3b0001770, 0x7fe3b0001730, 0x0, 0x0, 0x0, 0xc42cff0600)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma/regex.go:209 +0x144 fp=0xc429d697d0 sp=0xc429d69770 pc=0x95eb14
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma.(*Regexp).match(0xc427c0a230, 0xc42cff0600, 0x12d0, 0x1300, 0x12d0, 0x0, 0xc42cff0600)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma/regex.go:209 +0x9e fp=0xc429d69830 sp=0xc429d697d0 pc=0x959fae
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma.(*Regexp).Match(0xc427c0a230, 0xc42cff0600, 0x12d0, 0x1300, 0x12d0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma/regex.go:444 +0x57 fp=0xc429d69878 sp=0xc429d69830 pc=0x95c397
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma.(*Regexp).MatchString(0xc427c0a230, 0xc42cfef300, 0x12d0, 0x12d0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma/regex.go:449 +0x6a fp=0xc429d698b8 sp=0xc429d69878 pc=0x95c42a
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/internal/regex.(*Oniguruma).Match(0xc42010a3d0, 0xc42cfef300, 0x12d0, 0x1454360)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/internal/regex/regex_oniguruma.go:14 +0x42 fp=0xc429d698e8 sp=0xc429d698b8 pc=0x95f5f2
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Regexp).compareRegexp(0xc420dd6240, 0xc420e2c0f0, 0xc42c8704c0, 0x4, 0x4, 0x4, 0xf5f060, 0x2087040, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/comparison.go:271 +0x232 fp=0xc429d699b0 sp=0xc429d698e8 pc=0x9680d2
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Regexp).Eval(0xc420dd6240, 0xc420e2c0f0, 0xc42c8704c0, 0x4, 0x4, 0xf5f060, 0x2087040, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/comparison.go:206 +0x1f0 fp=0xc429d69a18 sp=0xc429d699b0 pc=0x967d80
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Or).Eval(0xc42000c780, 0xc420e2c0f0, 0xc42c8704c0, 0x4, 0x4, 0x4, 0xbec001, 0xc42b04e000, 0x12d0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/logic.go:116 +0xdd fp=0xc429d69a78 sp=0xc429d69a18 pc=0x96e9dd
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Or).Eval(0xc42000c7c0, 0xc420e2c0f0, 0xc42c8704c0, 0x4, 0x4, 0xc42036a700, 0x0, 0x0, 0xc42036a720)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/logic.go:107 +0x62 fp=0xc429d69ad8 sp=0xc429d69a78 pc=0x96e962
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Or).Eval(0xc42000c800, 0xc420e2c0f0, 0xc42c8704c0, 0x4, 0x4, 0x4, 0xf5f060, 0x2087040, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/logic.go:107 +0x62 fp=0xc429d69b38 sp=0xc429d69ad8 pc=0x96e962
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Or).Eval(0xc42000c840, 0xc420e2c0f0, 0xc42c8704c0, 0x4, 0x4, 0xf5f060, 0x2087041, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/logic.go:107 +0x62 fp=0xc429d69b98 sp=0xc429d69b38 pc=0x96e962
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*And).Eval(0xc420e021e0, 0xc420e2c0f0, 0xc42c8704c0, 0x4, 0x4, 0x0, 0x2087040, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/logic.go:55 +0xdd fp=0xc429d69bf8 sp=0xc429d69b98 pc=0x96e4ed
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*FilterIter).Next(0xc420e0c1b0, 0x8d25baa9, 0x20f0800, 0x0, 0x3, 0x3)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/filter.go:110 +0x97 fp=0xc429d69c68 sp=0xc429d69bf8 pc=0x9821a7
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420e2c140, 0xc422145ee8, 0xc422145f7e, 0x489d36, 0xc422145f78, 0xc41547cb4e)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d fp=0xc429d69ce8 sp=0xc429d69c68 pc=0x75f30d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*trackedRowIter).Next(0xc420e02200, 0x18, 0x44a0e8, 0x3f5d883876c02, 0xc41547dad2, 0x1547dad222145ee8)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/process.go:145 +0x37 fp=0xc429d69d38 sp=0xc429d69ce8 pc=0x98bd27
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*iter).Next(0xc420e02220, 0x8d25b89e, 0x20f0800, 0x4d88cc, 0xc4202eac40, 0xc4200f04e0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/project.go:129 +0x38 fp=0xc429d69da8 sp=0xc429d69d38 pc=0x98dc28
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420e2c190, 0x2, 0x0, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d fp=0xc429d69e28 sp=0xc429d69da8 pc=0x75f30d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).iterPartition(0xc4203bb3e0, 0x1455180, 0xc420dea000)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:249 +0x286 fp=0xc429d69f98 sp=0xc429d69e28 pc=0x981066
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start.func1(0xc4203bb3e0, 0xc42047d500, 0x1455180, 0xc420dea000)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:170 +0x3f fp=0xc429d69fc0 sp=0xc429d69f98 pc=0x99a6ff
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc429d69fc8 sp=0xc429d69fc0 pc=0x458821
created by github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:169 +0x110
goroutine 1 [IO wait]:
internal/poll.runtime_pollWait(0x7fe3ec942fe0, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc4201f5718, 0x72, 0xc420273c00, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc4201f5718, 0xffffffffffffff00, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Accept(0xc4201f5700, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:372 +0x1a8
net.(*netFD).accept(0xc4201f5700, 0xc42010b108, 0x44a0e8, 0x3f5d831d51c7c)
/usr/local/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc42010b0f8, 0x454f50, 0xc420cd1870, 0xc420cd1878)
/usr/local/go/src/net/tcpsock_posix.go:136 +0x2e
net.(*TCPListener).Accept(0xc42010b0f8, 0x129eb90, 0xc4201f5780, 0x146f5e0, 0xc42010b108)
/usr/local/go/src/net/tcpsock.go:259 +0x49
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-vitess.v1/mysql.(*Listener).Accept(0xc4201f5780)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-vitess.v1/mysql/server.go:224 +0x10e
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/server.(*Server).Start(0xc42010b100, 0x25, 0xc420cd1ac0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/server/server.go:69 +0x2e
github.com/src-d/gitbase/cmd/gitbase/command.(*Server).Execute(0xc4204361e0, 0xc420391310, 0x1, 0x5, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/cmd/gitbase/command/server.go:198 +0x76f
github.com/src-d/gitbase/vendor/github.com/jessevdk/go-flags.(*Parser).ParseArgs(0xc4203ba5a0, 0xc420110010, 0x5, 0x5, 0x1, 0x2, 0xc4202025a0, 0xc4201f5280, 0xc4201f50c8)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/jessevdk/go-flags/parser.go:316 +0x80b
github.com/src-d/gitbase/vendor/github.com/jessevdk/go-flags.(*Parser).Parse(0xc4203ba5a0, 0x1187528, 0x7, 0x1259f79, 0x1c, 0x1259f79)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/jessevdk/go-flags/parser.go:186 +0x71
main.main()
/tmp/regression-207397520/src/github.com/src-d/gitbase/cmd/gitbase/main.go:52 +0x3c3
goroutine 19 [syscall]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:28 +0x41
goroutine 7 [runnable]:
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).Next(0xc420ee24e0, 0xc4204e6000, 0x5, 0xc422145748, 0x96e472, 0xc420de0960)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:269 +0x107
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*innerJoinIter).loadRight(0xc420203290, 0x0, 0x0, 0x7, 0x7, 0xf5f060, 0x2087040)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/innerjoin.go:298 +0x199
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*innerJoinIter).Next(0xc420203290, 0x3bab5303, 0x20f0800, 0xc, 0x4, 0x3)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/innerjoin.go:334 +0x59
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420dd8eb0, 0x90, 0x44a0e8, 0x3f5d8320d0475, 0xc43b030750, 0x3b0307502049cda8)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*iter).Next(0xc420ddbca0, 0x3bab5111, 0x20f0800, 0x10aebc0, 0xc420dc5920, 0x412ce8)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/project.go:129 +0x38
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420dd8f00, 0x4, 0xc420dc5990, 0x412ce8, 0x50, 0x10aebc0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*trackedRowIter).Next(0xc420ddbcc0, 0xc420dd9090, 0x4, 0xc420ddbd00, 0x4, 0x4)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/process.go:145 +0x37
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/server.(*Handler).ComQuery(0xc42036a080, 0xc420cd51e0, 0xc420d1a000, 0x31a, 0xc420d060f0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/server/handler.go:119 +0x2b2
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-vitess.v1/mysql.(*Conn).handleNextCommand(0xc420cd51e0, 0x1465ac0, 0xc42036a080, 0x8f457, 0x3b736918)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-vitess.v1/mysql/conn.go:730 +0x135a
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-vitess.v1/mysql.(*Listener).handle(0xc4201f5780, 0x146f5e0, 0xc42010b108, 0x1, 0xbf1cff17facb1f06, 0x3b736918, 0x20f0800)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-vitess.v1/mysql/server.go:430 +0xec7
created by github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-vitess.v1/mysql.(*Listener).Accept
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-vitess.v1/mysql/server.go:238 +0xf4
goroutine 8 [semacquire]:
sync.runtime_Semacquire(0xc42047d50c)
/usr/local/go/src/runtime/sema.go:56 +0x39
sync.(*WaitGroup).Wait(0xc42047d500)
/usr/local/go/src/sync/waitgroup.go:129 +0x72
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start(0xc4203bb3e0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:163 +0x264
created by github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).Next
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:266 +0x23d
goroutine 11 [semacquire]:
sync.runtime_SemacquireMutex(0xc420338c54, 0xc4200fa000)
/usr/local/go/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0xc420338c50)
/usr/local/go/src/sync/mutex.go:134 +0x108
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/cache.(*ObjectLRU).Put(0xc420338c30, 0x146be60, 0xc427d5db00)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/cache/object_lru.go:36 +0x51
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Packfile).cachePut(0xc420dfc280, 0x146be60, 0xc427d5db00)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/packfile.go:361 +0x4d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Packfile).fillRegularObjectContent(0xc420dfc280, 0x146be60, 0xc427d5db00, 0x0, 0xc42d60b7e0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/packfile.go:291 +0xd1
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Packfile).getNextObject(0xc420dfc280, 0xc42b123b00, 0xc42b123b00, 0x0, 0x0, 0x129f680)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/packfile.go:268 +0x1cc
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Packfile).objectAtOffset(0xc420dfc280, 0x2101d3, 0xd986fd04716d6035, 0xd986fd04a668aab2, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/packfile.go:201 +0x520
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Packfile).GetByOffset(0xc420dfc280, 0x2101d3, 0x0, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/packfile.go:92 +0x8b
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*objectIter).Next(0xc422a48540, 0x4784b39603302000, 0x83939da4bf7c56a4, 0x30e79de9c7ec48da, 0x1473360)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/packfile.go:440 +0x6a
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem.(*packfileIter).Next(0xc421d50000, 0xc7ec48da83939da4, 0xc430e79de9, 0x14606c0, 0xc421d50000)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem/object.go:632 +0x38
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem.(*lazyPackfilesIter).Next(0xc420dd4000, 0x8ef49d, 0x1051ba0, 0xc4203e4040, 0xc420319f10)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem/object.go:544 +0x56
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/storer.(*MultiEncodedObjectIter).Next(0xc4203e4020, 0xc420d11b40, 0xc148d1, 0xc420de10e0, 0xc4203e4040)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/storer/object.go:237 +0x47
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/object.(*BlobIter).Next(0xc4203e4040, 0x0, 0x0, 0x1456d01)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/object/blob.go:115 +0x37
github.com/src-d/gitbase.(*blobRowIter).next(0xc420ddfb40, 0x2, 0x18, 0x18, 0xc420ddbf20, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/blobs.go:225 +0x40
github.com/src-d/gitbase.(*blobRowIter).Next(0xc420ddfb40, 0x0, 0x44a0e8, 0x3f5d8320fb48d, 0xc43b05b782, 0x3b05b782200bbc40)
/tmp/regression-207397520/src/github.com/src-d/gitbase/blobs.go:191 +0x7d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*FilterIter).Next(0xc420de11a0, 0x3bae0129, 0x20f0800, 0x0, 0x3, 0x3)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/filter.go:105 +0x38
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420dd91d0, 0xc4200bbee8, 0xc4200bbf7e, 0x0, 0xc4200bbf78, 0x98165f)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*trackedRowIter).Next(0xc420ddbf20, 0x10d59a0, 0x44a0e8, 0x3f5d8320fb33c, 0xc43b05b5bf, 0x3b05b5bf200bbee8)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/process.go:145 +0x37
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*iter).Next(0xc420ddbf40, 0x3badffd8, 0x20f0800, 0x4d88cc, 0xc4202eac40, 0xc4200f04e0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/project.go:129 +0x38
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420dd9220, 0x2, 0x0, 0x0, 0xc4203bb3e0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).iterPartition(0xc4203bb3e0, 0x1455180, 0xc420de27b0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:249 +0x286
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start.func1(0xc4203bb3e0, 0xc42047d500, 0x1455180, 0xc420de27b0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:170 +0x3f
created by github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:169 +0x110
goroutine 54 [runnable]:
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma._Cfunc_NewOnigRegex(0x7fe3a0002680, 0x28, 0xc42c886150, 0xc42c886158, 0xc42c886160, 0xc42c886168, 0xc42c886170, 0xc400000000)
_cgo_gotypes.go:265 +0x4d
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma.NewRegexp.func2(0x7fe3a0002680, 0x28, 0xc42c886150, 0xc42c886158, 0xc42c886160, 0xc42c886168, 0xc42c886170, 0x1)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma/regex.go:57 +0x2ca
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma.NewRegexp(0xc420103500, 0x28, 0x0, 0xc42c886140, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma/regex.go:57 +0x14b
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma.Compile(0xc420103500, 0x28, 0x0, 0xc4201e0c30, 0xc427c54340)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma/regex.go:75 +0x3e
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/internal/regex.NewOniguruma(0xc420103500, 0x28, 0x11a388c, 0x9, 0xc4201e0cb8, 0xc42b548801, 0x8, 0x10)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/internal/regex/regex_oniguruma.go:25 +0x39
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/internal/regex.New(0x11a388c, 0x9, 0xc420103500, 0x28, 0x2114900, 0x0, 0x1f, 0x780, 0x20, 0xc4268af000)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/internal/regex/regex.go:73 +0x80
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Regexp).compareRegexp.func1(0xc42036a000, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/comparison.go:254 +0x78
sync.(*Pool).Get(0xc42036a000, 0xc427c54340, 0xf6b760)
/usr/local/go/src/sync/pool.go:151 +0xab
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Regexp).compareRegexp(0xc420dd6240, 0xc420e100a0, 0xc427c64a40, 0x4, 0x4, 0x4, 0xf5f060, 0x2087040, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/comparison.go:263 +0x1c1
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Regexp).Eval(0xc420dd6240, 0xc420e100a0, 0xc427c64a40, 0x4, 0x4, 0xf5f060, 0x2087040, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/comparison.go:206 +0x1f0
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Or).Eval(0xc42000c780, 0xc420e100a0, 0xc427c64a40, 0x4, 0x4, 0x4, 0xbec001, 0xc42d3bf000, 0x748)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/logic.go:116 +0xdd
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Or).Eval(0xc42000c7c0, 0xc420e100a0, 0xc427c64a40, 0x4, 0x4, 0xc427c54300, 0x0, 0x0, 0xc427c54340)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/logic.go:107 +0x62
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Or).Eval(0xc42000c800, 0xc420e100a0, 0xc427c64a40, 0x4, 0x4, 0x4, 0xf5f060, 0x2087040, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/logic.go:107 +0x62
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Or).Eval(0xc42000c840, 0xc420e100a0, 0xc427c64a40, 0x4, 0x4, 0xf5f060, 0x2087041, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/logic.go:107 +0x62
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*And).Eval(0xc420e001e0, 0xc420e100a0, 0xc427c64a40, 0x4, 0x4, 0x0, 0x2087040, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/logic.go:55 +0xdd
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*FilterIter).Next(0xc420e041b0, 0x3bb37b99, 0x20f0800, 0x0, 0x3, 0x3)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/filter.go:110 +0x97
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420e100f0, 0xc420defee8, 0xc420deff7e, 0x0, 0xc420deff78, 0x98165f)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*trackedRowIter).Next(0xc420e00200, 0x10d59a0, 0x44a0e8, 0x3f5d832152cbf, 0xc43b0b2f42, 0x3b0b2f4220defee8)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/process.go:145 +0x37
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*iter).Next(0xc420e00220, 0x3bb3795b, 0x20f0800, 0x4d88cc, 0xc4202eac40, 0xc4200f04e0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/project.go:129 +0x38
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420e10140, 0x2, 0x0, 0x0, 0xc4203bb3e0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).iterPartition(0xc4203bb3e0, 0x1455180, 0xc420dea010)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:249 +0x286
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start.func1(0xc4203bb3e0, 0xc42047d500, 0x1455180, 0xc420dea010)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:170 +0x3f
created by github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:169 +0x110
goroutine 47 [semacquire]:
sync.runtime_Semacquire(0xc42461a07c)
/usr/local/go/src/runtime/sema.go:56 +0x39
sync.(*WaitGroup).Wait(0xc42461a070)
/usr/local/go/src/sync/waitgroup.go:129 +0x72
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start(0xc420ee24e0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:163 +0x264
created by github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).Next
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:266 +0x23d
goroutine 66 [runnable]:
sync.runtime_SemacquireMutex(0xc420338c54, 0xc42b576700)
/usr/local/go/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0xc420338c50)
/usr/local/go/src/sync/mutex.go:134 +0x108
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/cache.(*ObjectLRU).Get(0xc420338c30, 0x3fb06ae55e778633, 0x199ad95386c076e4, 0x199ad9537ae432d5, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/cache/object_lru.go:81 +0x50
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem.(*ObjectStorage).decodeObjectAt(0xc4244a6e88, 0x146ffa0, 0xc421304ae0, 0x146bec0, 0xc420e3db00, 0x9478eb, 0x0, 0x0, 0x7ae432d5199ad953, 0x9478eb)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem/object.go:388 +0x260
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem.(*ObjectStorage).getFromPackfile(0xc4244a6e88, 0x3fb06ae55e778633, 0x199ad95386c076e4, 0x7ae432d5, 0x0, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem/object.go:378 +0x2cb
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem.(*ObjectStorage).EncodedObject(0xc4244a6e88, 0xb06ae55e77863303, 0x9ad95386c076e43f, 0x7ae432d519, 0x0, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem/object.go:249 +0x6c
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/object.GetBlob(0x1469ca0, 0xc4244a6e70, 0x3fb06ae55e778633, 0x199ad95386c076e4, 0x7ae432d5, 0x5e778633000081a4, 0x86c076e43fb06ae5, 0x7ae432d5199ad953)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/object/blob.go:23 +0x4e
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/object.(*FileIter).Next(0xc4278690e0, 0x7fe3e461a4e8, 0x1e, 0xc42b87bcc0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/object/file.go:100 +0x13f
github.com/src-d/gitbase.(*commitFilesRowIter).next(0xc4244a6f50, 0xc4200f0358, 0x1, 0xc42996be18, 0x43d75b, 0xc42996bee8)
/tmp/regression-207397520/src/github.com/src-d/gitbase/commit_files.go:278 +0x88
github.com/src-d/gitbase.(*commitFilesRowIter).Next(0xc4244a6f50, 0x3, 0x44a0e8, 0x3f5d8ee375ba6, 0x8c23697, 0x8c2369700000003)
/tmp/regression-207397520/src/github.com/src-d/gitbase/commit_files.go:192 +0x7d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*FilterIter).Next(0xc4244a45a0, 0xf7d5a842, 0x20f0800, 0xc42996bee8, 0xc42996bf7e, 0x489d36)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/filter.go:105 +0x38
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420e2c690, 0x8c235d52996bee8, 0x5c9384e3, 0xc42996bd98, 0x489d36, 0x5c9384e3)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*trackedRowIter).Next(0xc4246182c0, 0xf7d5a780, 0x20f0800, 0xc4200f04e0, 0xc420ee2478, 0xc4202eac50)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/process.go:145 +0x37
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420e2c6e0, 0x2, 0x0, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).iterPartition(0xc420ee24e0, 0x1455180, 0xc424448b40)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:249 +0x286
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start.func1(0xc420ee24e0, 0xc42461a070, 0x1455180, 0xc424448b40)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:170 +0x3f
created by github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:169 +0x110
goroutine 82 [semacquire]:
sync.runtime_SemacquireMutex(0x2112fd4, 0xc427f9d800)
/usr/local/go/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0x2112fd0)
/usr/local/go/src/sync/mutex.go:134 +0x108
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma.NewRegexp(0xc42047cf40, 0x9, 0x0, 0xc42d744820, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma/regex.go:55 +0xc2
github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma.Compile(0xc42047cf40, 0x9, 0x98, 0xc4201e0c30, 0xc4292053f0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/github.com/src-d/go-oniguruma/regex.go:75 +0x3e
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/internal/regex.NewOniguruma(0xc42047cf40, 0x9, 0x11a388c, 0x9, 0xc4201e0cb8, 0xf2e001, 0x8, 0x10)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/internal/regex/regex_oniguruma.go:25 +0x39
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/internal/regex.New(0x11a388c, 0x9, 0xc42047cf40, 0x9, 0x2114900, 0x0, 0x1f, 0x880, 0x20, 0xc425d76000)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/internal/regex/regex.go:73 +0x80
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Regexp).compareRegexp.func1(0xc4204d55a0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/comparison.go:254 +0x78
sync.(*Pool).Get(0xc4204d55a0, 0xc4292053f0, 0xf6b760)
/usr/local/go/src/sync/pool.go:151 +0xab
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Regexp).compareRegexp(0xc420ddec00, 0xc420dfc4b0, 0xc42d7447d0, 0x5, 0x5, 0x0, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/comparison.go:263 +0x1c1
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Regexp).Eval(0xc420ddec00, 0xc420dfc4b0, 0xc42d7447d0, 0x5, 0x5, 0x5, 0x5, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/comparison.go:206 +0x1f0
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression.(*Not).Eval(0xc420de2270, 0xc420dfc4b0, 0xc42d7447d0, 0x5, 0x5, 0x0, 0x2087040, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/expression/boolean.go:26 +0x5f
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*FilterIter).Next(0xc42437e270, 0xf1c1c992, 0x20f0800, 0xc42d60fee8, 0xc42d60ff7e, 0x489d36)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/filter.go:110 +0x97
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420dfc500, 0x2ae56b12d60fee8, 0x5c9384e3, 0xc42d60fd98, 0x489d36, 0x5c9384e3)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*trackedRowIter).Next(0xc4203e4400, 0xf1c1c86f, 0x20f0800, 0xc4200f04e0, 0xc420ee2478, 0xc4202eac50)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/process.go:145 +0x37
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc420dfc550, 0x2, 0x0, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).iterPartition(0xc420ee24e0, 0x1455180, 0xc420dea030)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:249 +0x286
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start.func1(0xc420ee24e0, 0xc42461a070, 0x1455180, 0xc420dea030)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:170 +0x3f
created by github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:169 +0x110
goroutine 83 [runnable]:
syscall.Syscall(0x0, 0xb, 0xc42d51c000, 0x1000, 0x1000, 0x1000, 0x0)
/usr/local/go/src/syscall/asm_linux_amd64.s:18 +0x5
syscall.read(0xb, 0xc42d51c000, 0x1000, 0x1000, 0xc42b0ca401, 0x0, 0x0)
/usr/local/go/src/syscall/zsyscall_linux_amd64.go:749 +0x5f
syscall.Read(0xb, 0xc42d51c000, 0x1000, 0x1000, 0x6, 0xc420098588, 0xc420098500)
/usr/local/go/src/syscall/syscall_unix.go:162 +0x49
internal/poll.(*FD).Read(0xc420434410, 0xc42d51c000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:153 +0x118
os.(*File).read(0xc4233f0000, 0xc42d51c000, 0x1000, 0x1000, 0x494d73, 0xc420434410, 0x0)
/usr/local/go/src/os/file_unix.go:226 +0x4e
os.(*File).Read(0xc4233f0000, 0xc42d51c000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/os/file.go:107 +0x6a
bufio.(*Reader).fill(0xc428390080)
/usr/local/go/src/bufio/bufio.go:100 +0x11e
bufio.(*Reader).ReadByte(0xc428390080, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:242 +0x39
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*teeReader).ReadByte(0xc42b8fe420, 0xac51b, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/scanner.go:477 +0x37
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Scanner).readType(0xc42c520960, 0x0, 0x1, 0xac51b)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/scanner.go:271 +0x33
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Scanner).readObjectTypeAndLength(0xc42c520960, 0x0, 0x1, 0xac51b, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/scanner.go:258 +0x2f
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Scanner).nextObjectHeader(0xc42c520960, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/scanner.go:198 +0xfa
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Scanner).SeekObjectHeader(0xc42c520960, 0xac51b, 0xc4204c0f00, 0x129f688, 0xc42a516e80)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/scanner.go:153 +0x75
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Packfile).objectHeaderAtOffset(0xc42c520a00, 0xac51b, 0xac51b, 0x2114900, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/packfile.go:114 +0x39
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Packfile).getObjectType(0xc42c520a00, 0xc42d094000, 0x804, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/packfile.go:165 +0xd8
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Packfile).objectAtOffset(0xc42c520a00, 0x316fc72, 0x8722d1ef7085fe5c, 0x8722d1efc4fef03c, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/packfile.go:214 +0x241
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile.(*Packfile).GetByOffset(0xc42c520a00, 0x316fc72, 0x1473360, 0xc4204d4b80, 0x146ffa0, 0xc421bf8060)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/format/packfile/packfile.go:92 +0x8b
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem.(*ObjectStorage).decodeObjectAt(0xc42031e018, 0x146ffa0, 0xc421bf8060, 0x146bec0, 0xc420e3cd80, 0x316fc72, 0x0, 0x0, 0xc4fef03c8722d1ef, 0x316fc72)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem/object.go:405 +0x187
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem.(*ObjectStorage).getFromPackfile(0xc42031e018, 0xdd1ba2b5a047ee15, 0x8722d1ef7085fe5c, 0xc4fef03c, 0x0, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem/object.go:378 +0x2cb
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem.(*ObjectStorage).EncodedObject(0xc42031e018, 0x1ba2b5a047ee1503, 0x22d1ef7085fe5cdd, 0xc4fef03c87, 0x0, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/storage/filesystem/object.go:249 +0x6c
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/object.GetBlob(0x1469ca0, 0xc42031e000, 0xdd1ba2b5a047ee15, 0x8722d1ef7085fe5c, 0xc4fef03c, 0xa047ee15000081a4, 0x7085fe5cdd1ba2b5, 0xc4fef03c8722d1ef)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/object/blob.go:23 +0x4e
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/object.(*FileIter).Next(0xc429b17da0, 0x7fe3ef3b8688, 0x64, 0xc42c5f0a50)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-git.v4/plumbing/object/file.go:100 +0x13f
github.com/src-d/gitbase.(*commitFilesRowIter).next(0xc42031e0e0, 0xc4200f0358, 0x1, 0xc429967e18, 0x43d75b, 0xc429967ee8)
/tmp/regression-207397520/src/github.com/src-d/gitbase/commit_files.go:278 +0x88
github.com/src-d/gitbase.(*commitFilesRowIter).Next(0xc42031e0e0, 0x3, 0x44a0e8, 0x3f5d8ee415b28, 0x8cc3621, 0x8cc362100000003)
/tmp/regression-207397520/src/github.com/src-d/gitbase/commit_files.go:192 +0x7d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*FilterIter).Next(0xc423aa8330, 0xf7dfa7c4, 0x20f0800, 0xc429967ee8, 0xc429967f7e, 0x489d36)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/filter.go:105 +0x38
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc421b42190, 0x8cc355f29967ee8, 0x5c9384e3, 0xc429967d98, 0x489d36, 0x5c9384e3)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*trackedRowIter).Next(0xc4204d4c80, 0xf7dfa703, 0x20f0800, 0xc4200f04e0, 0xc420ee2478, 0xc4202eac50)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/process.go:145 +0x37
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql.(*spanIter).Next(0xc421b421e0, 0x2, 0x0, 0x0, 0x0, 0x0)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/session.go:346 +0x5d
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).iterPartition(0xc420ee24e0, 0x1455180, 0xc420dea040)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:249 +0x286
github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start.func1(0xc420ee24e0, 0xc42461a070, 0x1455180, 0xc420dea040)
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:170 +0x3f
created by github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan.(*exchangeRowIter).start
/tmp/regression-207397520/src/github.com/src-d/gitbase/vendor/gopkg.in/src-d/go-mysql-server.v0/sql/plan/exchange.go:169 +0x110
It panics in the same place as in the old issue:
https://github.com/src-d/gitbase/issues/544
Now we have a pool of matchers and it still happens on a 96-core machine.
So far, I think the problem happens when multiple threads reuse the same matcher (I'm not sure if that can happen) and another thread resizes the region:
https://github.com/kkos/oniguruma/blob/341002b8a188494871d38fb19ad9d7d44f6aec00/src/regexec.c#L4723
It's not a race.
It definitely happens in the onig_search function (https://github.com/kkos/oniguruma/blob/master/src/regexec.c#L4700), even though we have a pool of resources plus the extra mutex I added (https://github.com/src-d/gitbase/blob/fix-jenkins-oniguruma/vendor/github.com/src-d/go-oniguruma/regex.go#L204).
I'm closing this issue, because it won't be fixed - we switched back to go regexp engine (so far).
|
gharchive/issue
| 2019-03-21T12:38:38 |
2025-04-01T04:35:56.841008
|
{
"authors": [
"ajnavarro",
"kuba--"
],
"repo": "src-d/go-mysql-server",
"url": "https://github.com/src-d/go-mysql-server/issues/641",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
408698159
|
Replace Install/Download Engine Links
Signed-off-by: Ricardo Baeta ricardo@ricardobaeta.com
You're correct @dpordomingo! And on a separate PR, which is not as important as this one because:
As for what @dpordomingo mentioned on documentation:
We have the link in Docs in the navbar
We have it on the open source page
We don't (but could) have it linked on the https://go.sourced.tech/engine-download page so that people who don't want to give their email can still find the documentation/repo during the user journey.
@marnovo @dpordomingo Sorry guys, but I've accidentally addressed Marcelo's requested changes on this PR commit https://github.com/src-d/landing/pull/360/commits/7800ac8851095a6c98b76d2b765235f26f0d1600
Regarding the discussion on supported resolutions & browsers, please use this reference (we still need to PR it to the guide I guess).
I restarted Drone.
If it passes, I'll merge and deploy as requested by @ricardobaeta
|
gharchive/pull-request
| 2019-02-11T09:40:10 |
2025-04-01T04:35:56.850685
|
{
"authors": [
"dpordomingo",
"marnovo",
"ricardobaeta"
],
"repo": "src-d/landing",
"url": "https://github.com/src-d/landing/pull/359",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2616911366
|
Remote deployment: allow local builds
When using a small VPS, it doesn't make sense to build everything remotely. For this, provide an option so we can build everything locally and nix copy it before doing the remote deployment.
For now, we must do this manually, e.g.:
nix copy $(nix build .\#nixosConfigurations.naivete-me.config.system.build.toplevel --no-link --print-out-paths) --to ssh-ng://root@<ip>
Workaround in the meanwhile:
https://github.com/srid/naivete-me/blob/2f5321e6bd30afd1d9e0f2e5a1bd7c486d358a83/justfile#L23-L29
|
gharchive/issue
| 2024-10-27T22:25:43 |
2025-04-01T04:35:56.872352
|
{
"authors": [
"srid"
],
"repo": "srid/nixos-unified",
"url": "https://github.com/srid/nixos-unified/issues/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2268571676
|
docker start container-name command does not allow logging in to the CLI
Hello,
After we brought up our nodes with the containerlab deploy command, I wanted to test docker commands.
I brought all containers down and then I ran "docker start CONTAINERNAME".
The container comes up; however, we are not able to log in via the CLI, although bash works fine.
Have you experienced such a thing in your environment?
root@ubuntu:~/Arista# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6aa6409effdd ceosimage:4.30.5M "bash -c '/mnt/flash…" 11 hours ago Up About a minute CUSTOMER-AristaDCEVPNVXLAN-leaf1
root@ubuntu:~/Arista# docker exec -it CUSTOMER-AristaDCEVPNVXLAN-leaf1 Cli
Cannot connect to ConfigAgent
Entering standalone shell.
This is expected. The cEOS container does not start the Cli process automatically, you have to pass Cli command to it.
Well, as soon as you stop a container, the namespace with all its configuration is terminated / destroyed.
Part of the namespace configuration is the assigned interfaces. So if you simply start the container again, the wiring is gone and other things might not be properly initialized anymore. Hence you should again use containerlab deploy ..., probably with the --reconfigure flag. Then all the interfaces will get recreated and the environment should come up as expected.
|
gharchive/issue
| 2024-04-29T09:41:19 |
2025-04-01T04:35:56.879149
|
{
"authors": [
"hellt",
"mahmutaydin1",
"steiler"
],
"repo": "srl-labs/containerlab",
"url": "https://github.com/srl-labs/containerlab/issues/2026",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
987214632
|
Request: Add Support for aarch64
Hello All,
I tried installing containerlab on a Pinebook Pro and received the following error. If this is not possible, please feel free to close the issue. Thank you for creating containerlab!
[juliopdx@pbpro srl-ceos-gobgp]$ bash -c "$(curl -sL https://get-clab.srlinux.dev)"
No prebuilt binary for linux-aarch64.
To build from source, go to https://github.com/srl-labs/containerlab
Failed to install containerlab
For support, go to https://github.com/srl-labs/containerlab/issues
[juliopdx@pbpro srl-ceos-gobgp]$
Hi @JulioPDX
Although it is possible to build an arm64 image for containerlab, you won't find network OS images for that architecture.
You would be able to build the vrnetlab-based images, I think, but I'm not sure if that's enough for you
Hi @hellt
I should be good. I'll have to mess with a different architecture to get it running. Thank you
|
gharchive/issue
| 2021-09-02T22:18:36 |
2025-04-01T04:35:56.881264
|
{
"authors": [
"JulioPDX",
"hellt"
],
"repo": "srl-labs/containerlab",
"url": "https://github.com/srl-labs/containerlab/issues/605",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1909978162
|
🛑 Plex is down
In e332729, Plex (https://plex.srueg.ch/identity) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Plex is back up in c5cfada after 9 minutes.
|
gharchive/issue
| 2023-09-23T19:35:10 |
2025-04-01T04:35:56.889733
|
{
"authors": [
"srueg"
],
"repo": "srueg/upptime",
"url": "https://github.com/srueg/upptime/issues/1008",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
835784045
|
Schema overhaul
This is needed for #90 but because there is so much renaming, I made a separate PR.
[x] create consolidated schema with new members table
[x] remove allow_list and replace it with members
[x] update roomdb types
[x] rename User to Member
[x] add role enums (member, moderator, admin)
[x] update insert-user
[x] overhaul web/handlers/admin
[x] overhaul web/handlers
[x] overhaul roomsrv and tests
[ ] copy previous allow-list stuff and use it as deny-list
Just some thoughts as I'm jumping through these changes. Still stuff to do but thought I'd loop you in already.
Following from "allow lists is now members" comes the realization that members need to have a nickname. Assuming that no-one wants to deal with just public keys and they might not have an invite. So the current allow list form needs another input field (and styling for that... cc @staltz :grimacing:) and a way to change the members role (not sure if we want to make this with a drop-down on the list or a separate edit page (which might be worthwhile if people want to change their nick or public key for instance).
Oh, yes it would make sense that the members page allows changing roles. A dropdown seems fine, to begin with. Just note that only admin should be able to change roles.
But nicknames? Why? Aliases are basically the nickname system.
Assuming that no-one wants to deal with just public keys and they might not have an alias.
Is this the reason? And in what context would they deal with public keys? Not sign-in, because that should be handled by SIWSSB (we don't want the fallback username/password system for all members. In fact we didn't even want it originally for anyone)
Is this the reason? And in what context would they deal with public keys?
if the members don't have a nickname, you just see the public key for who created the invite or whom these aliases belong to.
Aliases are basically the nickname system.
Not all users might have an alias. I might not want all my devices identified with an alias, especially since it's enumerable by the public, unless I switch the room into restricted mode, which might not be what the other room members want.
By the way: I don't think we need to spec this. It just makes sense for an implementation, I think. Simply starting from the case "what happens when you remove your only active alias" or "what do you show when there are two aliases?".
By the way: I don't think we need to spec this. It just makes sense for an implementation, I think.
That's good clarification. Thanks
if the members don't have a nickname, you just see the public key on who created the invite or who do these aliases belong to.
I see this as a possibility, but not a need, strictly. Seeing the raw public key is not pressing problem, as I see it. There are also considerations to make, such as minimizing how much the admin knows about its members (see some Security Considerations in the spec), and it's also possible to create confusion by having both nickname and alias (people might register one thinking that they wanted the other, or vice-versa). Attaching identifiable metadata to members should also ideally by signed by the member, so that's why I compared them to aliases.
Other thoughts about this
So the current allow list form needs another input field
I think it would make sense to merge the aliases and members lists into one, and add a role dropdown too. It would need some redesign, indeed. We might even need a "member details" page that lists all the details of that member and allows editing (e.g. revoke alias).
confusion by having both nickname and alias
I'd argue the welcome strings we have act like help/descriptions on the page and should do a good job there.
minimizing how much the admin knows about its members
Honestly, I don't see how moderators can effectively govern a room if they don't know who is who.
Additionally, this feels like invite consumption should then be grouped with alias creation. Last time I asked for this you were against it, claimed people might not want an invite... :shrug:
Attaching identifiable metadata to members should also ideally by signed by the member
Do you want to do this now? I'd rather put this on v2.2 or later since it would need considerable re-work and generalization.
Another approach to storing these on the room could be to hook into a configured peer and query it for type:about messages of the member, to shortcut this need for more signed off-chain redundancy.
I think it would make sense to merge the aliases and members list pages into one.
This sounds reasonable but can we do it as a follow-up? There are a lot of needed changes in here.
we had a call and among other things decided to later merge aliases with members management/editing. We don't want to have a 2nd database of the social data that is already in ssb (type:about maybe showing in future versions), at which point nicknames might become obsolete. Generally we agreed that we shouldn't try to guess what our users might and might not want.
Also did a high-level walk through with cblgh. Merrrging now!
|
gharchive/pull-request
| 2021-03-19T10:03:17 |
2025-04-01T04:35:56.918557
|
{
"authors": [
"cryptix",
"staltz"
],
"repo": "ssb-ngi-pointer/go-ssb-room",
"url": "https://github.com/ssb-ngi-pointer/go-ssb-room/pull/91",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1664010148
|
deep_training installation issue
Why is it that when I install this package, sometimes the install succeeds and other times it says the package can't be found?
This is probably related to the package repository mirror in your pip environment. I recommend using the installation method given in the README, which uses the official PyPI repository; it's a bit slower, but reliable.
ERROR: Could not find a version that satisfies the requirement deep_training (from versions: none)
ERROR: No matching distribution found for deep_training
Using the method from the README, it still shows this...
Yes, torch 1.10 is the minimum required; I recommend using torch 1.13 or torch 2.x.
But I can't install deep_training in a torch 1.12.0+cu113 environment...
If it really won't install, please download the package and install it manually.
Installing from the downloaded package shows this, and I don't quite understand it:
ERROR: HTTP error 404 while getting http://pypi-devpi-proxy-svc.kube-system:3141/root/pypi/%2Bf/5fa/d320488aa69c7/fastdatasets-0.9.6-py3-none-any.whl#sha256=5fad320488aa69c7c76e7f78bda9ac042e7dc7c906f2a1ead6286c1cbbbe4d00 (from http://pypi-devpi-proxy-svc.kube-system:3141/root/pypi/+simple/fastdatasets/) (requires-python:>=3, <4)
ERROR: Could not install requirement fastdatasets<=1,>=0.9.6 from http://pypi-devpi-proxy-svc.kube-system:3141/root/pypi/%2Bf/5fa/d320488aa69c7/fastdatasets-0.9.6-py3-none-any.whl#sha256=5fad320488aa69c7c76e7f78bda9ac042e7dc7c906f2a1ead6286c1cbbbe4d00 (from deep-training==0.1.2) because of HTTP error 404 Client Error: Not Found for url: http://pypi-devpi-proxy-svc.kube-system:3141/root/pypi/%2Bf/5fa/d320488aa69c7/fastdatasets-0.9.6-py3-none-any.whl for URL http://pypi-devpi-proxy-svc.kube-system:3141/root/pypi/%2Bf/5fa/d320488aa69c7/fastdatasets-0.9.6-py3-none-any.whl#sha256=5fad320488aa69c7c76e7f78bda9ac042e7dc7c906f2a1ead6286c1cbbbe4d00 (from http://pypi-devpi-proxy-svc.kube-system:3141/root/pypi/+simple/fastdatasets/) (requires-python:>=3, <4)
What operating system are you using? tfrecords does not support macOS.
|
gharchive/issue
| 2023-04-12T07:32:37 |
2025-04-01T04:35:56.944664
|
{
"authors": [
"Stark-zheng",
"ssbuild"
],
"repo": "ssbuild/chatglm_finetuning",
"url": "https://github.com/ssbuild/chatglm_finetuning/issues/163",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2021177239
|
Bulk upload: find existing patient against CID AND center, not just CID
Resolves Notion ticket.
Relates to patient_id vs CID for bulk upload
Thanks for the reviews! 👍
|
gharchive/pull-request
| 2023-12-01T16:11:58 |
2025-04-01T04:35:56.946463
|
{
"authors": [
"jamienoss"
],
"repo": "ssec-jhu/biospecdb",
"url": "https://github.com/ssec-jhu/biospecdb/pull/190",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1727134240
|
Chrome Dino Game
I want to add the Chrome Dino game. Please assign this to me under this project with the GSSoc'23 tag.
Hi @DEVANSHUKEJRIWAL, could you please elaborate on the features of the Crossword game with full details, a demo / mock UI sample/video, and the tech stack? Better yet, share detailed documentation.
Can I work on this?
|
gharchive/issue
| 2023-05-26T08:14:29 |
2025-04-01T04:35:56.959103
|
{
"authors": [
"DEVANSHUKEJRIWAL",
"admindebu"
],
"repo": "ssitvit/Games-and-Go",
"url": "https://github.com/ssitvit/Games-and-Go/issues/220",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
659862259
|
Support getting minio back from servicex transform request
ServiceX will soon have the code to return the minio end-point, username, and password from the reply to the transform request (or a transform get). Rather than having the user configure this, we should use this information (see the sketch after the checklist):
[x] If the info is in the reply from a transformation.
[x] If there is no end-point information, then use the local end-point information.
[x] If someone provides a MinIoAdaptor, always use that
[x] If the information is missing, use the local information
[x] If we have a request in flight, but we have been restarted, re-query ServiceX for the proper end-point for minio.
[x] If we have a request in flight, but we have been restarted, re-query, and if that doesn't have the end-point info, fall back.
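To make the precedence above concrete, here is a minimal sketch (written in TypeScript purely for illustration; the real client is Python, and the function and parameter names are hypothetical):
// Hypothetical sketch of the end-point selection precedence listed above
interface TransformReply {
  "minio-endpoint"?: string;
}

function pickMinioEndpoint(
  adaptorEndpoint: string | undefined, // a user-supplied MinIoAdaptor always wins
  reply: TransformReply, // reply from the transform request (or a re-query)
  localEndpoint: string, // locally configured fallback
): string {
  if (adaptorEndpoint !== undefined) return adaptorEndpoint;
  return reply["minio-endpoint"] ?? localEndpoint; // fall back when the info is missing
}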
The description of how this is supported can be found here: https://github.com/ssl-hep/ServiceX_App/pull/33:
{'request_id': 'BR549', 'did': '123-456-789',
'columns': 'electron.eta(), muon.pt()',
'selection': None,
'tree-name': "Events",
'image': 'ssl-hep/foo:latest', 'chunk-size': 1000,
'workers': 42, 'result-destination': 'kafka',
'result-format': 'arrow',
'kafka-broker': 'http://ssl-hep.org.kafka:12345',
'minio-access-key': 'miniouser',
'minio-endpoint': 'minio.servicex.com:9000',
'minio-secret-key': 'leftfoot1',
'workflow-name': None,
'generated-code-cm': None}
One outstanding question:
When the minio info comes back for a locally run chart, it comes back as servicex--minio:9000 - but that points to nothing that exists.
How do we make things work for that?
|
gharchive/issue
| 2020-07-18T03:29:39 |
2025-04-01T04:35:56.964411
|
{
"authors": [
"gordonwatts"
],
"repo": "ssl-hep/ServiceX_frontend",
"url": "https://github.com/ssl-hep/ServiceX_frontend/issues/70",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
325596472
|
Regarding Rewrite Rules
Hi SSL,
I'm trying to install ezXSS with LNMP.
However, I'm having some trouble converting the .htaccess file to Nginx rewrite rules.
Could you provide Nginx rewrite rules, if possible? Or do I have to use a LAMP environment?
Many thanks.
Hi SpZanG,
I'm no pro in nginx rewrite rules. Would something like this help: https://winginx.com/en/htaccess
|
gharchive/issue
| 2018-05-23T08:16:42 |
2025-04-01T04:35:56.966264
|
{
"authors": [
"SpZanG",
"ssl"
],
"repo": "ssl/ezXSS",
"url": "https://github.com/ssl/ezXSS/issues/13",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2356333748
|
Function: nodejs.esbuild options are not getting applied
When supplying a custom esbuild config to Function it's not getting applied
const myFunction = new sst.aws.Function('MyFunction', {
handler: 'src/main.handler',
url: true,
nodejs: {
install: [
'@nestjs/platform-express',
'@nestjs/microservices*',
'@nestjs/websockets*',
'@fastify/static',
'@fastify/view',
],
esbuild: {
plugins: [tscPlugin()],
},
},
})
A potential problem I see is that we are unmarshalling the esbuild options from JSON.
Does that mean that a plugin, which is a JavaScript function, won't get converted to Go?
https://github.com/sst/ion/blob/dev/pkg/runtime/node.go#L70
How does esbuild handle the conversion from JavaScript to Go?
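That concern is well founded: plain JSON serialization cannot carry function values, so a plugin's setup hook is exactly what gets lost. A small illustrative sketch (not project code):
// JSON has no representation for functions, so a plugin object (whose
// setup property is a function) loses exactly the part that matters.
const examplePlugin = {
  name: "example",
  setup() {
    // a real esbuild plugin registers onResolve/onLoad hooks here
  },
};

const options = { minify: true, plugins: [examplePlugin] };
console.log(JSON.stringify(options));
// -> {"minify":true,"plugins":[{"name":"example"}]}
// setup is silently dropped, so the Go side can never invoke the plugin.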
@leon I was following your messages in Discord, did you make some progress on this?
No unfortunately not.
Tried a couple of things but got stuck.
It's a bit complicated because of the typescript / go marshaling.
Hope someone else can look at it soon 🤓
yeah this is quite tricky - we used to require you to pass in a path to a file that loaded the plugins, but don't really want to go back to that. we might not be able to support esbuild plugins
just found this issue & bumping it, would love to find a solution here. i think the main one is for experimentalDecorators like @leon mentioned above because those are a very common use-case with frameworks like nestjs which i really wanna move into an sst dev environment. curious if there's an appetite to special-case that?
Just found this issue and I must say it would be a bummer to not support plugins in some way. Just found that this plugin https://github.com/unplugin/unplugin-auto-import works directly with esbuild and I wanted to try it
Do we have a sense of priority / timeline for esbuild plugins being functional in sst dev? @jayair
(Maybe starting with the decorators plugin)
Curious bc that's currently the main blocker for my company to fully migrate over to ion (super close!) but unfortunately none of us are experienced enough with Go to invest time in contributing a fix for this.
Otherwise will stick with v2 for the time being where sst dev seems to work despite the esbuild plugin dependencies
ps we can probably collapse the following issues into this:
https://github.com/sst/ion/issues/396
https://github.com/sst/ion/issues/681
Congrats on the v3 launch! 🎉
Yeah good call on those other issues.
I'll check with the team on the roadmap.
@thdxr any progress on this or any way to help get this moving? I saw this comment here that might be helpful to solving the issue of go vs js plugins https://github.com/evanw/esbuild/issues/821#issuecomment-797784638
yeah this is on my list. we need to create a js bridge for esbuild plugins. it's a bit complicated to do this reliably but i'll be working on it this week
can you try 0.0.0-1725513341. the nodejs field has a subfield called plugins. this is a string that points to a plugins.mjs file somewhere. in that file you can export your plugins:
export default [
tscPlugin()
]
give that a try
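For reference, the config side then just points at that file. A minimal sketch (the handler path is a placeholder; inside sst.config.ts the sst object is a global):
// sst.config.ts (sketch): nodejs.plugins is a path to the file that
// default-exports the esbuild plugin array shown above
new sst.aws.Function("MyFunction", {
  handler: "src/main.handler",
  nodejs: {
    plugins: "plugins.mjs",
  },
});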
@thdxr I will give this a try over the weekend or later today, thanks a lot for working on this.
found some more issues - this is kind of a rabbit hole, i'm not sure how stable we'll be able to get this
the issue is in pulumi: the node process spawns and exits after the deploy is done, so for live dev mode we need to have it implemented separately. i just need to put more time into it and i think i can get it all working - esbuild is already doing this same thing under the hood, they just don't expose it for me to reuse
just saw the recent activity – happy to test as well if needed
experimental support in v3.2.2
@thdxr
not sure if implemented correctly but i couldn't get it to work
version
sst 3.2.22
plugins.mjs
// .mjs files are ES modules, so require is unavailable; use the named import
// (matching the import style used further down in this thread)
import { esbuildDecorators } from '@anatine/esbuild-decorators';
export default [esbuildDecorators()];
sst.config.ts
async run() {
const api = new sst.aws.ApiGatewayV2('MyApi1');
api.route('ANY /{proxy+}', {
handler: 'src/index.handler',
nodejs: {
install: [
'@nestjs/microservices',
'pg-native',
'nats',
'mqtt',
'kafkajs',
'grpc',
'apollo-server-express',
'apollo-server-fastify',
'class-transformer/storage',
'cache-manager',
'@nestjs/websockets/socket-module',
'class-transformer',
'class-validator',
],
plugins: "plugins.mjs",
// esbuild: {
// plugins: [
// esbuildDecorators(),
// ],
// },
},
});
},
adding the plugins file causes the lambda to time out and return
it does not seem like sst dev is running
@yotamishak what's the error you are getting?
Because I just realized that esbuild supports decorators now.
@jayair
my use case is trying to get sst working with nestjs (seems like @leon is the same)
created a small example of hello world nestjs app
https://github.com/yotamishak/nest-ion
the app fails here because appService is not being bundled
this error occurs when using esbuild as well, but adding a decorator plugin (like @anatine/esbuild-decorators) solves this
10:18:37.164
[Nest] 45714 - 10/16/2024, 7:18:37 AM LOG [NestFactory] Starting Nest application...
10:18:37.170
[Nest] 45714 - 10/16/2024, 7:18:37 AM LOG [InstanceLoader] AppModule dependencies initialized +6ms
10:18:37.172
[Nest] 45714 - 10/16/2024, 7:18:37 AM LOG [RoutesResolver] AppController {/}: +2ms
10:18:37.173
[Nest] 45714 - 10/16/2024, 7:18:37 AM LOG [RouterExplorer] Mapped {/, GET} route +1ms
10:18:37.174
[Nest] 45714 - 10/16/2024, 7:18:37 AM LOG [NestApplication] Nest application successfully started +1ms
10:18:37.210
[Nest] 45714 - 10/16/2024, 7:18:37 AM ERROR [ExceptionsHandler] Cannot read properties of undefined (reading 'getHello')
10:18:37.210
TypeError: Cannot read properties of undefined (reading 'getHello')
10:18:37.210
at AppController.getHello (/Users/yotam/dev/nest-ion/src/app.controller.ts:10:28)
10:18:37.210
at <anonymous> (/Users/yotam/dev/nest-ion/node_modules/.pnpm/@nestjs+core@9.4.3_@nestjs+common@9.4.3_reflect-metadata@0.1.14_rxjs@7.8.1__@nestjs+platform-_hhtgoj34sxagmh36m5cxwgzlrm/node_modules/@nestjs/core/router/router-execution-context.js:38:29)
10:18:37.210
at InterceptorsConsumer.intercept (/Users/yotam/dev/nest-ion/node_modules/.pnpm/@nestjs+core@9.4.3_@nestjs+common@9.4.3_reflect-metadata@0.1.14_rxjs@7.8.1__@nestjs+platform-_hhtgoj34sxagmh36m5cxwgzlrm/node_modules/@nestjs/core/interceptors/interceptors-consumer.js:11:20)
10:18:37.210
at <anonymous> (/Users/yotam/dev/nest-ion/node_modules/.pnpm/@nestjs+core@9.4.3_@nestjs+common@9.4.3_reflect-metadata@0.1.14_rxjs@7.8.1__@nestjs+platform-_hhtgoj34sxagmh36m5cxwgzlrm/node_modules/@nestjs/core/router/router-execution-context.js:46:60)
10:18:37.210
at <anonymous> (/Users/yotam/dev/nest-ion/node_modules/.pnpm/@nestjs+core@9.4.3_@nestjs+common@9.4.3_reflect-metadata@0.1.14_rxjs@7.8.1__@nestjs+platform-_hhtgoj34sxagmh36m5cxwgzlrm/node_modules/@nestjs/core/router/router-proxy.js:9:23)
10:18:37.210
at Layer.handle (/Users/yotam/dev/nest-ion/node_modules/.pnpm/express@4.18.2/node_modules/express/lib/router/layer.js:95:5)
10:18:37.210
at next (/Users/yotam/dev/nest-ion/node_modules/.pnpm/express@4.18.2/node_modules/express/lib/router/route.js:144:13)
10:18:37.210
at Route.dispatch (/Users/yotam/dev/nest-ion/node_modules/.pnpm/express@4.18.2/node_modules/express/lib/router/route.js:114:3)
10:18:37.210
at Layer.handle (/Users/yotam/dev/nest-ion/node_modules/.pnpm/express@4.18.2/node_modules/express/lib/router/layer.js:95:5)
10:18:37.210
at <anonymous> (/Users/yotam/dev/nest-ion/node_modules/.pnpm/express@4.18.2/node_modules/express/lib/router/index.js:284:15)
i tried using "esbuild": "0.24.0" to deploy without sst to see if it would work without esbuild plugin at all
looks like the decorators implementation introduced in esbuild 0.21.3 isnt gonna work for nestjs
https://github.com/nestjs/nest/issues/11414
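For context, the likely underlying reason (an illustrative sketch, not project code): Nest's dependency injection reads the design:paramtypes metadata that tsc emits under emitDecoratorMetadata, and esbuild's native decorator lowering does not emit that metadata, which is why handing those files to tsc via a plugin fixes the undefined appService:
// Requires reflect-metadata plus tsc with experimentalDecorators and
// emitDecoratorMetadata enabled. Compiled by tsc, the final log prints an
// array containing AppService; bundled by plain esbuild it prints
// undefined, so DI has nothing to resolve.
import "reflect-metadata";

const Injectable = (): ClassDecorator => () => {};

class AppService {
  getHello(): string {
    return "Hello World!";
  }
}

@Injectable()
class AppController {
  constructor(private readonly appService: AppService) {}
}

console.log(Reflect.getMetadata("design:paramtypes", AppController));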
@yotamishak that's the same example I'm trying and it works for me.
The error you have, is that on dev or deploy?
@jayair
Dev
Could you clarify what works?
Passing the plugin file to nodejs or esbuild out of the box?
From what I saw, sst was using 0.20.0; that's why I tried testing bundling the code with serverless
Just out of the box, dev and deploy. I am using Nest's standalone mode though.
I'll publish the example later today.
Turns out it was the standalone mode NestFactory.createApplicationContext that was the difference. It doesn't work if I use the regular NestFactory.create.
I'm going to wait for Dax to take a look at the plugin support that he added.
This might be helpful in the meantime...
This is how I've been using Nestjs with Lambdas with CDK and SST v2 and v3. This also works with MikroORM and TypeORM.
import esbuildPluginTsc from "esbuild-plugin-tsc";
esbuild: {
  plugins: [
    esbuildPluginTsc({
      tsconfigPath: "packages/infrastructure/tsconfig.json",
    }),
  ],
},
@jayair
standalone version of strips nest capabilities to be run as api/express app, which unfortunately is blocker for me (and gonna assume the same for most nestjs users).
did you end up sharing the example? messing around with a working example would help alot
all have my tests have been with dev. i'm try to use deploy but im running into similar issues as #1265 so im working through downgrading packages to get something up and running
@nevace
does this work for dev as well?
im getting the same results using "esbuild-plugin-tsc"
any chance you could share sst.config.ts and relevant versions you're using?
i've been able to get deploy to work with sst version "3.1.78".
i did have to move the nest dependencies out of install to external, but i wanted them to be excluded anyway in the first place.
it does not work with sst version "3.2.0".
it works with both mentioned esbuild plugins:
import { esbuildDecorators } from '@anatine/esbuild-decorators';
import esbuildPluginTsc from 'esbuild-plugin-tsc';
api.route('ANY /{proxy+}', {
handler: 'src/index.handler',
nodejs: {
esbuild: {
external: [
'@nestjs/microservices',
'pg-native',
'nats',
'mqtt',
'kafkajs',
'grpc',
'apollo-server-express',
'apollo-server-fastify',
'class-transformer/storage',
'cache-manager',
'@nestjs/websockets/socket-module',
'class-transformer',
'class-validator',
],
// plugins: [esbuildDecorators()],
plugins: [esbuildPluginTsc()],
},
},
});
still experiencing the same issues here in dev/deploy after 3.2.0
|
gharchive/issue
| 2024-06-17T04:26:59 |
2025-04-01T04:35:57.007685
|
{
"authors": [
"arpadgabor",
"farah-u",
"jayair",
"kujtimiihoxha",
"leon",
"nevace",
"thdxr",
"yotamishak"
],
"repo": "sst/ion",
"url": "https://github.com/sst/ion/issues/568",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
55208812
|
knowing which runtime was selected, and specifying a runtime
Could a couple lines be added to the readme to show
how to ask ExecJS which runtime it is using
how to tell execjs which runtime to use
Thanks!
(and sorry if this is covered somewhere already...)
This repository has been deprecated and has moved to https://github.com/rails/execjs.
|
gharchive/issue
| 2015-01-22T21:06:43 |
2025-04-01T04:35:57.010966
|
{
"authors": [
"jjb",
"josh"
],
"repo": "sstephenson/execjs",
"url": "https://github.com/sstephenson/execjs/issues/177",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1733123
|
Command to update definitions
It would be nice to have a command to update definitions. Something like:
ruby-build definitions update
At the moment, to get this behavior we usually have to run:
cd ~/Sources/ruby-build
git pull
./install.sh
@josh: What do you recommend to Linux users?
@sometimesfood: Excellent, thanks! I still see my response as kind of a +1 on adding an update command to ruby-build.
Another option is to install ruby-build as an rbenv plugin by cloning it into ~/.rbenv/plugins:
$ mkdir -p ~/.rbenv/plugins
$ cd ~/.rbenv/plugins
$ git clone https://github.com/sstephenson/ruby-build.git
Then simply git pull in the ~/.rbenv/plugins/ruby-build directory when you want to update the definitions.
@sstephenson: Perfect, that works for me.
@trans: Sure, but for the time being alias rbenv-update='(cd ~/.rbenv/plugins/ruby-build && git pull)' is good enough for me.
|
gharchive/issue
| 2011-09-25T10:16:49 |
2025-04-01T04:35:57.014394
|
{
"authors": [
"citizen428",
"fesplugas",
"sstephenson"
],
"repo": "sstephenson/ruby-build",
"url": "https://github.com/sstephenson/ruby-build/issues/51",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
907010592
|
Generate cards inside the codeblock
Recently, I have been using this plugin: https://github.com/valentine195/obsidian-admonition
For example, take the admonition code block below. I also want to generate cards inside the code block. Is that possible?
title: theorem
What is A ?:: B
That would lead to conflicts as detailed in #76.
|
gharchive/issue
| 2021-05-31T03:39:39 |
2025-04-01T04:35:57.043102
|
{
"authors": [
"jakeoung",
"st3v3nmw"
],
"repo": "st3v3nmw/obsidian-spaced-repetition",
"url": "https://github.com/st3v3nmw/obsidian-spaced-repetition/issues/103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1147127267
|
support open-ended datetime intervals, fix camelcase violations
Related Issue(s): #137
Proposed Changes:
support open-ended datetime intervals (e.g., ../{datetime}, /{datetime}, {datetime}/.., or {datetime}/)
fix a few camelcase violations
PR Checklist:
[X] I have added my changes to the CHANGELOG or a CHANGELOG entry is not required.
Closes #137
|
gharchive/pull-request
| 2022-02-22T16:34:37 |
2025-04-01T04:35:57.083568
|
{
"authors": [
"marchuffnagle",
"philvarner"
],
"repo": "stac-utils/stac-server",
"url": "https://github.com/stac-utils/stac-server/pull/180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2619116585
|
After requesting changes, the entire src folder was deleted
Describe the bug
I had the project almost finished, and when I requested a change it deleted all the contents of src
Link to the Bolt URL that caused the error
https://bolt.new/~/sb1-zhdxax
Steps to reproduce
Log in and load the project.
Expected behavior
The whole project should load
Screen Recording / Screenshot
No response
Platform
Browser name = Chrome
Full version = 130.0.0.0
Major version = 130
navigator.appName = Netscape
navigator.userAgent = Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36
performance.memory = {
"totalJSHeapSize": 95400378,
"usedJSHeapSize": 91794990,
"jsHeapSizeLimit": 4294705152
}
Username = agzaracho
Chat ID = 9cec15c976f7
Additional context
No response
wowww it happened to me too, my whole project got deleted
https://stackblitz.com/edit/sb1-3wzfyw
Please let me know if they tell you how to recover it
Awful. I spent way too many tokens (I even bought a 30 USD refill) to finish the project, and just when I was polishing the final visual details, it decided to delete the entire src folder. Simply awful.
I only have bad news @rubenmarques-gijon, there was no way to recover the project. Thank goodness I was able to start over from a backup I made a few days ago.
A disaster @agzaracho, same here. I'll start making backups at every milestone; even so, the application is incredible, but I've had a real disappointment. Let's keep going :)
|
gharchive/issue
| 2024-10-28T17:51:54 |
2025-04-01T04:35:57.099617
|
{
"authors": [
"agzaracho",
"rubenmarques-gijon"
],
"repo": "stackblitz/bolt.new",
"url": "https://github.com/stackblitz/bolt.new/issues/1068",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2647152223
|
Failed to build and deploy to Netlify
Describe the bug
here are the logs
> influencer-search@0.0.0 build
> tsc && vite build
Version 5.6.3
tsc: The TypeScript Compiler - Version 5.6.3
TS
COMMON COMMANDS
tsc
Compiles the current project (tsconfig.json in the working directory.)
tsc app.ts util.ts
Ignoring tsconfig.json, compiles the specified files with default compiler options.
tsc -b
Build a composite project in the working directory.
tsc --init
Creates a tsconfig.json with the recommended settings in the working directory.
tsc -p ./path/to/tsconfig.json
Compiles the TypeScript project located at the specified path.
tsc --help --all
An expanded version of this information, showing all possible compiler options
vite v4.5.5 building for production...
✓ 770 modules transformed.
transforming (773) node_modules/@heroicons/react/index.esm.js[vite:terser] terser not found. Since Vite v3, terser has become an optional dependency. You need to install it.
✓ built in 2.32s
error during build:
Error: terser not found. Since Vite v3, terser has become an optional dependency. You need to install it.
at loadTerserPath (file:///home/project/node_modules/vite/dist/node/chunks/dep-b2890f90.js:14398:19)
at Object.renderChunk (file:///home/project/node_modules/vite/dist/node/chunks/dep-b2890f90.js:14434:32)
at eval (file:///home/project/node_modules/rollup/dist/es/shared/node-entry.js:25569:40)
at <anonymous> (https://zp1v56uxy8rdx5ypatb0ockcb9tr6a-oci3.w-credentialless-staticblitz.com/blitz.f565b097.js:40:22667)
~/project 5s
❯ fatal error: too many writes on closed pipe
goroutine 6 [running]:
runtime.throw({0x9deaa, 0x1e})
runtime/panic.go:1047 +0x3 fp=0x83aed8 sp=0x83aeb0 pc=0x12250003
os.sigpipe()
runtime/os_js.go:144 +0x2 fp=0x83aef0 sp=0x83aed8 pc=0x13b70002
os.epipecheck(...)
os/file_unix.go:224
os.(*File).Write(0x80c020, {0x191e000, 0x1bf7e, 0x1c000})
os/file.go:183 +0x2d fp=0x83af78 sp=0x83aef0 pc=0x1607002d
main.runService.func1()
github.com/evanw/esbuild/cmd/esbuild/service.go:99 +0x7 fp=0x83afe0 sp=0x83af78 pc=0x1f630007
runtime.goexit()
runtime/asm_wasm.s:399 +0x1 fp=0x83afe8 sp=0x83afe0 pc=0x14070001
created by main.runService
github.com/evanw/esbuild/cmd/esbuild/service.go:97 +0x1e
goroutine 1 [chan receive]:
runtime.gopark(0xb66b0, 0xaecad8, 0xe, 0x17, 0x2)
runtime/proc.go:381 +0x28 fp=0x85f918 sp=0x85f8f0 pc=0x124c0028
runtime.chanrecv(0xaeca80, 0x85fa48, 0x1)
runtime/chan.go:583 +0x7f fp=0x85f9a0 sp=0x85f918 pc=0x106d007f
runtime.chanrecv1(0xaeca80, 0x85fa48)
runtime/chan.go:442 +0x2 fp=0x85f9c8 sp=0x85f9a0 pc=0x106b0002
syscall.fsCall({0x8fb39, 0x4}, {0x85fb30, 0x5, 0x5})
syscall/fs_js.go:520 +0x13 fp=0x85fa98 sp=0x85f9c8 pc=0x159f0013
syscall.Read(0x0, {0x8be000, 0x4000, 0x4000})
syscall/fs_js.go:388 +0xb fp=0x85fb88 sp=0x85fa98 pc=0x159b000b
internal/poll.ignoringEINTRIO(...)
internal/poll/fd_unix.go:794
internal/poll.(*FD).Read(0x830060, {0x8be000, 0x4000, 0x4000})
internal/poll/fd_unix.go:163 +0x57 fp=0x85fc20 sp=0x85fb88 pc=0x15ee0057
os.(*File).read(...)
os/file_posix.go:31
os.(*File).Read(0x80c018, {0x8be000, 0x4000, 0x4000})
os/file.go:118 +0x12 fp=0x85fc98 sp=0x85fc20 pc=0x16050012
main.runService(0x1)
github.com/evanw/esbuild/cmd/esbuild/service.go:134 +0x46 fp=0x85fde0 sp=0x85fc98 pc=0x1f600046
main.main()
github.com/evanw/esbuild/cmd/esbuild/main.go:240 +0x9e fp=0x85ff88 sp=0x85fde0 pc=0x1f59009e
runtime.main()
runtime/proc.go:250 +0x32 fp=0x85ffe0 sp=0x85ff88 pc=0x12460032
runtime.goexit()
runtime/asm_wasm.s:399 +0x1 fp=0x85ffe8 sp=0x85ffe0 pc=0x14070001
goroutine 2 [force gc (idle)]:
runtime.gopark(0xb6848, 0x3c65d0, 0x11, 0x14, 0x1)
runtime/proc.go:381 +0x28 fp=0x828fb8 sp=0x828f90 pc=0x124c0028
runtime.goparkunlock(...)
runtime/proc.go:387
runtime.forcegchelper()
runtime/proc.go:305 +0x1f fp=0x828fe0 sp=0x828fb8 pc=0x1249001f
runtime.goexit()
runtime/asm_wasm.s:399 +0x1 fp=0x828fe8 sp=0x828fe0 pc=0x14070001
created by runtime.init.5
runtime/proc.go:293 +0x2
goroutine 3 [GC sweep wait]:
runtime.gopark(0xb6848, 0x3c6960, 0xc, 0x14, 0x1)
runtime/proc.go:381 +0x28 fp=0x829798 sp=0x829770 pc=0x124c0028
runtime.goparkunlock(...)
runtime/proc.go:387
runtime.bgsweep(0x82e000)
runtime/mgcsweep.go:319 +0x21 fp=0x8297d0 sp=0x829798 pc=0x11790021
runtime.gcenable.func1()
runtime/mgc.go:178 +0x2 fp=0x8297e0 sp=0x8297d0 pc=0x110d0002
runtime.goexit()
runtime/asm_wasm.s:399 +0x1 fp=0x8297e8 sp=0x8297e0 pc=0x14070001
created by runtime.gcenable
runtime/mgc.go:178 +0x8
goroutine 4 [GC scavenge wait]:
runtime.gopark(0xb6848, 0x3c6ba0, 0xd, 0x14, 0x2)
Link to the Bolt URL that caused the error
https://bolt.new/~/sb1-hyjmhc
Steps to reproduce
Click Deploy
It will show this error
Expected behavior
App to be deployed in Netlify
Screen Recording / Screenshot
Platform
Browser name = Chrome
Full version = 130.0.0.0
Major version = 130
navigator.appName = Netscape
navigator.userAgent = Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36
performance.memory = {
"totalJSHeapSize": 92532551,
"usedJSHeapSize": 81800519,
"jsHeapSizeLimit": 4294705152
}
Username = donvito
Chat ID = 61d2cf2f4b99
Additional context
Error:
Failed building the project. Make sure the build command is correct and try again.
I was able to fix the issue by prompting it. I think it still needs to be fixed for end users since they are not aware how to prompt it.
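The log above already names the likely root cause: since Vite 3, terser is an optional dependency, so a project whose Vite config selects terser as the minifier also needs it installed explicitly (for example, `npm install -D terser`), or it should fall back to Vite's default esbuild minifier.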
I am able to deploy to Netlify but not sure how to claim the URL. It doesn't show now in the chat.
Hi @donvito,
Really appreciate the detailed feedback here! We are tracking the build failure issue here: #2019. We are tracking the missing claim URL issue here: #2020.
|
gharchive/issue
| 2024-11-10T11:32:19 |
2025-04-01T04:35:57.105783
|
{
"authors": [
"donvito",
"endocytosis"
],
"repo": "stackblitz/bolt.new",
"url": "https://github.com/stackblitz/bolt.new/issues/1906",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2113354016
|
Add Background Reconciliation for Entities
Please describe the enhancement
Minder relies purely on event-based processing triggered by webhooks/user actions. Apart from edge-based triggering, Minder should also reconcile the entities in the background. Reconcilers under the scope of this issue are:
Continuously update the evaluation status for entities against registered rules.
Continuously evaluate the state of the webhook created by the minder on GitHub. (As the user may accidentally delete it)
These reconcilers are required to ensure all existing rule evaluations in minder are updated according to any newly added logic. See #1609 for example.
Solution Proposal
Continuously running the reconcilers is not feasible due to resource constraints and the provider's API limits. Reconcilers would run based on a cron job with a small enough batch size not to hit rate-limiting errors for any provider.
Describe alternatives you've considered
No response
Additional context
For more details see: https://discord.com/channels/1184987096302239844/1201083912521252885
Acceptance Criteria
No response
/assign
Hi @Vyom-Yadav I was thinking about this feature some time ago (though I haven't gotten far, thank you very much for picking it up) here are my notes from that time. Feel free to take anything you like from them or not if otherwise :-). A lot of it is very hand-wavy, as I said, I didn't get too deep.
I was thinking about having a separate process, a singleton as opposed to be a part of the potentially replicated minder instances. The idea behind this was that presumably we'd use a queue of entities to be updated and having a singleton writing to the queue just seemed simpler.
this new component (please name it reminder :-)) would periodically (say once an hour?) traverse the database, picking up projects that contain entities that haven't been updated in $TIME_PERIOD.
each project would get a time slot and could update up to N entities from the project within that time slot. Individual updates within that interval would be spaced apart with jitter to avoid spikes (a sketch of this loop follows below). An entry to update an entity would be written to a work queue. We use watermill with an SQL back end; without thinking about the details too deeply, I was wondering if we could make this reminder a topic publisher and the minder workers subscribers to the periodic update topics.
the minder processes being subscribers to the periodical update table would then issue an ExecuteEntityEventTopic
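To make the time-slot idea concrete, here is a minimal sketch of the jittered enqueue loop described above. All of the names (Project, Publisher, the topic string) are hypothetical stand-ins rather than Minder's actual API:

```go
package reminder

import (
	"math/rand"
	"time"
)

// Project is a hypothetical stand-in for a Minder project whose
// entities are overdue for a revisit.
type Project struct {
	ID       string
	Entities []string // entity IDs not updated within the revisit period
}

// Publisher abstracts whatever queue backs the periodic-update topic.
type Publisher interface {
	Publish(topic string, payload []byte) error
}

// enqueueProject publishes at most maxBatch update events for one
// project, spacing them across the project's time slot with jitter so
// the subscribers never see a spike of simultaneous deliveries.
func enqueueProject(pub Publisher, p Project, slot time.Duration, maxBatch int) error {
	n := len(p.Entities)
	if n > maxBatch {
		n = maxBatch
	}
	if n == 0 {
		return nil
	}
	spacing := slot / time.Duration(n)
	for _, entityID := range p.Entities[:n] {
		// Wait a random fraction of the per-entity spacing: the jitter.
		time.Sleep(time.Duration(rand.Int63n(int64(spacing) + 1)))
		if err := pub.Publish("periodic.entity.update", []byte(p.ID+"/"+entityID)); err != nil {
			return err
		}
	}
	return nil
}
```

In a real deployment the publisher would presumably be the existing watermill router, and the entity IDs would come from the query that scans for stale rows.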
Good points, @jhrozek ! I'm also thinking about having this as a separate process. Based on the number of replicas, we can process an entity repeatedly, leading to rate-limiting errors.
The Singleton Client Binary would have the logic for scheduling reconciliation events. The approach I had in my mind is:
Pick entities from different owners/tokens so we can process multiple entities with larger batch sizes and not hit any rate-limiting errors.
Evan suggested running the reconciler once in 24 hours, but all those parameters would be configurable so that we can tune it accordingly.
Send a request to the server using RPCs. The server would then internally process those events. It would be similar to receiving webhook events; this time, it would be from the reminder process.
To scale up, we may shard the database so that each reconciler is attached to a unique set of data.
Good points, @jhrozek ! I'm also thinking about having this as a separate process. Based on the number of replicas, we can process an entity repeatedly, leading to rate-limiting errors.
The Singleton Client Binary would have the logic for scheduling reconciliation events. The approach I had in my mind is:
Pick entities from different owners/tokens so we can process multiple entities with larger batch sizes and not hit any rate-limiting errors.
Yes, this is what I tried to say with "a project", but yes, your description is more precise.
Evan suggested running the reconciler once in 24 hours, but all those parameters would be configurable so that we can tune it accordingly.
I think it's hard to know how often to run the reconciler without some more experiments and testing and I strongly agree with making it configurable. At the very least for testing I would have used something in the matter of seconds to see the reconciler triggered repeatedly.
Send a request to the server using RPCs. The server would then internally process those events. It would be similar to receiving webhook events; this time, it would be from the reminder process.
Interesting idea, I like that it's decoupled from the watermill implementation we have now.
To scale up, we may shard the database so that each reconciler is attached to a unique set of data.
I have literally 0 experience with database sharding so I don't feel qualified to answer. I know Evan does have a lot, maybe he'd have an opinion?
cc @evankanderson, what do you think about the above-mentioned approach (having reconciler as a separate process)?
FWIW, I think having a separate process for the reconciliation trigger makes sense and it's ideal. I'm not super sure about doing it with RPCs and would prefer events instead, but I don't have a strong opinion about this that would block it.
doing it with RPCs and would prefer events instead
It would be done using events only. RPCs would trigger events on the server. Our new process would invoke those RPCs.
I agree that this is probably worth a quick design. Trying to hit the design points:
Using a singleton to avoid needing coordination / locking across multiple Minder processes. This should work for now. It adds a small amount of deployment complexity, but that seems worth it. You may also want to consider having the singleton support a "one-shot" mode to address @jhrozek 's comment about wanting "second-level revisits" in development. You probably don't actually want constant revisits at the second level, but you want them on demand.
I think we'll need to be nimble on the pacing. I'd start with something simple, but we may need to spread out on some of the axes @Vyom-Yadav mentions. Right now, we're looking at scales of maybe 10k repositories, which is roughly one revisit every 8.6 seconds (86,400 seconds in a day spread over 10,000 repositories) at a 24-hour revisit interval. This will change if Minder takes off, of course.
My preference would be to use events on an externally-configured queue (i.e. rather than the in-binary configuration we have today with Watermill). Using an external queue allows us to "freeze" the contract between the two components so each can evolve independently. At the same time, that would be a major undertaking that doesn't provide much direct value, so I think the short-term answer is to have a "kick" RPC that triggers the Minder server to enqueue an event similar to the current webhook events. It's not clear that the event respondent needs to know whether they received a webhook or a reminder kick -- I'd aim to keep it that way; the reconciliation logic should be idempotent.
Oh, and the first point of scaling if we needed it would be a read replica, which should be able to scale to 2M repos under management at a scan rate of less than 300 rows/s. If we're reading in an index-aligned way, that should be trivial, and even if not, it should be pretty easy unless we do some sort of order by/limit/offset hijinks.
My preference would be to use events on an externally-configured queue (i.e. rather than the in-binary configuration we have today with Watermill). Using an external queue allows us to "freeze" the contract between the two components so each can evolve independently.
Which message queue would you suggest? I know you're keen on using Amazon SQS, still what would you suggest for the current use case?
so I think the short-term answer is to have a "kick" RPC that triggers the Minder server to enqueue an event similar to the current webhook events.
Now that you've mentioned using message queues, let's implement that only.
My preference would be to use events on an externally-configured queue (i.e. rather than the in-binary configuration we have today with Watermill). Using an external queue allows us to "freeze" the contract between the two components so each can evolve independently.
Which message queue would you suggest? I know you're keen on using Amazon SQS, still what would you suggest for the current use case?
(full disclosure: I don't have much working experience with message queues, so this is more of a gut feeling than anything backed by hard data)
I would prefer not to tie Minder to a specific message queue implementation. Wasn't an abstraction a reason why we chose Watermill back in the day in the first place? IIRC Watermill also has SQS and Jetstream back ends.
so I think the short-term answer is to have a "kick" RPC that triggers the Minder server to enqueue an event similar to the current webhook events.
Now that you've mentioned using message queues, let's implement that only.
One reason why a message queue might be nicer than an RPC call in this respect is that you might get a notion of when a task in a queue was finished, as opposed to merely claimed by a worker that might disappear before the task had finished. I would hope that for this case it might not have a practical effect, as the next "tick" would just reinsert the same refresh, but still.
I would prefer not to tie Minder to a specific message queue implementation. Wasn't an abstraction a reason why we chose Watermill back in the day in the first place? IIRC Watermill also has SQS and Jetstream back ends.
Watermill is a wrapper over the queue SDKs (With additional features). We would still use Watermill, but we have to decide whether we want to use Kafka, NATS, RabbitMQ, etc. Each one of them has a unique set of features, so we should decide what's best for the use case. Honestly, I also haven't worked much with message queues, so I'm unsure which one would be the best suited.
One reason where a message queue might be nicer in this respect over an RPC call is that you might get a notion of when a task in a queue was finished as opposed to claimed by a worker who might disappear before the task had finished. I would hope that for this case it might not have a practical effect as the next "tick" would just reinsert the same refresh, but still..
Agreed, using queues would be significantly better.
I would prefer not to tie Minder to a specific message queue implementation. Wasn't an abstraction a reason why we chose Watermill back in the day in the first place? IIRC Watermill also have an SQS and Jetstream back end..
Watermill is a wrapper over the queue SDKs (With additional features). We would still use Watermill, but we have to decide
Ah, then we just didn't understand each other, I read your earlier comment as wanting to go directly for the message queue.
My preference would be to use events on an externally-configured queue (i.e. rather than the in-binary configuration we have today with Watermill). Using an external queue allows us to "freeze" the contract between the two components so each can evolve independently.
@evankanderson Which message queue would you suggest? I know you're keen on using Amazon SQS, still what would you suggest for the current use case? (How about NATS?)
Sorry, missed this question -- I would suggest for now using Watermill, and making the "extract message queue" a separate epic which would apply to all our existing message queueing in Minder.
I'm partial to tools like SQS, EventBridge, or something like Google's Cloud Tasks which manage delivery as a weakly ordered set of deliveries, rather than a strongly-ordered log like Kafka. However, I think that's a separate "plumbing" task, so let's not conflate the two.
so I think the short-term answer is to have a "kick" RPC that triggers the Minder server to enqueue an event similar to the current webhook events.
Now that you've mentioned using message queues, let's implement that only.
Please don't! Feel free to put the RPC-to-Minder behind an interface that we could replace with a one-way delivery mechanism, but switching to an external message queueing system would be a major surgery that should be separate from the revisit mechanism.
(Basically, we'd be taking the vertical stack from the Watermill router diagram and externalizing it into configuration managed by a tool like Terraform)
To explain the externalized config further, the Watermill Router concept support a simple "named topic" fan-out, but SNS, EventBridge, and Google PubSub support filtering messages based on message attributes (e.g. attributes.tier=0 and attributes.type = "repo" in Google-syntax, or {"tier": [{"numeric": ["=", 0]}], "type": ["repo"]} in AWS-syntax).
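In watermill terms, those attributes would presumably travel as message metadata, which the AWS/GCP bridges can surface as filterable attributes; the attribute names below are illustrative, not an agreed-upon Minder schema:

```go
package events

import (
	"github.com/ThreeDotsLabs/watermill"
	"github.com/ThreeDotsLabs/watermill/message"
)

// newEntityMsg stamps filter-friendly attributes onto a message so an
// external broker (SNS, EventBridge, Google Pub/Sub) could route it
// without inspecting the payload.
func newEntityMsg(payload []byte, tier, entityType string) *message.Message {
	msg := message.NewMessage(watermill.NewUUID(), payload)
	msg.Metadata.Set("tier", tier)
	msg.Metadata.Set("type", entityType)
	return msg
}
```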
Using one of the above providers would also allow us to absorb some of the existing middlewares -- some of the telemetry and the Poison / Retry management would move into the external service.
This is also a pretty big commitment that requires a design document and consideration of tradeoffs (cost, scale, difficulty-of-use, credentials), and we probably will end up putting it behind an interface to allow some agility later.
Using one of the above providers would also allow us to absorb some of the existing middlewares -- some of the telemetry and the Poison / Retry management would move into the external service.
That's exactly why I wanted to do it with queues.
Please don't! Feel free to put the RPC-to-Minder behind an interface that we could replace with a one-way delivery mechanism, but switching to an external message queueing system would be a major surgery that should be separate from the revisit mechanism.
Makes sense. Let's keep that as a separate issue. Later we can replace the RPC call with a message publish. Thanks for the detailed explanation.
One reason why a message queue might be nicer than an RPC call in this respect is that you might get a notion of when a task in a queue was finished, as opposed to merely claimed by a worker that might disappear before the task had finished. I would hope that for this case it might not have a practical effect, as the next "tick" would just reinsert the same refresh, but still.
Agreed, using queues would be significantly better.
I was suggesting the following pseudocode:
func (s *Server) MinderRevisit(ctx context.Context, req *RevisitRequest) (*EmptyResp, error) {
    // Convert the revisit request into a message and publish it synchronously;
    // the returned error tells the caller whether the queue accepted the task.
    msg := ConvertRevisitToMsg(req)
    err := s.Router.Publish(msg)
    return &EmptyResp{}, err
}
With the pattern of synchronously publishing the event and then returning the error from the publish, you shouldn't have to worry about the task being lost. In general a message queue will not give you a notice when the message has been delivered -- once the queue takes over, the delivery to all destinations is eventually guaranteed, but you don't get to know when it happened (it's asynchronous and may be plural -- one publish may result in 3 subscribers getting the message, so "done" is ambiguous in those circumstances).
Yeah, we will use something like that only, with some authn/z mechanism that works for both the reminder process and users trying to do one shot op.
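For completeness, the reminder side of that sketch might look roughly like the following; RevisitClient and RevisitRequest are hypothetical stand-ins for whatever RPC surface Minder ends up exposing:

```go
package reminder

import (
	"context"
	"log"
)

// Hypothetical stand-ins for the generated RPC client types.
type RevisitRequest struct{ EntityID string }
type EmptyResp struct{}

type RevisitClient interface {
	MinderRevisit(ctx context.Context, req *RevisitRequest) (*EmptyResp, error)
}

// kickDueEntities asks the Minder server to re-evaluate each due entity.
// A failed kick is only logged: because revisits are idempotent and the
// next scheduler tick re-enqueues anything still stale, a transient
// failure here is safe to drop.
func kickDueEntities(ctx context.Context, c RevisitClient, due []string) {
	for _, id := range due {
		if _, err := c.MinderRevisit(ctx, &RevisitRequest{EntityID: id}); err != nil {
			log.Printf("revisit kick failed for %s: %v", id, err)
		}
	}
}
```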
|
gharchive/issue
| 2024-02-01T19:54:16 |
2025-04-01T04:35:57.136834
|
{
"authors": [
"JAORMX",
"Vyom-Yadav",
"evankanderson",
"jhrozek"
],
"repo": "stacklok/minder",
"url": "https://github.com/stacklok/minder/issues/2262",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2187789599
|
chore: add expiry label to images
See https://github.com/stackrox/stackrox/pull/10385 for Scanner V4
CI failure is unrelated
|
gharchive/pull-request
| 2024-03-15T06:04:45 |
2025-04-01T04:35:57.139051
|
{
"authors": [
"RTann"
],
"repo": "stackrox/scanner",
"url": "https://github.com/stackrox/scanner/pull/1446",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2148418984
|
chore(collector): Bumps collector version to 3.18.0
Description
A detailed explanation of the changes in your PR.
Feel free to remove this section if it is overkill for your PR, and the title of your PR is sufficiently descriptive.
Checklist
[ ] Investigated and inspected CI test results
[ ] Unit test and regression tests added
[ ] Evaluated and added CHANGELOG entry if required
[ ] Determined and documented upgrade steps
[ ] Documented user facing changes (create PR based on openshift/openshift-docs and merge into rhacs-docs)
If any of these don't apply, please comment below.
Testing Performed
Here I tell how I validated my change
TODO(replace-me)
Use this space to explain how you validated that your change functions exactly how you expect it.
Feel free to attach JSON snippets, curl commands, screenshots, etc. Apply a simple benchmark: would the information you
provided convince any reviewer or any external reader that you did enough to validate your change.
It is acceptable to assume trust and keep this section light, e.g. as a bullet-point list.
It is acceptable to skip testing in cases when CI is sufficient, or it's a markdown or code comment change only.
It is also acceptable to skip testing for changes that are too taxing to test before merging. In such case you are
responsible for the change after it gets merged which includes reverting, fixing, etc. Make sure you validate the change
ASAP after it gets merged or explain in PR when the validation will be performed.
Explain here why you skipped testing in case you did so.
Have you created automated tests for your change? Explain here which validation activities you did manually and why so.
Reminder for reviewers
In addition to reviewing code here, reviewers must also review testing and request further testing in case the
performed one does not seem sufficient. As a reviewer, you must not approve the change until you understand the
performed testing and you are satisfied with it.
/retest
/retest
/retest
/retest
|
gharchive/pull-request
| 2024-02-22T07:39:21 |
2025-04-01T04:35:57.145496
|
{
"authors": [
"Stringy"
],
"repo": "stackrox/stackrox",
"url": "https://github.com/stackrox/stackrox/pull/10060",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2264010352
|
fix: ensure resource syncs trigger deployment enforcement
Description
A detailed explanation of the changes in your PR.
Feel free to remove this section if it is overkill for your PR, and the title of your PR is sufficiently descriptive.
Checklist
[ ] Investigated and inspected CI test results
[ ] Unit test and regression tests added
[ ] Evaluated and added CHANGELOG entry if required
[ ] Determined and documented upgrade steps
[ ] Documented user facing changes (create PR based on openshift/openshift-docs and merge into rhacs-docs)
If any of these don't apply, please comment below.
Testing Performed
Here I tell how I validated my change
TODO(replace-me)
Use this space to explain how you validated that your change functions exactly how you expect it.
Feel free to attach JSON snippets, curl commands, screenshots, etc. Apply a simple benchmark: would the information you
provided convince any reviewer or any external reader that you did enough to validate your change.
It is acceptable to assume trust and keep this section light, e.g. as a bullet-point list.
It is acceptable to skip testing in cases when CI is sufficient, or it's a markdown or code comment change only.
It is also acceptable to skip testing for changes that are too taxing to test before merging. In such case you are
responsible for the change after it gets merged which includes reverting, fixing, etc. Make sure you validate the change
ASAP after it gets merged or explain in PR when the validation will be performed.
Explain here why you skipped testing in case you did so.
Have you created automated tests for your change? Explain here which validation activities you did manually and why so.
Reminder for reviewers
In addition to reviewing code here, reviewers must also review testing and request further testing in case the
performed one does not seem sufficient. As a reviewer, you must not approve the change until you understand the
performed testing and you are satisfied with it.
#10886 👈
master
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @dhaus67 and the rest of your teammates on Graphite
|
gharchive/pull-request
| 2024-04-25T16:24:27 |
2025-04-01T04:35:57.153550
|
{
"authors": [
"dhaus67"
],
"repo": "stackrox/stackrox",
"url": "https://github.com/stackrox/stackrox/pull/10886",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1972927508
|
chore(testing): New scaffolding for Compliance 2.0 ui-e2e-tests
Description
This adds the scaffolding, and the most basic assertions, for ui-e2e-tests for the new Compliance 2.0 section. Only the landing page, /main/compliance-enhanced/status, is tested here.
In writing the test, I noticed that we were not setting a page title on the tab when visiting this page. In order to write this basic test, I fixed that oversight.
Caveats:
The data on the status page is static for now, because the API and Compliance Operator integration are still in development.
Because there can be no API calls yet, the route matching/mock data is not included in this PR.
Checklist
[ ] Investigated and inspected CI test results
[x] Unit test and regression tests added
Testing Performed
Will observe results of this test in CI
Ran the test locally, to see it passing.
Reminder for reviewers
In addition to reviewing code here, reviewers must also review testing and request further testing in case the
performed one does not seem sufficient. As a reviewer, you must not approve the change until you understand the
performed testing and you are satisfied with it.
/test gke-nongroovy-e2e-tests
/test gke-ui-e2e-tests gke-qa-e2e-tests ocp-4-13-ui-e2e-tests ocp-4-10-ui-e2e-tests
|
gharchive/pull-request
| 2023-11-01T18:58:24 |
2025-04-01T04:35:57.158008
|
{
"authors": [
"vjwilson"
],
"repo": "stackrox/stackrox",
"url": "https://github.com/stackrox/stackrox/pull/8461",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2050682292
|
feat(go): Bump go to 1.21.5
Description
A detailed explanation of the changes in your PR.
Feel free to remove this section if it is overkill for your PR, and the title of your PR is sufficiently descriptive.
Checklist
[ ] Investigated and inspected CI test results
[ ] Unit test and regression tests added
[ ] Evaluated and added CHANGELOG entry if required
[ ] Determined and documented upgrade steps
[ ] Documented user facing changes (create PR based on openshift/openshift-docs and merge into rhacs-docs)
If any of these don't apply, please comment below.
Testing Performed
Here I tell how I validated my change
TODO(replace-me)
Use this space to explain how you validated that your change functions exactly how you expect it.
Feel free to attach JSON snippets, curl commands, screenshots, etc. Apply a simple benchmark: would the information you
provided convince any reviewer or any external reader that you did enough to validate your change.
It is acceptable to assume trust and keep this section light, e.g. as a bullet-point list.
It is acceptable to skip testing in cases when CI is sufficient, or it's a markdown or code comment change only.
It is also acceptable to skip testing for changes that are too taxing to test before merging. In such case you are
responsible for the change after it gets merged which includes reverting, fixing, etc. Make sure you validate the change
ASAP after it gets merged or explain in PR when the validation will be performed.
Explain here why you skipped testing in case you did so.
Have you created automated tests for your change? Explain here which validation activities you did manually and why so.
Reminder for reviewers
In addition to reviewing code here, reviewers must also review testing and request further testing in case the
performed one does not seem sufficient. As a reviewer, you must not approve the change until you understand the
performed testing and you are satisfied with it.
/retest
Probably obsoleted by https://github.com/stackrox/stackrox/pull/10305
|
gharchive/pull-request
| 2023-12-20T14:41:02 |
2025-04-01T04:35:57.164544
|
{
"authors": [
"SimonBaeumer",
"porridge"
],
"repo": "stackrox/stackrox",
"url": "https://github.com/stackrox/stackrox/pull/9129",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
145369407
|
feat: auth saga test
test for auth/sagas - authorize
covered: authorize, logout saga
missing: authFlow saga
|
gharchive/pull-request
| 2016-04-02T12:41:15 |
2025-04-01T04:35:57.177650
|
{
"authors": [
"michalvlcek"
],
"repo": "stackscz/re-app",
"url": "https://github.com/stackscz/re-app/pull/25",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
235453491
|
Add tests
Now that we have a working architecture, we should add tests.
Basic py.test infrastructure was added in ccc3e73f400059fd1e8cfb981c342281834ad7e3 and Travis CI integration was added in 145866432a7d14096f7be3a9de736dfb8334c489.
Test coverage is still very low, so I'm keeping this issue open until it has been significantly increased.
(cc @CatoTH)
There is now enough test coverage to close this issue.
|
gharchive/issue
| 2017-06-13T06:55:37 |
2025-04-01T04:35:57.180288
|
{
"authors": [
"torfsen"
],
"repo": "stadt-karlsruhe/geoextract",
"url": "https://github.com/stadt-karlsruhe/geoextract/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2034859826
|
🛑 Testnet Explorer is down
In cd05cf4, Testnet Explorer (http://testnet.explorer.stakr.space) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Testnet Explorer is back up in b389b2e after 9 minutes.
|
gharchive/issue
| 2023-12-11T05:28:08 |
2025-04-01T04:35:57.191969
|
{
"authors": [
"stakrspace"
],
"repo": "stakrspace/upptime",
"url": "https://github.com/stakrspace/upptime/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1117428734
|
🛑 Proxima/DNS Server is down
In dcda890, Proxima/DNS Server (http://proxima01.its-telekom.eu/livewatch.php) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Proxima/DNS Server is back up in 871ff78.
|
gharchive/issue
| 2022-01-28T13:49:39 |
2025-04-01T04:35:57.211720
|
{
"authors": [
"stamateas"
],
"repo": "stamateas/upptime",
"url": "https://github.com/stamateas/upptime/issues/3204",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|