Dataset schema:
- id: string (lengths 4 to 10)
- text: string (lengths 4 to 2.14M)
- source: string (2 classes)
- created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
1243808548
Adds Plastitanium, Plastitanium Glass, Titanium, and Titanium Glass, as well as Titanium and Plastitanium Glass shards

About the PR: I was looking around and stumbled upon sprites for Plastitanium and Titanium stuff. I decided there would be no harm in adding 'em. Right now they're sheets used for nothing, but I do intend to add stuff for them to be used on in the future.

:cl: add: Adds Plastitanium Sheets, Plastitanium Glass Sheets, Titanium Sheets, Titanium Glass Sheets, as well as Titanium and Plastitanium Glass shards.

We don't currently accept prototypes that are not actually used for anything, sorry. If you added the walls themselves so mappers can make the syndie stuff, it'd probably get merged.

I'm not sure of the current maintainer stance on new resources, so I can't comment too much on adding more. What I will say is that adding new prototypes with no use and no in-game way of acquiring them almost certainly qualifies as bloat content. If anything, the sprites should be removed until it's decided whether or not they will be used. The vague promise of "I'll use them soon" doesn't seem to justify their inclusion.

In general, are they used for anything unique? Or is it just shuttle walls and floors, etc.?

To add to the others: a use for shards is the construction of spears; otherwise, construction graphs for making shuttle walls or plastitanium walls will do it. Tests are also failing because you need to create a stack prototype for each new material here: /Prototypes/Stacks/Materials/
gharchive/pull-request
2022-05-21T00:35:39
2025-04-01T06:40:26.460580
{ "authors": [ "Ablankmann", "EmoGarbage404", "Peptide90", "Zumorica", "metalgearsloth" ], "repo": "space-wizards/space-station-14", "url": "https://github.com/space-wizards/space-station-14/pull/8314", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1709259660
recover from a checkpoint file

Motivation: part of #4090

Changes:
- Add a command line arg --checkpoint-file to specify the URI of the checkpoint file. The file can come from a URL (https://) or a local file (file://); the node will download/copy the file to local disk and execute the recovery logic.
- Add a command line arg --restore-layer to specify at which layer to restart the mesh.
- The recovery logic includes the following steps (see the sketch after this list):
  - read the recovery file
  - check if the node's most recent ATX is in the checkpoint file; if not, copy its latest ATX and dependencies (prev/positioning ATX, and the poet proof) from the old db
  - back up the old database files
  - create a new database file; save the checkpoint ATXs, the accounts, and the data for the node's most recent ATX, if any
  - set the effective genesis layer to (restore layer - 1)
  - resume the app init logic
- The sqlite recovery table only allows 1 entry, and records the restore layer.
- ATX re-gossip: when ATXs are not included in the checkpoint file, the network will re-gossip these ATXs by syncing them from peers as soon as a node recovers the checkpoint data. This is achieved by the syncer requesting current-epoch ATXs immediately after recovery.
- Gossip handlers reject data (proposals/ballots/blocks) from before the restore layer.
- Debug API service: add querying of account states at a specific layer.
- Admin recover RPC: the only way (that I can figure out) to preserve PoST data is to make the node commit suicide (log fatal) after receiving the recover RPC.
- The checkpoint systest does the following:
  - submit transactions to spawn an account and transfer coins
  - get the accounts' nonce/balance
  - issue the checkpoint RPC to every node and check that all the returned checkpoint data are the same
  - issue the recovery RPC to every node (the node will copy the local checkpoint file to $dataDir/recovery/) and restart
  - check that all accounts have the same nonce/balance as before the checkpoint
  - run testSmeshing(), which checks that the miner nodes are generating proposals
  - make the bootstrapper serve the checkpoint data, add two new nodes with --checkpoint=http://bootstrapper-0:80/checkpoint, and run testSmeshing on these two nodes

closed, as the changes were merged in 3 separate PRs
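For readability, here is an illustrative Python rendering of the recovery sequence described above. All names (the checkpoint and db methods) are invented for this sketch; the actual implementation is in Go.

```python
# Illustrative sketch only: invented names, not the go-spacemesh API.
def recover_from_checkpoint(checkpoint, restore_layer, db):
    """Apply an already-downloaded checkpoint to a node database."""
    preserved = None
    if db.latest_atx_id() not in checkpoint.atx_ids():
        # Keep the node's own chain: latest ATX plus its dependencies
        # (prev/positioning ATX and the poet proof).
        preserved = db.latest_atx_with_dependencies()
    db.backup()                      # back up the old database files
    fresh = db.recreate()            # brand-new database file
    fresh.save_all(checkpoint.atxs, checkpoint.accounts)
    if preserved is not None:
        fresh.save_all(preserved)
    fresh.set_effective_genesis(restore_layer - 1)
    return fresh                     # app init logic resumes from here
```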
gharchive/pull-request
2023-05-15T03:40:29
2025-04-01T06:40:26.488582
{ "authors": [ "countvonzero" ], "repo": "spacemeshos/go-spacemesh", "url": "https://github.com/spacemeshos/go-spacemesh/pull/4387", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1979561290
Update CI to use merge queues instead of bors

Motivation: Bors, it seems, has finally been discontinued. This updates our CI to work with GitHub Merge Queues in a similar fashion as before, when we were using bors:
- PRs immediately trigger the jobs in ci.yml, and adding to a merge queue is only allowed if all jobs in this file pass.
- While in the merge queue, jobs in both ci.yml and systest.yml are executed, and only if all of them pass is the code merged to the target branch.
- Manual execution of system tests now needs to be done via Actions -> System tests -> Run Workflow on the GH homepage.

NOTE: merging this requires updating the branch protection rules for develop - see the test-merge-queue branch protection rules as a reference.

Changes:
- A dummy systest workflow is added that is only executed on the PR trigger and always passes. This is needed because systest-status needs to be a requirement both for triggering the merge and for successfully completing it.
- Removed bors configuration from ci and systest jobs.

Test Plan: n/a

TODO [x] Explain motivation or link existing issue(s) [x] Test changes and document test plan [x] Update documentation as needed [x] Update changelog as needed

Is it possible to run a manual workflow from a fork? When you add several changes to a queue, do we run systests once, or once per item in the queue?

With GH merge groups the system tests would be executed once per group, so if multiple PRs are pending for a merge they would trigger one system test run (assuming there aren't any conflicts when merging all of them at once).

Closed for now in favour of a self-hosted bors-ng instance.
gharchive/pull-request
2023-11-06T16:30:23
2025-04-01T06:40:26.494496
{ "authors": [ "dshulyak", "fasmat" ], "repo": "spacemeshos/go-spacemesh", "url": "https://github.com/spacemeshos/go-spacemesh/pull/5228", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
121945488
BUG: fix graph file name generation for default requirement version

The meaning of a requirement equal to "" vs. null was changed in b9bdf379e5, but the JavaScript code was not updated, so graphs won't load. Sync the JavaScript side with Python. Extend the web tests to run with null and "" requirements, to cover this.

Going to merge soon, as master is broken without this...
gharchive/pull-request
2015-12-13T21:35:25
2025-04-01T06:40:26.500773
{ "authors": [ "pv" ], "repo": "spacetelescope/asv", "url": "https://github.com/spacetelescope/asv/pull/354", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
528823185
v0.3.1

Fixes PyPI upload.

Coverage remained the same at 70.48% when pulling 23d6a870926ed9ba003c35406f8592989db8d364 on hover2pi:master into 080c248d417f73e5a529554918b7ece0b014151d on spacetelescope:master.
gharchive/pull-request
2019-11-26T16:06:29
2025-04-01T06:40:26.502668
{ "authors": [ "coveralls", "hover2pi" ], "repo": "spacetelescope/awesimsoss", "url": "https://github.com/spacetelescope/awesimsoss/pull/51", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
141262836
NER differences

Could someone explain to me the difference between doc.ent_type_ and doc.ents[0].label_? Is it the same thing accessed in a different way? Thanks!

The .label attribute is available on Span objects; ent_type is available on Token objects. So yes, the values should match within an entity.
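A short runnable illustration of the two accessors (the model and sentence are arbitrary examples, not from the thread):

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # any pipeline with an NER component
doc = nlp("Apple is looking at buying a U.K. startup.")

entity = doc.ents[0]                 # Span covering the whole entity
print(entity.text, entity.label_)    # e.g. "Apple ORG"

token = doc[0]                       # first Token, inside that entity
print(token.text, token.ent_type_)   # same label, read at the token level
```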
gharchive/issue
2016-03-16T12:47:33
2025-04-01T06:40:26.570131
{ "authors": [ "jesuisnicolasdavid", "syllog1sm" ], "repo": "spacy-io/spaCy", "url": "https://github.com/spacy-io/spaCy/issues/293", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2450334985
[Bug] If the room name is too long, it overlaps the X button

Summary: It closes #801. As shown in the screenshot, right padding equal to the size of the X button was added to the room-name area.

Images or Screenshots

Further Work: Do something...

LGTM
gharchive/pull-request
2024-08-06T08:59:07
2025-04-01T06:40:26.574607
{ "authors": [ "imYourChoi", "jinhyeonkwon" ], "repo": "sparcs-kaist/taxi-front", "url": "https://github.com/sparcs-kaist/taxi-front/pull/802", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
174610347
"I forgot my password" from app not working I've verified that it works on an iPhone but on Android (Samsung Galaxy S6 Android version 6.0.1) on the Particle app if you click the link "I forgot my password" it seems to open a mobile web page with the message "Looks like you got lost". The menu item from this page doesn't offer a link to retrieve a password and requires a user to know and go to the website to reset a password. I suspect it is an outdated URL/resource that is not auto directing/updated. Seconded @idokleinman and thirded by Chris.
gharchive/issue
2016-09-01T19:51:14
2025-04-01T06:40:26.589332
{ "authors": [ "bloukingfisher", "jenesaisdiq" ], "repo": "spark/photon-tinker-android", "url": "https://github.com/spark/photon-tinker-android/issues/10", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
102640356
Incompatible native module: serialport

Loaded the spark-dev module and got the following in the Incompatible Package window:

Error message: Cannot find module '/Users/leo3/.atom/packages/spark-dev/node_modules/serialport/build/serialport/v1.6.3/Release/atom-shell-v0.22.3-darwin-x64/serialport.node'

The Particle menu is blank (I'm guessing because Spark Dev 0.0.25 stops loading upon error).

OS: Mac OSX 10.10.5 Hardware: Mac Pro (Mid 2012) Atom: 1.0.7

Yeah, this is caused by a bug in apm. You can solve it by following steps 4 and 5 from the Linux instructions.
gharchive/issue
2015-08-23T17:45:23
2025-04-01T06:40:26.591877
{ "authors": [ "leo3linbeck", "suda" ], "repo": "spark/spark-dev", "url": "https://github.com/spark/spark-dev/issues/107", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
53640621
XSL message in an XSLT transform throws a RuntimeError

source = Nokogiri::XML(File.read monograph_filename)
xsl = Nokogiri::XSLT(File.read 'ipm.xsl')
transformed = xsl.transform(source)

The last line throws a runtime error because the xsl file contains a message:

<xsl:otherwise> <xsl:message>Warning: Class not handled: <xsl:value-of select="$vDiv"/></xsl:message> </xsl:otherwise>

So we get the error:

RuntimeError: Warning: Class not handled: date lrvdt

Is there a way for me to suppress the warning so I don't get a runtime error?

@higherpixels - Can you provide a working example of this issue? This would save time in trying to reproduce, as well as saving me the embarrassment of admitting I have no idea how XSLT works! Background for this bug-report policy is here: http://www.nokogiri.org/tutorials/getting_help.html Thanks!

This should recreate the error:

source = Nokogiri::XML(File.read "ipm5547_eng.xml")
xsl = Nokogiri::XSLT(File.read 'ipm.xsl')
transformed = xsl.transform(source)

http://temp-host.com/download.php?file=nx92xy http://temp-host.com/download.php?file=cc54gq

Excellent, thanks so much! I've reproduced it and am looking into it.

OK, so I've done a bit of research, and it appears as though using xsl:message is expected to raise an error of some sort. My sources: https://msdn.microsoft.com/en-us/library/ms256441(v=vs.110).aspx http://www.devguru.com/technologies/xslt/8437 The second reference notably mentions "The xsl:message element is primarily used to report errors by displaying a text message". What is the behavior that you expect in this case?

I'm seeing the same issue. I came across this issue because I was debugging an xsl template and had read a couple of articles that recommended the use of xsl:message to provide helpful feedback in figuring out the order and selection of templates. To address @flavorjones' question in the comment above, based on what I read I would expect the behavior to be displaying the message content on the console, or returning it from the XSLT.transform method somehow. Reference: "The xsl:message element is optional. Processors are not required to support it. However most do, and usually they do so by printing messages on the console."
http://www.ibm.com/developerworks/library/x-tipxslmsg/

I'm affected by this as well, and it would be very useful to have a choice of what to do. Here are two possible ways the API could be changed to support it:

xslt = Nokogiri::XSLT::Stylesheet.parse_stylesheet_doc xslt_file
output = xslt.transform(input) do |message|
  puts message # raise an exception if you want, but if you don't we just keep processing
end

or

xslt = Nokogiri::XSLT::Stylesheet.parse_stylesheet_doc xslt_file
xslt.handle_messages do |message|
  puts message # raise an exception if you want, but if you don't we just keep processing
end
output = xslt.transform(input)

Alternatively, instead of raising exceptions directly, the block could take a second parameter:

xslt = Nokogiri::XSLT::Stylesheet.parse_stylesheet_doc xslt_file
xslt.handle_messages do |message, transform|
  puts message
  transform.continue
end
output = xslt.transform(input)

@frivoal What would you think about adding the error to the Document#errors array, if the xsl:message does not declare terminate="yes"?

@flavorjones Just to be clear on what you're proposing: if it does not declare terminate="yes", you would just continue processing and store the error in Document#errors; if it does declare terminate="yes", abort immediately (and store the message in Document#errors? and print the message on stderr?).

Yes, this would work. Transformations that are supposed to continue would continue, and those that are supposed to stop would stop. I think it would probably be better to put the messages in something like Document#messages (at least when terminate="yes" is not declared), since these are not actually errors, and may not even be warnings at all, but that's less important. This would be a question of API intuitiveness, not of capabilities or correctness. Another small limitation of that design arises when non-terminating messages are meant to give progress reports in a long-running task: just storing them in Document#errors and letting the caller read them at the end defeats the purpose. This is secondary, since the processing would still occur normally, but the messages would be less useful than they could be. That said, your proposal is actually compatible with mine: the default message handler would do what you said, and you could let users supply their own, using something like a handle_message method on XSLT::Stylesheet, which would take a block, pass it the message and terminate as a boolean, and allow it to signal somehow whether it wants processing to continue or stop. I guess that would be ideal, getting the best of both worlds.

A brief investigation of libxslt and its xsltMessage function leads me to believe that it's not currently possible to filter xsl:message messages differently from other parse errors and warnings. @nwellnhof if you have a moment, please let me know if I'm missing something. And if that's the case, then I can update Nokogiri's code to only raise an exception if the parsing failed (i.e., xsltApplyStylesheet returning NULL), and otherwise we can stash all of the warnings and messages in an accessor.

That's correct. libxslt's error handling is extremely limited. It never implemented what libxml2 calls "structured" error handling.

Thank you! I'll schedule some time to make Nokogiri's behavior a bit more robust, then.
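As a point of comparison for the accessor idea (this is a different library, not Nokogiri): Python's lxml, also built on libxslt, surfaces transform-time messages on an error log rather than raising. A minimal sketch, assuming input.xml and stylesheet.xsl exist:

```python
from lxml import etree

doc = etree.parse("input.xml")
transform = etree.XSLT(etree.parse("stylesheet.xsl"))
result = transform(doc)              # a non-terminating xsl:message does not raise
for entry in transform.error_log:    # messages and warnings collect here instead
    print(entry.message)
```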
gharchive/issue
2015-01-07T15:00:57
2025-04-01T06:40:26.613844
{ "authors": [ "flavorjones", "frivoal", "higherpixels", "kmeister2000", "mejackreed", "nwellnhof" ], "repo": "sparklemotion/nokogiri", "url": "https://github.com/sparklemotion/nokogiri/issues/1217", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1664673793
Update to latest htmlunit neko

What problem is this PR intended to solve? This is a WIP for #2565. Feedback and help welcome.

I had a look at this, but it's probably a bit too involved for my xerces and nokogiri/htmlunit knowledge. Was your thinking to essentially have two xerces versions packaged eventually (retain the mainline org.apache one as well as the shaded, refactored/simplified one inside htmlunit)? It looks like nokogiri tries to reuse the same xerces configuration for its own direct use as well as for use by htmlunit, so this seemed to require a change to the configuration approach.
gharchive/pull-request
2023-04-12T14:23:33
2025-04-01T06:40:26.616060
{ "authors": [ "chadlwilson", "flavorjones" ], "repo": "sparklemotion/nokogiri", "url": "https://github.com/sparklemotion/nokogiri/pull/2856", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
110451340
Are there any current issues to be worked on for the sparsehash/densehash structures?

Hi there, I checked this code description, and in 2005 it said: "It would be nice to rework this C class to follow the C++ API as closely as possible (eg have a set_deleted_key() instead of using a #define like this code does now). I believe the code compiles and runs, if anybody is interested in using it now, but it's subject to major change in the future, as people work on it. Craig Silverstein"

I would like to know whether this set_deleted_key() issue was resolved, and if not, what the current issues are. (I am looking for a topic for a software engineering monograph.) Thanks!

Hi, most of the current issues are present here on GitHub in the issue tracker :-) Some of them have been addressed in this branch, which needs to be merged: https://github.com/sparsehash/sparsehash/tree/issue-fixes There's also a fork here: https://github.com/sparsehash/sparsehash-c11 which aims to remove all the type traits and conditional build logic that isn't required now that C++11 is widely available. Ideally, it needs a better build process (CMake or a plain old Makefile). Patches and contributions are more than welcome. This project has been static for some time :-) Cheers, Donovan.
gharchive/issue
2015-10-08T13:30:23
2025-04-01T06:40:26.623381
{ "authors": [ "donovanhide", "pyrobit" ], "repo": "sparsehash/sparsehash", "url": "https://github.com/sparsehash/sparsehash/issues/113", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
923995576
No module named 'scheduled_tasks'

Something is still amiss. Where does 'scheduled_tasks' come from? I see you import it everywhere, but...

Traceback (most recent call last): File "D:\02_Stock_Analysis\Stocksera-spartan737\Stocksera\scheduled_tasks\main.py", line 2, in import scheduled_tasks.get_reddit_trending_stocks.scrape_reddit as scrape_reddit ModuleNotFoundError: No module named 'scheduled_tasks'

scheduled_tasks is a folder name found in the main directory.

Hi. I have rewritten the pipeline of the entire scheduled_tasks section. You only need to edit tasks_to_run.py in the main parent directory directly. I have also removed the section for due diligence, as I have decided to focus on other sections of the application first. Please clone the latest version of the repo.
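For context, this particular ModuleNotFoundError is the usual symptom of running a file inside a package directly, so the repo root is not on sys.path. A hedged illustration (the layout is inferred from the traceback, not verified against Stocksera):

```python
# Running scheduled_tasks/main.py directly puts scheduled_tasks/ on
# sys.path instead of the repo root, so absolute imports of the package
# fail. Two common workarounds (assuming the package sits in the root):
#   python -m scheduled_tasks.main        # run as a module from the root
# or prepend the root before the package import:
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parents[1]))  # repo root
import scheduled_tasks.get_reddit_trending_stocks.scrape_reddit as scrape_reddit
```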
gharchive/issue
2021-06-17T14:31:06
2025-04-01T06:40:26.626533
{ "authors": [ "3dhistory", "spartan737" ], "repo": "spartan737/Stocksera", "url": "https://github.com/spartan737/Stocksera/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
112862563
Tactical graphics

milsymbol doesn't draw tactical graphics, except for a few point symbols that are referenced in other appendixes. If I were to implement tactical graphics, it would probably be done in another library, so that it would be easy to choose whether or not to import it. I have been collecting ideas for quite some time on how to create tactical graphics. Unfortunately my time might be too limited to implement the whole tactical graphics appendix, but I have some ideas that I might try out. My idea is to start out with some, maybe five or ten, different graphics and see how it works out. If anyone has any input, wishes, or questions, please post them here and I'll try to answer. (And if you need this done faster, you can always take me on as a consultant.)

If you want inspiration, I know of two open source libraries that can draw control measures: milsymb-js / milsymb-java and MilSymb.

I have experimented with milsymb-java/js. The interface is relatively straightforward and it can generate KML and GeoJSON. I have not looked at the implementation, but I know that the point symbology is font-based. Usually it only makes sense to draw control measures on a map, so the library should generate output that can be used with OpenLayers and Leaflet. GeoJSON is one option, but it does not support curved lines. milsymb-js/java solves this by converting curves to polylines. This works, but it requires many line segments to look good. SVG will look better, but you'll probably need to adjust the SVG output every time the map is zoomed in or out. Canvas is also an option.

Thank you for your input. My idea at the moment is SVG. I have looked at the d3 layer examples that exist for both OpenLayers and Leaflet, so it's something like that I have in mind; I'll keep thinking about it and see if I find some time to make some initial tests. :+1:

The initial plan is to support point symbols from tactical graphics. I'm adding functionality to inject new SIDCs (so that you only get tactical symbols if you want them and not otherwise) at the moment. It will also require a rewrite of how the text labels are placed, but I have an idea for how to solve that.

I have now added support for injecting new SIDCs and for non-standard bounding boxes; this is the first step towards supporting point graphics from tactical symbols. The next step is to be able to add new icon building blocks and to be able to override text placements.

All except two tactical points are now implemented; please have a look and give feedback. Since the size of the symbols isn't specified in the standard, I had to improvise some. http://www.spatialillusions.com/milsymbol-dev/docs/milsymbol-2525c-tactical-points-svg.html

Does this work for you @szechyjs?

Impressive work @spatialillusions! I'll have a look at them.

Implementing something for graphics other than points will be done here: https://github.com/spatialillusions/milgraphics
gharchive/issue
2015-10-22T18:28:20
2025-04-01T06:40:26.633665
{ "authors": [ "kjellmf", "spatialillusions", "szechyjs" ], "repo": "spatialillusions/milsymbol", "url": "https://github.com/spatialillusions/milsymbol/issues/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
470256894
WIP: UNR-1545: Add documentation for offloading and actor groups

Contributions: We are not currently taking public contributions - see our contributions policy. However, we are accepting issues and we do want your feedback.

Description: Documentation of offloading includes additions to the GDK concepts doc, a new reference page which covers the newly added SpatialStatics, best practices, and a description of the usage of offloading in the example project.

Primary reviewers: @ElleEss @mattyoung-improbable

Closing this because there's another PR with the most up-to-date draft.
gharchive/pull-request
2019-07-19T10:33:02
2025-04-01T06:40:26.636142
{ "authors": [ "tenevdev" ], "repo": "spatialos/UnrealGDK", "url": "https://github.com/spatialos/UnrealGDK/pull/1180", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
324627470
Accessor attribute doesn't log if it is not in the $appends array. Is this intended behavior? If so, I think it would be good to point this out in the documentation.

Update: I forgot to say that I use version 1.16.0 on Laravel 5.4.

Update 2: The accessor attribute gets an equal value in the "attributes" and "old" arrays. So, as far as I can see, I can't use dynamic attributes to log changes. Where can I find the contribution process? Maybe I can implement this feature.

An accessor attribute isn't an attribute like all the others, and primarily, an accessor attribute doesn't have an old and dirty state.

Thanks for clarifying. Maybe I'm wrong, but the $changes property of the Activity class contains a serialized model (if I use the LogsActivity trait). So, if I append a dynamic attribute to the serialized model, then on the next update the log will contain its changes. I think this is a crucial feature and worth noting in the documentation. Sorry for linking to your competitor, but it would be cool if this package had something like this.

That's out of scope for this package; implement that in your own app.
gharchive/issue
2018-05-19T12:51:26
2025-04-01T06:40:26.639047
{ "authors": [ "Gummibeer", "fotonmoton", "freekmurze" ], "repo": "spatie/laravel-activitylog", "url": "https://github.com/spatie/laravel-activitylog/issues/383", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1268724934
add php 8.1 support to composer.json

fixes #195

Please pull this PR in. Why is this taking so long?

I believe this is unnecessary, as the ^8.0 constraint means that 8.1 is also included (as is 8.2, and so on). @freekmurze Please correct me if I'm wrong, but I believe this PR can be closed without merging, as the referenced issue (#195) is not directly related to this.

My comment comes after testing, and Composer does not accept 8.1 with this setting.
gharchive/pull-request
2022-06-12T23:02:26
2025-04-01T06:40:26.645057
{ "authors": [ "darviscommerce", "devinfd", "lintaba", "patinthehat" ], "repo": "spatie/pdf-to-image", "url": "https://github.com/spatie/pdf-to-image/pull/197", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
834748223
Ray not sending integers

Describe the bug: I'm using Ray in WordPress and framework-agnostic PHP projects, and for some reason integers don't show up in the Ray app.

Versions: Ray version: 1.14.5 PHP version: 7.4.13 Ray WP plugin: 1.2.4

To Reproduce: Try sending an integer value, i.e.: ray(25);

Desktop: OS: macOS Version 10.13.6

I'm also noticing this in Laravel.

Posting this as a potential workaround for people before this gets fixed. It's simple, but it took a little too long to occur to me: just typecast any integers to a string when outputting in the debugger. ray((string) 25);

Can confirm I'm also seeing this on Drupal. PHP Library version: 1.21.2 Ray client: 1.14.5 PHP: 8.0.3 MacOS

Also having this issue; even when multiple arguments are fed to the ray() function, as soon as an integer is included, none of them show up.

Happened to me as well.

+1. Using Laravel + Linux (Ubuntu). Terminal log:

TypeError: e.includes is not a function
at D (/tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:367:19406)
at /tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:367:18468
at Array.map (<anonymous>)
at /tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:367:18444
at Array.map (<anonymous>)
at /tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:367:18198
at s.handle_request (/tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:156:784)
at s (/tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:149:883)
at u.dispatch (/tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:149:905)
at s.handle_request (/tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:156:784)

Have the same issue with an M1 Mac. Until the latest update everything worked just fine.

Thanks for the reports, and especially thanks to @vicenterusso for the logs, which gave me an idea about what could be causing the bug :) This has been fixed and will be included in the 1.14.6 patch version (releasing now).
gharchive/issue
2021-03-18T12:30:57
2025-04-01T06:40:26.651269
{ "authors": [ "AdrianMrn", "Muffinman", "SpencerCloud", "ajimatahari", "lorlab", "nlemsieh", "nnerijuss", "tomcoonen", "vicenterusso" ], "repo": "spatie/ray", "url": "https://github.com/spatie/ray/issues/379", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2055162829
THERE IS AN INTERESTING LITTLE BUG :)

https://github.com/spbu-math-cs/GiveGift-frontend/assets/57632427/54db6c39-79fa-44c3-bfb5-bf4e9f8d05b3

I THINK IT NEEDS TO BE FIXED :)

https://github.com/spbu-math-cs/GiveGift-frontend/assets/57632427/b9fe8f3b-a47f-4aec-b09a-8961e9f0ef21
gharchive/issue
2023-12-24T16:56:38
2025-04-01T06:40:26.653483
{ "authors": [ "Amirelkanov" ], "repo": "spbu-math-cs/GiveGift-frontend", "url": "https://github.com/spbu-math-cs/GiveGift-frontend/issues/25", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
803779619
DNS: CNAME, NS or Load Balancer

We have a few options when it comes to achieving a human-readable domain name for our API endpoint. Here are the first three that come to mind:
1. CNAME - points from a UoB subdomain to our own domain.
2. NS - we host our own nameservers and have UoB delegate that subdomain.
3. Load Balancer - we have a static IP address pointing to an Elastic IP (reserved for us by AWS) that is assigned to a NAT Gateway linking to a subnet where the Lambda is hosted inside the VPC.

We are not deploying to production, so we can close this.
gharchive/issue
2021-02-08T17:37:44
2025-04-01T06:40:26.695363
{ "authors": [ "joekendal" ], "repo": "spe-uob/HealthcareLake", "url": "https://github.com/spe-uob/HealthcareLake/issues/112", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1140215486
Take into account (x-)nullable when validating defaults

Fixes #1462. CC: @nielsbox

Pull Request Test Coverage Report for Build 1853825683: 9 of 9 (100.0%) changed or added relevant lines in 2 files are covered. No unchanged relevant lines lost coverage. Overall coverage increased (+0.002%) to 97.063%.

Totals: Change from base Build 1819970533: 0.002%. Covered Lines: 2842. Relevant Lines: 2928. 💛 - Coveralls
gharchive/pull-request
2022-02-16T15:41:29
2025-04-01T06:40:26.700474
{ "authors": [ "RobbeSneyders", "coveralls" ], "repo": "spec-first/connexion", "url": "https://github.com/spec-first/connexion/pull/1463", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2168016127
Identify TR specs that are too far behind their Editor's Drafts

Given https://github.com/w3c/browser-specs/blob/main/index.json, for specs with organization == W3C, the tool can fetch the two URLs available at either {series.releaseUrl, series.nightlyUrl} or {release.url, nightly.url} and compare their publication dates (<time class="dt-updated">). If they're different, the SLI is the age of the release, and the SLO could be 4 weeks. IETF specs have a similar feature using <time class="published">.

Sounds good in general, although I would be wary of a metric that encourages republishing unchanged drafts with a new date just to keep the metric happy. There are also constraints depending on the document status:
- Updating a CR Snapshot with another CR Snapshot requires a new round of horizontal review, a transition request, a mandatory one-week period for related groups to object, a once-weekly meeting to review the transition request, and then scheduling publication. That will never happen in 4 weeks; 3 months is a more realistic minimum, assuming horizontal re-review starts on the day the previous CR Snapshot was published.
- Updating a Rec with an edited Rec with proposed corrections or amendments requires a 4-week AC review plus a couple of weeks to review the responses, giving an absolute minimum of 6 weeks even if the process is started the same day the previous Recommendation is published, with no time for public review.
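A minimal sketch of the proposed check: the index URL, the release/nightly fields, and the time selectors come from the issue text, while the datetime attribute, the libraries, and everything else are assumptions:

```python
import urllib.request
from datetime import date
from bs4 import BeautifulSoup  # assumed HTML parser

def pub_date(url: str, selector: str) -> date:
    html = urllib.request.urlopen(url).read()
    tag = BeautifulSoup(html, "html.parser").select_one(selector)
    return date.fromisoformat(tag["datetime"][:10])  # assumes a datetime attribute

def release_lag_days(spec: dict) -> int:
    sel = "time.dt-updated" if spec.get("organization") == "W3C" else "time.published"
    release = pub_date(spec["release"]["url"], sel)
    nightly = pub_date(spec["nightly"]["url"], sel)
    # SLI: age of the release, but only when the two dates differ
    return 0 if release == nightly else (date.today() - release).days

# SLO from the issue: flag any spec where release_lag_days(spec) > 28
```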
gharchive/issue
2024-03-05T00:01:02
2025-04-01T06:40:26.704066
{ "authors": [ "jyasskin", "svgeesus" ], "repo": "speced/spec-maintenance", "url": "https://github.com/speced/spec-maintenance/issues/26", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1974375133
feat(revit/objects): new rebar class and sending rebar from Revit

See details in https://www.notion.so/speckle/Rebar-Support-in-Revit-Revit-workflow-c1922ee7c1d0450184b16221c6b939b6?pvs=4

Sending rebar from Revit is implemented. A separate ticket was created for receiving, as it is more complicated.
gharchive/issue
2023-11-02T14:24:06
2025-04-01T06:40:26.706607
{ "authors": [ "bimgeek", "paloknapo" ], "repo": "specklesystems/speckle-sharp", "url": "https://github.com/specklesystems/speckle-sharp/issues/3022", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2573072600
Add export without displayValue.

Currently Speckle uses host applications to calculate meshes. First, there are scenarios where people want to extract only alphanumerical data. Second, it is very inconvenient to write code specifically for each application and keep it updated. Third, it is a huge time waste and depends on application quality. E.g. Revit sucks.

What we are currently after (and hope will be implemented in Speckle in the future):
- extend the geometry kit to cover the main surface types from OpenCascade
- save data as collections of faces (surfaces with edge curves; curves are fully covered already)
- write a single post-process to calculate Breps and mesh them (if necessary) based on the Speckle geometry kit, using the whole power of OpenCascade

Currently, in the Revit converter, the method GetElementDisplayValue() is implemented in about 30(!) places. I haven't checked other connectors. I presume it could be optimized so it can be turned off quickly.

Hi @nickger, we can look at implementing this as a send setting in Revit (and maybe other host apps where this would make sense), though it can be rather counterintuitive for the majority of Specklers. Regarding meshing Breps with OpenCascade: this is something we've never tried before, but it opens a slippery slope. We currently benefit from not having to mesh things client-side/server-side and being able to display them instantly! I'm going to close this issue, but the first idea - send without display values - is captured in our backlog!

Well, currently you mesh things on the client side, moreover, in the host application during sending. This is unnecessary load at the least, and for me as a developer a huge headache: if I want to do further geometry processing, I must implement it on the client side as well. Think of exporting Breps as a third export option.

Perhaps we need to understand better what you're after: for us, meshing things client-side saves a lot of headaches. Happy to hear what you're building!
gharchive/issue
2024-10-08T12:32:03
2025-04-01T06:40:26.710937
{ "authors": [ "didimitrie", "nickger" ], "repo": "specklesystems/speckle-sharp", "url": "https://github.com/specklesystems/speckle-sharp/issues/3639", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1577494787
fix(objects): Added fix for sketchup blocks using dynamic blockDefinition prop

SketchUp sends every detached property prefixed with @. This causes issues with the transformedGeometry computed property of blocks, which uses the BlockInstance.blockDefinition and BlockDefinition.geometry properties. Both of these properties, on SketchUp objects, would end up being deserialised as dynamic props, which would throw null reference errors. This PR addresses the issue, while also adding warnings in the code to inform us about this in the future.

@JR-Morgan I'm re-requesting your review, but mostly to open the conversation about how cringy this is code-wise... and whether maybe we should never merge this 😅

Closing as we decided to fix this on the SketchUp side.
gharchive/pull-request
2023-02-09T08:59:38
2025-04-01T06:40:26.713382
{ "authors": [ "AlanRynne", "teocomi" ], "repo": "specklesystems/speckle-sharp", "url": "https://github.com/specklesystems/speckle-sharp/pull/2126", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2094838965
Direct contributors to the current CONTRIBUTING.md

Fixes: no issue logged; this is a simple spot suggestion.

[X] I have read the Contribution Guidelines [ ] I have commented on the issue above and discussed the intended changes [ ] A maintainer has signed off on the changes and the issue was assigned to me [ ] All newly added code is adequately covered by tests [ ] All existing tests are still running without errors [X] The documentation was modified to reflect the changes OR no documentation changes are required.

Changes: The current PR template contains a link to CONTRIBUTING.md. However, when clicking on it, GitHub redirects to a 404. In short, the URL is missing /blob/main. This PR fixes the issue, so contributors don't have to go hunting for it.

Merged! Thank you for your contribution. Much appreciated! 👍
gharchive/pull-request
2024-01-22T22:02:33
2025-04-01T06:40:26.745926
{ "authors": [ "patriksvensson", "tonycknight" ], "repo": "spectreconsole/spectre.console", "url": "https://github.com/spectreconsole/spectre.console/pull/1435", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2274314964
chore(main): release 0.0.1

:robot: I have created a release beep boop

0.0.1 (2024-05-01)

Bug Fixes
- add production stage to Dockerfile (427a05c)
- add ReconcileFlaggedCVERule and update global manifest (c2b954c)
- group version, namespace, vuln counter (62fbc75)
- helm chart nil pointer dereference for SA (42f47f3)
- include all src code in Dockerfile build context (525b3e7)
- rename charts to chart (96a1825)
- typo (eae9f24)
- update v1 to v1alpha1 and CRD (9a50145)
- v1 to v1alpha1 (7cf2316)

Other
- add extra files (d4bfa93)
- add github workflows (be53017)
- deps: update actions/checkout digest to 0ad4b8f (#19) (9f389b8)
- deps: update azure/setup-helm digest to fe7b79c (#20) (deee20d)
- deps: update codecov/codecov-action digest to 5ecb98a (#21) (53760d4)
- release 0.0.1 (27d0852)
- release 0.0.1 (5fa1a9b)
- upgrade controller-gen (eb6ab5a)
- upgrade github.com/docker/docker (7d6417c)
- upgrade github.com/kubescape/storage (358cb4f)
- upgrade golang.org/x/net (4488f72)
- upgrade to go1.22 (c2c1bf8)

This PR was generated with Release Please. See documentation.

:robot: Release is at https://github.com/spectrocloud-labs/validator-plugin-kubescape/releases/tag/v0.0.1 :sunflower:
gharchive/pull-request
2024-05-01T22:54:31
2025-04-01T06:40:26.759581
{ "authors": [ "TylerGillson" ], "repo": "spectrocloud-labs/validator-plugin-kubescape", "url": "https://github.com/spectrocloud-labs/validator-plugin-kubescape/pull/27", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1153763563
How to use dynamic mixing with multi-GPU training in the LibriMix recipe?

Hi everyone, I am trying to run SepFormer with dynamic mixing on my own dataset. I created a dummy .csv file with 20000 rows and set batch_size=2 in the hparams. But when I trained on 8 GPUs using the following command, each epoch still had 10000 iterations:

srun -p gpu --gres=gpu:8 --cpus-per-task=40 python -m torch.distributed.launch --nproc_per_node=8 train.py hparams/sepformer-lrs2mix.yaml --distributed_launch --distributed_backend='nccl'

I noticed that the dynamic_mix_data_prep_librimix function in recipes/LibriMix/separation/dynamic_mixing.py returns a DataLoader, so I specified sampler=DistributedSampler(dataset), and then I got 1250 iterations per epoch. All GPUs were running, but convergence was really slow, as if only one GPU were working. I added logger.info(f"rank{self.rank}, ids:{batch.id}") in the fit_batch function, but only rank 0 produced output. Then I tried lr=0.0012, 8 times larger than the default, but this time the model failed to converge. I am new to torch DDP, and the official Google Colab multi-GPU tutorial is a bit brief, so I'd like to ask how to use dynamic mixing with multi-GPU training: should I modify the optimizer or other components? Thanks a lot!

Can you set batchsize=1 per GPU as well?

Hi @ycemsubakan, I tried batchsize=1 and 8 GPUs, which leads to 2500 iterations per epoch. After 80 epochs the SI-SNR is stable at 12.3, while a single GPU gets 16.8. DDP seems to be working fine, except for abnormal memory usage; it should be less than 10 GB per GPU.

I am not sure why you are getting much lower performance. I only tried with 2 GPUs, however. Did you try that? How much memory is used with a single GPU? Can you check that? The explanation for this could be bursts due to longer sequences. Can you try limiting the sequence length to see if you get this behavior even when you limit the length of training sequences?

For all the experiments, I limited the length of training sequences to less than 3 s (16k x 3 = 48000). A single GPU with batchsize=1 uses 10519 MB of memory. I will try with 2 GPUs and report the result soon. Thank you very much!

Hi @ycemsubakan. Yesterday I decided to train SepFormer on the WSJ0-2mix dataset using default hyperparameters. Here are the current SI-SNR values on the eval set, after 20 hours of training:

|             | No Augment | Dynamic Mixing |
|-------------|------------|----------------|
| 1 GPU       | 16.7       | 17.1           |
| 8 GPUs      | 17.1       | 16.9           |
| SB baseline | 20.4       | 22.4           |

It is worth pointing out that overfitting has already occurred in the multi-GPU training, so no better results are expected. The results may indicate that 8-GPU training hurts performance badly.
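The fix discussed at the top of this thread (wrapping the dynamic-mixing dataset in a DistributedSampler) looks roughly like the following minimal sketch in plain PyTorch; SpeechBrain's own dataloader wiring may differ:

```python
import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler

def make_train_loader(dataset, batch_size):
    # Each rank receives a distinct 1/world_size shard of the data, which is
    # why an epoch shrinks from 10000 to 1250 iterations on 8 GPUs above.
    sampler = DistributedSampler(dataset) if dist.is_initialized() else None
    # Call sampler.set_epoch(epoch) at the start of each epoch so the
    # shuffling differs between epochs.
    return DataLoader(dataset, batch_size=batch_size,
                      shuffle=(sampler is None), sampler=sampler)
```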
Hi @caixy97, Here's a training curve on WSJ0-2Mix with DM, with 1 gpu: epoch: 1, lr: 1.50e-04 - train si-snr: -5.43e+00 - valid si-snr: -7.11e+00 epoch: 2, lr: 1.50e-04 - train si-snr: -8.40e+00 - valid si-snr: -9.21e+00 epoch: 3, lr: 1.50e-04 - train si-snr: -1.02e+01 - valid si-snr: -1.10e+01 epoch: 4, lr: 1.50e-04 - train si-snr: -1.21e+01 - valid si-snr: -1.26e+01 epoch: 5, lr: 1.50e-04 - train si-snr: -1.35e+01 - valid si-snr: -1.36e+01 epoch: 6, lr: 1.50e-04 - train si-snr: -1.45e+01 - valid si-snr: -1.38e+01 epoch: 7, lr: 1.50e-04 - train si-snr: -1.52e+01 - valid si-snr: -1.50e+01 epoch: 8, lr: 1.50e-04 - train si-snr: -1.55e+01 - valid si-snr: -1.53e+01 epoch: 9, lr: 1.50e-04 - train si-snr: -1.57e+01 - valid si-snr: -1.54e+01 epoch: 10, lr: 1.50e-04 - train si-snr: -1.62e+01 - valid si-snr: -1.59e+01 epoch: 11, lr: 1.50e-04 - train si-snr: -1.66e+01 - valid si-snr: -1.63e+01 epoch: 12, lr: 1.50e-04 - train si-snr: -1.69e+01 - valid si-snr: -1.65e+01 epoch: 13, lr: 1.50e-04 - train si-snr: -1.72e+01 - valid si-snr: -1.66e+01 epoch: 14, lr: 1.50e-04 - train si-snr: -1.74e+01 - valid si-snr: -1.69e+01 epoch: 15, lr: 1.50e-04 - train si-snr: -1.77e+01 - valid si-snr: -1.71e+01 epoch: 16, lr: 1.50e-04 - train si-snr: -1.77e+01 - valid si-snr: -1.72e+01 epoch: 17, lr: 1.50e-04 - train si-snr: -1.78e+01 - valid si-snr: -1.75e+01 epoch: 18, lr: 1.50e-04 - train si-snr: -1.79e+01 - valid si-snr: -1.76e+01 epoch: 19, lr: 1.50e-04 - train si-snr: -1.82e+01 - valid si-snr: -1.79e+01 epoch: 20, lr: 1.50e-04 - train si-snr: -1.84e+01 - valid si-snr: -1.77e+01 epoch: 21, lr: 1.50e-04 - train si-snr: -1.85e+01 - valid si-snr: -1.79e+01 epoch: 22, lr: 1.50e-04 - train si-snr: -1.87e+01 - valid si-snr: -1.81e+01 epoch: 23, lr: 1.50e-04 - train si-snr: -1.88e+01 - valid si-snr: -1.82e+01 epoch: 24, lr: 1.50e-04 - train si-snr: -1.89e+01 - valid si-snr: -1.85e+01 epoch: 25, lr: 1.50e-04 - train si-snr: -1.91e+01 - valid si-snr: -1.85e+01 epoch: 26, lr: 1.50e-04 - train si-snr: -1.89e+01 - valid si-snr: -1.84e+01 epoch: 27, lr: 1.50e-04 - train si-snr: -1.87e+01 - valid si-snr: -1.84e+01 epoch: 28, lr: 1.50e-04 - train si-snr: -1.89e+01 - valid si-snr: -1.87e+01 epoch: 29, lr: 1.50e-04 - train si-snr: -1.92e+01 - valid si-snr: -1.89e+01 epoch: 30, lr: 1.50e-04 - train si-snr: -1.93e+01 - valid si-snr: -1.89e+01 epoch: 31, lr: 1.50e-04 - train si-snr: -1.95e+01 - valid si-snr: -1.88e+01 epoch: 32, lr: 1.50e-04 - train si-snr: -1.95e+01 - valid si-snr: -1.91e+01 epoch: 33, lr: 1.50e-04 - train si-snr: -1.96e+01 - valid si-snr: -1.90e+01 epoch: 34, lr: 1.50e-04 - train si-snr: -1.95e+01 - valid si-snr: -1.91e+01 epoch: 35, lr: 1.50e-04 - train si-snr: -1.93e+01 - valid si-snr: -1.92e+01 epoch: 36, lr: 1.50e-04 - train si-snr: -1.96e+01 - valid si-snr: -1.93e+01 epoch: 37, lr: 1.50e-04 - train si-snr: -1.97e+01 - valid si-snr: -1.94e+01 epoch: 38, lr: 1.50e-04 - train si-snr: -1.99e+01 - valid si-snr: -1.95e+01 epoch: 39, lr: 1.50e-04 - train si-snr: -1.98e+01 - valid si-snr: -1.95e+01 epoch: 40, lr: 1.50e-04 - train si-snr: -1.99e+01 - valid si-snr: -1.95e+01 epoch: 41, lr: 1.50e-04 - train si-snr: -2.00e+01 - valid si-snr: -1.95e+01 epoch: 42, lr: 1.50e-04 - train si-snr: -2.01e+01 - valid si-snr: -1.96e+01 epoch: 43, lr: 1.50e-04 - train si-snr: -2.02e+01 - valid si-snr: -1.97e+01 epoch: 44, lr: 1.50e-04 - train si-snr: -2.03e+01 - valid si-snr: -1.97e+01 epoch: 45, lr: 1.50e-04 - train si-snr: -2.04e+01 - valid si-snr: -1.98e+01 epoch: 46, lr: 1.50e-04 - train si-snr: -2.04e+01 - valid si-snr: 
-1.97e+01 epoch: 47, lr: 1.50e-04 - train si-snr: -2.05e+01 - valid si-snr: -1.99e+01 epoch: 48, lr: 1.50e-04 - train si-snr: -2.05e+01 - valid si-snr: -2.00e+01 epoch: 49, lr: 1.50e-04 - train si-snr: -2.00e+01 - valid si-snr: -1.99e+01 epoch: 50, lr: 1.50e-04 - train si-snr: -2.02e+01 - valid si-snr: -2.00e+01 epoch: 51, lr: 1.50e-04 - train si-snr: -2.03e+01 - valid si-snr: -1.99e+01 epoch: 52, lr: 1.50e-04 - train si-snr: -2.04e+01 - valid si-snr: -2.00e+01 epoch: 53, lr: 1.50e-04 - train si-snr: -2.05e+01 - valid si-snr: -2.00e+01 epoch: 54, lr: 1.50e-04 - train si-snr: -2.05e+01 - valid si-snr: -1.98e+01 epoch: 55, lr: 1.50e-04 - train si-snr: -2.02e+01 - valid si-snr: -1.99e+01 epoch: 56, lr: 1.50e-04 - train si-snr: -2.03e+01 - valid si-snr: -2.00e+01 epoch: 57, lr: 1.50e-04 - train si-snr: -2.04e+01 - valid si-snr: -2.01e+01 epoch: 58, lr: 1.50e-04 - train si-snr: -2.05e+01 - valid si-snr: -2.02e+01 epoch: 59, lr: 1.50e-04 - train si-snr: -2.03e+01 - valid si-snr: -2.01e+01 epoch: 60, lr: 1.50e-04 - train si-snr: -2.04e+01 - valid si-snr: -2.01e+01 epoch: 61, lr: 1.50e-04 - train si-snr: -2.05e+01 - valid si-snr: -2.03e+01 epoch: 62, lr: 1.50e-04 - train si-snr: -2.07e+01 - valid si-snr: -2.03e+01 epoch: 63, lr: 1.50e-04 - train si-snr: -2.07e+01 - valid si-snr: -1.99e+01 epoch: 64, lr: 1.50e-04 - train si-snr: -2.08e+01 - valid si-snr: -2.03e+01 epoch: 65, lr: 1.50e-04 - train si-snr: -2.09e+01 - valid si-snr: -2.04e+01 epoch: 66, lr: 1.50e-04 - train si-snr: -2.10e+01 - valid si-snr: -2.04e+01 epoch: 67, lr: 1.50e-04 - train si-snr: -2.05e+01 - valid si-snr: -2.04e+01 epoch: 68, lr: 1.50e-04 - train si-snr: -2.06e+01 - valid si-snr: -2.02e+01 epoch: 69, lr: 1.50e-04 - train si-snr: -2.06e+01 - valid si-snr: -2.04e+01 epoch: 70, lr: 1.50e-04 - train si-snr: -2.07e+01 - valid si-snr: -2.04e+01 epoch: 71, lr: 1.50e-04 - train si-snr: -2.08e+01 - valid si-snr: -2.04e+01 epoch: 72, lr: 1.50e-04 - train si-snr: -2.09e+01 - valid si-snr: -2.05e+01 epoch: 73, lr: 1.50e-04 - train si-snr: -2.10e+01 - valid si-snr: -2.06e+01 epoch: 74, lr: 1.50e-04 - train si-snr: -2.11e+01 - valid si-snr: -2.05e+01 epoch: 75, lr: 1.50e-04 - train si-snr: -2.11e+01 - valid si-snr: -2.04e+01 epoch: 76, lr: 1.50e-04 - train si-snr: -2.11e+01 - valid si-snr: -2.06e+01 epoch: 77, lr: 1.50e-04 - train si-snr: -2.12e+01 - valid si-snr: -2.06e+01 epoch: 78, lr: 1.50e-04 - train si-snr: -2.12e+01 - valid si-snr: -2.05e+01 epoch: 79, lr: 1.50e-04 - train si-snr: -2.13e+01 - valid si-snr: -2.07e+01 epoch: 80, lr: 1.50e-04 - train si-snr: -2.13e+01 - valid si-snr: -2.07e+01 epoch: 81, lr: 1.50e-04 - train si-snr: -2.14e+01 - valid si-snr: -2.08e+01 Epoch loaded: 81 - test si-snr: -2.06e+01 epoch: 82, lr: 1.50e-04 - train si-snr: -2.11e+01 - valid si-snr: -2.07e+01 epoch: 83, lr: 1.50e-04 - train si-snr: -2.09e+01 - valid si-snr: -2.06e+01 epoch: 84, lr: 1.50e-04 - train si-snr: -2.10e+01 - valid si-snr: -2.07e+01 epoch: 85, lr: 1.50e-04 - train si-snr: -2.11e+01 - valid si-snr: -2.08e+01 epoch: 86, lr: 1.50e-04 - train si-snr: -2.09e+01 - valid si-snr: -2.07e+01 epoch: 87, lr: 1.50e-04 - train si-snr: -2.10e+01 - valid si-snr: -2.09e+01 epoch: 88, lr: 1.50e-04 - train si-snr: -2.12e+01 - valid si-snr: -2.09e+01 epoch: 89, lr: 1.50e-04 - train si-snr: -2.12e+01 - valid si-snr: -2.09e+01 epoch: 90, lr: 1.50e-04 - train si-snr: -2.13e+01 - valid si-snr: -2.09e+01 epoch: 91, lr: 1.50e-04 - train si-snr: -2.13e+01 - valid si-snr: -2.09e+01 epoch: 92, lr: 1.50e-04 - train si-snr: -2.14e+01 - valid si-snr: -2.08e+01 epoch: 
93, lr: 1.50e-04 - train si-snr: -2.11e+01 - valid si-snr: -2.10e+01 epoch: 94, lr: 1.50e-04 - train si-snr: -2.12e+01 - valid si-snr: -2.09e+01 epoch: 95, lr: 1.50e-04 - train si-snr: -2.13e+01 - valid si-snr: -2.10e+01 epoch: 96, lr: 1.50e-04 - train si-snr: -2.13e+01 - valid si-snr: -2.09e+01 epoch: 97, lr: 1.50e-04 - train si-snr: -2.14e+01 - valid si-snr: -2.10e+01 epoch: 98, lr: 1.50e-04 - train si-snr: -2.14e+01 - valid si-snr: -2.08e+01 epoch: 99, lr: 1.50e-04 - train si-snr: -2.15e+01 - valid si-snr: -2.10e+01 epoch: 100, lr: 1.50e-04 - train si-snr: -2.16e+01 - valid si-snr: -2.10e+01 epoch: 101, lr: 1.50e-04 - train si-snr: -2.16e+01 - valid si-snr: -2.10e+01 epoch: 102, lr: 1.50e-04 - train si-snr: -2.17e+01 - valid si-snr: -2.11e+01 epoch: 103, lr: 1.50e-04 - train si-snr: -2.17e+01 - valid si-snr: -2.11e+01 epoch: 104, lr: 1.50e-04 - train si-snr: -2.17e+01 - valid si-snr: -2.11e+01 epoch: 105, lr: 1.50e-04 - train si-snr: -2.18e+01 - valid si-snr: -2.11e+01 epoch: 106, lr: 1.50e-04 - train si-snr: -2.18e+01 - valid si-snr: -2.11e+01 epoch: 107, lr: 1.50e-04 - train si-snr: -2.18e+01 - valid si-snr: -2.11e+01 epoch: 108, lr: 1.50e-04 - train si-snr: -2.19e+01 - valid si-snr: -2.11e+01 epoch: 109, lr: 7.50e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.14e+01 epoch: 110, lr: 7.50e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.15e+01 epoch: 111, lr: 7.50e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.15e+01 epoch: 112, lr: 7.50e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.15e+01 epoch: 113, lr: 7.50e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.15e+01 epoch: 114, lr: 7.50e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.15e+01 epoch: 115, lr: 7.50e-05 - train si-snr: -2.19e+01 - valid si-snr: -2.15e+01 epoch: 116, lr: 7.50e-05 - train si-snr: -2.17e+01 - valid si-snr: -2.15e+01 epoch: 117, lr: 7.50e-05 - train si-snr: -2.18e+01 - valid si-snr: -2.15e+01 epoch: 118, lr: 7.50e-05 - train si-snr: -2.19e+01 - valid si-snr: -2.15e+01 epoch: 119, lr: 7.50e-05 - train si-snr: -2.19e+01 - valid si-snr: -2.16e+01 epoch: 120, lr: 7.50e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.16e+01 epoch: 121, lr: 7.50e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.16e+01 epoch: 122, lr: 7.50e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.16e+01 epoch: 123, lr: 7.50e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.16e+01 epoch: 124, lr: 7.50e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.16e+01 epoch: 125, lr: 7.50e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.16e+01 epoch: 126, lr: 7.50e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.16e+01 epoch: 127, lr: 7.50e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.16e+01 epoch: 128, lr: 7.50e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.16e+01 epoch: 129, lr: 7.50e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.16e+01 epoch: 130, lr: 7.50e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.16e+01 epoch: 131, lr: 7.50e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.16e+01 epoch: 132, lr: 7.50e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.16e+01 epoch: 133, lr: 7.50e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.17e+01 epoch: 134, lr: 7.50e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.16e+01 epoch: 135, lr: 7.50e-05 - train si-snr: -2.24e+01 - valid si-snr: -2.16e+01 epoch: 136, lr: 7.50e-05 - train si-snr: -2.24e+01 - valid si-snr: -2.16e+01 epoch: 137, lr: 7.50e-05 - train si-snr: -2.24e+01 - valid si-snr: -2.16e+01 epoch: 138, lr: 7.50e-05 - train si-snr: -2.24e+01 - valid si-snr: -2.16e+01 epoch: 139, lr: 3.75e-05 - 
train si-snr: -2.26e+01 - valid si-snr: -2.18e+01 epoch: 140, lr: 3.75e-05 - train si-snr: -2.26e+01 - valid si-snr: -2.18e+01 epoch: 141, lr: 3.75e-05 - train si-snr: -2.26e+01 - valid si-snr: -2.18e+01 epoch: 142, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.18e+01 epoch: 143, lr: 3.75e-05 - train si-snr: -2.19e+01 - valid si-snr: -2.19e+01 epoch: 144, lr: 3.75e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.19e+01 epoch: 145, lr: 3.75e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.19e+01 epoch: 146, lr: 3.75e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.20e+01 epoch: 147, lr: 3.75e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.19e+01 epoch: 148, lr: 3.75e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.19e+01 epoch: 149, lr: 3.75e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.19e+01 epoch: 150, lr: 3.75e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.20e+01 epoch: 151, lr: 3.75e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.20e+01 epoch: 152, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.20e+01 epoch: 153, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.20e+01 epoch: 154, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.20e+01 epoch: 155, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.20e+01 epoch: 156, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.20e+01 epoch: 157, lr: 7.50e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.19e+01 epoch: 158, lr: 7.50e-05 - train si-snr: -2.19e+01 - valid si-snr: -2.19e+01 epoch: 159, lr: 7.50e-05 - train si-snr: -2.20e+01 - valid si-snr: -2.19e+01 epoch: 160, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.21e+01 epoch: 161, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.21e+01 epoch: 162, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.21e+01 epoch: 163, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.21e+01 epoch: 164, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.21e+01 epoch: 165, lr: 3.75e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.21e+01 epoch: 166, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.21e+01 epoch: 167, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.21e+01 epoch: 168, lr: 3.75e-05 - train si-snr: -2.21e+01 - valid si-snr: -2.21e+01 Epoch loaded: 167 - test si-snr: -2.21e+01 epoch: 169, lr: 3.75e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.21e+01 epoch: 170, lr: 3.75e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.21e+01 epoch: 171, lr: 3.75e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.21e+01 epoch: 172, lr: 1.87e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.22e+01 epoch: 173, lr: 1.87e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.22e+01 epoch: 174, lr: 1.87e-05 - train si-snr: -2.22e+01 - valid si-snr: -2.22e+01 epoch: 175, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 176, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 177, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 178, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 179, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 Epoch loaded: 178 - test si-snr: -2.22e+01 epoch: 180, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 181, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 182, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 183, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 184, lr: 1.87e-05 - 
train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 185, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 186, lr: 1.87e-05 - train si-snr: -2.23e+01 - valid si-snr: -2.22e+01 epoch: 187, lr: 9.37e-06 - train si-snr: -2.23e+01 - valid si-snr: -2.23e+01 epoch: 188, lr: 9.37e-06 - train si-snr: -2.23e+01 - valid si-snr: -2.23e+01 Epoch loaded: 188 - test si-snr: -2.23e+01

So, your current run seems to be ahead of this one with 1 GPU.

Regarding multi-GPU: I haven't tested the code in depth for this. But from memory, in the experiments I did with DDP on 2 GPUs, the performance drop did not seem significant. I would need to study what's happening to be able to tell you what's going on. Do you have results with 2 GPUs?

Hi, I am sorry for the late reply. I don't have 2-GPU results yet. I will run several experiments, and after getting the results we can discuss further. Besides, the 8-GPU training is over; I updated the results in the table above.

I see... it seems that for some reason multi-GPU updates are affecting the gradient. And yes, looking forward to the 2-GPU results.

I found a rather shocking problem: SepFormer converges well with batch_size=1, but with batch_size=2 it does not converge.

@JusperLee, thanks for pointing this out. I will try to see what's going on. The proposed results are obtained with batch_size=1.

Yes, this is clear to me. I have tried several of these operations so far without convergence. I doubled the learning rate because the batch_size increased: lr=0.00015 -> 0.0003. Since the learning rate might then be too big, I also used warmup to change the learning rate dynamically. To rule out problems caused by mixed precision, all the above experiments were performed both with and without mixed precision. Still, none of these solutions solved the non-convergence. My initial suspicion is that the network structure itself may be unstable.

@JusperLee, is your network not converging with default parameters? For all networks, a large learning rate can cause issues. But regarding batch size, I will check if something fishy is happening inside the network.

Hi, I did the following experiment. On a single GPU, with all other variables the same, I tried batch sizes 1 and 2.
Here's what I get with batchsize 1: epoch: 1, lr: 1.50e-04 - train si-snr: -5.78e+00 - valid si-snr: -7.92e+00 epoch: 2, lr: 1.50e-04 - train si-snr: -9.04e+00 - valid si-snr: -9.79e+00 epoch: 3, lr: 1.50e-04 - train si-snr: -1.07e+01 - valid si-snr: -1.14e+01 epoch: 4, lr: 1.50e-04 - train si-snr: -1.20e+01 - valid si-snr: -1.27e+01 epoch: 5, lr: 1.50e-04 - train si-snr: -1.32e+01 - valid si-snr: -1.34e+01 epoch: 6, lr: 1.50e-04 - train si-snr: -1.38e+01 - valid si-snr: -1.41e+01 epoch: 7, lr: 1.50e-04 - train si-snr: -1.44e+01 - valid si-snr: -1.46e+01 epoch: 8, lr: 1.50e-04 - train si-snr: -1.48e+01 - valid si-snr: -1.49e+01 epoch: 9, lr: 1.50e-04 - train si-snr: -1.51e+01 - valid si-snr: -1.53e+01 epoch: 10, lr: 1.50e-04 - train si-snr: -1.54e+01 - valid si-snr: -1.57e+01 epoch: 11, lr: 1.50e-04 - train si-snr: -1.57e+01 - valid si-snr: -1.58e+01 epoch: 12, lr: 1.50e-04 - train si-snr: -1.60e+01 - valid si-snr: -1.59e+01 epoch: 13, lr: 1.50e-04 - train si-snr: -1.62e+01 - valid si-snr: -1.62e+01 epoch: 14, lr: 1.50e-04 - train si-snr: -1.63e+01 - valid si-snr: -1.64e+01 epoch: 15, lr: 1.50e-04 - train si-snr: -1.65e+01 - valid si-snr: -1.66e+01 epoch: 16, lr: 1.50e-04 - train si-snr: -1.67e+01 - valid si-snr: -1.66e+01 epoch: 17, lr: 1.50e-04 - train si-snr: -1.69e+01 - valid si-snr: -1.69e+01 epoch: 18, lr: 1.50e-04 - train si-snr: -1.70e+01 - valid si-snr: -1.69e+01 epoch: 19, lr: 1.50e-04 - train si-snr: -1.71e+01 - valid si-snr: -1.70e+01 epoch: 20, lr: 1.50e-04 - train si-snr: -1.72e+01 - valid si-snr: -1.71e+01 epoch: 21, lr: 1.50e-04 - train si-snr: -1.73e+01 - valid si-snr: -1.73e+01 epoch: 22, lr: 1.50e-04 - train si-snr: -1.71e+01 - valid si-snr: -1.68e+01 epoch: 23, lr: 1.50e-04 - train si-snr: -1.71e+01 - valid si-snr: -1.72e+01 epoch: 24, lr: 1.50e-04 - train si-snr: -1.72e+01 - valid si-snr: -1.70e+01 epoch: 25, lr: 1.50e-04 - train si-snr: -1.73e+01 - valid si-snr: -1.73e+01 epoch: 26, lr: 1.50e-04 - train si-snr: -1.74e+01 - valid si-snr: -1.72e+01 epoch: 27, lr: 1.50e-04 - train si-snr: -1.76e+01 - valid si-snr: -1.75e+01 epoch: 28, lr: 1.50e-04 - train si-snr: -1.77e+01 - valid si-snr: -1.76e+01 epoch: 29, lr: 1.50e-04 - train si-snr: -1.78e+01 - valid si-snr: -1.77e+01 epoch: 30, lr: 1.50e-04 - train si-snr: -1.79e+01 - valid si-snr: -1.78e+01 epoch: 31, lr: 1.50e-04 - train si-snr: -1.80e+01 - valid si-snr: -1.79e+01 epoch: 32, lr: 1.50e-04 - train si-snr: -1.81e+01 - valid si-snr: -1.79e+01 epoch: 33, lr: 1.50e-04 - train si-snr: -1.82e+01 - valid si-snr: -1.80e+01 epoch: 34, lr: 1.50e-04 - train si-snr: -1.82e+01 - valid si-snr: -1.80e+01 epoch: 35, lr: 1.50e-04 - train si-snr: -1.83e+01 - valid si-snr: -1.79e+01 epoch: 36, lr: 1.50e-04 - train si-snr: -1.83e+01 - valid si-snr: -1.82e+01 epoch: 37, lr: 1.50e-04 - train si-snr: -1.84e+01 - valid si-snr: -1.82e+01 epoch: 38, lr: 1.50e-04 - train si-snr: -1.85e+01 - valid si-snr: -1.83e+01 epoch: 39, lr: 1.50e-04 - train si-snr: -1.85e+01 - valid si-snr: -1.82e+01 epoch: 40, lr: 1.50e-04 - train si-snr: -1.85e+01 - valid si-snr: -1.83e+01 epoch: 41, lr: 1.50e-04 - train si-snr: -1.86e+01 - valid si-snr: -1.84e+01 epoch: 42, lr: 1.50e-04 - train si-snr: -1.87e+01 - valid si-snr: -1.85e+01 epoch: 43, lr: 1.50e-04 - train si-snr: -1.86e+01 - valid si-snr: -1.84e+01 epoch: 44, lr: 1.50e-04 - train si-snr: -1.87e+01 - valid si-snr: -1.84e+01 epoch: 45, lr: 1.50e-04 - train si-snr: -1.87e+01 - valid si-snr: -1.85e+01 epoch: 46, lr: 1.50e-04 - train si-snr: -1.88e+01 - valid si-snr: -1.86e+01 epoch: 47, lr: 1.50e-04 - train 
si-snr: -1.88e+01 - valid si-snr: -1.86e+01 epoch: 48, lr: 1.50e-04 - train si-snr: -1.88e+01 - valid si-snr: -1.86e+01 epoch: 49, lr: 1.50e-04 - train si-snr: -1.89e+01 - valid si-snr: -1.87e+01 epoch: 50, lr: 1.50e-04 - train si-snr: -1.90e+01 - valid si-snr: -1.87e+01 And here's what I get with batchsize 2: epoch: 1, lr: 1.50e-04 - train si-snr: -3.79e+00 - valid si-snr: -6.96e+00 epoch: 2, lr: 1.50e-04 - train si-snr: -6.60e+00 - valid si-snr: -8.55e+00 epoch: 3, lr: 1.50e-04 - train si-snr: -7.97e+00 - valid si-snr: -9.54e+00 epoch: 4, lr: 1.50e-04 - train si-snr: -8.78e+00 - valid si-snr: -9.93e+00 epoch: 5, lr: 1.50e-04 - train si-snr: -9.56e+00 - valid si-snr: -1.11e+01 epoch: 6, lr: 1.50e-04 - train si-snr: -1.03e+01 - valid si-snr: -1.16e+01 epoch: 7, lr: 1.50e-04 - train si-snr: -1.09e+01 - valid si-snr: -1.24e+01 epoch: 8, lr: 1.50e-04 - train si-snr: -1.16e+01 - valid si-snr: -1.28e+01 epoch: 9, lr: 1.50e-04 - train si-snr: -1.21e+01 - valid si-snr: -1.35e+01 epoch: 10, lr: 1.50e-04 - train si-snr: -1.25e+01 - valid si-snr: -1.40e+01 epoch: 11, lr: 1.50e-04 - train si-snr: -1.29e+01 - valid si-snr: -1.43e+01 epoch: 12, lr: 1.50e-04 - train si-snr: -1.33e+01 - valid si-snr: -1.45e+01 epoch: 13, lr: 1.50e-04 - train si-snr: -1.34e+01 - valid si-snr: -1.47e+01 epoch: 14, lr: 1.50e-04 - train si-snr: -1.39e+01 - valid si-snr: -1.50e+01 epoch: 15, lr: 1.50e-04 - train si-snr: -1.40e+01 - valid si-snr: -1.51e+01 epoch: 16, lr: 1.50e-04 - train si-snr: -1.42e+01 - valid si-snr: -1.54e+01 epoch: 17, lr: 1.50e-04 - train si-snr: -1.44e+01 - valid si-snr: -1.56e+01 epoch: 18, lr: 1.50e-04 - train si-snr: -1.46e+01 - valid si-snr: -1.58e+01 epoch: 19, lr: 1.50e-04 - train si-snr: -1.48e+01 - valid si-snr: -1.58e+01 epoch: 20, lr: 1.50e-04 - train si-snr: -1.48e+01 - valid si-snr: -1.60e+01 epoch: 21, lr: 1.50e-04 - train si-snr: -1.51e+01 - valid si-snr: -1.62e+01 epoch: 22, lr: 1.50e-04 - train si-snr: -1.50e+01 - valid si-snr: -1.60e+01 epoch: 23, lr: 1.50e-04 - train si-snr: -1.51e+01 - valid si-snr: -1.61e+01 epoch: 24, lr: 1.50e-04 - train si-snr: -1.53e+01 - valid si-snr: -1.62e+01 epoch: 25, lr: 1.50e-04 - train si-snr: -1.54e+01 - valid si-snr: -1.65e+01 epoch: 26, lr: 1.50e-04 - train si-snr: -1.53e+01 - valid si-snr: -1.66e+01 epoch: 27, lr: 1.50e-04 - train si-snr: -1.56e+01 - valid si-snr: -1.67e+01 epoch: 28, lr: 1.50e-04 - train si-snr: -1.57e+01 - valid si-snr: -1.67e+01 epoch: 29, lr: 1.50e-04 - train si-snr: -1.54e+01 - valid si-snr: -1.64e+01 epoch: 30, lr: 1.50e-04 - train si-snr: -1.54e+01 - valid si-snr: -1.66e+01 epoch: 31, lr: 1.50e-04 - train si-snr: -1.57e+01 - valid si-snr: -1.65e+01 epoch: 32, lr: 1.50e-04 - train si-snr: -1.58e+01 - valid si-snr: -1.70e+01 epoch: 33, lr: 1.50e-04 - train si-snr: -1.59e+01 - valid si-snr: -1.70e+01 epoch: 34, lr: 1.50e-04 - train si-snr: -1.61e+01 - valid si-snr: -1.71e+01 epoch: 35, lr: 1.50e-04 - train si-snr: -1.60e+01 - valid si-snr: -1.72e+01 epoch: 36, lr: 1.50e-04 - train si-snr: -1.62e+01 - valid si-snr: -1.72e+01 epoch: 37, lr: 1.50e-04 - train si-snr: -1.62e+01 - valid si-snr: -1.71e+01 epoch: 38, lr: 1.50e-04 - train si-snr: -1.62e+01 - valid si-snr: -1.71e+01 epoch: 39, lr: 1.50e-04 - train si-snr: -1.62e+01 - valid si-snr: -1.73e+01 epoch: 40, lr: 1.50e-04 - train si-snr: -1.63e+01 - valid si-snr: -1.74e+01 epoch: 41, lr: 1.50e-04 - train si-snr: -1.65e+01 - valid si-snr: -1.74e+01 epoch: 42, lr: 1.50e-04 - train si-snr: -1.64e+01 - valid si-snr: -1.73e+01 epoch: 43, lr: 1.50e-04 - train si-snr: -1.66e+01 - valid 
si-snr: -1.76e+01 epoch: 44, lr: 1.50e-04 - train si-snr: -1.65e+01 - valid si-snr: -1.76e+01 epoch: 45, lr: 1.50e-04 - train si-snr: -1.64e+01 - valid si-snr: -1.76e+01 epoch: 46, lr: 1.50e-04 - train si-snr: -1.66e+01 - valid si-snr: -1.77e+01 epoch: 47, lr: 1.50e-04 - train si-snr: -1.67e+01 - valid si-snr: -1.77e+01 epoch: 48, lr: 1.50e-04 - train si-snr: -1.68e+01 - valid si-snr: -1.76e+01 epoch: 49, lr: 1.50e-04 - train si-snr: -1.67e+01 - valid si-snr: -1.78e+01 epoch: 50, lr: 1.50e-04 - train si-snr: -1.69e+01 - valid si-snr: -1.79e+01 While it is true that the performance is a bit lower with batchsize=2, I am not really observing a major instability. I am closing the issue for now, as from what I observe the default code seems to be stable with batchsize=2. Feel free to open it again. By comparing my code with SpeechBrain's, I found that the problem might be in the module-copying part of SpeechBrain, though I didn't look into why it would cause this. https://github.com/speechbrain/speechbrain/blob/b5d2836e3d0eabb541c5bdbca16fb00c49cb62a3/speechbrain/lobes/models/dual_path.py#L960 Hm.. I am not sure why this would cause a problem.. that copy basically just replicates the dual computation module..
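For readers following along, the pattern under suspicion is plain module replication via copy.deepcopy. A minimal sketch of what that copy does (illustrative Python only, not SpeechBrain's actual dual_path.py code):

```python
# Sketch of deepcopy-based layer replication: each copy owns its own
# parameter tensors, so the stacked blocks do not share weights and each
# block receives its own gradients during backprop.
import copy
import torch.nn as nn

def replicate(block: nn.Module, num_layers: int) -> nn.ModuleList:
    return nn.ModuleList([copy.deepcopy(block) for _ in range(num_layers)])

template = nn.TransformerEncoderLayer(d_model=64, nhead=8)
layers = replicate(template, num_layers=4)
assert layers[0] is not layers[1]          # independent modules
assert all(p.requires_grad for p in layers[0].parameters())
```

Since each copy is parameter-independent, the replication itself does not interact with the batch dimension, which is consistent with the doubt expressed above.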
gharchive/issue
2022-02-28T07:50:33
2025-04-01T06:40:26.784507
{ "authors": [ "JusperLee", "caixy97", "ycemsubakan" ], "repo": "speechbrain/speechbrain", "url": "https://github.com/speechbrain/speechbrain/issues/1314", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1767490776
Basic SECURITY.md It is good to have at least some rudimentary security policy set up for SpeechBrain. The GitHub template suggests two sections: Supported versions and Reporting a vulnerability. I have made the security updates paragraph a bit clearer.
gharchive/pull-request
2023-06-21T12:27:36
2025-04-01T06:40:26.787617
{ "authors": [ "Gastron" ], "repo": "speechbrain/speechbrain", "url": "https://github.com/speechbrain/speechbrain/pull/2041", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1626737703
Log Improvement: Options for only the first 10 column families are reported to the log The options of column families are reported to the log at the top of every log file. However, if there are more than 10 column families (not very common, but definitely allowed and occurring in practice), only the options of the first 10 are reported to the log. Throughout the log file, any other log line that is associated with any column family will still be reported, so you find in the log information about column families whose options you don't know. @udi-speedb, do you know if these options are printed to the OPTIONS file? Also, I don't know if this is a bug, since it's definitely intentional. @Yuval-Ariel: I do not know if it's in the options; I assume it is. I agree that it's intentional, but I still think it should be considered a bug. I believe that a log file should let a person see all the information that the log refers to. In addition, I might have access only to a log file (e.g., in a log parsing tool) and should be able to use it alone to parse and process. You will see in the log events, stats, etc. related to the "missing" column families, but no options for them. I was working on something earlier that would only print/return options that were different from the default. This could be useful if we wanted to keep the logs (or options files) shorter and pruned. I can try to resurrect that code... @mrambacher As part of the log parser tool, I am displaying a diff between baseline options files (options files that are generated from official RocksDB / Speedb releases, whose values are the defaults for that release) and the options as displayed in the log file. I am opening a new issue that will keep the limit of 10 CFs but, unlike now, will print their names to the log. This is a simpler change and also a more useful one, as most users do not have more than 10 CFs anyway. In addition, this will apply to any number of CFs in the log, so it's useful either way. Until this issue is resolved (I am not sure reporting the options for all of the CFs is a valid solution when there are many CFs) I have added https://github.com/speedb-io/speedb/issues/520 My concern with reporting all of the options is that, when there are many CFs (their number is not limited), we may bloat the log file with the text reporting the options of all of the CFs.
This may be a bigger issue when log files are rotated frequently, as the options are reported at the top of every rolled log. That's why @mrambacher's suggestion of reporting only the options that differ from those of the first CF is a great one. I believe doing this is independent of the log parser, and it would have several beneficial effects: it reduces confusion, since any option that differs would immediately stand out; it reduces writing to the log; and it allows the options of all CFs to be printed to the log, which is what this issue is all about. @mrambacher - Please attach a sample log output when you have one ready, so we would be able to better understand how that would look (and also estimate the effort of the log parser's adaptation). @mrambacher - Could you please add a reference for the PRs on which you rely as infrastructure for this one? This is being resolved in stages that will require several PRs: #619 changes the serialize methods to use Properties/Maps instead of strings. This allows later formatting to be implemented #648 allows only options that were changed to be part of the serialization. This allows the output written to the Dump to be shorter and only contain the pertinent information, thereby shrinking the size of the LOG. #651 adds a pluggable formatter that allows options to be serialized in different formats (such as that written to the LOG) #719 changes the Options::Dump to use the Options internal code and not hard-coded values. This ensures that all options are logged appropriately (as new ones are added) There will also be a subsequent PR that brings this all together and removes the cap on the number of CFs that are written.
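To make the idea concrete, here is a short illustrative sketch (Python for brevity; the real work is C++ and lives in the PRs listed above, and the option names and defaults below are made up) of serializing only the options that differ from a baseline, whether that baseline is the library defaults or the first CF's options:

```python
# Illustrative sketch of the idea above: dump only the options that differ
# from a baseline (the library defaults, or the first CF's options).
# Option names and values here are made up.
BASELINE = {"write_buffer_size": 64 << 20, "max_write_buffer_number": 2,
            "compression": "snappy"}

def dump_changed_options(cf_name: str, options: dict) -> list:
    lines = []
    for key, value in sorted(options.items()):
        if BASELINE.get(key) != value:   # keep only what was changed
            lines.append(f"[{cf_name}] {key}: {value}")
    return lines

print("\n".join(dump_changed_options("cf_42", {
    "write_buffer_size": 128 << 20,      # changed -> logged
    "max_write_buffer_number": 2,        # matches baseline -> skipped
    "compression": "zstd",               # changed -> logged
})))
```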
gharchive/issue
2023-03-16T05:17:22
2025-04-01T06:40:26.798989
{ "authors": [ "Yuval-Ariel", "mrambacher", "udi-speedb" ], "repo": "speedb-io/speedb", "url": "https://github.com/speedb-io/speedb/issues/430", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
121377926
Jinja support in Hugo Hi, we are really liking Hugo. Thanks for the great software. We would like to see Jinja template support in Hugo. My questions are: Is there any plan to add support for Jinja (via flosch/pongo2, maybe) templates? If there is no plan for the core team to add Jinja support, are you folks interested in accepting a code contribution in this regard? Is there any guide on how to add support for a new template engine? Thanks. Hello @codefx9, there is already an issue (#1359) requesting support for pongo2. Your effort could maybe give this enhancement a push. @digitalcraftsman, thanks for pointing this out. Closing this one.
gharchive/issue
2015-12-10T01:00:55
2025-04-01T06:40:26.832593
{ "authors": [ "codefx9", "digitalcraftsman" ], "repo": "spf13/hugo", "url": "https://github.com/spf13/hugo/issues/1697", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
299611833
Unusual camelCase repository name breaks vgo The x/vgo prototype enforces strict casing on imports. spf13/viper imports spf13/jwalterweatherman, which is not the same casing as the GitHub repo. This breaks importing the library. Arguably vgo should not be enforcing case. However, even you, the author, don't use the correct case, which suggests to me that maybe, in this case, the GitHub repo should just be renamed to what people actually import it as. WDYT? I'm not the author, but I have an opinion: the camel case makes a good test case for x/vgo and similar tools. So I vote to keep it as is. My question would be: what do we gain from this test case? AFAICT, this repository is the only popular repository out there that violates the casing convention. Allowing mismatched-case packages is problematic, because it will only work with vgo in specific situations ("is the repo on GitHub", basically). The generic module hosting mechanisms described for vgo have as a requirement "must be able to use a dumb static content server," which means there must be no arbitrary transformations between the imported name and the name of the thing hosting the modules. In theory, vgo could add a bunch of special-casing for just GitHub... But OTOH, this really is only a problem during the onboarding phase of vgo. Post vgo adoption, any such mismatching would be found instantly, and corrected before the module is even published. So, my question is: is it worth the pain to support this special case, if it results in a less consistent vgo UX, when GitHub trivially allows renaming of repositories? "AFAICT, this repository is the only popular repository out there that violates the casing convention." That I doubt. But as I said, I'm not the owner of this repo. If it somehow stops me from using vgo when that time comes, then I will maybe revisit this problem. But vgo is an early prototype; there are lots of "case issues" and other stuff to be ironed out. @bep this seems to be fixed in a recent version of vgo. @danderson would you consider trying go get -u golang.org/x/vgo and, if it works, consider closing this issue? The "canonical Go import" path of this module has always been github.com/spf13/jwalterweatherman -- I say this because that is how it has always been in Hugo, the origin of this module. This is also the module name used in go.mod, which works fine for most. So, the only correct thing to do is to rename this repository to all lowercase. I will get @spf13 's attention about this and get it done.
gharchive/issue
2018-02-23T06:27:57
2025-04-01T06:40:26.837618
{ "authors": [ "bep", "danderson", "losinggeneration" ], "repo": "spf13/jWalterWeatherman", "url": "https://github.com/spf13/jWalterWeatherman/issues/22", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1898611088
OSError: [Errno 22] Invalid argument when trying to run cpuset-modify on user.slice Today I noticed that after a system upgrade if I attempt to run vfio-isolate cpuset-modify --cpus C0-15 user.slice, it no longer works: 17:02:29 ~ #> vfio-isolate cpuset-modify --cpus C0-15 user.slice OSError: [Errno 22] Invalid argument During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/bin/vfio-isolate", line 33, in <module> sys.exit(load_entry_point('vfio-isolate==0.5.2', 'console_scripts', 'vfio-isolate')()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 200, in run_cli executor.run() File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 191, in run for undo in e.action.record_undo(e.params): File "/usr/lib/python3.11/site-packages/vfio_isolate/action/cpuset_modify.py", line 39, in record_undo cpus=cpu_set.get_cpus(), ^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 69, in get_cpus return self.impl.get_cpus(self) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 232, in get_cpus CGroupV2.ensure_cpuset_controller_enabled(cpuset) File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 228, in ensure_cpuset_controller_enabled CGroupV2.enable_controller(cpuset, "cpuset") File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 222, in enable_controller with cpuset.open("cgroup.subtree_control", "w") as f: OSError: [Errno 22] Invalid argument Changing the cpuset from --cpus C0-15 to something else does not seem to make any difference. The command does work for system.slice, strangely enough. I'm running #> uname -a Linux Archiroo 6.1.53-1-lts #1 SMP PREEMPT_DYNAMIC Wed, 13 Sep 2023 09:32:00 +0000 x86_64 GNU/Linux with systemd 254.3-1, in case it is relevant. I saw the same behavior -- failed cpuset.open("cgroup.subtree_control"... system.slice worked for me also. I even checked /sys/fs/cgroup/user.slice/cgroup.subtree_control. It exists and appears normal. I am using kernel 6.5.5-arch1-1 with systemd 254.5-1. Not sure which update caused this or how recently. vfio-isolate seemed to become intermittent some weeks or even longer ago. I was seeing silent failures to restore the undo file and restore cores to the host. I had no time to look at it or even use it much, but I did just now find the same error message as roobre. Previously, I was experiencing silent failures to restore the undo file and free cores up to the host again. The error disappeared suddenly while I was adding print statements for logging and testing repeatedly. I have a feeling it may return on next boot - if it does, I'll try to document what exactly gets around it. Happy to run any other debugging needed. 
This is what I'm getting on a fresh boot of 6.5.6-arch2-1 I have a the vfio_isolate command-line used below along with a crude debug from this non-Python guy /usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py has a modified open() that does print (self.__path(file)) Those are the paths you see printed on their own lines vfio-isolate -v -u /var/run/libvirt/qemu/vfio-isolate-undo.bin drop-caches cpuset-modify --cpus C8-14,24-30 /system.slice cpuset-modify --cpus C8-14,24-30 /user.slice compact-memory cpu-governor performance C0-7,15-23,31 irq-affinity mask C0-7,15-23,31 /sys/fs/cgroup/system.slice/cgroup.controllers /sys/fs/cgroup/system.slice/cgroup.subtree_control FileNotFoundError: [Errno 2] No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/bin/vfio-isolate", line 33, in <module> sys.exit(load_entry_point('vfio-isolate==0.5.2', 'console_scripts', 'vfio-isolate')()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 200, in run_cli executor.run() File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 191, in run for undo in e.action.record_undo(e.params): File "/usr/lib/python3.11/site-packages/vfio_isolate/action/cpuset_modify.py", line 39, in record_undo cpus=cpu_set.get_cpus(), ^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 70, in get_cpus return self.impl.get_cpus(self) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 233, in get_cpus CGroupV2.ensure_cpuset_controller_enabled(cpuset) File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 229, in ensure_cpuset_controller_enabled CGroupV2.enable_controller(cpuset, "cpuset") File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 223, in enable_controller with cpuset.open("cgroup.subtree_control", "w") as f: FileNotFoundError: [Errno 2] No such file or directory taskset -pc 8-14,24-30 2 pid 2's current affinity list: 0-31 vfio-isolate -v restore /var/run/libvirt/qemu/vfio-isolate-undo.bin Traceback (most recent call last): File "/usr/bin/vfio-isolate", line 33, in <module> sys.exit(load_entry_point('vfio-isolate==0.5.2', 'console_scripts', 'vfio-isolate')()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 199, in run_cli cli(standalone_mode=False, obj=executor) File "/usr/lib/python3.11/site-packages/click/core.py", line 1157, in __call__ return self.main(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/click/core.py", line 1078, in main rv = self.invoke(ctx) ^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/click/core.py", line 1719, in invoke rv.append(sub_ctx.command.invoke(sub_ctx)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/click/core.py", line 1434, in invoke return ctx.invoke(self.callback, **ctx.params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/click/core.py", line 783, in invoke return __callback(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/click/decorators.py", line 45, in new_func return f(get_current_context().obj, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 171, in restore with 
open(undo_file, "rb") as f: ^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: [Errno 2] No such file or directory: '/var/run/libvirt/qemu/vfio-isolate-undo.bin' taskset -pc 0-31 2 pid 2's current affinity list: 8-14,24-30 pid 2's new affinity list: 0-31 [ranguvar@khufu ~]$ ls -l /sys/fs/cgroup/system.slice/cgroup.subtree_control -rw-r--r-- 1 root root 0 Oct 7 21:31 /sys/fs/cgroup/system.slice/cgroup.subtree_control [ranguvar@khufu ~]$ cat /sys/fs/cgroup/system.slice/cgroup.subtree_control memory pids I don't believe the failure to find the restore file is an issue, I believe it complains the same yet creates it happily in the past. Obviously it then isn't ready for the restore either.
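Both failures above happen at the same step: the write to cgroup.subtree_control inside enable_controller. A defensive sketch of that step (plain Python, not a patch to vfio-isolate itself) that surfaces why the write fails:

```python
# Sketch (not vfio-isolate's actual code) of enabling the cpuset controller
# on a cgroup v2 slice, with explicit diagnostics instead of an unhandled
# OSError. Paths follow the cgroup v2 layout seen in the traces above.
import os

def enable_cpuset(cgroup: str = "/sys/fs/cgroup/user.slice") -> None:
    with open(os.path.join(cgroup, "cgroup.controllers")) as f:
        available = f.read().split()
    if "cpuset" not in available:
        # If the parent has not delegated cpuset, the write below can only fail.
        raise RuntimeError(f"cpuset not available in {cgroup}: {available}")
    try:
        with open(os.path.join(cgroup, "cgroup.subtree_control"), "w") as f:
            f.write("+cpuset")
    except OSError as exc:
        # EINVAL/ENOENT here typically reflects cgroup state (delegation,
        # child cgroups), not the CPU list being written elsewhere.
        raise RuntimeError(f"cannot enable cpuset in {cgroup}: {exc}") from exc
```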
gharchive/issue
2023-09-15T15:05:52
2025-04-01T06:40:26.847027
{ "authors": [ "Ranguvar", "roobre" ], "repo": "spheenik/vfio-isolate", "url": "https://github.com/spheenik/vfio-isolate/issues/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
210019394
Add option to fill all rows with general order information In the order export, we should offer an option to fill all rows of an order with the general order information, instead of just the first row of an order. Background: Currently, the 1st row in the order export contains general information such as the address; the rows from the 2nd onwards hold the line items of the order. The general columns are not filled in those rows because they would be redundant, but some of our customers would like them to always be filled. The first row of the order with lineItems contains basic (order-level) info like addresses, prices, etc. If we also put this info in the other rows, should we remove the first row because of the redundancy? What do you think @Siilwyn @hisabimbola? I think we shouldn't add more options and instead fill them all as a default behaviour unless there is a huge benefit that I'm missing. The redundancy is not that big of a problem imho. We should not remove the first row, but I am not sure whether we should do this without a flag; what if some users do not want the redundant rows 🙄 I added here https://github.com/sphereio/sphere-order-export/issues/48 a --fillAllRows parameter which will add this behavior. I'll just leave this here: https://www.youtube.com/watch?v=glZ1C-Yu5tw ^^ :wave:
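For illustration, the requested behaviour amounts to forward-filling the order-level columns into the line-item rows. A sketch (Python, not the actual export code; the column names are made up):

```python
# Sketch of --fillAllRows: copy the order-level columns from the first row
# of each order into the following line-item rows.
ORDER_LEVEL = ("orderNumber", "customerEmail", "shippingAddress", "totalPrice")

def fill_all_rows(rows: list) -> list:
    current = {}
    for row in rows:
        if row.get("orderNumber"):      # first row of a new order
            current = {k: row.get(k, "") for k in ORDER_LEVEL}
        else:                           # line-item row: fill in order info
            row.update(current)
    return rows
```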
gharchive/issue
2017-02-24T11:01:05
2025-04-01T06:40:26.851632
{ "authors": [ "Oehmi", "Siilwyn", "hisabimbola", "junajan" ], "repo": "sphereio/sphere-order-export", "url": "https://github.com/sphereio/sphere-order-export/issues/47", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
604343976
Fix PHP 7.4 deprecation There's a deprecation at https://github.com/spicywebau/craft-embedded-assets/blob/master/src/Service.php#L196 with PHP 7.4: the line uses a nested ternary without explicit parentheses, which PHP 7.4 deprecates. It should be changed to: $code = ($value instanceof Twig_Markup ? (string)$value : is_string($value)) ? $value : ''; (the parentheses preserve the legacy left-associative behaviour). The fix has been released in v2.2.1.1.
gharchive/issue
2020-04-21T23:19:12
2025-04-01T06:40:26.870057
{ "authors": [ "engram-design", "pvldigital" ], "repo": "spicywebau/craft-embedded-assets", "url": "https://github.com/spicywebau/craft-embedded-assets/issues/127", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
303314188
Name Spider-Gazelle There is already a project with the same name, but for Ruby. Wouldn't something more creative be better? https://github.com/cotag/spider-gazelle I saw that you are a committer there, but the two projects could be confused, since they have the same purpose and identical names. What sets it apart is the language. IMHO =/
gharchive/issue
2018-03-08T00:44:00
2025-04-01T06:40:26.871339
{ "authors": [ "kalicki" ], "repo": "spider-gazelle/spider-gazelle", "url": "https://github.com/spider-gazelle/spider-gazelle/issues/2", "license": "WTFPL", "license_type": "permissive", "license_source": "github-api" }
118260240
change gremlin-server version for vagrant to 3.0.2, added composer.lock @chrismichaels84 can you just double-check the sanity of the vagrant changes? I can't really test vagrant in my env yet (will try to get it up and running). Or @smolinari, if you get a chance, test vagrant on this. @smolinari does this work in the end? Can I merge this branch or do changes need to be made? @PommeVerte - yes. It is working fine. I think the issue I had yesterday afternoon was caused by a flaky internet connection that somehow screwed up the Gremlin install. This morning all worked as it should on the Gremlin/Neo4j side. Scott OK awesome, adding this then :) Theoretically, if Chris accepts my changes (plus some needed cleanup work), then we'd also have ODB up to 2.1.6 too. Scott
gharchive/pull-request
2015-11-22T12:20:59
2025-04-01T06:40:26.874308
{ "authors": [ "PommeVerte", "smolinari" ], "repo": "spider/spider", "url": "https://github.com/spider/spider/pull/100", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1964978329
[feature] add spidernode CRD which preallocates IPs for each node 1 Code requirement 2 observe open-source license 3 sign off your commit 4 Is your feature request related to a problem? Please describe. 5 Describe the solution you'd like Currently, an ENI CNI is possible in an AWS ECS environment, and, similar to how ENIs are designed into the ciliumnodes resource, maybe Spiderpool could add another CRD, like spidernode, which would preallocate IPs for each node. The IPs come from the VPC, but only plain IPs are supported, not subnets, so spidernode would record the list of preallocated IPs. Below are some likely steps: daemon (spider agent): use spidernode as the source for IP allocation; synchronize the spidernode and spiderpool resources; optionally do some extra operations on nodes, such as adding network interfaces, setting IP/MAC, etc. controller (spider controller): create and update spidernode from spiderpool; synchronize spec and status information with the daemon's decisions 6 Describe alternatives you've considered 7 Additional context @yylt Hi, it looks like this is about public cloud VPC support, right? At present, for public cloud usage with Spiderpool, you could just define each node's IPs in a different SpiderIPPool resource with IPPool.Spec.NodeName or IPPool.Spec.NodeAffinity, to restrict that SpiderIPPool resource to serving only the node you specified. And you could check the Alibaba Cloud and AWS cloud blogs. For the new CRD support, our team will discuss it or add some public cloud support later. And you could also commit a PR to implement this proposal if you would like. Thanks. Some CNIs record the IP resource in the Node object, while Spiderpool records it in SpiderIPPool; currently, the affinity settings of SpiderIPPool support binding some IP resources to a specific interface of a specific VM. That is convenient for creating an ipvlan interface from a specific master interface. I think the SpiderIPPool way also meets the same expectation. Another reason, as described at https://spidernet-io.github.io/spiderpool/v0.7/reference/crd-spiderippool: the spec.ips field is in IP-range format, which cannot be made compatible with ENI. Closed by #2536
gharchive/issue
2023-10-27T07:58:39
2025-04-01T06:40:26.882460
{ "authors": [ "Icarus9913", "weizhoublue", "yylt" ], "repo": "spidernet-io/spiderpool", "url": "https://github.com/spidernet-io/spiderpool/issues/2465", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2486459252
failed to cherry-pick PR 3873 from cyclinder to branch release-v1.0 Commit 22a33847f9755f972c03aa121b41f19cc3b62a22 from cyclinder conflicts when merged to branch release-v1.0; please cherry-pick it manually. PR https://github.com/spidernet-io/spiderpool/pull/3873 , action https://github.com/spidernet-io/spiderpool/actions/runs/10557667197 Auto-merging test/e2e/reclaim/reclaim_test.go Auto-merging test/scripts/debugEnv.sh CONFLICT (content): Merge conflict in test/scripts/debugEnv.sh error: could not apply 22a33847... Merge pull request #3873 from cyclinder/coordinator/tune_pod_route hint: After resolving the conflicts, mark them with hint: "git add/rm <pathspec>", then run hint: "git cherry-pick --continue". hint: You can instead skip this commit with "git cherry-pick --skip". hint: To abort and get back to the state before "git cherry-pick", hint: run "git cherry-pick --abort". hint: Disable this message with "git config advice.mergeConflict false" https://github.com/spidernet-io/spiderpool/pull/3971
gharchive/issue
2024-08-26T09:57:59
2025-04-01T06:40:26.884877
{ "authors": [ "cyclinder", "weizhoublue" ], "repo": "spidernet-io/spiderpool", "url": "https://github.com/spidernet-io/spiderpool/issues/3954", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2671778607
feat: Add an example database Feature Description Make a test sample database. Add a SQLite database option if one does not exist. Motivation Be able to test the self-hosted system quickly and easily. Many thanks. Could you let me know more about your plans with self-hosting? I just want to gauge if termic would indeed be the best fit for you or if there's a better way of achieving what you want to do. Do you already have data to show on the frontend? I don't believe it's reasonable to think of termic as handling anything related to database management per se (one should rather see it as a pre-configured web app for browsing and searching terminology data). I expect people who self-host to already have their database available. But I can always add an example database to show what it can look like, yes. My idea is to explore the option of having a place where you can upload translations and be able to consult them; in my case I am a translator and I run a lot of queries to see how certain terms have been translated in the past. I would like to be able to upload .po files (gettext). I think it would be great to have a docker compose that installs a blank database, and for termic to have a manager that allowed, once installed, uploading translations from tmx, po, or csv files... I see. I guess that's a reasonable use of termic and something that could be supported. I originally made some tests with a SQLite database when first developing termic, but I eventually moved to PostgreSQL (which is a much better fit for deployment). If you plan on using termic locally though, this could totally be done. I can't give an estimate on how much time this will take to implement as I have quite a lot on my plate right now, but I hope I can have a look at it in January. Hello: 🙂 OK! It would be wonderful if it worked with SQLite. Being able to use Termic inside a docker container with a preloaded database would be great, including with PostgreSQL in a separate container in the same stack. Something interesting would be being able to remove translations for languages that you don't need. I think an export option that allows you to send translations to Termic in their own format would be great. This would make it easier to contribute to the Termic project. None of this is urgent. Thank you very much anyway.
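To make the .po use case concrete, here is a sketch of turning a gettext catalogue into rows that could be bulk-loaded into a termbase (Python with the polib library; the row layout is made up):

```python
# Sketch: extract source/target pairs from a gettext .po file so they could
# be bulk-loaded into a terminology database. Requires `pip install polib`.
import polib

def po_to_rows(path: str, source_lang: str, target_lang: str) -> list:
    rows = []
    for entry in polib.pofile(path):
        if entry.translated() and not entry.obsolete:
            rows.append((entry.msgid, source_lang, entry.msgstr, target_lang))
    return rows

rows = po_to_rows("messages.po", "en", "gl")
print(f"{len(rows)} translation pairs extracted")
```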
gharchive/issue
2024-11-19T11:07:36
2025-04-01T06:40:26.890371
{ "authors": [ "damufo", "spidersouris" ], "repo": "spidersouris/termic", "url": "https://github.com/spidersouris/termic/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1679875449
M1 Build for macOS An Apple Silicon release would be nice! Yep thanks, meant to but forgot, will do that in a day or two. The x86_64 version should work through Rosetta however. If you try and find that it doesn't please let me know. I replaced the x86_64 version with a universal binary.
gharchive/issue
2023-04-23T06:05:35
2025-04-01T06:40:26.891936
{ "authors": [ "electr1fy0", "spieglt" ], "repo": "spieglt/FlyingCarpet", "url": "https://github.com/spieglt/FlyingCarpet/issues/35", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
650944204
Processing killed on Compute Canada With the following SLURM config: #SBATCH --account=def-jcohen #SBATCH --time=0-03:00 # time (DD-HH:MM) #SBATCH --ntasks=128 # number of MPI processes #SBATCH --mem-per-cpu=4096M # memory; default unit is megabytes and this YML config for sct_run_batch: path_data: /scratch/jcohen/data-single-subject-master path_output: /scratch/jcohen/results task: /home/jcohen/code/spine-generic/spine-generic/processing/process_data.sh jobs: 128 I get this job killed: /home/jcohen/code/spine-generic/spine-generic/processing/process_data.sh: line 76: 4663 Killed sct_deepseg_sc -i ${file}.nii.gz -c $contrast -qc ${PATH_QC} -qc-subject ${SUBJECT} and another job got the following error: OSError: [Errno 12] Cannot allocate memory This issue was solved by raising the memory per core, with this SLURM config: #SBATCH --account=def-jcohen #SBATCH --time=0-01:00 # time (DD-HH:MM) #SBATCH --ntasks=128 # number of MPI processes #SBATCH --mem-per-cpu=16384 # memory; default unit is megabytes #SBATCH --mail-user=*** #SBATCH --mail-type=ALL
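For context, the change is not small: with one CPU per task, the aggregate memory request implied by those settings grows fourfold (a quick arithmetic sketch, assuming the scheduler grants the full per-CPU amount):

```python
# Aggregate memory implied by the SLURM settings above, assuming 1 CPU/task.
ntasks = 128
before_mb, after_mb = 4096, 16384
print(f"before: {ntasks * before_mb // 1024} GB in total")  # 512 GB
print(f"after:  {ntasks * after_mb // 1024} GB in total")   # 2048 GB
```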
gharchive/issue
2020-07-04T19:29:58
2025-04-01T06:40:26.914665
{ "authors": [ "jcohenadad" ], "repo": "spine-generic/spine-generic", "url": "https://github.com/spine-generic/spine-generic/issues/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
826781245
[BUG] Unable to install using composer I tried installing the project using the composer method as outlined here: https://roadrunner.dev/docs/intro-install, in a fresh, empty composer project; however, I get the following error: PHP Fatal error: Uncaught Error: Class 'Spiral\RoadRunner\Version' not found in ./vendor/spiral/roadrunner-cli/bin/rr:67 I tried this code: composer require spiral/roadrunner ./vendor/bin/rr get The version of RR used: ^2.0 (v2.0.1) @jwillp Feel free to close the issue if the problem was resolved.
gharchive/issue
2021-03-09T23:15:42
2025-04-01T06:40:27.013495
{ "authors": [ "48d90782", "jwillp" ], "repo": "spiral/roadrunner", "url": "https://github.com/spiral/roadrunner/issues/581", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1120098291
PAPP-22704: Update release note Please ensure your pull request (PR) adheres to the following guidelines: Please refer to our contributing documentation for any questions on submitting a pull request, link: Contribution Guide Pull Request Checklist Please check if your PR fulfills the following requirements: [ ] Testing of all the changes has been performed (for bug fixes / features) [ ] The readme.html has been reviewed and added / updated if needed (for bug fixes / features) [ ] Use the following format for the PR description: <App Name>: <PR Type> - <PR Description> [ ] Provide release notes as part of the PR submission which describe high-level points about the changes for the upcoming GA release. [ ] Verify all checks are passing. [ ] Do NOT use the next branch of the forked repo. Create a separate feature branch for raising the PR. [ ] Do NOT submit updates to dependencies unless it fixes an issue. Pull Request Type Please check the type of change your PR introduces: [ ] New App [ ] Bugfix [ ] Feature [ ] Code style update (formatting, renaming) [ ] Refactoring (no functional changes, no API changes) [ ] Documentation [ ] Other (please describe): Release Notes (REQUIRED) Provide release notes as part of the PR submission which describe high-level points about the changes for the upcoming GA release. What is the current behavior? (OPTIONAL) Describe the current behavior that you are modifying. What is the new behavior? (OPTIONAL) Describe the behavior or changes that are being added by this PR. Other information (OPTIONAL) Any other information that is important to this PR, such as screenshots of how the component looks before and after the change. Pay close attention to (OPTIONAL) Any specific code change or test case points which must be addressed/reviewed at the time of GA release. Screenshots (if relevant) Thanks for contributing! Note: the GitHub integration test pipeline failed because of a timeout, so it was run in GitLab. Backend run: https://cd.splunkdev.com/phantom-apps/app-tests/-/jobs/24903653 UI run: https://cd.splunkdev.com/phantom-apps/app-tests/-/jobs/24903654
gharchive/pull-request
2022-02-01T00:52:08
2025-04-01T06:40:27.070301
{ "authors": [ "mpan-splunk" ], "repo": "splunk-soar-connectors/sep14", "url": "https://github.com/splunk-soar-connectors/sep14/pull/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
825989173
ADDON-34291: Added Validation utility ADDON-34291 Created predefined validators for the entity types string, regex, number, URL, email, ipv4, and date. The doValidation method of the Validator class checks the validation based on the entity type and returns a dictionary containing error and errormsg, if any. :tada: This PR is included in version 5.0.0-develop.1 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket: :tada: This PR is included in version 5.0.0 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
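The described interface can be pictured with a small sketch (illustrative Python only, not the add-on builder's actual code; the validators shown are simplified):

```python
# Sketch of a doValidation-style dispatcher: pick a validator by entity type
# and return a dict with `error` and `errormsg`, as described above.
import re
from ipaddress import IPv4Address

def _is_ipv4(value: str) -> bool:
    try:
        IPv4Address(value)
        return True
    except ValueError:
        return False

VALIDATORS = {
    "number": lambda v: v.replace(".", "", 1).isdigit(),
    "email": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "ipv4": _is_ipv4,
}

def do_validation(entity_type: str, value: str) -> dict:
    ok = VALIDATORS.get(entity_type, lambda v: True)(value)
    return {"error": not ok,
            "errormsg": "" if ok else f"invalid {entity_type}: {value!r}"}

print(do_validation("ipv4", "10.0.0.256"))  # -> {'error': True, 'errormsg': ...}
```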
gharchive/pull-request
2021-03-09T14:10:08
2025-04-01T06:40:27.074409
{ "authors": [ "dkhatri-crest", "rfaircloth-splunk" ], "repo": "splunk/addonfactory-ucc-generator", "url": "https://github.com/splunk/addonfactory-ucc-generator/pull/122", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1655006507
Update Node.js version The action is currently using an old version of Node.js. Ensure that this is updated to a more recent release. Won't fix; breaking change released.
gharchive/issue
2023-04-05T06:18:02
2025-04-01T06:40:27.075335
{ "authors": [ "dfederschmidt", "mbruzda-splunk" ], "repo": "splunk/appinspect-api-action", "url": "https://github.com/splunk/appinspect-api-action/issues/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
56053984
Feature Request I'd like to be able to whitelist/blacklist when using token.<n>.replacement. For example, if I'm referencing a file like "allhosts.csv" I want to be able to blacklist a set of hosts with a regex. Possibly an additional parameter like below: replace server name token.2.token = (SERVERNAME) token.2.replacementType = file token.2.replacement = $SPLUNK_HOME/etc/apps/oidemo/master_eventgen_replace/webhosts.sample:2 token.2.replacementWhitelist.0 = * token.2.replacementBlacklist.0 = webserver-[0-9] Totally valid request, but why wouldn't you just modify the source file or create a different copy of it for that token? I have a few scenarios going at the same time. For example, I need webserver-02 to spike in CPU at 30 min past the hour. To do this I create eventgen stanzas and sample files: an "issue" sample file which spikes the CPU data at 30 min past the hour for webserver-02; a noise sample for the first 30 min of webserver-02; and a problem sample file (or files) which runs for the whole hour and generates CPU data for all hosts, replacing SERVERNAME from allhosts.sample, which has all hosts except webserver-02. Next I want to do something similar (say a spike in queries and CPU) for dbserver-02, but now I have to create another file with all hosts except dbserver-02 and the non-db servers. As I continue to add scenarios I have to make a lot of files. A white/black list allows me to substantially reduce the number of replace files required and makes it easier to update/maintain. Make sense? This is a good idea. I just don't want to touch that code :). I'm leaving it open for the time being. Since this issue has been open for a while and we have released new 6.x.x versions of Eventgen, please recreate the issue if you still see fit.
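For illustration, the proposed white/black list boils down to a two-stage filter over the candidate values read from the sample file. A sketch (Python, not Eventgen's actual replacement code; glob-style whitelists and regex blacklists are assumed, mirroring the stanza above):

```python
# Sketch of whitelist/blacklist filtering for file-based token replacement:
# candidates must match a whitelist glob and must not match any blacklist
# regex before being used as replacement values.
import fnmatch
import re

def filter_candidates(values, whitelist=("*",), blacklist=()):
    kept = [v for v in values
            if any(fnmatch.fnmatch(v, w) for w in whitelist)]
    return [v for v in kept
            if not any(re.search(b, v) for b in blacklist)]

hosts = ["webserver-01", "webserver-02", "dbserver-01"]
print(filter_candidates(hosts, blacklist=[r"webserver-[0-9]"]))
# -> ['dbserver-01']
```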
gharchive/issue
2015-01-30T16:28:04
2025-04-01T06:40:27.080149
{ "authors": [ "arctan5x", "coccyx", "dataPhysicist" ], "repo": "splunk/eventgen", "url": "https://github.com/splunk/eventgen/issues/24", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2062927665
[BUG] O365 Mailbox Inbox Folder Shared with All Users. Field "object" doesn't exist. The correlation search, O365 Mailbox Inbox Folder Shared with All Users, is currently using a field called "object", as object=Inbox. But I do not see this field being sent as part of the O365 Exchange data. Instead, I see a field called Item.ParentFolder.Name with values such as Inbox, Calendar, Contacts, etc. Should "object=Inbox" be replaced with "Item.ParentFolder.Name=Inbox" for this correlation search? App Version: ESCU: 4.18.0 @atgithub11 this might be due to how the data for O365 is being collected in your environment. I believe for this detection we expect the user to be leveraging https://splunkbase.splunk.com/app/4055 Let me know if this is the case. Hey @atgithub11 thanks for opening this issue. Here is some clarification that might help you understand this. You can see in the detection that the raw log does contain the field "Item.ParentFolder.Name", and this is what you are probably ingesting. But as @josehelps said, the analytic expects ingestion of O365 via the Splunk app https://splunkbase.splunk.com/app/4055 as stated by the how_to_implement section: You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events. Behind the scenes, this app is creating the field object and assigning its value based on a condition: in this case, if the Operation field is "ModifyFolderPermissions" or "AddFolderPermissions", the value of the Object field will be set to Item.ParentFolder.Name. Hence the detection should be correct, and this is just an issue of ingestion. Hope this helps. I'll be closing this as complete; feel free to re-open the issue in case you have further questions.
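In other words, the add-on's field mapping reduces to a conditional assignment. A sketch of that logic (Python pseudocode, not the add-on's actual props/transforms; the fallback field is an assumption):

```python
# Sketch of the field aliasing described above: derive `object` from
# Item.ParentFolder.Name only for the folder-permission operations.
FOLDER_PERMISSION_OPS = {"ModifyFolderPermissions", "AddFolderPermissions"}

def derive_object(event: dict):
    if event.get("Operation") in FOLDER_PERMISSION_OPS:
        return event.get("Item.ParentFolder.Name")
    return event.get("ObjectId")  # assumed fallback; not from the add-on

print(derive_object({"Operation": "ModifyFolderPermissions",
                     "Item.ParentFolder.Name": "Inbox"}))  # -> Inbox
```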
gharchive/issue
2024-01-02T20:49:38
2025-04-01T06:40:27.086579
{ "authors": [ "atgithub11", "josehelps", "nasbench" ], "repo": "splunk/security_content", "url": "https://github.com/splunk/security_content/issues/2937", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
419097942
Source typing container logs Right now, source types are derived from the container or pod name. Are there any plans to provide the option for users to set the source type? This would allow users to create the same containers with a common source type rather than different ones. I've seen over 1000 sourcetypes created on a single cluster. Hey Sayeed! Technically you can set the sourcetype by customizing the jq_transformer filter in the logging configmap: https://github.com/splunk/splunk-connect-for-kubernetes/blob/ec00d8fcf6b4030cca0ea434d8b54a606add85d0/manifests/splunk-kubernetes-logging/configMap.yaml#L177 Ideally, I'd like to see us support setting the sourcetype, among other options, via annotations. Hey Matt, ideally I'd like the users to set the sourcetype from OpenShift (maybe via labels or annotations), since how containers are named is controlled by them. It can be anything, which makes setting source types via jq difficult. Currently a user can set the sourcetype using https://github.com/splunk/splunk-connect-for-kubernetes/blob/develop/helm-chart/splunk-kubernetes-logging/values.yaml#L108 I know it's static; setting sourcetypes dynamically through labels and annotations will require a significant overhaul of the current implementation. Resolved as part of #294.
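The annotation-driven behaviour being asked for would amount to a lookup with a fallback, e.g. (illustrative Python; the connector actually does this inside fluentd via jq/Ruby, and the annotation key below is hypothetical):

```python
# Sketch: choose a sourcetype from a pod annotation, falling back to the
# container-name-derived sourcetype used today.
def resolve_sourcetype(pod_annotations: dict, container_name: str,
                       prefix: str = "kube:") -> str:
    explicit = pod_annotations.get("splunk.com/sourcetype")  # hypothetical key
    return explicit if explicit else f"{prefix}container:{container_name}"

print(resolve_sourcetype({}, "nginx"))                            # kube:container:nginx
print(resolve_sourcetype({"splunk.com/sourcetype": "web"}, "nginx"))  # web
```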
gharchive/issue
2019-03-09T17:53:04
2025-04-01T06:40:27.090528
{ "authors": [ "chaitanyaphalak", "matthewmodestino", "rockb1017", "sayeedc" ], "repo": "splunk/splunk-connect-for-kubernetes", "url": "https://github.com/splunk/splunk-connect-for-kubernetes/issues/109", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1800420279
📝 Add contribution guidelines TODO: Development convention Contribution rules (Issue, PR, commit message, ...) :heavy_check_mark:
gharchive/issue
2023-07-12T08:06:54
2025-04-01T06:40:27.099046
{ "authors": [ "spontoreau" ], "repo": "spontoreau/mediateur", "url": "https://github.com/spontoreau/mediateur/issues/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1679798007
AttributeError: 'NoneType' object has no attribute 'lower' with some tracks when synced is enabled as a lyrics provider System OS Windows Python Version 3.11 (CPython) Install Source pip / PyPi Install version / commit hash v4.1.7 Expected Behavior vs Actual Behavior No response Steps to reproduce - Ensure to include actual links! spotdl download https://open.spotify.com/track/3i5bc53F2glMZC7GFXZQ7T Traceback [09:40:16] DEBUG MainThread - Downloader settings: {'audio_providers': ['youtube-music'], downloader.py:115 'lyrics_providers': ['synced', 'genius', 'azlyrics', 'musixmatch'], 'playlist_numbering': False, 'scan_for_songs': True, 'm3u': None, 'output': 'D:\\Music\\{album-artist}\\{album} ({year})\\{disc-number} - {track-number} - {title}.{output-ext}', 'overwrite': 'metadata', 'search_query': None, 'ffmpeg': 'ffmpeg', 'bitrate': None, 'ffmpeg_args': None, 'format': 'mp3', 'save_file': None, 'filter_results': True, 'threads': 4, 'cookie_file': None, 'restrict': False, 'print_errors': True, 'sponsor_block': False, 'preload': True, 'archive': None, 'load_config': True, 'log_level': 'DEBUG', 'simple_tui': False, 'fetch_albums': True, 'id3_separator': '/', 'ytm_data': False, 'add_unavailable': False, 'generate_lrc': True, 'force_update_metadata': True, 'only_verified_results': True, 'sync_without_deleting': True, 'max_filename_length': 100} [09:40:16] DEBUG MainThread - FFmpeg path: ffmpeg downloader.py:133 [09:40:16] INFO MainThread - Scanning for known songs, this might take a while... downloader.py:152 [09:40:25] DEBUG MainThread - Found 2018 known songs downloader.py:158 [09:40:28] DEBUG MainThread - Archive: 0 urls downloader.py:192 [09:40:28] DEBUG MainThread - Downloader initialized downloader.py:194 [09:40:28] INFO MainThread - Processing query: https://open.spotify.com/track/3i5bc53F2glMZC7GFXZQ7T search.py:123 [09:40:29] DEBUG MainThread - Found 1 songs in 0 lists search.py:249 [09:40:29] INFO MainThread - Fetching 1 album downloader.py:228 [09:40:30] ERROR asyncio_0 - Traceback (most recent call last): progress_handler.py:358 File "C:\spotDL\venv\Lib\site-packages\spotdl\download\downloader.py", line 495, in search_and_download lyrics = self.search_lyrics(song) ^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\spotdl\download\downloader.py", line 350, in search_lyrics lyrics = lyrics_provider.get_lyrics(song.name, song.artists) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\spotdl\providers\lyrics\synced.py", line 62, in get_lyrics lyrics = syncedlyrics.search(f"{name} - {artists[0]}", allow_plain_format=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\syncedlyrics\__init__.py", line 40, in search lrc = provider.get_lrc(search_term) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py", line 30, in get_lrc a_tag = soup.find_all("a", string=lambda t: text_match(t) > 80, limit=4) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\bs4\element.py", line 2030, in find_all return self._find_all(name, attrs, string, limit, generator, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\bs4\element.py", line 841, in _find_all found = strainer.search(i) ^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\bs4\element.py", line 2320, in search found = self.search_tag(markup) ^^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\spotDL\venv\Lib\site-packages\bs4\element.py", line 2291, in search_tag if found and self.string and not self._matches(found.string, self.string): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^ File "C:\spotDL\venv\Lib\site-packages\bs4\element.py", line 2352, in _matches return match_against(markup) ^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py", line 30, in <lambda> a_tag = soup.find_all("a", string=lambda t: text_match(t) > 80, limit=4) ^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py", line 26, in <lambda> text_match = lambda t: rapidfuzz.fuzz.token_sort_ratio(_t(search_term), _t(t)) ^^^^^ File "C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py", line 25, in <lambda> _t = lambda s: s.lower().replace("-", "") ^^^^^^^ AttributeError: 'NoneType' object has no attribute 'lower' ╭─────────────────── Traceback (most recent call last) ────────────────────╮ │ C:\spotDL\venv\Lib\site-packages\spotdl\download\downloader.py:495 in │ │ search_and_download │ │ │ │ 492 │ │ │ │ │ │ ) │ │ 493 │ │ │ │ │ 494 │ │ │ # Find song lyrics and add them to the song object │ │ ❱ 495 │ │ │ lyrics = self.search_lyrics(song) │ │ 496 │ │ │ if lyrics is None: │ │ 497 │ │ │ │ logger.debug( │ │ 498 │ │ │ │ │ "No lyrics found for %s, lyrics providers: %s" │ │ │ │ C:\spotDL\venv\Lib\site-packages\spotdl\download\downloader.py:350 in │ │ search_lyrics │ │ │ │ 347 │ │ """ │ │ 348 │ │ │ │ 349 │ │ for lyrics_provider in self.lyrics_providers: │ │ ❱ 350 │ │ │ lyrics = lyrics_provider.get_lyrics(song.name, song.ar │ │ 351 │ │ │ if lyrics: │ │ 352 │ │ │ │ logger.debug( │ │ 353 │ │ │ │ │ "Found lyrics for %s on %s", song.display_name │ │ │ │ C:\spotDL\venv\Lib\site-packages\spotdl\providers\lyrics\synced.py:62 in │ │ get_lyrics │ │ │ │ 59 │ │ - The lyrics of the song or None if no lyrics were found. 
│ │ 60 │ │ """ │ │ 61 │ │ │ │ ❱ 62 │ │ lyrics = syncedlyrics.search(f"{name} - {artists[0]}", allo │ │ 63 │ │ │ │ 64 │ │ return lyrics │ │ 65 │ │ │ │ C:\spotDL\venv\Lib\site-packages\syncedlyrics\__init__.py:40 in search │ │ │ │ 37 │ lrc = None │ │ 38 │ for provider in _providers: │ │ 39 │ │ logger.debug(f"Looking for an LRC on {provider.__class__.__ │ │ ❱ 40 │ │ lrc = provider.get_lrc(search_term) │ │ 41 │ │ if is_lrc_valid(lrc, allow_plain_format): │ │ 42 │ │ │ logger.info( │ │ 43 │ │ │ │ f'synced-lyrics found for "{search_term}" on │ │ {provider.__class__.__name__}' │ │ │ │ C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py:30 │ │ in get_lrc │ │ │ │ 27 │ │ href_match = lambda h: h.startswith("/lyric/") │ │ 28 │ │ a_tags_boud = SoupStrainer("a", href=href_match) │ │ 29 │ │ soup = generate_bs4_soup(self.session, url, parse_only=a_ta │ │ ❱ 30 │ │ a_tag = soup.find_all("a", string=lambda t: text_match(t) > │ │ 31 │ │ if not a_tag: │ │ 32 │ │ │ return None │ │ 33 │ │ │ │ C:\spotDL\venv\Lib\site-packages\bs4\element.py:2030 in find_all │ │ │ │ 2027 │ │ if not recursive: │ │ 2028 │ │ │ generator = self.children │ │ 2029 │ │ _stacklevel = kwargs.pop('_stacklevel', 2) │ │ ❱ 2030 │ │ return self._find_all(name, attrs, string, limit, generat │ │ 2031 │ │ │ │ │ │ │ _stacklevel=_stacklevel+1, **kwargs │ │ 2032 │ findAll = find_all # BS3 │ │ 2033 │ findChildren = find_all # BS2 │ │ │ │ C:\spotDL\venv\Lib\site-packages\bs4\element.py:841 in _find_all │ │ │ │ 838 │ │ │ except StopIteration: │ │ 839 │ │ │ │ break │ │ 840 │ │ │ if i: │ │ ❱ 841 │ │ │ │ found = strainer.search(i) │ │ 842 │ │ │ │ if found: │ │ 843 │ │ │ │ │ results.append(found) │ │ 844 │ │ │ │ │ if limit and len(results) >= limit: │ │ │ │ C:\spotDL\venv\Lib\site-packages\bs4\element.py:2320 in search │ │ │ │ 2317 │ │ # Don't bother with Tags if we're searching for text. │ │ 2318 │ │ elif isinstance(markup, Tag): │ │ 2319 │ │ │ if not self.string or self.name or self.attrs: │ │ ❱ 2320 │ │ │ │ found = self.search_tag(markup) │ │ 2321 │ │ # If it's text, make sure the text matches. │ │ 2322 │ │ elif isinstance(markup, NavigableString) or \ │ │ 2323 │ │ │ │ isinstance(markup, str): │ │ │ │ C:\spotDL\venv\Lib\site-packages\bs4\element.py:2291 in search_tag │ │ │ │ 2288 │ │ │ │ │ found = markup │ │ 2289 │ │ │ │ else: │ │ 2290 │ │ │ │ │ found = markup_name │ │ ❱ 2291 │ │ if found and self.string and not self._matches(found.stri │ │ 2292 │ │ │ found = None │ │ 2293 │ │ return found │ │ 2294 │ │ │ │ C:\spotDL\venv\Lib\site-packages\bs4\element.py:2352 in _matches │ │ │ │ 2349 │ │ │ return markup is not None │ │ 2350 │ │ │ │ 2351 │ │ if isinstance(match_against, Callable): │ │ ❱ 2352 │ │ │ return match_against(markup) │ │ 2353 │ │ │ │ 2354 │ │ # Custom callables take the tag as an argument, but all │ │ 2355 │ │ # other ways of matching match the tag name as a string. │ │ │ │ C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py:30 │ │ in <lambda> │ │ │ │ 27 │ │ href_match = lambda h: h.startswith("/lyric/") │ │ 28 │ │ a_tags_boud = SoupStrainer("a", href=href_match) │ │ 29 │ │ soup = generate_bs4_soup(self.session, url, parse_only=a_ta │ │ ❱ 30 │ │ a_tag = soup.find_all("a", string=lambda t: text_match(t) > │ │ 31 │ │ if not a_tag: │ │ 32 │ │ │ return None │ │ 33 │ │ │ │ C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py:26 │ │ in <lambda> │ │ │ │ 23 │ │ # Just processing the `a` tags whose `href` attribute start │ │ 24 │ │ # and whose text is similar to the query too. 
│ │ https://github.com/maxbachmann/RapidFuzz#scorers │ │ 25 │ │ _t = lambda s: s.lower().replace("-", "") │ │ ❱ 26 │ │ text_match = lambda t: rapidfuzz.fuzz.token_sort_ratio(_t(s │ │ 27 │ │ href_match = lambda h: h.startswith("/lyric/") │ │ 28 │ │ a_tags_boud = SoupStrainer("a", href=href_match) │ │ 29 │ │ soup = generate_bs4_soup(self.session, url, parse_only=a_ta │ │ │ │ C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py:25 │ │ in <lambda> │ │ │ │ 22 │ │ │ │ 23 │ │ # Just processing the `a` tags whose `href` attribute start │ │ 24 │ │ # and whose text is similar to the query too. │ │ https://github.com/maxbachmann/RapidFuzz#scorers │ │ ❱ 25 │ │ _t = lambda s: s.lower().replace("-", "") │ │ 26 │ │ text_match = lambda t: rapidfuzz.fuzz.token_sort_ratio(_t(s │ │ 27 │ │ href_match = lambda h: h.startswith("/lyric/") │ │ 28 │ │ a_tags_boud = SoupStrainer("a", href=href_match) │ ╰──────────────────────────────────────────────────────────────────────────╯ AttributeError: 'NoneType' object has no attribute 'lower' None [09:40:32] DEBUG asyncio_1 - Synced failed to find lyrics for Emerson, Lake & Palmer - Piano downloader.py:358 Concerto No. 1 - i. 1st Movement: Allegro Giojoso; ii. 2nd Movement: Andante Molto Cantabile; iii. 3rd Movement: Toccata Con Fuoco; 2017 - Remaster [09:40:32] DEBUG asyncio_1 - Genius failed to find lyrics for Emerson, Lake & Palmer - Piano downloader.py:358 Concerto No. 1 - i. 1st Movement: Allegro Giojoso; ii. 2nd Movement: Andante Molto Cantabile; iii. 3rd Movement: Toccata Con Fuoco; 2017 - Remaster [09:40:33] DEBUG asyncio_1 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Piano downloader.py:358 Concerto No. 1 - i. 1st Movement: Allegro Giojoso; ii. 2nd Movement: Andante Molto Cantabile; iii. 3rd Movement: Toccata Con Fuoco; 2017 - Remaster [09:40:36] DEBUG asyncio_1 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Piano downloader.py:358 Concerto No. 1 - i. 1st Movement: Allegro Giojoso; ii. 2nd Movement: Andante Molto Cantabile; iii. 3rd Movement: Toccata Con Fuoco; 2017 - Remaster [09:40:36] DEBUG asyncio_1 - No lyrics found for Emerson, Lake & Palmer - Piano Concerto No. 1 - i. downloader.py:497 1st Movement: Allegro Giojoso; ii. 2nd Movement: Andante Molto Cantabile; iii. 
3rd Movement: Toccata Con Fuoco; 2017 - Remaster, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:36] INFO asyncio_1 - downloader.py:540 None [09:40:39] DEBUG asyncio_3 - Synced failed to find lyrics for Emerson, Lake & Palmer - C'est La Vie downloader.py:358 - 2017 Remastered Version None [09:40:39] DEBUG asyncio_2 - Synced failed to find lyrics for Emerson, Lake & Palmer - Lend Your downloader.py:358 Love to Me Tonight - 2017 Remastered Version [09:40:39] DEBUG asyncio_2 - Genius failed to find lyrics for Emerson, Lake & Palmer - Lend Your downloader.py:358 Love to Me Tonight - 2017 Remastered Version [09:40:40] DEBUG asyncio_3 - Genius failed to find lyrics for Emerson, Lake & Palmer - C'est La Vie downloader.py:358 - 2017 Remastered Version [09:40:40] DEBUG asyncio_2 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Lend Your downloader.py:358 Love to Me Tonight - 2017 Remastered Version [09:40:41] DEBUG asyncio_3 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - C'est La downloader.py:358 Vie - 2017 Remastered Version None [09:40:42] DEBUG asyncio_0 - Synced failed to find lyrics for Emerson, Lake & Palmer - Hallowed Be downloader.py:358 Thy Name - 2017 Remastered Version [09:40:42] DEBUG asyncio_0 - Genius failed to find lyrics for Emerson, Lake & Palmer - Hallowed Be downloader.py:358 Thy Name - 2017 Remastered Version None [09:40:42] DEBUG asyncio_1 - Synced failed to find lyrics for Emerson, Lake & Palmer - Nobody Loves downloader.py:358 You Like I Do - 2017 Remastered Version [09:40:42] DEBUG asyncio_0 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Hallowed downloader.py:358 Be Thy Name - 2017 Remastered Version [09:40:43] DEBUG asyncio_1 - Genius failed to find lyrics for Emerson, Lake & Palmer - Nobody Loves downloader.py:358 You Like I Do - 2017 Remastered Version [09:40:43] DEBUG asyncio_2 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Lend downloader.py:358 Your Love to Me Tonight - 2017 Remastered Version [09:40:43] DEBUG asyncio_2 - No lyrics found for Emerson, Lake & Palmer - Lend Your Love to Me downloader.py:497 Tonight - 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:43] INFO asyncio_2 - downloader.py:540 [09:40:44] DEBUG asyncio_3 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - C'est La downloader.py:358 Vie - 2017 Remastered Version [09:40:44] DEBUG asyncio_3 - No lyrics found for Emerson, Lake & Palmer - C'est La Vie - 2017 downloader.py:497 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:44] DEBUG asyncio_1 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Nobody downloader.py:358 Loves You Like I Do - 2017 Remastered Version [09:40:44] INFO asyncio_3 - downloader.py:540 None [09:40:45] DEBUG asyncio_3 - Synced failed to find lyrics for Emerson, Lake & Palmer - The Enemy downloader.py:358 God Dances With the Black Spirits - 2017 Remastered Version [09:40:46] DEBUG asyncio_3 - Genius failed to find lyrics for Emerson, Lake & Palmer - The Enemy downloader.py:358 God Dances With the Black Spirits - 2017 Remastered Version [09:40:46] DEBUG asyncio_0 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Hallowed downloader.py:358 Be Thy Name - 2017 Remastered Version [09:40:46] DEBUG asyncio_0 - No lyrics found for Emerson, Lake & Palmer - Hallowed Be Thy Name - downloader.py:497 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:46] INFO asyncio_0 - 
downloader.py:540 [09:40:46] DEBUG asyncio_3 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - The Enemy downloader.py:358 God Dances With the Black Spirits - 2017 Remastered Version [09:40:47] DEBUG asyncio_1 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Nobody downloader.py:358 Loves You Like I Do - 2017 Remastered Version [09:40:47] DEBUG asyncio_1 - No lyrics found for Emerson, Lake & Palmer - Nobody Loves You Like I downloader.py:497 Do - 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:47] INFO asyncio_1 - downloader.py:540 [09:40:48] DEBUG asyncio_3 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - The downloader.py:358 Enemy God Dances With the Black Spirits - 2017 Remastered Version [09:40:48] DEBUG asyncio_3 - No lyrics found for Emerson, Lake & Palmer - The Enemy God Dances With downloader.py:497 the Black Spirits - 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:48] INFO asyncio_3 - downloader.py:540 None [09:40:50] DEBUG asyncio_2 - Synced failed to find lyrics for Emerson, Lake & Palmer - Closer to downloader.py:358 Believing - 2017 Remastered Version None [09:40:50] DEBUG asyncio_3 - Synced failed to find lyrics for Emerson, Lake & Palmer - Two Part downloader.py:358 Invention in D Minor - 2017 Remastered Version [09:40:50] DEBUG asyncio_3 - Genius failed to find lyrics for Emerson, Lake & Palmer - Two Part downloader.py:358 Invention in D Minor - 2017 Remastered Version [09:40:51] DEBUG asyncio_2 - Genius failed to find lyrics for Emerson, Lake & Palmer - Closer to downloader.py:358 Believing - 2017 Remastered Version [09:40:51] DEBUG asyncio_3 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Two Part downloader.py:358 Invention in D Minor - 2017 Remastered Version [09:40:52] DEBUG asyncio_2 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Closer to downloader.py:358 Believing - 2017 Remastered Version None [09:40:53] DEBUG asyncio_0 - Synced failed to find lyrics for Emerson, Lake & Palmer - L.A. Nights downloader.py:358 - 2017 Remastered Version [09:40:54] DEBUG asyncio_0 - Genius failed to find lyrics for Emerson, Lake & Palmer - L.A. Nights downloader.py:358 - 2017 Remastered Version [09:40:54] DEBUG asyncio_3 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Two Part downloader.py:358 Invention in D Minor - 2017 Remastered Version [09:40:54] DEBUG asyncio_0 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - L.A. 
downloader.py:358 Nights - 2017 Remastered Version [09:40:54] DEBUG asyncio_3 - No lyrics found for Emerson, Lake & Palmer - Two Part Invention in D downloader.py:497 Minor - 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:54] DEBUG asyncio_2 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Closer downloader.py:358 to Believing - 2017 Remastered Version [09:40:54] DEBUG asyncio_2 - No lyrics found for Emerson, Lake & Palmer - Closer to Believing - downloader.py:497 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:54] INFO asyncio_3 - downloader.py:540 [09:40:54] INFO asyncio_2 - downloader.py:540 None [09:40:55] DEBUG asyncio_1 - Synced failed to find lyrics for Emerson, Lake & Palmer - New Orleans downloader.py:358 - 2017 Remastered Version [09:40:55] DEBUG asyncio_1 - Genius failed to find lyrics for Emerson, Lake & Palmer - New Orleans downloader.py:358 - 2017 Remastered Version [09:40:55] DEBUG asyncio_1 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - New downloader.py:358 Orleans - 2017 Remastered Version [09:40:58] DEBUG asyncio_0 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - L.A. downloader.py:358 Nights - 2017 Remastered Version [09:40:58] DEBUG asyncio_0 - No lyrics found for Emerson, Lake & Palmer - L.A. Nights - 2017 downloader.py:497 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:58] INFO asyncio_0 - downloader.py:540 [09:40:58] DEBUG asyncio_1 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - New downloader.py:358 Orleans - 2017 Remastered Version [09:40:58] DEBUG asyncio_1 - No lyrics found for Emerson, Lake & Palmer - New Orleans - 2017 downloader.py:497 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:58] INFO asyncio_1 - downloader.py:540 None [09:41:02] DEBUG asyncio_3 - Synced failed to find lyrics for Emerson, Lake & Palmer - Food for downloader.py:358 Your Soul - 2017 Remastered Version None [09:41:03] DEBUG asyncio_2 - Synced failed to find lyrics for Emerson, Lake & Palmer - Tank - 2017 downloader.py:358 Remastered Version [09:41:03] DEBUG asyncio_3 - Genius failed to find lyrics for Emerson, Lake & Palmer - Food for downloader.py:358 Your Soul - 2017 Remastered Version [09:41:04] DEBUG asyncio_2 - Genius failed to find lyrics for Emerson, Lake & Palmer - Tank - 2017 downloader.py:358 Remastered Version None [09:41:04] DEBUG asyncio_3 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Food for downloader.py:358 Your Soul - 2017 Remastered Version [09:41:04] DEBUG asyncio_0 - Synced failed to find lyrics for Emerson, Lake & Palmer - Pirates - downloader.py:358 2017 Remastered Version [09:41:05] DEBUG asyncio_0 - Genius failed to find lyrics for Emerson, Lake & Palmer - Pirates - downloader.py:358 2017 Remastered Version [09:41:05] DEBUG asyncio_2 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Tank - downloader.py:358 2017 Remastered Version [09:41:06] DEBUG asyncio_0 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Pirates - downloader.py:358 2017 Remastered Version [09:41:07] DEBUG asyncio_3 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Food for downloader.py:358 Your Soul - 2017 Remastered Version [09:41:07] DEBUG asyncio_3 - No lyrics found for Emerson, Lake & Palmer - Food for Your Soul - 2017 downloader.py:497 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:41:08] INFO 
[09:41:08] DEBUG asyncio_2 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Tank - 2017 Remastered Version (downloader.py:358)
[09:41:08] DEBUG asyncio_2 - No lyrics found for Emerson, Lake & Palmer - Tank - 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch (downloader.py:497)
[09:41:08] INFO asyncio_2 - (downloader.py:540)
[09:41:09] DEBUG asyncio_0 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Pirates - 2017 Remastered Version (downloader.py:358)
[09:41:09] DEBUG asyncio_0 - No lyrics found for Emerson, Lake & Palmer - Pirates - 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch (downloader.py:497)
[09:41:10] INFO asyncio_0 - (downloader.py:540)
[09:41:10] ERROR MainThread - https://open.spotify.com/track/3i5bc53F2glMZC7GFXZQ7T - AttributeError: 'NoneType' object has no attribute 'lower' (downloader.py:258)

Other details
No response

IMO that's an issue with the syncedlyrics library; this function should return None instead of raising an exception. https://github.com/rtcq/syncedlyrics/pull/8

v4.1.8 will catch such exceptions and debug-print them.
gharchive/issue
2023-04-23T00:42:12
2025-04-01T06:40:27.113120
{ "authors": [ "azumukupoe", "xnetcat" ], "repo": "spotDL/spotify-downloader", "url": "https://github.com/spotDL/spotify-downloader/issues/1815", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
678122153
Migration fails on 'add_bootstrap_location' for postgres

I started seeing the following error when trying to run the backend on the latest code:

migration file "20200809202832_add_bootstrap_location.js" failed
migration failed with error: insert into "locations" ("id", "target", "type") values ($1, $2, $3) - invalid input syntax for type uuid: "bootstrap"

It seems related to #1890; however, I thought Postgres migrations were run in CI now, so I'm not sure how this crept through.

Yep, the bug is fixed by #1935, but I'll do a proper fix of the actual issue of tests not being run for that code :grin:
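The failure itself is easy to reproduce directly in Postgres: the migration tries to use the literal string "bootstrap" as an id, but the column is of type uuid. A minimal illustration (the table layout is assumed from the error message, not Backstage's actual schema):

-- Assuming: CREATE TABLE locations (id uuid, target text, type text);
INSERT INTO locations (id, target, type) VALUES ('bootstrap', 'bootstrap', 'bootstrap');
-- ERROR:  invalid input syntax for type uuid: "bootstrap"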
gharchive/issue
2020-08-13T03:30:43
2025-04-01T06:40:27.124869
{ "authors": [ "Rugvip", "andrewthauer" ], "repo": "spotify/backstage", "url": "https://github.com/spotify/backstage/issues/1937", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
186310363
Report when a time series is filtered out during ingest, try #2

Report dropped-by-filter when a filter drops a time series during ingest. This fixes a "XXX:" comment. Also updated tests.

Current coverage is 45.18% (diff: 16.66%)
Merging #124 into master will decrease coverage by 0.01%

@@            master     #124   diff @@
==========================================
  Files          598      598
  Lines        15421    15427     +6
  Methods          0        0
  Messages         0        0
  Branches      1585     1585
==========================================
+ Hits          6970     6971     +1
- Misses        7989     7994     +5
  Partials       462      462

Powered by Codecov. Last update eef540e...472eea2

Thanks!

The decrease in test coverage is due to the added lines in the semantic module. Ignoring them for now.
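Conceptually, the change amounts to counting the drop instead of silently discarding the series; a rough sketch of that pattern (names are illustrative, not Heroic's actual API):

// Inside the ingestion path, when a configured filter rejects a series:
if (!ingestionFilter.matches(series)) {
    reporter.reportDroppedByFilter(); // emit the dropped-by-filter metric
    return;                           // skip writing this time series
}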
gharchive/pull-request
2016-10-31T15:17:20
2025-04-01T06:40:27.128116
{ "authors": [ "codecov-io", "gabrielgerhardsson", "udoprog" ], "repo": "spotify/heroic", "url": "https://github.com/spotify/heroic/pull/124", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
110177045
Highlight root tasks in UI

There's currently no easy way of knowing which tasks in the web UI are root tasks. So if one task creates tens of subtasks, there will be a large number of tasks in the UI but only one will actually show the entire task graph. It'd be very useful if there were a way to filter the web UI to only show tasks that are root nodes, so that entire task graphs can be seen easily without needing to know the names of those tasks in advance.

Agreed. Want to send a PR? :)

If I get a chance I'll look into it (but can't make any promises...)

Closing this issue.

[x] It has been inactive for +4 months.
[ ] It's not about luigi core, so not as many users are affected by this.
[ ] The change seems quite big, it's unlikely to be sporadically picked up.
[x] The owner hasn't responded or has disappeared.
[ ] I don't understand what this issue is about.
[ ] There exists a reasonable workaround for this.
[ ] We need to check if this hasn't been fixed by now (for old issues).
[ ] This is kind of by design and not a bug.
[ ] Resolving this would probably add a lot of complexity.

Every open issue adds some clutter, and we try to keep the number of issues down to make it easier for new collaborators to find their way around. Currently we try to close any issue that meets the first checkbox plus one other. Feel free to reopen this issue at any point if you have the intent to continue to work on this. :)
gharchive/issue
2015-10-07T08:39:17
2025-04-01T06:40:27.135624
{ "authors": [ "Tarrasch", "boosh", "mfcabrera" ], "repo": "spotify/luigi", "url": "https://github.com/spotify/luigi/issues/1282", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
132382482
[Spark Configuration] Can't pass Spark property using the conf when the option contains an equals

In the Spark configuration, the conf option can't contain a value with an equals sign inside it:

[spark]
conf: spark.executor.extraJavaOptions=-Darchaius.deployment.environment=dev

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/luigi/worker.py", line 162, in run
    new_deps = self._run_get_new_deps()
  File "/usr/local/lib/python2.7/dist-packages/luigi/worker.py", line 113, in _run_get_new_deps
    task_gen = self.task.run()
  File "/usr/local/lib/python2.7/dist-packages/luigi/contrib/spark.py", line 235, in run
    args = list(map(str, self.spark_command() + self.app_command()))
  File "/usr/local/lib/python2.7/dist-packages/luigi/contrib/spark.py", line 214, in spark_command
    command += self._dict_arg('--conf', self.conf)
  File "/usr/local/lib/python2.7/dist-packages/luigi/contrib/spark.py", line 138, in conf
    return self._dict_config(configuration.get_config().get("spark", "conf", None))
  File "/usr/local/lib/python2.7/dist-packages/luigi/contrib/spark.py", line 261, in _dict_config
    return dict(map(lambda i: i.split('='), config.split('|')))
ValueError: dictionary update sequence element #0 has length 3; 2 is required

As a fix, the value could be split only on the first equals sign, in luigi/contrib/spark.py:

def _dict_config(self, config):
    if config and isinstance(config, six.string_types):
        return dict(map(lambda i: i.split('=', 1), config.split('|')))

great if you want to submit a PR!
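To see why the unbounded split fails on this value, here is a small standalone illustration (hypothetical snippet, not part of Luigi itself):

# Value taken from the [spark] conf option in this issue.
config = "spark.executor.extraJavaOptions=-Darchaius.deployment.environment=dev"

# Unbounded split produces three elements, so dict() raises
# "dictionary update sequence element #0 has length 3; 2 is required".
print(config.split('='))
# ['spark.executor.extraJavaOptions', '-Darchaius.deployment.environment', 'dev']

# Splitting on the first '=' only yields a proper key/value pair.
print(config.split('=', 1))
# ['spark.executor.extraJavaOptions', '-Darchaius.deployment.environment=dev']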
gharchive/issue
2016-02-09T10:28:02
2025-04-01T06:40:27.140417
{ "authors": [ "MezianeMehdi", "erikbern" ], "repo": "spotify/luigi", "url": "https://github.com/spotify/luigi/issues/1539", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
894621901
Make task.disable_window be default source of window int

Description
When accessing the task.disable_window property, use the property directly. Do not call the deprecated task.disable_window_seconds.

Motivation and Context
Fixes #3029

The current implementation gives a deprecation warning even when accessing the correct task.disable_window property, which is confusing. Fixing this means that the deprecation warning becomes more meaningful: it will only appear when task.disable_window_seconds is incorrectly accessed.

Have you tested this? If so, how?
I have run this change locally against my employer's Luigi test suite and it removed 21 warnings when running a small test on a single task. However, that test suite does not use disable_window.

Please could one of @dlstadther, @Tarrasch or another maintainer approve the workflow so that tests run?
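In other words, the access pattern this PR targets looks roughly like the following (illustrative only; attribute names are taken from the PR description, not Luigi's actual internals):

# Preferred: reading the current property should not warn after this fix.
window = task.disable_window

# Deprecated alias: this is the only access that should still emit
# a DeprecationWarning once the fix is applied.
window = task.disable_window_seconds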
gharchive/pull-request
2021-05-18T17:38:41
2025-04-01T06:40:27.143573
{ "authors": [ "jamescooke" ], "repo": "spotify/luigi", "url": "https://github.com/spotify/luigi/pull/3081", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
508648987
Improve addition of Java to Docker image

Previously, the Dockerfile added a bunch of utilities and downloaded the version of Java directly in the image that was eventually created. This left a bunch of unnecessary and potentially vulnerable packages on the image that was used in production. This change makes the build a multi-stage build and ensures that the network utilities required for downloading only exist on a disposed stage.

In addition to the change to a multi-stage build, this change also swaps from the Pivotal Distribution of OpenJDK to AdoptOpenJDK as part of our commitment to move to an industry-standard distribution. It also swaps from Java 8 to Java 11. Trust me, you'll be fine.

I ran a few manual checks of the ./wait-for-it.sh script and can confirm that it is working as expected. LGTM

@nebhale Per some internal team discussions, we feel like we need more time to do testing with the apps that run on SCDF to ensure that they run on both JDK 8 and 11 without any issues. We are going to do such testing in the next release cycle. Can we keep this PR with all the cleanup that you added, but use the latest JDK 8 that has all the patches applied?

@sobychacko You should feel free to make that change before merging.

Should the JDK 11 version be updated to the latest?

@sobychacko - Can we merge this to a JDK11 branch for now?

Closing in lieu of #5
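For context, the multi-stage pattern described above looks roughly like this. Everything here (base image, download URL, paths) is illustrative and is not this repository's actual Dockerfile:

FROM ubuntu:bionic AS downloader
# Download tooling lives only in this stage, which is discarded after the build.
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates
RUN curl -fsSL -o /tmp/openjdk.tar.gz https://example.com/adoptopenjdk-11.tar.gz \
 && mkdir -p /opt/java \
 && tar -xzf /tmp/openjdk.tar.gz -C /opt/java --strip-components=1

FROM ubuntu:bionic
# Only the extracted JDK is copied into the final image; curl and friends never land here.
COPY --from=downloader /opt/java /opt/java
ENV JAVA_HOME=/opt/java
ENV PATH=$JAVA_HOME/bin:$PATH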
gharchive/pull-request
2019-10-17T18:31:49
2025-04-01T06:40:27.164267
{ "authors": [ "dturanski", "nebhale", "sobychacko", "tzolov" ], "repo": "spring-cloud/openjdk-docker", "url": "https://github.com/spring-cloud/openjdk-docker/pull/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
934905707
client with multiple profiles

Describe the bug
I try to start my client application with multiple profiles by setting the active profile to: stage, support-stage

I even see this line in the log:

2021-07-01 17:20:20.236 INFO [springboot-no-ui-tests,,] [ restartedMain] c.c.c.ConfigServicePropertySourceLocator : Located environment: name=springboot-no-ui-tests, profiles=[stage,support-stage], label=null, version=e9e9fda77b46039faa314499ba133f7af8411ef9, state=null

but for some reason only the first profile is being updated from the config server. Does Spring Cloud Config know how to handle multiple profiles coming from clients?

The reason behind it is that we want to have two repositories in Git: one for developers to change their configuration, and one for support people. When the client runs, we want it to read the configuration from both places for the same app. For example, here is the Spring Cloud Config Server configuration for that:

repos:
  cloud-waf-support:
    pattern:
    - 'springboot-no-ui-tests/support*'
    cloneOnStart: true
    uri: https://.../v1/repos/cloud-waf-support
    default-label: main
  cloud-waf:
    pattern:
    - 'springboot-no-ui-tests/*'
    cloneOnStart: true
    uri: https://.../v1/repos/cloud-waf
    default-label: main

If you have another idea that would solve the issue, that would be great.

Eventually I solved this requirement (a repo for support and a repo for dev) with the composite solution:

cloud:
  config:
    server:
      composite:
      - type: git
        uri: https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/fallback-repo
        default-label: main
        force-pull: true
        username: yyy
        password: xxx
        clone-on-start: true
        repos:
          springboot-no-ui-tests:
            clone-on-start: true
            pattern:
            - 'springboot-no-ui-tests'
            clone-submodules: false
            uri: https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/dev-repo
            force-pull: true
            default-label: main
            username: yyy
            password: xxx
      - type: git
        uri: https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/fallback-repo
        default-label: main
        force-pull: true
        username: xxx
        password: yyy
        clone-on-start: true
        repos:
          springboot-no-ui-tests:
            clone-on-start: true
            pattern:
            - 'springboot-no-ui-tests'
            clone-submodules: false
            uri: https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/support-repo
            force-pull: true
            default-label: main
            username: yyy
            password: xxx

This can be closed.
gharchive/issue
2021-07-01T14:25:42
2025-04-01T06:40:27.174428
{ "authors": [ "avnerstr" ], "repo": "spring-cloud/spring-cloud-config", "url": "https://github.com/spring-cloud/spring-cloud-config/issues/1923", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2442945202
When requesting the log4j2-spring.xml configuration file, a 401 error occurs

Versions:
spring-boot-starter-parent: 3.3.2
spring-cloud-dependencies: 2023.0.3
jdk: 21

Details:
My "Spring Cloud Config Server" has a security configuration applied; therefore, authentication is required to access the service.

spring:
  security:
    user:
      name: root
      password: password

The source code can be seen at https://github.com/luidoc/demo-config-server-native.git

On the other hand, I have a sample application that uses the "Spring Cloud Config Server" service, both to obtain its configuration file and to obtain the log4j2 configuration file (log4j2-spring.xml). The source code can be seen at https://github.com/luidoc/config-server-client.git

Below I show the bootstrap.yml configuration file:

spring:
  main:
    allow-bean-definition-overriding: true
  profiles:
    active: dev
  application:
    name: config-server-client
  config:
    import: optional:configserver=${spring.cloud.config.url}
  cloud:
    config:
      url: http://localhost:8888
      uri: http://localhost:8888
      username: root
      password: 'password'
      name: ${spring.application.name}
      profile: dev
      label: main
logging:
  config: ${spring.cloud.config.url}/${spring.application.name}/${spring.cloud.config.profile}/${spring.cloud.config.label}/log4j2-spring.xml?resolvePlaceholders=false
log4j2:
  Configuration:
    allowedProtocols: 'http,https'

When running the sample application, it fails to obtain the log4j2 configuration file from the "Config Server" service, resulting in an HTTP 401 error. The error trace can be seen below:

java -jar .\target\config-server-client.jar

Logging system failed to initialize using configuration from 'http://localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false'
java.lang.IllegalStateException: Could not initialize Log4J2 logging from http://localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.loadConfiguration(Log4j2SpringBootLoggingSystem.java:198)
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.load(Log4J2LoggingSystem.java:246)
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.loadConfiguration(Log4J2LoggingSystem.java:238)
    at org.springframework.boot.logging.AbstractLoggingSystem.initializeWithSpecificConfig(AbstractLoggingSystem.java:67)
    at org.springframework.boot.logging.AbstractLoggingSystem.initialize(AbstractLoggingSystem.java:58)
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.initialize(Log4J2LoggingSystem.java:225)
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.initialize(Log4j2SpringBootLoggingSystem.java:96)
    at org.springframework.boot.context.logging.LoggingApplicationListener.initializeSystem(LoggingApplicationListener.java:335)
    at org.springframework.boot.context.logging.LoggingApplicationListener.initialize(LoggingApplicationListener.java:298)
    at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEnvironmentPreparedEvent(LoggingApplicationListener.java:246)
    at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEvent(LoggingApplicationListener.java:223)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:185)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:178)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:156)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:138)
    at org.springframework.boot.context.event.EventPublishingRunListener.multicastInitialEvent(EventPublishingRunListener.java:136)
    at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:81)
    at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:64)
    at java.base/java.lang.Iterable.forEach(Iterable.java:75)
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:118)
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:112)
    at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:63)
    at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:370)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:330)
    at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:149)
    at org.springframework.cloud.bootstrap.BootstrapApplicationListener.bootstrapServiceContext(BootstrapApplicationListener.java:195)
    at org.springframework.cloud.bootstrap.BootstrapApplicationListener.onApplicationEvent(BootstrapApplicationListener.java:114)
    at org.springframework.cloud.bootstrap.BootstrapApplicationListener.onApplicationEvent(BootstrapApplicationListener.java:77)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:185)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:178)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:156)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:138)
    at org.springframework.boot.context.event.EventPublishingRunListener.multicastInitialEvent(EventPublishingRunListener.java:136)
    at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:81)
    at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:64)
    at java.base/java.lang.Iterable.forEach(Iterable.java:75)
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:118)
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:112)
    at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:63)
    at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:370)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:330)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1363)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1352)
    at com.example.demo.ConfigServerClientApplication.main(ConfigServerClientApplication.java:10)
    at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
    at java.base/java.lang.reflect.Method.invoke(Method.java:580)
    at org.springframework.boot.loader.launch.Launcher.launch(Launcher.java:91)
    at org.springframework.boot.loader.launch.Launcher.launch(Launcher.java:53)
    at org.springframework.boot.loader.launch.JarLauncher.main(JarLauncher.java:58)
Caused by: java.io.IOException: Server returned HTTP response code: 401 for URL: http://localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1998)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1599)
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.getConfigurationSource(Log4j2SpringBootLoggingSystem.java:247)
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.loadConfiguration(Log4j2SpringBootLoggingSystem.java:159)
    ... 48 more

18:44:35.385 [main] ERROR org.springframework.boot.SpringApplication - Application run failed
java.lang.IllegalStateException: java.lang.IllegalStateException: Could not initialize Log4J2 logging from http://localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false
    at org.springframework.boot.context.logging.LoggingApplicationListener.initializeSystem(LoggingApplicationListener.java:347) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.context.logging.LoggingApplicationListener.initialize(LoggingApplicationListener.java:298) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEnvironmentPreparedEvent(LoggingApplicationListener.java:246) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEvent(LoggingApplicationListener.java:223) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:185) ~[spring-context-6.1.11.jar!/:6.1.11]
    at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:178) ~[spring-context-6.1.11.jar!/:6.1.11]
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:156) ~[spring-context-6.1.11.jar!/:6.1.11]
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:138) ~[spring-context-6.1.11.jar!/:6.1.11]
    at org.springframework.boot.context.event.EventPublishingRunListener.multicastInitialEvent(EventPublishingRunListener.java:136) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:81) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:64) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[?:?]
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:118) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:112) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:63) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:370) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:330) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:149) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.cloud.bootstrap.BootstrapApplicationListener.bootstrapServiceContext(BootstrapApplicationListener.java:195) ~[spring-cloud-context-4.1.4.jar!/:4.1.4]
    at org.springframework.cloud.bootstrap.BootstrapApplicationListener.onApplicationEvent(BootstrapApplicationListener.java:114) ~[spring-cloud-context-4.1.4.jar!/:4.1.4]
    at org.springframework.cloud.bootstrap.BootstrapApplicationListener.onApplicationEvent(BootstrapApplicationListener.java:77) ~[spring-cloud-context-4.1.4.jar!/:4.1.4]
    at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:185) ~[spring-context-6.1.11.jar!/:6.1.11]
    at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:178) ~[spring-context-6.1.11.jar!/:6.1.11]
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:156) ~[spring-context-6.1.11.jar!/:6.1.11]
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:138) ~[spring-context-6.1.11.jar!/:6.1.11]
    at org.springframework.boot.context.event.EventPublishingRunListener.multicastInitialEvent(EventPublishingRunListener.java:136) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:81) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:64) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at java.base/java.lang.Iterable.forEach(Iterable.java:75) [?:?]
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:118) [spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:112) [spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:63) [spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:370) [spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:330) [spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1363) [spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1352) [spring-boot-3.3.2.jar!/:3.3.2]
    at com.example.demo.ConfigServerClientApplication.main(ConfigServerClientApplication.java:10) [!/:1.0.0-SNAPSHOT]
    at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
    at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[?:?]
    at org.springframework.boot.loader.launch.Launcher.launch(Launcher.java:91) [config-server-client.jar:1.0.0-SNAPSHOT]
    at org.springframework.boot.loader.launch.Launcher.launch(Launcher.java:53) [config-server-client.jar:1.0.0-SNAPSHOT]
    at org.springframework.boot.loader.launch.JarLauncher.main(JarLauncher.java:58) [config-server-client.jar:1.0.0-SNAPSHOT]
Caused by: java.lang.IllegalStateException: Could not initialize Log4J2 logging from http://localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.loadConfiguration(Log4j2SpringBootLoggingSystem.java:198) ~[log4j-spring-boot-2.23.1.jar!/:2.23.1]
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.load(Log4J2LoggingSystem.java:246) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.loadConfiguration(Log4J2LoggingSystem.java:238) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.logging.AbstractLoggingSystem.initializeWithSpecificConfig(AbstractLoggingSystem.java:67) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.logging.AbstractLoggingSystem.initialize(AbstractLoggingSystem.java:58) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.initialize(Log4J2LoggingSystem.java:225) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.initialize(Log4j2SpringBootLoggingSystem.java:96) ~[log4j-spring-boot-2.23.1.jar!/:2.23.1]
    at org.springframework.boot.context.logging.LoggingApplicationListener.initializeSystem(LoggingApplicationListener.java:335) ~[spring-boot-3.3.2.jar!/:3.3.2]
    ... 41 more
Caused by: java.io.IOException: Server returned HTTP response code: 401 for URL: http://localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1998) ~[?:?]
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1599) ~[?:?]
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.getConfigurationSource(Log4j2SpringBootLoggingSystem.java:247) ~[log4j-spring-boot-2.23.1.jar!/:2.23.1]
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.loadConfiguration(Log4j2SpringBootLoggingSystem.java:159) ~[log4j-spring-boot-2.23.1.jar!/:2.23.1]
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.load(Log4J2LoggingSystem.java:246) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.loadConfiguration(Log4J2LoggingSystem.java:238) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.logging.AbstractLoggingSystem.initializeWithSpecificConfig(AbstractLoggingSystem.java:67) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.logging.AbstractLoggingSystem.initialize(AbstractLoggingSystem.java:58) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.initialize(Log4J2LoggingSystem.java:225) ~[spring-boot-3.3.2.jar!/:3.3.2]
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.initialize(Log4j2SpringBootLoggingSystem.java:96) ~[log4j-spring-boot-2.23.1.jar!/:2.23.1]
    at org.springframework.boot.context.logging.LoggingApplicationListener.initializeSystem(LoggingApplicationListener.java:335) ~[spring-boot-3.3.2.jar!/:3.3.2]
    ... 41 more

On the other hand, I can perfectly obtain the configuration file using curl:

curl --location 'http://localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false' \
  --header 'Authorization: Basic cm9vdDpwYXNzd29yZA=='

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg %n" />
    </Console>
  </Appenders>
  <Loggers>
    <Root level="debug">
      <AppenderRef ref="Console" />
    </Root>
  </Loggers>
</Configuration>

Sample Config Server: https://github.com/luidoc/demo-config-server-native.git
Demo application: https://github.com/luidoc/config-server-client.git

Yes, I've tried that before, but it doesn't work.
As you can see, an error occurs (error 401):

Logging system failed to initialize using configuration from 'http://root:password@localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false'
java.lang.IllegalStateException: Could not initialize Log4J2 logging from http://root:password@localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.loadConfiguration(Log4j2SpringBootLoggingSystem.java:198)
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.load(Log4J2LoggingSystem.java:246)
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.loadConfiguration(Log4J2LoggingSystem.java:238)
    at org.springframework.boot.logging.AbstractLoggingSystem.initializeWithSpecificConfig(AbstractLoggingSystem.java:67)
    at org.springframework.boot.logging.AbstractLoggingSystem.initialize(AbstractLoggingSystem.java:58)
    at org.springframework.boot.logging.log4j2.Log4J2LoggingSystem.initialize(Log4J2LoggingSystem.java:225)
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.initialize(Log4j2SpringBootLoggingSystem.java:96)
    at org.springframework.boot.context.logging.LoggingApplicationListener.initializeSystem(LoggingApplicationListener.java:335)
    at org.springframework.boot.context.logging.LoggingApplicationListener.initialize(LoggingApplicationListener.java:298)
    at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEnvironmentPreparedEvent(LoggingApplicationListener.java:246)
    at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEvent(LoggingApplicationListener.java:223)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:185)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:178)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:156)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:138)
    at org.springframework.boot.context.event.EventPublishingRunListener.multicastInitialEvent(EventPublishingRunListener.java:136)
    at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:81)
    at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:64)
    at java.base/java.lang.Iterable.forEach(Iterable.java:75)
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:118)
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:112)
    at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:63)
    at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:370)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:330)
    at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:149)
    at org.springframework.cloud.bootstrap.BootstrapApplicationListener.bootstrapServiceContext(BootstrapApplicationListener.java:195)
    at org.springframework.cloud.bootstrap.BootstrapApplicationListener.onApplicationEvent(BootstrapApplicationListener.java:114)
    at org.springframework.cloud.bootstrap.BootstrapApplicationListener.onApplicationEvent(BootstrapApplicationListener.java:77)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:185)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:178)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:156)
    at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:138)
    at org.springframework.boot.context.event.EventPublishingRunListener.multicastInitialEvent(EventPublishingRunListener.java:136)
    at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:81)
    at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:64)
    at java.base/java.lang.Iterable.forEach(Iterable.java:75)
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:118)
    at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:112)
    at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:63)
    at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:370)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:330)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1363)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1352)
    at com.example.demo.ConfigServerClientApplication.main(ConfigServerClientApplication.java:10)
Caused by: java.io.IOException: Server returned HTTP response code: 401 for URL: http://root:password@localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1998)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1599)
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.getConfigurationSource(Log4j2SpringBootLoggingSystem.java:247)
    at org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem.loadConfiguration(Log4j2SpringBootLoggingSystem.java:159)
    ... 43 more

The server treats this call as if it were an anonymous access, as you can see in the log:

TRACE 1144 --- [nio-8888-exec-1] o.s.s.w.a.ExceptionTranslationFilter : Sending AnonymousAuthenticationToken [Principal=anonymousUser, Credentials=[PROTECTED], Authenticated=true, Details=WebAuthenticationDetails [RemoteIpAddress=127.0.0.1, SessionId=null], Granted Authorities=[ROLE_ANONYMOUS]] to authentication entry point since access is denied
org.springframework.security.access.AccessDeniedException: Access Denied
    at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:98) ~[spring-security-web-6.3.1.jar!/:6.3.1]

Can you authenticate and access the file in other ways?
If you run the example you will see that you can run a curl --location 'http://localhost:8888/config-server-client/dev/main/log4j2-spring.xml?resolvePlaceholders=false' --header 'Authorization: Basic cm9vdDpwYXNzd29yZA==' And you will get the content of the file as a response. In fact, if in the example of spring cloud config server I disable security or add this to the spring security filter of the server: .requestMatchers(new AntPathRequestMatcher("/**/**/**/log4j2.xml**")).permitAll() .requestMatchers(new AntPathRequestMatcher("/**/**/**/log4j2-spring.xml**")).permitAll() everything works correctly. I think the problem is in org.apache.logging.log4j.spring.boot.Log4j2SpringBootLoggingSystem , specifically in the getConfigurationSource method, that does not obtain the spring cloud config security configuration. I wouldn't expect Boot to know anything about the security credentials for fetching remote logging configuration. I would expect their to be a separate configuration for the Boot to fetch the remote logging configuration (if they support such a thing). I don't think this is a Spring Cloud Config problem. If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed. Please, would you be so kind as to let me know what information you are missing in order to evaluate this issue? I have clearly indicated the problem, the versions, as well as provided the URL of the examples to reproduce the problem. As I said above I don't think this is a Spring Cloud Config problem. After reviewing the log4j documentation, I have seen that according to what is indicated in https://logging.apache.org/log4j/2.x/manual/configuration.html the credentials are specified with the properties log4j2.Configuration.username and log4j2.Configuration.password. I have modified the example (https://github.com/luidoc/config-server-client.git) so that anyone who has the same problem can see how to solve it.
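For readers hitting the same 401, a minimal sketch of the system properties mentioned above (the property names come from the Log4j2 configuration documentation; the values are the placeholders used in this issue, and the allowedProtocols line is an assumption that may be needed because the config server here is plain HTTP):

    # e.g. passed as JVM system properties
    -Dlog4j2.Configuration.username=root
    -Dlog4j2.Configuration.password=password
    -Dlog4j2.Configuration.allowedProtocols=http,https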
gharchive/issue
2024-08-01T17:03:19
2025-04-01T06:40:27.193615
{ "authors": [ "luidoc", "ryanjbaxter", "spring-cloud-issues" ], "repo": "spring-cloud/spring-cloud-config", "url": "https://github.com/spring-cloud/spring-cloud-config/issues/2446", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
301927031
Support pattern properties in messaging contracts
We found that we were unable to use pattern properties such as anyAlphaUnicode() in messaging contracts and were told to open a bug report. We believe these changes would resolve the issue.
@mfeygelson Please sign the Contributor License Agreement! Click here to manually synchronize the status of this Pull Request. See the FAQ for frequently asked questions.
@mfeygelson Thank you for signing the Contributor License Agreement!
Sweeeeet, great job @mfeygelson and congratulations on your first contribution!
gharchive/pull-request
2018-03-02T22:35:42
2025-04-01T06:40:27.197906
{ "authors": [ "marcingrzejszczak", "mfeygelson", "pivotal-issuemaster" ], "repo": "spring-cloud/spring-cloud-contract", "url": "https://github.com/spring-cloud/spring-cloud-contract/pull/564", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
675170012
How can I get response body size of a request in gateway?
I have created a custom filter in which I want to access the size of the response body. Is there any way?
Either the content length header, or read the whole response body into memory.
Actually I am new to the Spring framework and backend development. I tried the Content-Length header and it worked, but sometimes we don't get this header from the microservice. Could you please share a small code sample for your second method (reading the whole response body into memory)? For example, in terms of ServerWebExchange or ServerHttpResponse.
same question here. any updates?
same question here. any updates?
Closing due to age of the question. If you would like us to look at this issue, please comment and we will look at re-opening the issue.
I managed to get the request/response body sizes by rewriting NettyRoutingFilter and NettyWriteResponseFilter. Gateway version is 2.2.9.RELEASE.

Rewritten NettyRoutingFilter:

......
nettyOutbound.withConnection(connection -> {
    connection.channel().attr(TRACE_ID).set(traceId);
    if (log.isTraceEnabled()) {
        log.trace("outbound route: " + connection.channel().id().asShortText()
                + ", inbound: " + exchange.getLogPrefix());
    }
});
return nettyOutbound.send(request.getBody().map(body -> {
    // get request body size here, put size in exchange
    int size = body.readableByteCount();
    exchange.getAttributes().put("gw-request-body-size", size);
    return getByteBuf(body);
}));
}).responseConnection((res, connection) -> {
.....

Rewritten NettyWriteResponseFilter:

.......
if (log.isTraceEnabled()) {
    log.trace("NettyWriteResponseFilter start inbound: " + connection.channel().id().asShortText()
            + ", outbound: " + exchange.getLogPrefix());
}
ServerHttpResponse response = exchange.getResponse();
// TODO: needed?
final Flux<DataBuffer> body = connection
        .inbound()
        .receive()
        .retain()
        .map(byteBuf -> {
            // get response body size here and put the result in exchange
            int respSize = byteBuf.readableBytes();
            exchange.getAttributes().put("gw-response-body-size", respSize);
            return wrap(byteBuf, response);
        });
.......

Then create a custom filter and read the request/response body sizes from the exchange:

/**
 * @author luobo.hwz
 */
@Slf4j
@Component
public class RespBodySizeFilter implements GlobalFilter, Ordered {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        return chain.filter(exchange)
                .then(Mono.defer(() -> {
                    // get result from exchange
                    Integer exchangeReq = exchange.getAttribute("gw-request-body-size");
                    Integer exchangeResp = exchange.getAttribute("gw-response-body-size");
                    log.info("req from exchange: {}", exchangeReq);
                    log.info("resp from exchange: {}", exchangeResp);
                    return Mono.empty();
                }));
    }

    @Override
    public int getOrder() {
        // filter before NettyWriteResponseFilter
        return NettyWriteResponseFilter.WRITE_RESPONSE_FILTER_ORDER - 1;
    }
}
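For completeness, a sketch of the first option mentioned above (reading the Content-Length header) as a plain global filter — the class name is illustrative, and the header is only present when the downstream service actually sets it:

import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class ContentLengthLogFilter implements GlobalFilter, Ordered {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        return chain.filter(exchange).then(Mono.fromRunnable(() -> {
            // getContentLength() returns -1 when the header is absent (e.g. chunked responses)
            long requestSize = exchange.getRequest().getHeaders().getContentLength();
            long responseSize = exchange.getResponse().getHeaders().getContentLength();
            System.out.println("request bytes: " + requestSize + ", response bytes: " + responseSize);
        }));
    }

    @Override
    public int getOrder() {
        // run late so the proxied response headers are already populated
        return Ordered.LOWEST_PRECEDENCE;
    }
}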
gharchive/issue
2020-08-07T17:41:33
2025-04-01T06:40:27.207673
{ "authors": [ "ShahzebAnsari", "fzyzcjy", "spencergibb", "spring-cloud-issues", "sweat123" ], "repo": "spring-cloud/spring-cloud-gateway", "url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/1893", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1346025928
Too many tcp connections cause requests slow
My Spring, Spring Cloud and Gateway versions:
spring boot: 2.3.12.RELEASE
spring cloud version: Hoxton.SR12
gateway version: 2.2.9.RELEASE
I set reactor.netty.ioWorkerCount to cpu * 4 and reactor.netty.ioSelectCount to 1; the httpclient uses the default config.
In our production environment, when there are more than 10000 TCP clients, requests become very slow and can take up to 20 seconds, but we could not reproduce this in the test environment. So I need your help — what should we do? Use a fixed httpclient pool, or something else? I would think 10000 connections should be easy for the gateway, so I don't understand why it is slow.
ok i find it.
May I ask how it was resolved in the end?
How was this resolved in the end?
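A sketch of the fixed connection pool the reporter asks about — these are standard Spring Cloud Gateway HttpClient properties, but the values below are illustrative only and would need tuning:

    spring.cloud.gateway.httpclient.pool.type=fixed
    spring.cloud.gateway.httpclient.pool.max-connections=2000
    spring.cloud.gateway.httpclient.pool.acquire-timeout=45000
    spring.cloud.gateway.httpclient.pool.max-idle-time=30s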
gharchive/issue
2022-08-22T08:31:42
2025-04-01T06:40:27.210689
{ "authors": [ "Yanch1994", "hymagic", "skyfour" ], "repo": "spring-cloud/spring-cloud-gateway", "url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/2710", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2056183968
Unexpected changes to query string
Describe the bug
I use gateway MVC to reverse proxy Vite. ProxyExchangeHandlerFunction changes the query string part of the URL, causing Vite to reply with a different response.
Sample
Originally: http://localhost:5174/src/components/HelloWorld.vue?scoped=e17ea971&index=0&type=style&vue=&lang.css
Changed to: http://localhost:5174/src/components/HelloWorld.vue?scoped=e17ea971&index=0&type=style&vue=&lang.css=
There is an extra '=' character at the end. It all seems to happen in ProxyExchangeHandlerFunction.handle, caused by UriComponentsBuilder.replaceQueryParams: SCG considers lang.css a URL parameter whose value is empty.
Use stripPrefix or a custom filter to remove the redundant character.
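A hypothetical helper for the suggested custom filter — plain string handling, not tied to any gateway API — that rebuilds the query so a valueless parameter does not gain a trailing '=':

// illustrative only: "lang.css=" becomes "lang.css"; "index=0" is kept as-is
static String normalizeQuery(String rawQuery) {
    if (rawQuery == null || rawQuery.isEmpty()) {
        return rawQuery;
    }
    StringBuilder sb = new StringBuilder();
    for (String pair : rawQuery.split("&")) {
        if (sb.length() > 0) {
            sb.append('&');
        }
        sb.append(pair.endsWith("=") ? pair.substring(0, pair.length() - 1) : pair);
    }
    return sb.toString();
}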
gharchive/issue
2023-12-26T08:44:01
2025-04-01T06:40:27.213769
{ "authors": [ "fredliex", "kimmking" ], "repo": "spring-cloud/spring-cloud-gateway", "url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/3197", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2505243782
Response body is not coming in SCG
Describe the bug
We are using Spring Cloud Gateway version 4.0.7 with the Jetty client. Our call flow is as follows:
client --> SCG --> stub
What is happening is: a request sent from the client is forwarded to the stub via SCG, and the stub responds with headers and data. From SCG back to the client, however, the response body never arrives, and the call flow ends in an error state. We have attached a sample application and a Wireshark screenshot. Please take a look.
scg-app-demo.zip
Thank you!
I'm sorry, we only support the Netty HTTP client and corresponding routing filters. I don't have the bandwidth to debug custom routing filters.
gharchive/issue
2024-09-04T12:39:44
2025-04-01T06:40:27.217082
{ "authors": [ "pwn-tndn", "spencergibb" ], "repo": "spring-cloud/spring-cloud-gateway", "url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/3513", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
393113136
README file dedup
Fixes #1319. The two files turn out to be exactly the same, so a simple include:: is enough.
Codecov Report
Merging #1329 into master will not change coverage. The diff coverage is n/a.

@@            Coverage Diff            @@
##             master    #1329   +/-  ##
=========================================
  Coverage     67.72%   67.72%
  Complexity     1397     1397
=========================================
  Files           202      202
  Lines          5643     5643
  Branches        567      567
=========================================
  Hits           3822     3822
  Misses         1587     1587
  Partials        234      234

Flag           Coverage Δ        Complexity Δ
#integration   ?                 ?
#unittests     67.72% <ø> (ø)    1397 <ø> (ø) :arrow_down:

Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a9ebc50...a00f9b1. Read the comment docs.
gharchive/pull-request
2018-12-20T16:12:27
2025-04-01T06:40:27.224177
{ "authors": [ "ChengyuanZhao", "codecov-io" ], "repo": "spring-cloud/spring-cloud-gcp", "url": "https://github.com/spring-cloud/spring-cloud-gcp/pull/1329", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
404829893
[WIP] Pubsub stream binder via synchronous pull
So we have something to point at when discussing the streaming/synchronous doc.
I am going to fix that commit history. I've created the branch off of pubsub-pull, which caused commits to be duplicated after rebase.
gharchive/pull-request
2019-01-30T15:39:22
2025-04-01T06:40:27.225458
{ "authors": [ "elefeint" ], "repo": "spring-cloud/spring-cloud-gcp", "url": "https://github.com/spring-cloud/spring-cloud-gcp/pull/1419", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
701976663
Firestore extractDatabasePath update
Updating extractDatabasePath to avoid the org.apache.commons.lang3 dependency.
@dmitry-s Notice the build error though.
gharchive/pull-request
2020-09-15T14:24:31
2025-04-01T06:40:27.226372
{ "authors": [ "dmitry-s", "meltsufin" ], "repo": "spring-cloud/spring-cloud-gcp", "url": "https://github.com/spring-cloud/spring-cloud-gcp/pull/2523", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
486372622
java.lang.IllegalArgumentException: name is required
Executing AiNickNameClient#getNickNameTypeForAi(String) fails, caused by: name is required.

java.lang.IllegalArgumentException: name is required.
	at feign.template.QueryTemplate.create(QueryTemplate.java:66)
	at feign.RequestTemplate.lambda$appendQuery$0(RequestTemplate.java:611)
	at java.util.HashMap.compute(HashMap.java:1197)
	at feign.RequestTemplate.appendQuery(RequestTemplate.java:609)
	at feign.RequestTemplate.query(RequestTemplate.java:573)
	at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
	at feign.RequestTemplate.extractQueryTemplates(RequestTemplate.java:909)
	at feign.RequestTemplate.uri(RequestTemplate.java:423)
	at feign.RequestTemplate.uri(RequestTemplate.java:390)
	at feign.RequestTemplate.resolve(RequestTemplate.java:196)
	at feign.ReflectiveFeign$BuildTemplateByResolvingArgs.resolve(ReflectiveFeign.java:320)
	at feign.ReflectiveFeign$BuildTemplateByResolvingArgs.create(ReflectiveFeign.java:224)
	at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:74)
	at feign.hystrix.HystrixInvocationHandler$1.run(HystrixInvocationHandler.java:106)
	at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:302)
	at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:298)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:46)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:51)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41)
	at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OperatorSubscribeOn$SubscribeOnSubscriber.call(OperatorSubscribeOn.java:100)
	at com.netflix.hystrix.strategy.concurrency.HystrixContexSchedulerAction$1.call(HystrixContexSchedulerAction.java:56)
	at com.netflix.hystrix.strategy.concurrency.HystrixContexSchedulerAction$1.call(HystrixContexSchedulerAction.java:47)
	at com.xxx.feign.enhance.RequestContextHystrixConcurrencyStrategy$RequestContextCallable.call(RequestContextHystrixConcurrencyStrategy.java:77)
	at org.springframework.cloud.sleuth.instrument.async.TraceCallable.call(TraceCallable.java:70)
	at com.netflix.hystrix.strategy.concurrency.HystrixContexSchedulerAction.call(HystrixContexSchedulerAction.java:69)
	at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

But the code does have the name:

@PostMapping(path = "/xxx")
String getNameT(@RequestParam("data") String data);

This happened by accident, in the production environment.
Please learn how to properly format code and logs.
Can you please show the full definition of the feign client?
I would hate to step over something, but we hit a similar issue and thought I would post it here. If it's a different issue, please let me know and I can create a separate one. That being said, we encountered this when a RequestParam had && in it. Here is a test project that replicates this behavior, and a Feign client test that can be run to show the above error. Another thing observed (but I have not written a test for this yet): if the query parameter contains a single '&' character in between, then because of this code in RequestTemplate the parameter gets split up; I'm assuming the resolution to the above would resolve this as well. Also, if we pass the parameter as SpringQueryMap it gets encoded correctly and the test passes. Here is a test. Please let me know if this is valid.
Hi @naavo - yes, this looks like a different issue. Please create a separate one. Closing this one.
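For reference, a sketch of the @SpringQueryMap workaround mentioned above (the client name and endpoint are hypothetical; the map's values are passed through without the per-parameter query templating that triggers the error):

import java.util.Map;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.cloud.openfeign.SpringQueryMap;
import org.springframework.web.bind.annotation.GetMapping;

@FeignClient(name = "demo-client")
public interface DemoClient {

    // query parameters are taken from the map as-is
    @GetMapping("/api/data")
    String getData(@SpringQueryMap Map<String, String> queryParams);
}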
gharchive/issue
2019-08-28T13:09:16
2025-04-01T06:40:27.233034
{ "authors": [ "OlgaMaciaszek", "naavo", "sdcuike", "spencergibb" ], "repo": "spring-cloud/spring-cloud-openfeign", "url": "https://github.com/spring-cloud/spring-cloud-openfeign/issues/211", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2054874365
spring.cloud.stream.function.bindings property doesn't work as described in documentation
In the Functional Composition section, the following is mentioned: "For example, if we want to give our toUpperCase|wrapInQuotes a more descriptive name we can do so with the following property spring.cloud.stream.function.bindings.toUpperCase|wrapInQuotes-in-0=quotedUpperCaseInput"
However, it seems that this property doesn't work. I have an example. This is the yaml file. When I try to use the name uppercaseAndReverseInput in this test, it fails. I made several attempts, all without positive results.
@SimoneGiusso It looks like you are adding the property under spring.cloud.function.bindings..., which is the incorrect level for that property. It should be under spring.cloud.stream.function.bindings... here.
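To make the fix concrete, a sketch of the corrected placement in YAML — the composed function name is the one from the documentation quote above, and the bracket notation is used so Spring Boot's relaxed binding accepts the '|' character in the key:

spring:
  cloud:
    stream:
      function:
        bindings:
          "[toUpperCase|wrapInQuotes-in-0]": quotedUpperCaseInput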
gharchive/issue
2023-12-23T19:13:19
2025-04-01T06:40:27.240005
{ "authors": [ "SimoneGiusso", "sobychacko" ], "repo": "spring-cloud/spring-cloud-stream", "url": "https://github.com/spring-cloud/spring-cloud-stream/issues/2878", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
225313536
Runtime errors with Boot 2.x
Mostly a heads-up, but Boot is starting to break a lot of things:

Caused by: java.lang.NoClassDefFoundError: org/springframework/boot/bind/RelaxedDataBinder
	at org.springframework.cloud.stream.binding.BindingService.validate(BindingService.java:174)
	at org.springframework.cloud.stream.binding.BindingService.bindConsumer(BindingService.java:96)
	at org.springframework.cloud.stream.binding.BindableProxyFactory.bindInputs(BindableProxyFactory.java:221)
	at org.springframework.cloud.stream.binding.InputBindingLifecycle.start(InputBindingLifecycle.java:55)

RelaxedDataBinder is gone, see https://github.com/spring-projects/spring-boot/issues/9000
I think we need to create a 2.x branch so that we start to see what issues Spring 5 / Boot 2 bring in.
@jvalkeal Yes, the plan is to get started on a 2.x branch in May.
Is there a build snapshot Maven repository tracking the 2.x branch anywhere? I don't see one at http://repo.spring.io/libs-snapshot/org/springframework/cloud/spring-cloud-stream/
Not yet, will have one next week once the binders are upgraded as well.
Closing this after merging @jvalkeal's PR in the 2.0.x branch and making respective branches for all binders.
@davidwadden To your question - the BOM for the Elmhurst release train (based on 2.0) is CI-built now and available here: https://repo.spring.io/libs-snapshot-local/org/springframework/cloud/spring-cloud-stream-dependencies/Elmhurst.BUILD-SNAPSHOT/
gharchive/issue
2017-04-30T08:37:37
2025-04-01T06:40:27.244349
{ "authors": [ "davidwadden", "jvalkeal", "mbogoevici" ], "repo": "spring-cloud/spring-cloud-stream", "url": "https://github.com/spring-cloud/spring-cloud-stream/issues/935", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2280691852
Guide Rewrite
This is a rewrite of the guide with many changes. In addition to content changes, structural changes were made to the guide as a potential template for future guide updates to follow. Addresses issues #33, #32, #31, #29, #28. Several changes were made to promote correctness, make for a better user experience, and allow the option to dual publish to Spring Academy in the future.
- The README.adoc has no text content, but instead links to 3 separate files. This allows spring.io/guides to publish the guide with the full context, including Spring Initializr, while allowing Spring Academy to only be concerned with the guide-specific content, located in content.adoc. There should be no change to how users view the guide on spring.io. The README.adoc added an additional conditional section that can be set when rendering for Spring Academy to exclude certain sections.
- The initial and complete folders were removed in favor of keeping only the solution in the root project directory. This should simplify the user experience by providing fewer places to look for code and create an easier experience when importing the project into an IDE. The user can still follow along with a blank project by starting with Spring Initializr. This format of keeping the code in the root directory also makes it possible to easily load the project into Spring Academy.
- A single build tool is used. For this guide, Gradle is used because it is the preferred build tool of the project driving the functionality of the guide, Spring Framework. Having a single build tool should simplify the user experience when importing the project into a local IDE. This ease of import is also important for the user experience when loading into Spring Academy.
- A lot of references to the common macros project were removed. This was done to prioritize correctness and user experience over ease of guide creation. A few problems exist when trying to use a common template. First, the wording may be slightly off, as demonstrated by issue #33: the 'web' text in the line "This web application is 100% pure Java..." comes from a common file. Second, some teachings were incorrect. For example, in the popular rest-service guide, we advise the user that @ComponentScan will check the com/example package, but that information is not correct — it will in fact check the com/example/restservice package in that particular guide. Third, the commands to package the application as a jar file were not correct; the problem is present in all guides, as described here.
- References to dated technologies, i.e. web.xml and WAR files, were also removed. This was done when removing links to the common GitHub macro repository in favor of static text.
- The "Build an executable JAR" section was renamed to "Building the Application" and text was added for using Cloud Native Buildpacks and Native Image compilation.
I think that instead of 3 separate files, using ifndef from AsciiDoc Conditionals would be a better approach; see the sketch after these comments. If we choose to publish this guide to Spring Academy, we could set the variable env-exclude-spring-initializr to exclude certain sections. This can easily be achieved using a downdoc command: npx downdoc -a env-exclude-spring-initializr README.adoc
@Buzzardo what do you think about this use of a conditional? @joemoore do you think an approach like this could work for Spring Academy?
Converted to draft until I come up with a better solution to importing some common components from the getting-started macros GitHub repository.
Thanks for the feedback @mbhave!
I added a new file to the getting-started-macros repository that is intended to be a common section at the end of every guide. The 3 files in this PR can be checked into the common macro project at a later date:
- guide_intro.adoc
- spring_academy_intro.adoc
- spring_academy_see_also.adoc
Before those files are checked into the macro project and used as a template for all guides to follow, I'd like to verify that the process to convert content to Spring Academy (reducer, downdoc) is successful. This requires a GitHub action to execute, which should be performed on a non-forked repo.
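As a sketch, the ifndef conditional proposed above might look like this in README.adoc — the section content is illustrative; only the attribute name comes from the discussion:

ifndef::env-exclude-spring-initializr[]
== Starting with Spring Initializr
// this section is dropped when downdoc runs with -a env-exclude-spring-initializr
...
endif::[]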
gharchive/pull-request
2024-05-06T11:47:26
2025-04-01T06:40:27.255373
{ "authors": [ "robertmcnees" ], "repo": "spring-guides/gs-scheduling-tasks", "url": "https://github.com/spring-guides/gs-scheduling-tasks/pull/34", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1048859058
PrimaryDefaultValidatorPostProcessor triggered at different time with AOT leads to incomplete Validator bean definition
In this sample, in AOT mode, a NoUniqueBeanDefinitionException is thrown for Validator beans, even though one of the two is annotated with @ConditionalOnMissingBean:

org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'integrationMessageHandlerMethodFactory': Unsatisfied dependency expressed through method 'messageHandlerMethodFactory' parameter 1; nested exception is org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type 'org.springframework.validation.Validator' available: expected single matching bean but found 2: defaultValidator,mvcValidator

The problem is a different processing order for BeanDefinitionRegistryPostProcessor. I have changed the custom code to use a framework callback, and that fixed the issue. Unfortunately, calling these has the side effect of contributing quite a few more bean definitions, some with not the type I'd expect. I am investigating.
This was already fixed as part of #1213, but I've added a test to validate the behaviour.
@OlgaMaciaszek your sample app still doesn't work, unfortunately, as Spring Integration is not yet supported.
Ok - the users will need to wait until that's done for these kinds of projects. Anyway, that user-provided sample has allowed us to discover and fix at least 3 different issues :). Thanks, @matus753.
gharchive/issue
2021-11-09T17:05:32
2025-04-01T06:40:27.267815
{ "authors": [ "OlgaMaciaszek", "snicoll" ], "repo": "spring-projects-experimental/spring-native", "url": "https://github.com/spring-projects-experimental/spring-native/issues/1243", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
482987753
GH-1071: JUnit 5 Support Improvements
Resolves https://github.com/spring-projects/spring-amqp/issues/1071
- Remove the JUnit 4 dependency from RabbitAvailableCondition (minor breaking API change)
- Add purgeAfterEach to @RabbitAvailable
- tabs not spaces in RabbitAvailableCondition (review with ?w=1)
- @LogLevels now requires level
- convert more tests
Does that mean that we are very close to removing the hard dependency on JUnit 4 and only using it for rules in the test module?
I wouldn't say "very close", but certainly "closer". We have one nut to crack - the RepatableProcessor @Rule. In most cases, it should be easy to replace with @RepeatedTest, but there is one test case where the RepatableProcessor calls the test method on multiple threads.
I added a couple more conversions with the push; I don't plan on doing any more today, so this can be merged.
gharchive/pull-request
2019-08-20T17:26:45
2025-04-01T06:40:27.271833
{ "authors": [ "artembilan", "garyrussell" ], "repo": "spring-projects/spring-amqp", "url": "https://github.com/spring-projects/spring-amqp/pull/1072", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
52857431
what is new for 2.0 ?
Docs http://docs.spring.io/spring-android/docs/2.0.x/reference/pdf/spring-android-reference.pdf do not explain why there is a major version increase.
@royclarkson when do you plan to release 2.0? There has been no activity on this repo since December. :cry:
@paulvi 2.0 was started in response to some changes in Spring Framework's RestTemplate. Spring for Android's RestTemplate had made some different API choices, and the initial goal of 2.0 was to bring those two close to parity. The major version number update was because of some of these minor, but breaking, changes.
@WonderCsabo Unfortunately, we've had changes in priorities, which means this project is not receiving much attention right now. I'm happy to facilitate merging PRs, triaging issues, and pushing out releases.
Thanks! I was just asking so we can communicate this correctly in the downstream AndroidAnnotations project.
@royclarkson BTW, what is stopping you from releasing 2.0 as the current HEAD? Are there any blocking issues or missing features?
There's nothing critical, IIRC. I had gone through and merged in many of the improvements and updates from Spring Framework already. The main outstanding issue is the OkHttp support as mentioned in #24, and of course doc updates.
Alright, OkHttp (#24) support is in a better place now. Here are a few other outstanding items that would be nice to have addressed prior to pushing out a GA:
- remove Jackson 1.x support
- update dependencies and fix resulting issues
- https://jira.spring.io/browse/ANDROID-168
- https://jira.spring.io/browse/ANDROID-163
- https://jira.spring.io/browse/ANDROID-143
Went ahead and knocked out those first two items on the list.
What is the ETA of the final release of 2.0?
2.0.0.M3 is now available for testing. It includes the latest OkHttp improvements. @jaredsburrows unfortunately, I don't have a specific ETA for the GA. I'm juggling some priorities, but working on getting those last few issues cleaned up. Thanks.
Hi, Roy. Will there be support for PATCH, or should I look for another solution? If anyone has a good reference for solving this use case, I'm all eyes. Thanks.
We have PATCH support in the latest 2.0 milestone. You can see the usage in code via a search. Most versions (or all?) of the native Android HTTP clients do not support PATCH, however. You'll need to include the dependency for either the Android port of Apache HttpClient 4.3, or OkHttp, to make use of it. Hope that helps.
gharchive/issue
2014-12-25T10:08:10
2025-04-01T06:40:27.280980
{ "authors": [ "WonderCsabo", "jaredsburrows", "paulvi", "primaproxima", "royclarkson" ], "repo": "spring-projects/spring-android", "url": "https://github.com/spring-projects/spring-android/issues/23", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2467653914
Token exchange failed when the subject token is a client_credentials granted access token
Describe the bug
Token exchange fails when the subject token is a client_credentials-granted access token.
To Reproduce
1. Build a Spring Authorization Server which enables the client_credentials grant for 'user-client' and the token-exchange grant for 'messaging-client':

spring:
  security:
    oauth2:
      authorizationserver:
        client:
          test-client:
            registration:
              client-id: "user-client"
              client-secret: "{noop}user"
              client-authentication-methods:
                - client_secret_basic
              authorization-grant-types:
                - client_credentials
              scopes:
                - user.read
          token-client:
            registration:
              client-id: "messaging-client"
              client-secret: "{noop}messaging"
              client-authentication-methods:
                - client_secret_basic
              authorization-grant-types:
                - urn:ietf:params:oauth:grant-type:token-exchange
              scopes:
                - message.read

2. Get an access token for 'user-client' using the client_credentials grant.
3. Build resource servers 'user-service' and 'messaging-service', where user-service remote-calls messaging-service.
4. Access the resource server 'user-service' with the access token from above.
The token exchange fails because the client_credentials-granted access token does not have a 'principal' attribute, but OAuth2TokenExchangeAuthenticationProvider requires a principal to be available via the subject_token for impersonation or delegation use cases.
Expected behavior
As per https://datatracker.ietf.org/doc/html/rfc8693, there doesn't appear to be any explicit prohibition against using an access token obtained through the client_credentials grant for token exchange.
@wapkch "As per https://datatracker.ietf.org/doc/html/rfc8693, there doesn't appear to be any explicit prohibition against using an access token obtained through the client_credentials grant for token exchange."
This is true, but at the same time the spec does not make any reference to the client_credentials grant type either. Furthermore, looking at the examples in Appendix A. Additional Token Exchange Examples, you will see that the subject tokens referenced contain "sub":"bdc@example.net" and "sub":"user@example.net", which clearly indicate a user principal. As well, in 1.1. Delegation vs. Impersonation Semantics, it states:
"One common use case for an STS (as alluded to in the previous section) is to allow a resource server A to make calls to a backend service C on behalf of the requesting user B"
The word "user" is referenced in many parts of the spec, which implies there is a Resource Owner principal associated with the subject token. I'm curious, why are you using a client_credentials-obtained access token to represent the subject token in the token-exchange grant? If the client needs another scope (message.read, as per your example), why wouldn't the messaging-client be configured for the client_credentials grant and obtain a new access token with the required scope?
@jgrandja That makes a lot of sense! Thanks for the detailed explanation. You can close this now.
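For context, the two token requests in this reproduction might look like the following against a default Spring Authorization Server — the host/port is a placeholder, and the subject_token_type URN comes from RFC 8693:

# step 2: client_credentials token for user-client
curl -u user-client:user \
     -d "grant_type=client_credentials&scope=user.read" \
     http://localhost:9000/oauth2/token

# the failing exchange: messaging-client uses that token as subject_token
curl -u messaging-client:messaging \
     -d "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
     -d "subject_token=<access token from the call above>" \
     -d "subject_token_type=urn:ietf:params:oauth:token-type:access_token" \
     http://localhost:9000/oauth2/token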
gharchive/issue
2024-08-15T08:38:41
2025-04-01T06:40:27.289993
{ "authors": [ "jgrandja", "wapkch" ], "repo": "spring-projects/spring-authorization-server", "url": "https://github.com/spring-projects/spring-authorization-server/issues/1691", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1529420673
Provide access to PreparedStatement in the JdbcTemplate classes.
Hello, I am building an application where I need to have direct access to the resulting PreparedStatement before it is executed by the different methods of the template. Currently the classes hide it from developers, making them less flexible. I understand the approach, but in my case (where the execution happens in a reactor environment) it is impossible to make use of Spring Data. You could provide access to the PreparedStatement as you already have it. Thanks, Juan. PS. Trying to use MemPOI.
Could you please explain in more detail what you are trying to do and why? We generally don't give access to data structures we consider internal, because every such public API drives up maintenance.
Hi Jan, thanks for such a quick answer. I think it's best to show you a piece of (incomplete) code so you can understand the problem:

// Using Apache POI and MemPOI (https://github.com/firegloves/MemPOI)
XSSFWorkbook workbook = new XSSFWorkbook();
try {
    // Contains a list of SQL statements with named parameters.
    List<String> reports = loadReports();
    if (reports != null) {
        MempoiBuilder builder = MempoiBuilder.aMemPOI()
                .withWorkbook(workbook)
                .withAdjustColumnWidth(true);
        int i = 0;
        for (String sql : reports) {
            // @Autowired DataSource dataSource...
            NamedParameterJdbcTemplate template = new NamedParameterJdbcTemplate(dataSource);
            Map<String, Object> parameters = new HashMap<>();
            // given parameters 'from' and 'to' are Instant
            parameters.put("from", Timestamp.from(from));
            parameters.put("to", Timestamp.from(to));
            String name = names.get(i);
            // Code below is just to show how MemPOI expects data to assemble the workbook.
            // It needs to run the assembly together because of the links between the workbook
            // and the sheets. Not ideal, but understandable.
            PreparedStatementCallback<MempoiSheet> prepStmtCallBack = preparedStatement -> {
                // MemPOI expects a PreparedStatement, but that is executed when the whole
                // workbook is assembled. See code below.
                return MempoiSheetBuilder.aMempoiSheet()
                        .withSheetName(name)
                        .withPrepStmt(preparedStatement)
                        .build();
            };
            // Is there any way to attach this execution to the Future<> that is assembled
            // by MemPOI? See code below...
            MempoiSheet sheet = template.execute(sql, parameters, prepStmtCallBack);
            builder.addMempoiSheet(sheet);
            i++;
        }
        MemPOI memPOI = builder.build();
        CompletableFuture<MempoiReport> fut = memPOI.prepareMempoiReport();
        // Here is the problem:
        // MemPOI executes the queries from the prepared statements when
        // fut is executed... but at this point, all PreparedStatements are closed.
        // Obviously, they were run through when the callback was called.
        MempoiReport report = fut.get();
        workbook.close();
        // Now we add the pivots by hand.
    }
    else {
        log.error("Error: cannot load report information!");
    }
}
catch (Exception ex) {
    ...
}

If I could get the PreparedStatement with the parameters set and ready to be "queried", then that would solve my problem. Obviously, I would be responsible for closing it and freeing all resources. I cannot use getPreparedStatementCreator, nor ParsedSql... there is simply no option, given how all the nice utility classes are protected. So I have no way to use such a great idea as the named-parameter template outside the limitations of the provided methods. That limits a lot.
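For anyone with the same constraint, a sketch of building the PreparedStatement by hand using spring-jdbc's public utilities — a workaround under the assumption that the caller owns the Connection/PreparedStatement lifecycle (whether this fits a reactor context is a separate question):

import java.sql.Connection;
import java.sql.PreparedStatement;
import org.springframework.jdbc.core.ArgumentPreparedStatementSetter;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterUtils;
import org.springframework.jdbc.core.namedparam.ParsedSql;
import org.springframework.jdbc.core.namedparam.SqlParameterSource;

// parse the named-parameter SQL and expand it to '?' placeholders
ParsedSql parsedSql = NamedParameterUtils.parseSqlStatement(sql);
SqlParameterSource paramSource = new MapSqlParameterSource(parameters);
String sqlToUse = NamedParameterUtils.substituteNamedParameters(parsedSql, paramSource);
Object[] args = NamedParameterUtils.buildValueArray(parsedSql, paramSource, null);

PreparedStatement ps = connection.prepareStatement(sqlToUse); // caller must close
new ArgumentPreparedStatementSetter(args).setValues(ps);
// ps now has all named parameters bound and can be handed to MemPOI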
gharchive/issue
2023-01-11T17:28:26
2025-04-01T06:40:27.359729
{ "authors": [ "jfarjona", "schauder" ], "repo": "spring-projects/spring-data-relational", "url": "https://github.com/spring-projects/spring-data-relational/issues/1410", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1522451188
Secured '/graphql' endpoint
Hello, team. Please help me to understand what I'm doing wrong. I'm trying to migrate our Spring Boot application from version 2.7.5 to 3.0.1. We have secured the '/graphql' endpoint using a SecurityFilterChain bean:

.authorizeHttpRequests((authorizeRequests) ->
        authorizeRequests.requestMatchers("/graphql").access(new AdminAuthManager()))

public static class AdminAuthManager implements AuthorizationManager<RequestAuthorizationContext> {

    @Override
    public AuthorizationDecision check(Supplier<Authentication> authentication,
            RequestAuthorizationContext object) {
        if (isAnonymous()) {
            return new AuthorizationDecision(false);
        }
        return new AuthorizationDecision(true);
    }
}

The request is received by my GraphQL controller and the correct response is returned from the DB, but at the end of request execution it is returned to Spring's AuthorizationFilter with an empty SecurityContext, which causes the whole request to fail:

2023-01-06 13:03:27,995 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: c29b9c53cfabab31] 2398385 DEBUG graphql.GraphQL - Execution 'fbce998b-d787-2caa-f857-0441c3b0d249' completed with zero errors
2023-01-06 13:03:27,996 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: c29b9c53cfabab31] 2398386 DEBUG o.s.g.s.webmvc.GraphQlHttpHandler - Execution complete
2023-01-06 13:03:27,996 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: c29b9c53cfabab31] 2398386 DEBUG o.s.w.c.r.async.WebAsyncManager - Started async request
2023-01-06 13:03:27,997 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: c29b9c53cfabab31] 2398387 DEBUG o.s.w.c.r.async.WebAsyncManager - Async result set, dispatch to /admin/graphql
2023-01-06 13:03:27,997 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: c29b9c53cfabab31] 2398387 DEBUG o.s.web.servlet.DispatcherServlet - Exiting but response remains open for further handling
2023-01-06 13:03:27,998 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: f9375945ff600474] 2398388 DEBUG o.s.security.web.FilterChainProxy - Securing POST /graphql
2023-01-06 13:03:28,007 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: eee2beca764645fd] 2398397 DEBUG o.s.s.w.a.AnonymousAuthenticationFilter - Set SecurityContextHolder to anonymous SecurityContext
2023-01-06 13:03:30,972 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: 14b43c683496f98d] 2401362 DEBUG o.s.s.w.a.Http403ForbiddenEntryPoint - Pre-authenticated entry point called. Rejecting access
2023-01-06 13:03:30,974 [Traceid: , SpanId: ] 2401364 DEBUG o.s.security.web.FilterChainProxy - Securing POST /error
......
2023-01-06 13:03:30,980 [Traceid: 63b800023f4fe8e2a1e95d2c2d8981fa, SpanId: 4fe5b407a4134d43] 2401370 DEBUG o.s.s.w.a.AnonymousAuthenticationFilter - Set SecurityContextHolder to anonymous SecurityContext
2023-01-06 13:03:32,319 [Traceid: 63b800023f4fe8e2a1e95d2c2d8981fa, SpanId: a1e95d2c2d8981fa] 2402709 DEBUG o.s.s.w.a.Http403ForbiddenEntryPoint - Pre-authenticated entry point called. Rejecting access

I don't understand why the Authentication is not propagated in the SecurityContext. Thank you for any help.
Could you share a sample application (something we can git clone or download) that shows the issue?
I don't quite... Hello, I've implemented a sample application for you that reproduces the issue. Please download it from https://github.com/idun-corp/neo4j-test — the issue itself is described in HELP.md.
Thanks @marichka-spin for the sample application. I've managed indeed to reproduce the issue, but I'm not familiar enough with the security setup to understand what changed. @rwinch could you have a look?
Here's my current understanding of the problem. The application configures an InternalAuthFilter before the UsernamePasswordAuthenticationFilter; this custom filter "hard codes" a PreAuthenticatedAuthenticationToken authentication in the Spring Security context. I guess this is a simplified version of an existing implementation. This authentication is propagated as expected from the SecurityContextHolder to the GraphQL context during the execution of the GraphQL request. The GraphQL controller com.proptechos.neo4j.graphql.DogQueryResolver is called as expected as well and returns the response object. Once the response has been completed asynchronously in the org.springframework.graphql.server.webmvc.GraphQlHttpHandler, the security context is overwritten with an anonymous user:

2023-01-11T16:37:12.216+01:00 DEBUG 67097 --- [nio-8083-exec-1] o.s.security.web.FilterChainProxy : Securing POST /graphql
2023-01-11T16:37:12.253+01:00 DEBUG 67097 --- [nio-8083-exec-1] o.s.security.web.FilterChainProxy : Secured POST /graphql
2023-01-11T16:37:12.436+01:00 DEBUG 67097 --- [nio-8083-exec-1] o.s.security.web.FilterChainProxy : Securing POST /graphql
2023-01-11T16:37:12.440+01:00 DEBUG 67097 --- [nio-8083-exec-1] o.s.s.w.a.AnonymousAuthenticationFilter : Set SecurityContextHolder to anonymous SecurityContext
2023-01-11T16:37:12.441+01:00 DEBUG 67097 --- [nio-8083-exec-1] o.s.s.w.a.Http403ForbiddenEntryPoint : Pre-authenticated entry point called. Rejecting access

It looks like the async dispatch of the request is bypassing this custom filter. Is there something missing in the security configuration of this application? Do you know why we're getting a different behavior vs. Spring GraphQL 1.0? If I'm not mistaken, we were already completing the GraphQL HTTP request asynchronously in the previous version. I'm wondering if this part of the Spring Security 6.0 upgrade docs is relevant.
@bclozel sorry, have you managed to solve this issue?
@marichka-spin sorry, have you managed to solve this issue?
Is there any update on this issue? I can actually reproduce it.
We currently have the same problem: the SecurityContext is not available after the request filter, so Spring Security blocks the request. By disabling authorization for ASYNC dispatchers, we can work around it:

.authorizeHttpRequests {
    it.dispatcherTypeMatchers(DispatcherType.ASYNC).permitAll()
}

But I'm still not sure; it looks like a bug in Spring GraphQL to me.
Investigating a bit deeper into the problem actually showed me that the root cause is that Spring for GraphQL uses an async request dispatch approach. Since Spring Security 6, the SecurityContext is not "persisted" automatically anymore, and is thus not available for subsequent dispatches. [see: Persisting Authentication] Therefore, we need to manually register the SecurityContextRepository to make the context available for the async dispatcher:

http
    // ...
    .securityContext((securityContext) -> securityContext
        .securityContextRepository(new DelegatingSecurityContextRepository(
            new RequestAttributeSecurityContextRepository(),
            new HttpSessionSecurityContextRepository()
        ))
    );

For me this solved the problem.
Hi, I have the same problem. This solution is not working for me; can you suggest in what direction to dig? By the way, config.dispatcherTypeMatchers(DispatcherType.ASYNC).permitAll(); works fine, but it is not how security should be configured.
Any updates on this issue? Running into the same problem.
Sorry about the delayed response. I believe @StefanMessner is right, as this behavior is due to a Spring Security behavior change that is documented in the migration guide. Adding the following to the sample application "fixes" the issue:

@Bean
public SecurityFilterChain filterChain(HttpSecurity http, InternalAuthFilter internalAuthFilter) throws Exception {
    return http.addFilterBefore(internalAuthFilter, UsernamePasswordAuthenticationFilter.class)
            // disabling explicit save
            .securityContext((securityContext) -> securityContext.requireExplicitSave(false))
            ...

Developers should look into this topic to decide whether changing the default in their application is a good fit. I'm closing this issue as a result.
Hi everyone. Since Spring Security 6.0 the SecurityContext must be explicitly saved by the user if they wish the context to be available in subsequent requests. The built-in authentication mechanisms from Spring Security already do that, and you can configure which SecurityContextRepository should be used to save the SecurityContext. However, when you specify a custom authentication filter without extending AuthenticationFilter/AuthenticationWebFilter, you also have to be aware that you should save the context. Considering the example from https://github.com/idun-corp/neo4j-test, there is an InternalAuthFilter that sets the SecurityContext in the SecurityContextHolder, but it does not save it anywhere else to be available in the ASYNC request. That said, I opened a PR with a suggested change to the implementation of InternalAuthFilter. The 6.0 Migration Guide has some details about this as well. Another solution would be to implement jakarta.servlet.Filter instead of extending OncePerRequestFilter, allowing the filter to be invoked again for the ASYNC dispatcher:

@Component
public class InternalAuthFilter implements Filter {

    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        UserPrincipal principal = UserPrincipal.authenticated();
        // Granting authorities
        List<SimpleGrantedAuthority> grantedAuthorities = principal.getRoles().stream()
                .map(SimpleGrantedAuthority::new)
                .collect(Collectors.toList());
        final PreAuthenticatedAuthenticationToken authentication =
                new PreAuthenticatedAuthenticationToken(principal, null, grantedAuthorities);
        authentication.setAuthenticated(!grantedAuthorities.isEmpty());
        authentication.setDetails(principal);
        SecurityContext context = SecurityContextHolder.createEmptyContext();
        context.setAuthentication(authentication);
        SecurityContextHolder.setContext(context);
        filterChain.doFilter(request, response);
    }

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse,
            FilterChain filterChain) throws IOException, ServletException {
        doFilterInternal((HttpServletRequest) servletRequest, (HttpServletResponse) servletResponse,
                filterChain);
    }
}
gharchive/issue
2023-01-06T11:45:14
2025-04-01T06:40:27.415805
{ "authors": [ "StefanMessner", "asndevever", "bclozel", "jelena-pesevski", "marcusdacoregio", "marichka-spin", "schmoellphi" ], "repo": "spring-projects/spring-graphql", "url": "https://github.com/spring-projects/spring-graphql/issues/594", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
702078874
GH-1587: Option to Correct Transactional Offsets
Resolves https://github.com/spring-projects/spring-kafka/issues/1587
See the javadoc for ConsumerProperties.setFixTxOffsets() for more information.
cherry-pick to 2.5.x
... and cherry-picked to 2.5.x
gharchive/pull-request
2020-09-15T16:28:09
2025-04-01T06:40:27.428192
{ "authors": [ "artembilan", "garyrussell" ], "repo": "spring-projects/spring-kafka", "url": "https://github.com/spring-projects/spring-kafka/pull/1588", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
777561838
GH 920 - Topic-based retry support Please refer to the RetryTopicConfigurer class' JavaDoc for an overview of the functionalities: https://github.com/tomazfernandes/spring-kafka/blob/GH-920/spring-kafka/src/main/java/org/springframework/kafka/retrytopic/RetryTopicConfigurer.java I've separated the code in 5 commits: Pausing partitions in the MessageListenerContainer -> so that we don't pause the entire consumer and end up backing off the other partition's messages more than we should BackOff Manager and ListenerAdapter -> Functionality to read a timestamp header and manage the partition's consumption by listening to events RetryTopic functionality -> Adds the topics / consumers configuration functionality A few style checks I've missed Some improvements to the javadoc Thanks, @tomazfernandes we prefer rebasing PRs rather than adding merge commits (we'll rebase at the end anyway). Thanks, @tomazfernandes we prefer rebasing PRs rather than adding merge commits (we'll rebase at the end anyway). Thanks, @tomazfernandes we prefer rebasing PRs rather than adding merge commits (we'll rebase at the end anyway). Ok @garyrussell, I actually tried that, but couldn’t push or force push to this branch, it seemed to me because of the open PR (not really experienced with rebasing) Should I try squashing this merge / conflict resolution and rebasing again? Or maybe just leave the conflict there as it was? Thanks, @tomazfernandes we prefer rebasing PRs rather than adding merge commits (we'll rebase at the end anyway). Ok @garyrussell, I actually tried that, but couldn’t push or force push to this branch, it seemed to me because of the open PR (not really experienced with rebasing) Should I try squashing this merge / conflict resolution and rebasing again? Or maybe just leave the conflict there as it was? No don't worry; we'll squash and rebase before merging to master any way; weird that you couldn't force push, though. No don't worry; we'll squash and rebase before merging to master any way; weird that you couldn't force push, though. Hi @tomazfernandes I finally found some time to take a quick look at this. It is very impressive and seems to be a complete solution and will be a valuable addition to the framework. I like the approach. It's a huge amount of code to review, though so it will take some time. That said, I am inclined to merge it as-is and, perhaps, document it as an "experimental" feature, at least initially; I am sure we can get it into next month's 2.7.0-M2 milestone, as long as you have time to address a few issues. First a couple of style issues: we wrap our javadocs at column 90 (when possible) and code at 120 (when possible). Before the milestone, we will need at least some test cases; but the more coverage, the better; we don't want to trigger our Sonar coverage gate (we're currently a little short of 80%, the gate is currently 70%). We will need to document the feature in src/reference/asciidoc - it probably deserves a whole new chapter, rather than sprinkling stuff throughout the other sections (aside from documenting the new container properties). We would not need the docs for M2, but the sooner, the better. Our only strict asciidoctor rule is one-sentence-per-line, but you can see the other docs for examples. Thanks again for such a significant contribution!! Hi @tomazfernandes I finally found some time to take a quick look at this. It is very impressive and seems to be a complete solution and will be a valuable addition to the framework. I like the approach. 
It's a huge amount of code to review, though so it will take some time. That said, I am inclined to merge it as-is and, perhaps, document it as an "experimental" feature, at least initially; I am sure we can get it into next month's 2.7.0-M2 milestone, as long as you have time to address a few issues. First a couple of style issues: we wrap our javadocs at column 90 (when possible) and code at 120 (when possible). Before the milestone, we will need at least some test cases; but the more coverage, the better; we don't want to trigger our Sonar coverage gate (we're currently a little short of 80%, the gate is currently 70%). We will need to document the feature in src/reference/asciidoc - it probably deserves a whole new chapter, rather than sprinkling stuff throughout the other sections (aside from documenting the new container properties). We would not need the docs for M2, but the sooner, the better. Our only strict asciidoctor rule is one-sentence-per-line, but you can see the other docs for examples. Thanks again for such a significant contribution!! Hi @garyrussell, that's awesome news! I'm really glad you liked it, thank you very much, I'm really excited about it. Sure, I'll address these issues and implement the tests, no problem. There are a couple of things I've been meaning to improve as well, so I'll get to it. Also, if you have any suggestions please let me know. When would it be a good timeframe for me to finish the changes to get in M2? Hi @garyrussell, that's awesome news! I'm really glad you liked it, thank you very much, I'm really excited about it. Sure, I'll address these issues and implement the tests, no problem. There are a couple of things I've been meaning to improve as well, so I'll get to it. Also, if you have any suggestions please let me know. When would it be a good timeframe for me to finish the changes to get in M2? M2 is currently scheduled for Feb 17: https://github.com/spring-projects/spring-kafka/milestone/135 So, we have a few weeks. M2 is currently scheduled for Feb 17: https://github.com/spring-projects/spring-kafka/milestone/135 So, we have a few weeks. Sounds perfect @garyrussell. Thank you very much for the opportunity. Sounds perfect @garyrussell. Thank you very much for the opportunity. Hi @garyrussell, just a quick update. I've implemented some changes and improvements, as well as more than 90% test coverage, both integration and unitary. What's missing is updating the javadocs and addressing the style changes, as well as the documentation, which I plan on doing this week. Unfortunately I had covid and that set me back a couple of weeks, otherwise I'd probably have everything ready by now. I'll commit the code as is so you can take a look if you want - it won't build with Gradle due to checkstyle issues, but it should build and run normally on IntelliJ or by disabling checkstyle. What would be the deadline for committing the javadoc / style adjustments in order to make it into M2? Thanks! @tomazfernandes Thanks. The PR needs to be clean and reviews complete by end of day February 16. The reference manual (which has now moved to spring-kafka-docs/src/main.asciidoc) should be lowest priority and can miss M2 if necessary (although it would be nice to have). FYI, we are not working this Friday (12th) and next Monday (15th). Thanks for your concern Gary, it wasn't a fun ride at all, but thankfully not as bad as it could have been. I'm well now. About the code, I'll clean up the PR, think I should have it done by tomorrow. 
Then I'll work on the documentation until the 16th, but hopefully I'll have it ready sooner. Of course, if you feel that's too close to the M2 release date we can push it back to the following release, although I'd really like if we can make it for M2. Hi @garyrussell, I've cleaned the PR and formatted the javadocs and code the way you asked. If you could take a look, I think it should be fine now. I'll start working on the documentation soon to have it ready for M2, please let me know if there's anything else that needs to be done or anything I might have missed. Thanks again for the opportunity! Please rebase to master; thanks. @garyrussell, seems like this time I got the rebase right. I'll start working on the documentation. Thanks. Hi @garyrussell, I've created the documentation for the non-blocking retry functionality and updated the relevant parts of the previous documentations. Hope it's ok, please let me know if there's anything else needed. Thanks! Thanks a lot @garyrussell! I'll make these adjustments today after my 9-5. Also, there are a few other improvements / functionalities I'd like do add, such as: make DLT and DLT processing optional include the possibility of configuring a global timeout for the retries include the ability to configure retries from the application.properties files some other minor things Do you think it's worth trying to get those into M2, or maybe go with what we have now and create a new PR afterwards for the next release (considering all goes well)? I'd probably have it done by Tuesday (since you won't be working next Friday and Monday). @garyrussell thanks for your comments, I agree on the DLT part. For the group id, it gets suffixed with the retry topic's suffix as long as the user provides a groupId for the main topic. You can check it out in RetryTopicConfigurer.java:431, that's one of the functions for the customiser added to the KafkaListenerABPP processListener method. So each retry topic / dlt should get it's own consumer group. I think there might be some minor adjustments to make in that logic to cover all possibilities for the main endpoint, maybe we can look into that some time after M2. But it should be good for most use cases. Also, I think I should be able to retrieve the application.properties from the ApplicationContext in order to create a retry configuration, shouldn't I? Or even create a bean for that purpose. Maybe there's a more "Springy" way to do that, which probably is the case for other places of the code as well, feel free to point them out as you see them. I'll take a look into how they're handled internally by the Spring Boot app. As for the other improvements, I think I'll try to code them until Tuesday and maybe commit it to a separate branch, if this PR hasn't been merged by then. Then you can decide whether or not it's small enough for you to review and merge to M2, or if it goes to the release after that - I'll be good with either. Also maybe I can separate the commits so it doesn't have to be an all or nothing decision. Thanks a lot again! I'll add this groupId part to the documentation when I make the changes later today. Also, I think I should be able to retrieve the application.properties from the ApplicationContext in order to create a retry configuration, shouldn't I? Or even create a bean for that purpose. Right, the properties are in the environment, but it's separation of concerns; application.properties/.yml is processed by Boot's auto configuration in KafkaAnnotationDrivenConfiguration and KafkaProperties . 
I think it would be too confusing for automatic configuration via properties to be handled in two different places. Also Boot has much goodness in its property mapping - e.g. some-timeout: 5s vs. 5000ms for delays, etc., and completion hints/validation in IDE editors; we don't want to reinvent all that here and, if it's not supported, it would be inconvenient for users. I am sure the Boot team will be receptive to a PR submitted to enhance the auto configuration for this important enhancement.

Sounds great @garyrussell, I'll look into adding it to Spring Boot's properties then, probably after M2. Thank you very much for your support.

@tomazfernandes you don't need to comment "done" on each change. It spams our inboxes. Just comment if you disagree about something. Thanks.

Sorry about that @garyrussell, thanks for letting me know.

Hi @garyrussell, just a quick update. I have:

- fixed a couple of bugs in the original PR, other than that everything is the same there.
- created a new PR with a few improvements and new features, and updated the documentation accordingly.

If you want to see the diff between the new and the original PR you can check it out here: https://github.com/tomazfernandes/spring-kafka/pull/2

I know you're probably having a busy day ahead of the new version so feel free to choose whether to include the second PR in M2 or to push it back to the next version. I think it should add value, but I know it's too close to the release date, so I'm good with either way. I also realised I haven't written anything for the 'What's new' part of the documentation, should we write something there? If so, is that something you'd like me to do, or do you prefer doing it yourself? FYI, today is a holiday here, and I'm under quarantine, so I should be available for anything that comes up. Thanks once again for the opportunity, it's been a very exciting experience!

Thanks guys for the very precious work that you did/are doing. I am very glad, as this is the exact feature I need right now :). One question though, as I cannot properly configure the back-off for the retries. My config is as follows:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaRetryListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(retryConsumerFactory());
    return factory;
}

private ConsumerFactory<String, String> retryConsumerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    configProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    configProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(configProps);
}

And the listener annotated method is:

@KafkaListener(topics = "test-topic", containerFactory = "kafkaRetryListenerContainerFactory", groupId = "test-group")
@RetryableTopic(
        backoff = @Backoff(100L),
        attempts = 10,
        kafkaTemplate = "kafkaTemplate",
        fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC
)
public void consumeRetry30m(String msg) {
    log.info("Received message: {}", msg);
    throw new RuntimeException();
}

The case I encounter is that the delay seems to be applied only for the first retry; then it goes after ~10s for each consecutive attempt. I've tried various configurations, e.g. FixedDelayStrategy.MULTIPLE_TOPICS, higher delay, maxDelay.
Is there some sample code, like a PoC, showing how to use this e2e (or some integration test maybe)? Thanks again!

Hello @Duncol, thanks for trying out this feature! Which Spring Kafka version are you using? There was an issue with the delay precision that should have been fixed in RC1. Also note that for a 100ms delay you'd have to set the pollTimeout to at most 50ms, as well as partitionIdleEvent, which might not be ideal. Is this delay amount a requirement? In RC1 it should have more accurate timings OOTB with delays above 1 second.

Hi Tomaz, thanks for the quick response! I am using spring-kafka 2.7.0-M2. This is not a requirement (the requirement is far less frequent TBH), just wanted to check how this is behaving and stumbled across this weird, nearly linear 'magic' 10s delay, which I don't know where it comes from. I've tried setting this for 60s and at the start it seems quite fine, but then it happened to be quite spontaneous: (looks like the 2nd and 3rd msg is delayed by the configured backoff + 10s) but when message throughput is less 'quiet' (seems like less than 10s in between), I get an almost exact 10s delay for each: Maybe my configuration is not sufficient somehow? I've provided solely the @Backoff(60000L) for this case. Is there something to be configured for the kafka perhaps, or more in the retry feature itself?

@Duncol Try upgrading to 2.7.0-RC1; with M2, it was tightly coupled to the poll timeout and idle interval; I saw similar issues with M2 and @tomazfernandes made some improvements that are included in RC1.

@garyrussell @tomazfernandes I've upgraded to the RC1, but now it does not seem to retry at all. I still get the KafkaBackoffException (more concise in RC1), logging the approx backoff, but nothing happens after that (needs a restart of the service to kick off the next (single) retry). M2 works well for my configuration (except the aforementioned issues with backoff times) and the only thing I've changed was the M2 -> RC1 and config data types (int/long -> String) in @RetryableTopic

More info about my approach: My retry is based on RuntimeExceptions (exception thrown -> should retry, no exception -> no further retries). The first retry is initiated in a try/catch around the core service; the catch block sends the wrapped message to the first retry topic via KafkaTemplate. When the first retry topic is exhausted and the wrapped msg lands in the DLT, I pass that msg to another retry topic (via KafkaTemplate, as for the initial retry topic). After the second retry topic 'completes', the DLT just logs the msg, with no further passing.

Hi @Duncol, thanks a lot for bringing this up! There's indeed a bug when we use the same factory for the KafkaListener and RetryableTopic annotations. It'll be fixed ASAP, but for now as a workaround if you specify a different factory instance for the RetryableTopic annotation it should work. This scenario will be added to our integration tests so that it doesn't happen again. Please let us know how it turns out. Thanks again!
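(For reference, a minimal sketch of the two suggestions above - the separate-factory workaround and the poll-timeout tuning for sub-second back-offs. This is not code from the thread: the bean and method names are illustrative, imports are omitted as elsewhere in this thread, and setIdlePartitionEventInterval is assumed to be the 2.7.x ContainerProperties setter behind the "partitionIdleEvent" Tomaz mentions.)

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> retryTopicListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // For a 100ms back-off, both values must be at most half the delay, so the
    // paused retry partitions are polled and resumed quickly enough (assumed setter names).
    factory.getContainerProperties().setPollTimeout(50L);
    factory.getContainerProperties().setIdlePartitionEventInterval(50L);
    return factory;
}

// The workaround: point @RetryableTopic at its own factory instance,
// different from the one the @KafkaListener uses.
@RetryableTopic(backoff = @Backoff(100L), attempts = "10",
        listenerContainerFactory = "retryTopicListenerContainerFactory")
@KafkaListener(topics = "test-topic", groupId = "test-group")
public void listenWithRetries(String msg) {
    throw new RuntimeException("force a retry"); // illustrative failure
}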
@tomazfernandes There still seems to be something amiss - when I changed my test app to use a different factory, I see this:

2021-04-12 16:49:44.152 INFO 35392 --- [ kgh920-0-C-1] com.example.demo.Kgh920Application : foo from kgh920
2021-04-12 16:49:44.657 INFO 35392 --- [etry-1000-0-C-1] com.example.demo.Kgh920Application : foo from kgh920-retry-1000
2021-04-12 16:49:45.167 INFO 35392 --- [etry-2000-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-kgh920-retry-2000-1, groupId=kgh920-retry-2000] Seeking to offset 3 for partition kgh920-retry-2000-0
2021-04-12 16:49:45.167 WARN 35392 --- [etry-2000-0-C-1] essageListenerContainer$ListenerConsumer : Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic kgh920-retry-2000 is not ready for consumption, backing off for approx. 490 millis.
2021-04-12 16:49:47.175 INFO 35392 --- [etry-2000-0-C-1] com.example.demo.Kgh920Application : foo from kgh920-retry-2000
2021-04-12 16:49:47.681 INFO 35392 --- [etry-4000-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-kgh920-retry-4000-4, groupId=kgh920-retry-4000] Seeking to offset 3 for partition kgh920-retry-4000-0
2021-04-12 16:49:47.681 WARN 35392 --- [etry-4000-0-C-1] essageListenerContainer$ListenerConsumer : Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic kgh920-retry-4000 is not ready for consumption, backing off for approx. 1494 millis.
2021-04-12 16:49:49.688 INFO 35392 --- [etry-4000-0-C-1] com.example.demo.Kgh920Application : foo from kgh920-retry-4000
2021-04-12 16:49:50.200 INFO 35392 --- [etry-8000-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-kgh920-retry-8000-5, groupId=kgh920-retry-8000] Seeking to offset 3 for partition kgh920-retry-8000-0
2021-04-12 16:49:50.200 WARN 35392 --- [etry-8000-0-C-1] essageListenerContainer$ListenerConsumer : Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic kgh920-retry-8000 is not ready for consumption, backing off for approx. 3488 millis.
2021-04-12 16:49:53.699 INFO 35392 --- [etry-8000-0-C-1] com.example.demo.Kgh920Application : foo from kgh920-retry-8000
2021-04-12 16:49:54.210 INFO 35392 --- [gh920-dlt-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-kgh920-dlt-6, groupId=kgh920-dlt] Seeking to offset 3 for partition kgh920-dlt-0
2021-04-12 16:49:54.210 WARN 35392 --- [gh920-dlt-0-C-1] essageListenerContainer$ListenerConsumer : Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic kgh920-dlt is not ready for consumption, backing off for approx. 7489 millis.
2021-04-12 16:50:01.705 INFO 35392 --- [gh920-dlt-0-C-1] com.example.demo.Kgh920Application : foo from kgh920-dlt

@RetryableTopic(attempts = "5", backoff = @Backoff(delay = 1000, multiplier = 2.0), listenerContainerFactory = "retryFactory")
@KafkaListener(id = "kgh920", topics = "kgh920")
public void listen(String in, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    LOG.info(in + " from " + topic);
    throw new RuntimeException("test");
}

The first retry is +500ms instead of 1s, the next retry is +2s (correct), the next is +2.5s instead of 4, the next retry is +4s instead of 8. I then see 8 seconds before it goes to the DLT - in earlier versions, I am sure that it went straight to the DLT after the 8 second delivery attempt failed.

@garyrussell, there are two things going on there: the first is a bug I've found now where the current topic's delay is being used instead of the next topic's - so it'd wrongly be 0, 1s, 2s, 4s, 8s. The 500ms+- differences are related to the poll timeout, which has to be a lot smaller in the retry topics if we have low backoffs such as 1s - not much time to go through the whole pause - partition idle event - resume container - resume consumer process, considering it takes about 500ms to get there in the first place. I've changed the default configuration to do that in the latest PR regarding this. I've already fixed this bug and will submit a PR after I run the tests. If you can test this out again tomorrow with the fixes it'll be great, I think everything should work as expected. I'll also open the PR for the factories bug. Thanks!

@tomazfernandes Separate ListenerFactory works, thanks for the quick response, I can move forward now :). Just one side question (maybe more for you @garyrussell) - what is the planned release date of spring-kafka 2.7.0?

@Duncol Later today https://github.com/spring-projects/spring-kafka/milestones

Found something strange when writing an IT for my retry feature - seems like messages addressed for the -retry topics are doubled (except the first one)? I might be raising a false alarm due to some misunderstanding of deeper internals (still learning Kafka), but thought it would be worth mentioning.

Hi @Duncol! Can you share the code where you're getting this list from? Is it a batch consumer? Thanks for mentioning, your feedback is very important for us!

@tomazfernandes (just mind it's WIP and I'm looking for a cleaner way to register a RecordInterceptor just for tests :) ) The previous screen shows this collection after all retries.

Hmm, that's strange. What's the "callbackKafkaListenerContainerFactory" for? The retry topic's mechanism relies on the battle-tested dead letter publishing recoverer to forward messages, and we didn't see any behavior like this before. The only possibility I see from the feature side would be if we're registering two consumers per topic, which again never happened in our tests. So what comes to mind is this: Can you check if you have two consumer instances for each topic? You might be able to notice that by putting a breakpoint in the pollAndInvoke method in the KafkaMessageListenerContainer class and checking the instance id for each message consumption. The other possibility I see would be if you're for some reason registering two interceptors for the same factory instance; not really sure how to check for that, but probably a breakpoint in the interceptor assignment will do.
@garyrussell, any thoughts on this?

According to your screenshot all the interceptors are placing their records into the same consumerRecords collection. So, it might not be a surprise to see the same data in the tests when you produce a record into a topic. Rings a bell?

@tomazfernandes 'callbackKafkaListenerContainerFactory' is for our main @KafkaListener. I'm not placing the @RetryableTopic directly over this listener - instead, I have a separate handler with two listening methods (each having @KafkaListener + @RetryableTopic over them). I also have one method with @DltHandler. It goes something like this:

(external service)
  --msg--> @KafkaListener(containerFactory = "callbackKafkaListenerContainerFactory")
  --on-error--> @KafkaListener(topics = "first-topic", containerFactory = "callbackRetryKafkaListenerContainerFactory") @RetryableTopic(listenerContainerFactory = callbackRetryAuxKafkaListenerContainerFactory)
  --retry-exhaustion--> @DltHandler
  --send-to-next-retry-topic--> @KafkaListener(topics = "first-topic", containerFactory = "callbackRetryKafkaListenerContainerFactory") @RetryableTopic(listenerContainerFactory = callbackRetryAuxKafkaListenerContainerFactory)
  --retry-exhaustion--> @DltHandler (do nothing more)

callbackRetryKafkaListenerContainerFactory and callbackRetryAuxKafkaListenerContainerFactory share the same ConsumerFactory. Each of those methods has its own logging, which proves that the retry count is correct (i.e. no doubled messages). I could perhaps place @RetryableTopic directly over the @KafkaListener(containerFactory = "callbackKafkaListenerContainerFactory"), but I wanted to decouple the code this way.

@artembilan so in each retry, the message goes through both @KafkaListener's and @RetryableTopic's listenerContainerFactory (and thus the configured consumer)? Such a scenario would match my outcome. I would expect that only the initial msg arrival is consumed by the consumer configured for callbackRetryKafkaListenerContainerFactory (i.e. @KafkaListener) and the retries are just utilizing callbackRetryAuxKafkaListenerContainerFactory's (i.e. @RetryableTopic) consumer for single message consumption. Is this the way the retry works?

Hmm, well, then you have two listeners per topic, hence two instances of the same message in your collection... That's the expected behavior, right? I didn't understand what exactly you're trying to achieve with this pattern: a single @KafkaListener method with @RetryableTopic should suffice. Also, unless that's somehow a requirement, you don't need to handle forwarding to the next topic manually in the DLT method; instead you should let the exception go all the way back to the listener (outside of any try/catch) and the framework will handle message forwarding for you. Makes sense?

@Duncol Perhaps you misunderstood @tomazfernandes when we had that bug (needing two factories). This is what he meant...
@SpringBootApplication
public class Kgh920Application {

    private static final Logger LOG = LoggerFactory.getLogger(Kgh920Application.class);

    public static void main(String[] args) {
        SpringApplication.run(Kgh920Application.class, args);
    }

    @RetryableTopic(attempts = "5", backoff = @Backoff(delay = 1000, multiplier = 2.0), listenerContainerFactory = "retryFactory")
    @KafkaListener(id = "kgh920", topics = "kgh920")
    public void listen(String in, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        LOG.info(in + " from " + topic);
        throw new RuntimeException("test");
    }

    @DltHandler
    public void dlt(String in, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        LOG.info(in + " from " + topic);
    }

    @Bean
    ConcurrentKafkaListenerContainerFactory<?, ?> retryFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ObjectProvider<ConsumerFactory<Object, Object>> kafkaConsumerFactory,
            KafkaProperties kafkaProps) {

        ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory
                .getIfAvailable(() -> new DefaultKafkaConsumerFactory<>(kafkaProps.buildConsumerProperties())));
        return factory;
    }

}

i.e. specify a different factory on the retry annotation. Using a different factory is no longer needed now that the bug has been fixed, but if you still want to do it, it needs to go on the retry annotation.

> Hmm, well, then you have two listeners per topic, hence two instances of the same message in your collection... That's the expected behavior, right? I didn't understand what exactly you're trying to achieve with this pattern: a single @KafkaListener method with @RetryableTopic should suffice. Also, unless that's somehow a requirement, you don't need to handle forwarding to the next topic manually in the DLT method; instead you should let the exception go all the way back to the listener (outside of any try/catch) and the framework will handle message forwarding for you. Makes sense?

Given the fact that there is a '-retry' topic created (single topic strategy) for the retry, I assumed that the listener for the initial topic (without the '-retry' suffix) was somehow consuming messages from the '-retry' topics, thus the duplicate msg. I'll check it with the fixes, thanks a lot again, much appreciate the work you are doing!

The manual forward to the next topic is due to a different backoff/attempt requirement. Can it be reconfigured on the same retry somehow, as a 'second tier approach' maybe?

Hi @Duncol, sorry, I totally missed this message, was cleaning up the inbox and found it now. How is the feature working for you, is it behaving as expected? Can you share more details on your retry requirements, such as number of attempts, delays, etc? Maybe we can work something out to include this second tier. Thanks and sorry again for taking this long to reply.

Hi @tomazfernandes, sorry for the lack of response, but I was quite occupied with some other tasks. I'll try to implement your suggestion (which looks quite nice and seems nearly a golden bullet for our case - and any further granular adjustments that we might need regarding changing the back-off) and let you know ASAP. One other thing I've stumbled across which might be interesting is that it seems that @RetryableTopic creates an additional consumer with the same clientId.
This causes the app to throw an exception (logged as WARN) regarding initializing an MBean ('Already Exists'):

javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-info,id=retry-30m-0
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436) ~[?:?]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1855) ~[?:?]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:955) ~[?:?]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:890) ~[?:?]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:320) ~[?:?]
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) ~[?:?]
at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:64) ~[kafka-clients-2.6.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:814) ~[kafka-clients-2.6.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:632) ~[kafka-clients-2.6.0.jar:?]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createRawConsumer(DefaultKafkaConsumerFactory.java:366) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:334) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:310) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:277) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:254) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:699) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:317) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:384) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:206) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:384) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:312) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:257) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.3.jar:5.3.3]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.3.jar:5.3.3]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.3.jar:5.3.3]
at java.lang.Iterable.forEach(Iterable.java:75) ~[?:?]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.3.jar:5.3.3]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.3.jar:5.3.3]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:940) ~[spring-context-5.3.3.jar:5.3.3]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:591) ~[spring-context-5.3.3.jar:5.3.3]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:767) ~[spring-boot-2.4.2.jar:2.4.2]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759) ~[spring-boot-2.4.2.jar:2.4.2]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:426) ~[spring-boot-2.4.2.jar:2.4.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:326) ~[spring-boot-2.4.2.jar:2.4.2]
at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:123) ~[spring-boot-test-2.4.2.jar:2.4.2]
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99) ~[spring-test-5.3.3.jar:5.3.3]
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:124) ~[spring-test-5.3.3.jar:5.3.3]
at org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:124) ~[spring-test-5.3.3.jar:5.3.3]
at org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:190) ~[spring-test-5.3.3.jar:5.3.3]
at org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:132) ~[spring-test-5.3.3.jar:5.3.3]
at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:244) ~[spring-test-5.3.3.jar:5.3.3]
at org.springframework.test.context.junit.jupiter.SpringExtension.postProcessTestInstance(SpringExtension.java:138) ~[spring-test-5.3.3.jar:5.3.3]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$6(ClassBasedTestDescriptor.java:350) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.executeAndMaskThrowable(ClassBasedTestDescriptor.java:355) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$7(ClassBasedTestDescriptor.java:350) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
at java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:312) ~[?:?]
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735) ~[?:?]
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734) ~[?:?]
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) ~[?:?]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeTestInstancePostProcessors(ClassBasedTestDescriptor.java:349) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$instantiateAndPostProcessTestInstance$4(ClassBasedTestDescriptor.java:270) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.instantiateAndPostProcessTestInstance(ClassBasedTestDescriptor.java:269) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$2(ClassBasedTestDescriptor.java:259) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at java.util.Optional.orElseGet(Optional.java:362) ~[?:?]
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$3(ClassBasedTestDescriptor.java:258) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at org.junit.jupiter.engine.execution.TestInstancesProvider.getTestInstances(TestInstancesProvider.java:31) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$prepare$0(TestMethodTestDescriptor.java:101) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.prepare(TestMethodTestDescriptor.java:100) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.prepare(TestMethodTestDescriptor.java:65) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$prepare$1(NodeTestTask.java:111) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.prepare(NodeTestTask.java:111) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:79) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) ~[junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) [junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) [junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) [junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) [junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) [junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) [junit-platform-engine-1.7.0.jar:1.7.0]
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:170) [junit-platform-launcher-1.2.0.jar:1.2.0]
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:154) [junit-platform-launcher-1.2.0.jar:1.2.0]
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:90) [junit-platform-launcher-1.2.0.jar:1.2.0]
at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:142) [surefire-junit-platform-2.22.0.jar:2.22.0]
at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:117) [surefire-junit-platform-2.22.0.jar:2.22.0]
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383) [surefire-booter-2.22.0.jar:2.22.0]
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344) [surefire-booter-2.22.0.jar:2.22.0]
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) [surefire-booter-2.22.0.jar:2.22.0]
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417) [surefire-booter-2.22.0.jar:2.22.0]

I've tried a separate listenerContainerFactory for the @RetryableTopic with an explicit clientId, different from the one I set for the particular @KafkaListener, but the one from @KafkaListener takes precedence. When I comment out the @RetryableTopic, those additional consumers are not being registered and the problem disappears. Have you encountered something similar maybe? Those WARNs do not break anything, but as I've read, it may cut off some metrics and it pollutes our logfile a bit :)

Hi @Duncol, thanks for bringing this up! I'm having trouble reproducing your issue. When we specify a clientIdPrefix in the @KafkaListener annotation, the prefix gets suffixed by the topic's suffix (e.g. retry-250). And when we don't, Kafka's ConsumerConfig class has a monotonically increasing number that it appends to the consumer's id so that they're unique at least within the same app instance (line 576). In my tests all consumers end up with different client ids. Which Spring Kafka version are you using? Can you share more details on how we can reproduce the issue? Maybe @garyrussell has something to add? Thanks again for your input!

I can't reproduce it either; when I enable info logging for sample-04 I get these client ids when I add clientIdPrefix = "test" to the listener:

client.id = test-retry-10000-0
client.id = test-retry-4000-0
client.id = test-0
client.id = test-retry-2000-0
client.id = test-dlt-0
client.id = test-retry-8000-0

The exception seems to indicate you have multiple listeners with the same clientIdPrefix. If you can put together a minimal example that exhibits the behavior, we can take a look.
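(For reference, a minimal sketch of what the last two comments describe, in the style of Gary's sample above; the method body and failure are illustrative, imports are omitted as elsewhere in this thread.)

// With clientIdPrefix = "test", the framework derives one client id per topic
// (test-0, test-retry-2000-0, ..., test-dlt-0), so MBean names cannot clash
// as long as each @KafkaListener in the application uses its own prefix.
@RetryableTopic(attempts = "5", backoff = @Backoff(delay = 1000, multiplier = 2.0))
@KafkaListener(id = "kgh920", topics = "kgh920", clientIdPrefix = "test")
public void listen(String in, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    LOG.info(in + " from " + topic);
    throw new RuntimeException("test"); // illustrative failure to drive the retries
}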
gharchive/pull-request
2021-01-03T02:49:21
2025-04-01T06:40:27.492290
{ "authors": [ "Duncol", "artembilan", "garyrussell", "tomazfernandes" ], "repo": "spring-projects/spring-kafka", "url": "https://github.com/spring-projects/spring-kafka/pull/1664", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
126955355
Tomcat error when trying to run my app as Spring Boot in Spring Tool suite

2016-01-15 15:34:08.625 INFO 7604 --- [ main] .s.p.TmoSpaceProductofferingsApplication : No active profile set, falling back to default profiles: default
2016-01-15 15:34:08.644 INFO 7604 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@7a55af6b: startup date [Fri Jan 15 15:34:08 EST 2016]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@223d2c72
2016-01-15 15:34:11.031 INFO 7604 --- [ main] o.s.b.f.xml.XmlBeanDefinitionReader : Loading XML bean definitions from class path resource [cassandra-config.xml]
2016-01-15 15:34:11.341 INFO 7604 --- [ main] o.s.b.f.s.DefaultListableBeanFactory : Overriding bean definition for bean 'beanNameViewResolver' with a different definition: replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration.class]] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter.class]]
2016-01-15 15:34:11.437 INFO 7604 --- [ main] o.s.b.f.s.DefaultListableBeanFactory : Overriding bean definition for bean 'ignoredPathsWebSecurityConfigurerAdapter' with a different definition: replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.security.SpringBootWebSecurityConfiguration; factoryMethodName=ignoredPathsWebSecurityConfigurerAdapter; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/security/SpringBootWebSecurityConfiguration.class]] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.actuate.autoconfigure.ManagementWebSecurityAutoConfiguration; factoryMethodName=ignoredPathsWebSecurityConfigurerAdapter; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/actuate/autoconfigure/ManagementWebSecurityAutoConfiguration.class]]
2016-01-15 15:34:11.450 INFO 7604 --- [ main] a.ConfigurationClassBeanDefinitionReader : Skipping bean definition for [BeanMethod:name=cassandraTemplate,declaringClass=org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration]: a definition for bean 'cassandraTemplate' already exists. This top-level bean definition is considered as an override.
2016-01-15 15:34:12.132 INFO 7604 --- [ main] c.s.PropertySourcesPlaceholderConfigurer : Loading properties file from class path resource [cassandra-cloud.properties]
2016-01-15 15:34:12.146 INFO 7604 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=972e8ed7-ce81-3d36-b368-7a743778b055
2016-01-15 15:34:12.158 INFO 7604 --- [ main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2016-01-15 15:34:12.307 INFO 7604 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [class org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$93eaf2bd] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2016-01-15 15:34:12.726 INFO 7604 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8080 (http)
2016-01-15 15:34:12.745 INFO 7604 --- [ main] o.apache.catalina.core.StandardService : Starting service Tomcat
2016-01-15 15:34:12.747 INFO 7604 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.0.30
2016-01-15 15:34:13.014 INFO 7604 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2016-01-15 15:34:13.014 INFO 7604 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 4370 ms
2016-01-15 15:34:13.724 ERROR 7604 --- [cat-startStop-1] org.apache.catalina.core.ContainerBase : A child container failed during start

java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost].StandardContext[]]
at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_65]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) [na:1.8.0_65]
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:916) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:871) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1408) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1398) [tomcat-embed-core-8.0.30.jar:8.0.30]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_65]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_65]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost].StandardContext[]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154) [tomcat-embed-core-8.0.30.jar:8.0.30]
... 6 common frames omitted
Caused by: java.lang.NoSuchMethodError: javax.servlet.ServletContext.addFilter(Ljava/lang/String;Ljavax/servlet/Filter;)Ljavax/servlet/FilterRegistration$Dynamic;
at org.springframework.boot.context.embedded.AbstractFilterRegistrationBean.onStartup(AbstractFilterRegistrationBean.java:225) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.FilterRegistrationBean.onStartup(FilterRegistrationBean.java:41) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.selfInitialize(EmbeddedWebApplicationContext.java:225) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.access$000(EmbeddedWebApplicationContext.java:85) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext$1.onStartup(EmbeddedWebApplicationContext.java:209) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatStarter.onStartup(TomcatStarter.java:55) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5244) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) [tomcat-embed-core-8.0.30.jar:8.0.30]
... 6 common frames omitted

2016-01-15 15:34:13.727 ERROR 7604 --- [ main] org.apache.catalina.core.ContainerBase : A child container failed during start

java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost]]
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_65]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_65]
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:916) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:441) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:769) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.startup.Tomcat.start(Tomcat.java:344) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.initialize(TomcatEmbeddedServletContainer.java:89) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.<init>(TomcatEmbeddedServletContainer.java:76) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getTomcatEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:462) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:168) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.createEmbeddedServletContainer(EmbeddedWebApplicationContext.java:160) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:130) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:532) [spring-context-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:764) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.SpringApplication.doRun(SpringApplication.java:357) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:305) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1124) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1113) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at com.tmobile.space.productofferings.TmoSpaceProductofferingsApplication.main(TmoSpaceProductofferingsApplication.java:12) [classes/:na]
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1408) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1398) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_65]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_65]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_65]
Caused by: org.apache.catalina.LifecycleException: A child container failed during start
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:924) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:871) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) [tomcat-embed-core-8.0.30.jar:8.0.30]
... 6 common frames omitted

2016-01-15 15:34:13.728 WARN 7604 --- [ main] ationConfigEmbeddedWebApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
2016-01-15 15:34:13.736 ERROR 7604 --- [ main] o.s.boot.SpringApplication : Application startup failed

org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:133) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:532) ~[spring-context-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:764) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.SpringApplication.doRun(SpringApplication.java:357) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:305) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1124) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1113) [spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at com.tmobile.space.productofferings.TmoSpaceProductofferingsApplication.main(TmoSpaceProductofferingsApplication.java:12) [classes/:na]
Caused by: org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.initialize(TomcatEmbeddedServletContainer.java:99) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.<init>(TomcatEmbeddedServletContainer.java:76) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getTomcatEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:462) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:168) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.createEmbeddedServletContainer(EmbeddedWebApplicationContext.java:160) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:130) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
... 8 common frames omitted
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardServer[-1]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.startup.Tomcat.start(Tomcat.java:344) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.initialize(TomcatEmbeddedServletContainer.java:89) ~[spring-boot-1.3.1.RELEASE.jar:1.3.1.RELEASE]
... 13 common frames omitted
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardService[Tomcat]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:769) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
... 15 common frames omitted
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:441) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
... 17 common frames omitted
Caused by: org.apache.catalina.LifecycleException: A child container failed during start
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:924) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ~[tomcat-embed-core-8.0.30.jar:8.0.30]
... 19 common frames omitted

2016-01-15 15:34:13.744 INFO 7604 --- [ Thread-1] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@223d2c72: startup date [Fri Jan 15 15:34:07 EST 2016]; root of context hierarchy

Use javax.servlet servlet-api 3.0 or above; the NoSuchMethodError on ServletContext.addFilter is a Servlet 3.0 method, so its absence means an older Servlet 2.x API jar is on the classpath ahead of the Servlet 3.1 classes shipped with the embedded Tomcat 8.
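(A hedged sketch of that fix in pom.xml terms - the dependency that drags in the old jar varies by project, so the group/artifact in the exclusion below are placeholders; running mvn dependency:tree helps locate the actual culprit.)

<!-- Either exclude the legacy Servlet 2.x API wherever it is pulled in transitively... -->
<dependency>
    <groupId>some.group</groupId><!-- placeholder: whichever dependency brings in javax.servlet:servlet-api -->
    <artifactId>some-artifact</artifactId>
    <exclusions>
        <exclusion>
            <groupId>javax.servlet</groupId>
            <artifactId>servlet-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<!-- ...or declare the Servlet 3.x API explicitly, as provided, so the embedded Tomcat's classes win at runtime -->
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
</dependency>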
gharchive/issue
2016-01-15T20:47:22
2025-04-01T06:40:27.550252
{ "authors": [ "Riyaz71234", "pradeepgoudra" ], "repo": "spring-projects/spring-loaded", "url": "https://github.com/spring-projects/spring-loaded/issues/162", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
504541005
Surefire failing on Java 13

How to reproduce:

git clone https://github.com/spring-projects/spring-petclinic.git
cd spring-petclinic

$ ./mvnw package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------< org.springframework.samples:spring-petclinic >------------
[INFO] Building petclinic 2.1.0.BUILD-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- jacoco-maven-plugin:0.8.2:prepare-agent (default) @ spring-petclinic ---
[INFO] argLine set to -javaagent:/Users/pkumari/.m2/repository/org/jacoco/org.jacoco.agent/0.8.2/org.jacoco.agent-0.8.2-runtime.jar=destfile=/Users/pkumari/java/spring-petclinic/target/jacoco.exec
[INFO]
[INFO] --- git-commit-id-plugin:2.2.6:revision (default) @ spring-petclinic ---
[INFO] dotGitDirectory /Users/pkumari/java/spring-petclinic/.git
[INFO] git.build.user.name Priti Kumari
[INFO] git.build.user.email pkumari@redhat.com
[INFO] git.branch master
[INFO] --always = true
[INFO] --dirty = -dirty
[INFO] --abbrev = 7
[INFO] Tag refs [[Ref[refs/tags/1.5.x=c36452a2c34443ae26b4ecbba4f149906af14717]]]
[INFO] Created map: [{}]
[INFO] evalCommit is [f284b29501946f933ee4473ecf59096d013df31c]
[INFO] git.commit.id.describe f284b29
[INFO] git.commit.id f284b29501946f933ee4473ecf59096d013df31c
[INFO] git.commit.id.abbrev f284b29
[INFO] git.dirty false
[INFO] git.commit.user.name Stephane Nicoll
[INFO] git.commit.user.email snicoll@pivotal.io
[INFO] git.commit.message.full Upgrade to Spring Boot 2.1.9.RELEASE
[INFO] git.commit.message.short Upgrade to Spring Boot 2.1.9.RELEASE
[INFO] git.commit.time 2019-10-03T17:29:27+0530
[INFO] git.remote.origin.url https://github.com/spring-projects/spring-petclinic.git
[INFO] git.tags
[INFO] evalCommit is [f284b29501946f933ee4473ecf59096d013df31c]
[INFO] Tag refs [[Ref[refs/tags/1.5.x=c36452a2c34443ae26b4ecbba4f149906af14717]]]
[INFO] Created map: [{}]
[INFO] git.closest.tag.name
[INFO] evalCommit is [f284b29501946f933ee4473ecf59096d013df31c]
[INFO] Tag refs [[Ref[refs/tags/1.5.x=c36452a2c34443ae26b4ecbba4f149906af14717]]]
[INFO] Created map: [{}]
[INFO] git.closest.tag.commit.count
[INFO] git.total.commit.count 665
[INFO] git.build.time 2019-10-09T15:19:55+0530
[INFO] git.build.version 2.1.0.BUILD-SNAPSHOT
[INFO] git.build.host Pritis-MacBook-Pro
[INFO] git.commit.id.describe-short f284b29
[INFO] found property git.tags
[INFO] found property git.closest.tag.commit.count
[INFO] found property git.build.version
[INFO] found property git.commit.user.name
[INFO] found property git.commit.id.abbrev
[INFO] found property git.branch
[INFO] found property git.build.host
[INFO] found property git.commit.id.describe-short
[INFO] found property git.total.commit.count
[INFO] found property git.commit.id.describe
[INFO] found property git.build.user.email
[INFO] found property git.commit.id
[INFO] found property git.commit.message.short
[INFO] found property git.commit.user.email
[INFO] found property git.closest.tag.name
[INFO] found property git.commit.time
[INFO] found property git.build.time
[INFO] found property git.build.user.name
[INFO] found property git.dirty
[INFO] found property git.commit.message.full
[INFO] found property git.remote.origin.url
[INFO] Reading existing properties file [/Users/pkumari/java/spring-petclinic/target/classes/git.properties] (for module petclinic)...
[INFO] Properties file [/Users/pkumari/java/spring-petclinic/target/classes/git.properties] is up-to-date (for module petclinic)...
[INFO]
[INFO] --- spring-boot-maven-plugin:2.1.9.RELEASE:build-info (default) @ spring-petclinic ---
[INFO]
[INFO] --- wro4j-maven-plugin:1.8.0:run (default) @ spring-petclinic ---
[INFO] /Users/pkumari/java/spring-petclinic/src/main/less
[INFO] Executing the mojo:
[INFO] Wro4j Model path: /Users/pkumari/java/spring-petclinic/src/main/wro/wro.xml
[INFO] targetGroups: null
[INFO] minimize: true
[INFO] ignoreMissingResources: null
[INFO] parallelProcessing: false
[INFO] buildDirectory: /Users/pkumari/java/spring-petclinic/target
[INFO] destinationFolder: /Users/pkumari/java/spring-petclinic/target
[INFO] cssDestinationFolder: /Users/pkumari/java/spring-petclinic/target/classes/static/resources/css
[INFO] The following groups will be processed: [petclinic]
[INFO] folder: /Users/pkumari/java/spring-petclinic/target/classes/static/resources/css
[INFO] processing group: petclinic.css
[WARNING] Less warnings are:
[WARNING] 10:1 Cannot link source map. Css result location is not know and could not be deduced from input less source..
[INFO] file size: petclinic.css -> 152399 bytes
[INFO] /Users/pkumari/java/spring-petclinic/target/classes/static/resources/css/petclinic.css (152399 bytes)
[INFO] folder: /Users/pkumari/java/spring-petclinic/target
[INFO] processing group: petclinic.js
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ spring-petclinic ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] Copying 35 resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ spring-petclinic ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 25 source files to /Users/pkumari/java/spring-petclinic/target/classes
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ spring-petclinic ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /Users/pkumari/java/spring-petclinic/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ spring-petclinic ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 11 source files to /Users/pkumari/java/spring-petclinic/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ spring-petclinic ---
[INFO]
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO]
[INFO] Results:
[INFO]
[WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 1. See FAQ web page and the dump file /Users/pkumari/java/spring-petclinic/target/surefire-reports/2019-10-09T15-20-03_591-jvmRun1.dumpstream
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10.828 s
[INFO] Finished at: 2019-10-09T15:20:04+05:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (default-test) on project spring-petclinic: There are test failures.
[ERROR]
[ERROR] Please refer to /Users/pkumari/java/spring-petclinic/target/surefire-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
[ERROR] The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /Users/pkumari/java/spring-petclinic && /Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/bin/java -javaagent:/Users/pkumari/.m2/repository/org/jacoco/org.jacoco.agent/0.8.2/org.jacoco.agent-0.8.2-runtime.jar=destfile=/Users/pkumari/java/spring-petclinic/target/jacoco.exec -jar /Users/pkumari/java/spring-petclinic/target/surefire/surefirebooter2569740085637980547.jar /Users/pkumari/java/spring-petclinic/target/surefire 2019-10-09T15-20-03_591-jvmRun1 surefire12554066826053002032tmp surefire_015926845538724409623tmp
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 134
[ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /Users/pkumari/java/spring-petclinic && /Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/bin/java -javaagent:/Users/pkumari/.m2/repository/org/jacoco/org.jacoco.agent/0.8.2/org.jacoco.agent-0.8.2-runtime.jar=destfile=/Users/pkumari/java/spring-petclinic/target/jacoco.exec -jar /Users/pkumari/java/spring-petclinic/target/surefire/surefirebooter2569740085637980547.jar /Users/pkumari/java/spring-petclinic/target/surefire 2019-10-09T15-20-03_591-jvmRun1 surefire12554066826053002032tmp surefire_015926845538724409623tmp
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 134
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:669)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:282)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
[ERROR] at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
[ERROR] at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
[ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)
[ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
[ERROR] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
[ERROR] at org.apache.maven.cli.MavenCli.execute(MavenCli.java:956)
[ERROR] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
[ERROR] at org.apache.maven.cli.MavenCli.main(MavenCli.java:192)
[ERROR] at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [ERROR] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [ERROR] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [ERROR] at java.base/java.lang.reflect.Method.invoke(Method.java:567) [ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282) [ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225) [ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406) [ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347) [ERROR] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [ERROR] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [ERROR] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [ERROR] at java.base/java.lang.reflect.Method.invoke(Method.java:567) [ERROR] at org.apache.maven.wrapper.BootstrapMainStarter.start(BootstrapMainStarter.java:39) [ERROR] at org.apache.maven.wrapper.WrapperExecutor.execute(WrapperExecutor.java:122) [ERROR] at org.apache.maven.wrapper.MavenWrapperMain.main(MavenWrapperMain.java:61) [ERROR] [ERROR] -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException macOS Mojave version 10.14.6 (18G103) and default java version $ java -version openjdk version "13" 2019-09-17 OpenJDK Runtime Environment AdoptOpenJDK (build 13+33) OpenJDK 64-Bit Server VM AdoptOpenJDK (build 13+33, mixed mode, sharing) I searched for the issue on Stack Overflow [1] but was not able to find any concrete solution that fits here. Please have a look. [1] https://stackoverflow.com/questions/55272870/surefire-maven-plugin-corrupted-stdout-by-directly-writing-to-native-stream-in and default java version Java 13 is not supported by Spring Boot 2.1.x, which Spring Petclinic currently uses. Switching to Java 11 should fix the problem. This is a build issue, but it would be worth investigating why tests are not invoked with Java 13. Ran into the same issue; make sure JAVA_HOME is set -- for whatever reason it's not exported by default on macOS. /usr/libexec/java_home will print the directory; if you're using a version manager, however, you'll have to use whatever mechanism it provides to set the variable (asdf comes with a script, sdkman should just work, IIRC). The issue seems to be related to the jacoco agent. Running as SYSTEM Building in workspace /var/lib/jenkins/workspace/petclinic [petclinic] $ /bin/sh -xe /tmp/jenkins8131710444436746716.sh git clone https://github.com/spring-projects/spring-petclinic.git Cloning into 'spring-petclinic'... cd spring-petclinic mvn package [INFO] Scanning for projects...
[ERROR] Internal error: java.lang.ArrayIndexOutOfBoundsException: 8393 -> [Help 1] org.apache.maven.InternalErrorException: Internal error: java.lang.ArrayIndexOutOfBoundsException: 8393 at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:120) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288) at org.apache.maven.cli.MavenCli.main (MavenCli.java:192) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:289) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:415) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356) Caused by: java.lang.ArrayIndexOutOfBoundsException: 8393 at org.codehaus.plexus.util.xml.pull.MXParser.parsePI (MXParser.java:2502) at org.codehaus.plexus.util.xml.pull.MXParser.nextImpl (MXParser.java:1283) at org.codehaus.plexus.util.xml.pull.MXParser.next (MXParser.java:1131) at org.codehaus.plexus.util.xml.pull.MXParser.nextTag (MXParser.java:1116) at org.apache.maven.model.io.xpp3.MavenXpp3ReaderEx.parsePluginExecution (MavenXpp3ReaderEx.java:3585) at org.apache.maven.model.io.xpp3.MavenXpp3ReaderEx.parsePlugin (MavenXpp3ReaderEx.java:3380) at org.apache.maven.model.io.xpp3.MavenXpp3ReaderEx.parseBuild (MavenXpp3ReaderEx.java:1302) at org.apache.maven.model.io.xpp3.MavenXpp3ReaderEx.parseModel (MavenXpp3ReaderEx.java:2833) at org.apache.maven.model.io.xpp3.MavenXpp3ReaderEx.read (MavenXpp3ReaderEx.java:4690) at org.apache.maven.model.io.xpp3.MavenXpp3ReaderEx.read (MavenXpp3ReaderEx.java:875) at org.apache.maven.model.io.DefaultModelReader.read (DefaultModelReader.java:105) at org.apache.maven.model.io.DefaultModelReader.read (DefaultModelReader.java:82) at org.apache.maven.model.building.DefaultModelProcessor.read (DefaultModelProcessor.java:84) at org.apache.maven.model.building.DefaultModelBuilder.readModel (DefaultModelBuilder.java:544) at org.apache.maven.model.building.DefaultModelBuilder.build (DefaultModelBuilder.java:276) at org.apache.maven.project.DefaultProjectBuilder.build (DefaultProjectBuilder.java:432) at org.apache.maven.project.DefaultProjectBuilder.build (DefaultProjectBuilder.java:400) at org.apache.maven.project.DefaultProjectBuilder.build (DefaultProjectBuilder.java:363) at org.apache.maven.graph.DefaultGraphBuilder.collectProjects (DefaultGraphBuilder.java:414) at org.apache.maven.graph.DefaultGraphBuilder.getProjectsForMavenReactor (DefaultGraphBuilder.java:405) at org.apache.maven.graph.DefaultGraphBuilder.build (DefaultGraphBuilder.java:82) at org.apache.maven.DefaultMaven.buildGraph (DefaultMaven.java:507) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:219) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288) at org.apache.maven.cli.MavenCli.main (MavenCli.java:192) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:289) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:415) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356) [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/InternalErrorException Build step 'Execute shell' marked build as failure Finished: FAILURE Can anyone solve it?
gharchive/issue
2019-10-09T09:54:18
2025-04-01T06:40:27.591177
{ "authors": [ "ameelmohammad", "prietyc123", "rumbletumjum", "snicoll" ], "repo": "spring-projects/spring-petclinic", "url": "https://github.com/spring-projects/spring-petclinic/issues/458", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
248264828
Support InheritableThreadLocal tenancyContextHolder Strategy Support an InheritableThreadLocal strategy for tenancyContext, just as in Spring Security. PR available: #8
gharchive/issue
2017-08-06T18:53:00
2025-04-01T06:40:27.596086
{ "authors": [ "gonzalad" ], "repo": "spring-projects/spring-tenancy", "url": "https://github.com/spring-projects/spring-tenancy/issues/7", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
469244798
springfox-swagger2 version 2.9.2 not compatible with springboot version 2.2.0.M4 Hi, I am using swagger2 2.9.2, springboot 2.2.0.M4 and HATEOAS 0.25.1.RELEASE. Below is the runtime error I'm getting:
An attempt was made to call a method that does not exist. The attempt was made from the following location:
springfox.documentation.spring.web.plugins.DocumentationPluginsManager.createContextBuilder(DocumentationPluginsManager.java:152)
The following method did not exist:
org.springframework.plugin.core.PluginRegistry.getPluginFor(Ljava/lang/Object;Lorg/springframework/plugin/core/Plugin;)Lorg/springframework/plugin/core/Plugin;
The method's class, org.springframework.plugin.core.PluginRegistry, is available from the following locations:
jar:file:/C:/Users/EKHTJHD/.gradle/caches/modules-2/files-2.1/org.springframework.plugin/spring-plugin-core/2.0.0.M1/189f78af81f23eef12018a4d4cf50b8a6df8ec0d/spring-plugin-core-2.0.0.M1.jar!/org/springframework/plugin/core/PluginRegistry.class
It was loaded from the following location:
file:/C:/Users/EKHTJHD/.gradle/caches/modules-2/files-2.1/org.springframework.plugin/spring-plugin-core/2.0.0.M1/189f78af81f23eef12018a4d4cf50b8a6df8ec0d/spring-plugin-core-2.0.0.M1.jar
Action: Correct the classpath of your application so that it contains a single, compatible version of org.springframework.plugin.core.PluginRegistry
Below is my gradle configuration:
plugins {
    id 'org.springframework.boot' version '2.2.0.M4'
    //id 'org.springframework.boot' version '2.1.6.RELEASE'
    id 'java'
}
apply plugin: 'io.spring.dependency-management'
group = 'com.in28minutes.rest.webservices'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'
configurations {
    developmentOnly
    runtimeClasspath {
        extendsFrom developmentOnly
    }
}
repositories {
    mavenCentral()
    maven { url 'https://repo.spring.io/snapshot' }
    maven { url 'https://repo.spring.io/milestone' }
}
dependencies {
    //implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    compile group: 'org.springframework.hateoas', name: 'spring-hateoas', version: '0.25.1.RELEASE'
    compile group: 'io.springfox', name: 'springfox-swagger2', version: '2.9.2'
    compile group: 'io.springfox', name: 'springfox-swagger-ui', version: '2.9.2'
    //developmentOnly 'org.springframework.boot:spring-boot-devtools'
    runtimeOnly 'com.h2database:h2'
    testImplementation('org.springframework.boot:spring-boot-starter-test') {
        exclude group: 'org.junit.vintage', module: 'junit-vintage-engine'
        exclude group: 'junit', module: 'junit'
    }
}
test {
    useJUnitPlatform()
}
If I change the springboot version as below then the issue disappears. id 'org.springframework.boot' version '2.1.6.RELEASE' Duplicate of #2932 Facing the same issue. When can we expect the delivery time for this fix? I'm also facing the same issue and have had no success fixing it. A proposed solution is to add this dependency: <dependency> <groupId>org.springframework.plugin</groupId> <artifactId>spring-plugin-core</artifactId> <version>1.2.0.RELEASE</version> </dependency> but it doesn't work for me. Swagger 2.9.2 and Spring Boot 2.2.0-RELEASE
I have the same issue with Spring Boot 2.2.0-RELEASE. Can confirm for version 2.2.1.RELEASE. Yep, I'm facing the same issue for that boot version. Same issue on version 2.2.2.RELEASE. I fixed it with the above suggestion using spring-plugin-core:1.2.0.RELEASE compile("org.springframework.plugin:spring-plugin-core:1.2.0.RELEASE") { force = true } and also removed the dependencies org.springframework.boot:spring-boot-starter-data-rest and org.springframework.data:spring-data-rest-hal-browser. +1 (2.2.2.RELEASE) +1 (2.2.2.RELEASE) Spring boot: 2.2.2, swagger2: 3.0.0-SNAPSHOT. Working for me. I'm using jcenter-snapshots:
<repository>
  <id>jcenter-snapshots</id>
  <name>jcenter</name>
  <url>http://oss.jfrog.org/artifactory/oss-snapshot-local/</url>
</repository>
If you are not using RepositoryRestMvc, disable it in your Application: @SpringBootApplication(exclude = {RepositoryRestMvcAutoConfiguration.class}) @dilipkrish Do you happen to know how complicated fixing this would be? As far as I understand, 3.x is using spring-plugin-core:2.x, right? How hard is backporting it? I would like to use spring-hateoas and springfox-swagger together, but as far as I understand, this is not possible right now. @jahidakhtargit @dilipkrish Could one of you please rename this issue so that it will reflect that multiple release versions are involved, something like: springfox-swagger 2.x is not compatible with spring-boot 2.2.x @Splash34 I have the same issue, but I resolved it with your solution. I have the same issue, but I resolved it with @LukeHackett's solution. Springboot 2.2.X does not support springfox. Instead I recommend migrating from springfox to OpenAPI to support Swagger UI. You need to remove all the swagger 2 and springfox dependencies from your project and add the below dependency.
<dependency>
   <groupId>org.springdoc</groupId>
   <artifactId>springdoc-openapi-ui</artifactId>
   <version>1.3.4</version>
</dependency>
For more details go to this link: https://springdoc.org/migrating-from-springfox.html In addition, to support springboot 2.2.x you need to update the spring-plugin-core dependency:
<dependency>
    <groupId>org.springframework.plugin</groupId>
    <artifactId>spring-plugin-core</artifactId>
    <version>2.0.0.RELEASE</version>
</dependency>
I am currently testing with the latest Spring Boot version 2.2.6 and this issue persists with Swagger 2.9.2. This is a bit annoying, as it always blocks your development flow and you end up putting in effort to troubleshoot that goes unproductive. I see a lot of developers posting similar problems, so it would be best if we can find a resolution on the versions soon, or any workaround which can be taken into production projects. Thanks. Happy Coding !! I can confirm that this is also happening with Spring Boot 2.2.6 and Swagger 2.9.2. For me, everything is working just as it should.
I started having this issue the moment I added this dependency to my pom.xml file: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-rest</artifactId> </dependency> If I remove this dependency, the app starts up fine again. I got it working with the following setup: #pom.xml <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.2.2.RELEASE</version> <relativePath /> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-rest</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> <optional>true</optional> </dependency> <!-- io.springfox setup --> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-data-rest</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> </dependencies> <repositories> <repository> <id>jcenter-snapshots</id> <name>jcenter</name> <url>http://oss.jfrog.org/artifactory/oss-snapshot-local/</url> </repository> </repositories> #SpringFoxConfig.java @Configuration @EnableSwagger2WebMvc @Import(SpringDataRestConfiguration.class) public class SpringFoxConfig { @Bean public Docket api() { return new Docket(DocumentationType.SWAGGER_2).select().apis(RequestHandlerSelectors.any()) .paths(PathSelectors.any()).build(); } } #URL: http://localhost:8080/swagger-ui.html#/ I know it's just a snapshot version at the moment, but for my purpose it's totally fine for now. Nevertheless, they have to release a new major version. This resolved my issue. Thanks. Thanks, resolved! 
Yes, it seems the org.springframework.boot:spring-boot-starter-data-rest dependency is the issue; as soon as I remove it, everything works.
It works for me with spring-boot-starter-parent 2.2.6.RELEASE. Thanks!
It works for me with spring-boot-starter-parent 2.2.6.RELEASE. Thanks! Update: It seems to conflict with my neo4j http driver (I am using old neo4j version 2.2.3). I get warnings like "o.s.d.n.mapping.neo4jpersistentproperty : owning classinfo is null for property". After removing swagger from pom.xml, the neo4j http driver works.
It works for me :) Works fine. Thanks. Change the version of Springfox from 2.9.2 to LATEST, like this:
<!--for Swagger Endpoints support-->
<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger2</artifactId>
   <version>LATEST</version>
</dependency>
<!--for Swagger UI support-->
<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger-ui</artifactId>
   <version>LATEST</version>
</dependency>
It's working for me. @fornazieri it's meant to be as compatible with previous versions as possible, but 3.0.0 is zero config. You can just drop in the springfox-boot-starter dependency and remove any manual dependencies you've added. You also no longer need the @Enable... annotations if you're using Spring Boot. Yeah, I changed my version to 3.0.0 and removed the @EnableSwagger2 annotation; now it's working perfectly, thanks! @dilipkrish That's an issue for SB 2.1- in springfox 3.0.0 too.
Since we are still on SB 2.0.x and can't upgrade SB just yet, we ended up redefining custom beans like this:
@Primary
@Component("documentationPluginsManager")
public class CustomDocumentationPluginsManager extends DocumentationPluginsManager {

    @Autowired
    @Qualifier("modelNamesRegistryFactoryPluginRegistry")
    private PluginRegistry<ModelNamesRegistryFactoryPlugin, DocumentationType> modelNameRegistryFactoryPlugins;

    @Override
    public ModelNamesRegistryFactoryPlugin modelNamesGeneratorFactory(DocumentationType documentationType) {
        return Optional.ofNullable(modelNameRegistryFactoryPlugins.getPluginFor(
            documentationType)).orElseGet(DefaultModelNamesRegistryFactory::new);
    }
}
Basically polyfilling the failing methods with calls to the methods available in PluginRegistry in spring-plugin-core:1.2.0.RELEASE. Had to override 3 beans - DocumentationPluginsManager, SchemaPluginsManager, TypeNameExtractor - and 1 class, BodyParameterSpecificationProvider, preserving its fully qualified package springfox.documentation.builders since it's not a Spring bean. Man, I solved my problem with only this dependency:
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-ui</artifactId>
    <version>1.5.1</version>
</dependency>
I started to see this issue after upgrading Sentry to v3. I can't see how it's related, but 2.9.2 used to work with the dependency spring-plugin-core-1.x. Upgrading to spring-plugin-core-2.0.0 did not solve it. I've upgraded to springfox 3.0.0 following the docs. application.properties implementation 'io.springfox:springfox-boot-starter:3.0.0' implementation 'io.springfox:springfox-swagger-ui:3.0.0' and removed the @Enable... from SwaggerConfig. I'm now getting another error and it does not work at all. 09:12:36.547 [restartedMain] ERROR s.d.s.w.p.DocumentationPluginsBootstrapper - Unable to scan documentation context default java.lang.IllegalStateException: Model already registered with different name. at springfox.documentation.schema.TypeNameIndexingAdapter.checkTypeRegistration(TypeNameIndexingAdapter.java:55) at springfox.documentation.schema.TypeNameIndexingAdapter.registerUniqueType(TypeNameIndexingAdapter.java:82) Tried using this: classpath("org.springframework.boot:spring-boot-gradle-plugin:2.2.13.RELEASE") implementation 'io.springfox:springfox-boot-starter:3.0.0' Still getting the same error. Still getting the same error with version 2.4.5. Anyone who resolved it, please reply. Same error, latest version. None of the above fixed the issue for me, but I've managed to get the 3.0.0-SNAPSHOT version of the springfox library to work, with the following changes to my pom.xml. Although, going forward, I'm probably going to look to other libraries to provide swagger, such as Springdoc.
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>3.0.0-SNAPSHOT</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.plugin</groupId>
            <artifactId>spring-plugin-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>3.0.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-spring-webflux</artifactId>
    <version>3.0.0-SNAPSHOT</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.plugin</groupId>
            <artifactId>spring-plugin-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.plugin</groupId>
    <artifactId>spring-plugin-core</artifactId>
    <version>2.0.0.RELEASE</version>
</dependency>
THANK YOU
gharchive/issue
2019-07-17T14:21:47
2025-04-01T06:40:27.662714
{ "authors": [ "Code88Hary", "CodeAndChoke", "EvgeniGordeev", "JYOTIRANJANj", "Kakau-preto", "M-Thirumal", "RaveKev", "Splash34", "WJie12", "alexiz10", "awesomeankur", "bagraercan", "chavesrodolfo", "dennisdaotvlk", "dilipkrish", "egch", "fornazieri", "ghostAmaru", "hendisantika", "iVieL", "jahidakhtargit", "jenni", "jonatan-ivanov", "marcusvoltolim", "mimkorn", "nurkan2313", "pawanmundhra", "shirisha-96", "sntour", "suerain", "thiagohmoreira", "ticoaraujo", "tk-png", "xavierKress" ], "repo": "springfox/springfox", "url": "https://github.com/springfox/springfox/issues/3052", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1182348035
Revert "Create LICENSE" Reverts sprshr/parcel-locker#4 close #5
gharchive/pull-request
2022-03-27T05:15:44
2025-04-01T06:40:27.668138
{ "authors": [ "sprshr" ], "repo": "sprshr/parcel-locker", "url": "https://github.com/sprshr/parcel-locker/pull/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1480538136
Export SIWE Description This PR exports the SiweMessage class from siwe as part of ssx and ssx-server. This is done because these packages are commonly used together, and it reduces dependencies. Type [x] New feature (non-breaking change which adds functionality) Diligence Checklist [x] I have performed a self-review of my code [x] My changes generate no new warnings Codecov Report Base: 72.75% // Head: 72.77% // Increases project coverage by +0.02% :tada: Coverage data is based on head (248666f) compared to base (745e1dd). Patch coverage: 100.00% of modified lines in pull request are covered. Additional details and impacted files

@@            Coverage Diff             @@
##             main      #35      +/-   ##
==========================================
+ Coverage   72.75%   72.77%   +0.02%     
==========================================
  Files          22       22              
  Lines        2679     2681       +2     
  Branches      173      173              
==========================================
+ Hits         1949     1951       +2     
  Misses        730      730              

Impacted Files Coverage Δ packages/ssx-sdk/src/index.ts 100.00% <100.00%> (ø) packages/ssx-server/src/index.ts 100.00% <100.00%> (ø)
gharchive/pull-request
2022-12-07T00:23:54
2025-04-01T06:40:27.677028
{ "authors": [ "codecov-commenter", "skgbafa" ], "repo": "spruceid/ssx", "url": "https://github.com/spruceid/ssx/pull/35", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
479762679
BUG-1: Fixed the bug to make the table look pretty. Fixed the bug that I was asked to fix. Since you're already working on this code, please capitalize the second "t" in the Time tracking column heading.
gharchive/pull-request
2019-08-12T17:29:22
2025-04-01T06:40:27.678173
{ "authors": [ "a-aaronson", "sam-surname" ], "repo": "sprydevs/kanboard", "url": "https://github.com/sprydevs/kanboard/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
262196847
dicttoxml is GPLv2-licensed I see in this patch the dicttoxml library was added as a requirement, which is GPLv2-licensed. Unfortunately the GPLv2 is not compatible with the Apache 2.0 license, which moto is licensed under, and as far as I understand it, this would require moto to switch to a GPLv2-based licensing scheme. (The difference between GPLv2 and LGPLv2 is that LGPL allows you to link libraries into non-GPL-licensed code, whereas GPL considers even linking of libraries to create a "derivative work".) Any chance we could get a version of moto that doesn't depend on GPLv2-licensed code? Also I noticed that dicttoxml is only used in one place, so it seems like it might not be that hard to replace it with something else. @drmorr0 Great find, thank you for this. Yes, we can remove this dependency immediately and provide you with a new version of Moto without a problematic license. Fantastic, thank you. We've been using an ancient version of moto that didn't have this dependency and I was hoping to upgrade to a more recent version -- it's really useful software. Let me know if there's any way I can help with the change. @drmorr0 once #1231 is merged I'll release a new version for you. Please give that a review if you have a moment. @drmorr0 Moto version 1.1.21 is now released and does not depend on dicttoxml. Nice! Thank you so much!
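For illustration, a GPL-free stand-in for the single dicttoxml call site could be built on the Python standard library alone. This is a hedged sketch, not the code moto actually shipped; the function name and the element layout chosen for lists are assumptions made here for the example.

```python
# Minimal dict -> XML serializer using only the standard library
# (xml.etree.ElementTree ships with CPython, so no GPLv2 dependency).
import xml.etree.ElementTree as ET


def dict_to_xml(data, root_tag="root"):
    """Serialize a nested dict/list/scalar structure to XML bytes."""
    def build(parent, value):
        if isinstance(value, dict):
            for key, child in value.items():
                build(ET.SubElement(parent, str(key)), child)
        elif isinstance(value, (list, tuple)):
            # Assumption: wrap list entries in <item> elements.
            for child in value:
                build(ET.SubElement(parent, "item"), child)
        else:
            parent.text = "" if value is None else str(value)

    root = ET.Element(root_tag)
    build(root, data)
    return ET.tostring(root, encoding="utf-8")


# Example output: b'<root><InstanceId>i-123</InstanceId><Tags>...</Tags></root>'
print(dict_to_xml({"InstanceId": "i-123", "Tags": ["a", "b"]}))
```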
gharchive/issue
2017-10-02T19:21:59
2025-04-01T06:40:27.682310
{ "authors": [ "JackDanger", "drmorr0" ], "repo": "spulec/moto", "url": "https://github.com/spulec/moto/issues/1229", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
182322852
instance descriptions should include toplevel Public{IpAddress,DnsName} AWS/boto3 .describe_instances calls return instances with top level PublicDnsName and PublicIpAddress keys for instances with public or elastic IPs. moto currently returns this information only nested inside the NetworkInterfaces dict. I believe that this is relevant to the failing test case given here: https://github.com/spulec/moto/pull/730
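A moto-backed test pinning down the requested behaviour might look like the sketch below. To be clear about assumptions: this is not a test from moto's suite, the AMI id is a dummy value the mock accepts, and on an unpatched moto the final assertions are exactly what would fail.

```python
import boto3
from moto import mock_ec2


@mock_ec2
def test_instances_have_toplevel_public_address_fields():
    client = boto3.client("ec2", region_name="us-east-1")
    client.run_instances(ImageId="ami-12345678", MinCount=1, MaxCount=1)

    reservation = client.describe_instances()["Reservations"][0]
    instance = reservation["Instances"][0]

    # Real AWS mirrors the primary network interface's public address
    # at the top level of each instance description.
    assert "PublicIpAddress" in instance
    assert "PublicDnsName" in instance
```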
gharchive/issue
2016-10-11T17:19:10
2025-04-01T06:40:27.684091
{ "authors": [ "majuscule" ], "repo": "spulec/moto", "url": "https://github.com/spulec/moto/issues/729", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
221626533
ContainerInstance deregistration Features Implemented deregister_container_instance - tests added for behaviour around exception raising etc. Changes Changed ContainerInstance object attributes to use snake case for consistency Added myself to the authors list Coverage decreased (-0.07%) to 93.466% when pulling f3aff0f356196f29c3b7598499bf452fcd05e650 on gjtempleton:TaskDraining into 30b1de507cb08794edade449505fbddd0a8a043b on spulec:master. Coverage decreased (-0.2%) to 93.342% when pulling 69b86b2c7a25b225fc9da2f76f6dbb9f5adfee23 on gjtempleton:TaskDraining into 30b1de507cb08794edade449505fbddd0a8a043b on spulec:master. Hey, this looks great. It seems to be breaking on Python 3. If you can get that fixed, I'll be happy to merge. Coverage increased (+0.2%) to 93.724% when pulling 3cbeb551604aba658eb4993bce81dee97b7dc138 on gjtempleton:TaskDraining into 30b1de507cb08794edade449505fbddd0a8a043b on spulec:master. Serves me right for testing locally using 3.5 rather than 3.6. All good now. Coverage increased (+0.005%) to 93.724% when pulling 47bc23f4810051a7b6670f276ed5229fd00baa6a on gjtempleton:TaskDraining into 34c711189f4961eeee6a5de32e8106ec0bdb48bf on spulec:master. Looks great, thank you!
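For context, exercising the new API from a test looks roughly like this sketch. The register_test_instance helper is hypothetical (registering a container instance normally requires an EC2 instance identity document, which the PR's real tests fake); only the boto3 calls shown are the actual ECS client API.

```python
import boto3
from moto import mock_ecs


@mock_ecs
def test_deregister_container_instance():
    client = boto3.client("ecs", region_name="us-east-1")
    client.create_cluster(clusterName="test-cluster")

    # Hypothetical helper: fakes the instance identity document and
    # returns the registered container instance ARN.
    arn = register_test_instance(client, cluster="test-cluster")

    client.deregister_container_instance(
        cluster="test-cluster",
        containerInstance=arn,
        force=False,
    )

    remaining = client.list_container_instances(cluster="test-cluster")
    assert remaining["containerInstanceArns"] == []
```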
gharchive/pull-request
2017-04-13T17:10:14
2025-04-01T06:40:27.691103
{ "authors": [ "coveralls", "gjtempleton", "spulec" ], "repo": "spulec/moto", "url": "https://github.com/spulec/moto/pull/897", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2203839812
remote ssh kernel connection Description What steps will reproduce the problem? Tried to connect to a postgres database and it encountered a problem with sqlalchemy. Traceback The same traceback was raised over and over; a single instance of it is:
Traceback (most recent call last):
  File "C:\Users\msqm\AppData\Local\Programs\Python\Python311\Lib\site-packages\spyder\widgets\collectionseditor.py", line 437, in data
    return to_qvariant(self.get_bgcolor(index))
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\msqm\AppData\Local\Programs\Python\Python311\Lib\site-packages\spyder\widgets\collectionseditor.py", line 524, in get_bgcolor
    python_type = value['python_type']
                  ~~~~~^^^^^^^^^^^^^^^
KeyError: 'python_type'
Versions Spyder version: 5.5.1 (pip) Python version: 3.11.7 64-bit Qt version: 5.15.2 PyQt5 version: 5.15.10 Operating System: Windows-10-10.0.19045-SP0 Dependencies
# Mandatory:
atomicwrites >=1.2.0                 : 1.4.1 (OK)
chardet >=2.0.0                      : 5.2.0 (OK)
cloudpickle >=0.5.0                  : 3.0.0 (OK)
cookiecutter >=1.6.0                 : 2.6.0 (OK)
diff_match_patch >=20181111          : 20230430 (OK)
intervaltree >=3.0.2                 : 3.1.0 (OK)
IPython >=8.13.0,<9.0.0,!=8.17.1     : 8.22.2 (OK)
jedi >=0.17.2,<0.20.0                : 0.19.1 (OK)
jellyfish >=0.7                      : 1.0.3 (OK)
jsonschema >=3.2.0                   : 4.21.1 (OK)
keyring >=17.0.0                     : 24.3.1 (OK)
nbconvert >=4.0                      : 7.16.2 (OK)
numpydoc >=0.6.0                     : 1.6.0 (OK)
paramiko >=2.4.0                     : 3.4.0 (OK)
parso >=0.7.0,<0.9.0                 : 0.8.3 (OK)
pexpect >=4.4.0                      : 4.9.0 (OK)
pickleshare >=0.4                    : 0.7.5 (OK)
psutil >=5.3                         : 5.9.8 (OK)
pygments >=2.0                       : 2.17.2 (OK)
pylint >=2.5.0,<3.1                  : 3.0.4 (OK)
pylint_venv >=3.0.2                  : 3.0.3 (OK)
pyls_spyder >=0.4.0                  : 0.4.0 (OK)
pylsp >=1.10.0,<1.11.0               : 1.10.0 (OK)
pylsp_black >=2.0.0,<3.0.0           : 2.0.0 (OK)
qdarkstyle >=3.2.0,<3.3.0            : 3.2.3 (OK)
qstylizer >=0.2.2                    : 0.2.2 (OK)
qtawesome >=1.2.1                    : 1.3.0 (OK)
qtconsole >=5.5.1,<5.6.0             : 5.5.1 (OK)
qtpy >=2.1.0                         : 2.4.1 (OK)
rtree >=0.9.7                        : 1.2.0 (OK)
setuptools >=49.6.0                  : 65.5.0 (OK)
sphinx >=0.6.6                       : 7.2.6 (OK)
spyder_kernels >=2.5.0,<2.6.0        : 2.5.1 (OK)
textdistance >=4.2.0                 : 4.6.1 (OK)
three_merge >=0.1.1                  : 0.1.1 (OK)
watchdog >=0.10.3                    : 4.0.0 (OK)
zmq >=22.1.0                         : 25.1.2 (OK)
# Optional:
cython >=0.21                        : None (NOK)
matplotlib >=3.0.0                   : 3.8.3 (OK)
numpy >=1.7                          : 1.26.4 (OK)
pandas >=1.1.1                       : 2.2.1 (OK)
scipy >=0.17.0                       : 1.12.0 (OK)
sympy >=0.7.3                        : None (NOK)
Hey @mikesmith5446, thanks for reporting. It seems there's a mismatch between your Spyder and remote Spyder-kernels version. Please be sure of having at least version 2.5.0 of Spyder-kernels before trying to connect to it. Ok thanks, I will try that. Best regards, Mike A. Smith
Closing due to lack of response. Hopefully, you managed to fix this problem.
gharchive/issue
2024-03-23T11:40:52
2025-04-01T06:40:27.711376
{ "authors": [ "ccordoba12", "mikesmith5446" ], "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/21923", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2499539847
The text "Run code" does not fit in the button (partially visible) at Step №3 "IPython Console" of the Introduction tour

Issue Report Checklist

[x] Searched the issues page for similar reports
[x] Read the relevant sections of the Spyder Troubleshooting Guide and followed its advice
[x] Reproduced the issue after updating with conda update spyder (or pip, if not using Anaconda)
[x] Could not reproduce inside jupyter qtconsole (if console-related)
[x] Tried basic troubleshooting (if a bug/error)
[x] Restarted Spyder
[x] Reset preferences with spyder --reset
[x] Reinstalled the latest version of Anaconda
[x] Tried the other applicable steps from the Troubleshooting Guide
[x] Completed the Problem Description, Steps to Reproduce and Version sections below

Problem Description

The text "Run code" does not fit in the button (it is only partially visible) at Step №3 "IPython Console" of the Introduction tour (Help --> Show tour). After the button is clicked, the text becomes fully visible.

What steps reproduce the problem?

After installation and the first launch of the program there is a modal window with the Introduction tour:
1. Go to Step №3 of the Introduction tour

OR

1. Run the program
2. Choose Help --> Show tour
3. Go to Step №3 of the Introduction tour

What is the expected output? What do you see instead?

Expected result: the text of the button "Run code" fits within the button.
Actual result: the text does not fit in the button.

Versions

Spyder version: 5.5.1
Python version: 3.12.3
Qt version: 5.15.13
PyQt version: 5.15.10
Operating System name/version: Kubuntu 24.04.1 LTS x86_64

Dependencies

# Mandatory:
atomicwrites >=1.2.0 : 1.4.1 (OK)
chardet >=2.0.0 : 5.2.0 (OK)
cloudpickle >=0.5.0 : 3.0.0 (OK)
cookiecutter >=1.6.0 : 2.6.0 (OK)
diff_match_patch >=20181111 : 20230430 (OK)
intervaltree >=3.0.2 : 3.0.2 (OK)
IPython >=8.13.0,<9.0.0,!=8.17.1 : 8.20.0 (OK)
jedi >=0.17.2,<0.20.0 : 0.19.1 (OK)
jellyfish >=0.7 : 0.10.0 (OK)
jsonschema >=3.2.0 : 4.10.3 (OK)
keyring >=17.0.0 : 24.3.1 (OK)
nbconvert >=4.0 : 6.5.3 (OK)
numpydoc >=0.6.0 : 1.6.0 (OK)
parso >=0.7.0,<0.9.0 : 0.8.3 (OK)
pexpect >=4.4.0 : 4.9.0 (OK)
pickleshare >=0.4 : 0.7.5 (OK)
psutil >=5.3 : 5.9.8 (OK)
pygments >=2.0 : 2.17.2 (OK)
pylint >=2.5.0,<3.1 : 3.0.3 (OK)
pylint_venv >=3.0.2 : 3.0.2 (OK)
pyls_spyder >=0.4.0 : 0.4.0 (OK)
pylsp >=1.10.0,<1.11.0 : 1.10.0 (OK)
pylsp_black >=2.0.0,<3.0.0 : 2.0.0 (OK)
qdarkstyle >=3.2.0,<3.3.0 : 3.2.3 (OK)
qstylizer >=0.2.2 : 0.2.2 (OK)
qtawesome >=1.2.1 : 1.2.3 (OK)
qtconsole >=5.5.1,<5.6.0 : 5.5.1 (OK)
qtpy >=2.1.0 : 2.4.1 (OK)
rtree >=0.9.7 : 1.2.0 (OK)
setuptools >=49.6.0 : 68.1.2 (OK)
sphinx >=0.6.6 : 7.2.6 (OK)
spyder_kernels >=2.5.0,<2.6.0 : 2.5.0 (OK)
textdistance >=4.2.0 : 4.6.0 (OK)
three_merge >=0.1.1 : 0.1.1 (OK)
watchdog >=0.10.3 : 3.0.0 (OK)
xdg >=0.26 : 0.28 (OK)
zmq >=22.1.0 : 24.0.1 (OK)

# Optional:
cython >=0.21 : None (NOK)
matplotlib >=3.0.0 : 3.6.3 (OK)
numpy >=1.7 : 1.26.4 (OK)
pandas >=1.1.1 : None (NOK)
scipy >=0.17.0 : 1.11.4 (OK)
sympy >=0.7.3 : 1.12 (OK)

Hi @0bs01ete thank you for the report! I was unable to reproduce this (however I used Windows and Spyder 6.0.2 installed from our installers to check). Could it be possible for you to check if this is still happening with the latest release (Spyder 6.0.2)? You could do the check installing our Linux installer available over the Spyder GitHub release page: https://github.com/spyder-ide/spyder/releases/latest Let us know if using a more recent Spyder version helps!

Closing due to lack of response
gharchive/issue
2024-09-01T15:29:32
2025-04-01T06:40:27.726726
{ "authors": [ "0bs01ete", "dalthviz" ], "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/22409", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
242016751
RuntimeError when closing spyder

Description of your problem

When I open spyder from the command line and then quit, the following error is displayed in the terminal:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/spyder/app/mainwindow.py", line 2117, in hideEvent
    if plugin.isAncestorOf(self.last_focused_widget):
RuntimeError: wrapped C/C++ object of type QPushButton has been deleted

It happens only when the 'prompt when exiting' option is selected, so I guess the error is related to the confirmation dialog.

Versions and main components

Spyder Version: 3.1.4
Python Version: 3.6.1
Qt Version: 5.9.1
PyQt Version: PyQt5 5.9
Operating system: Archlinux

Dependencies

jedi >=0.9.0 : 0.10.2 (OK)
matplotlib >=1.0 : 2.0.2 (OK)
nbconvert >=4.0 : 4.3.0 (OK)
numpy >=1.7 : 1.13.1 (OK)
pandas >=0.13.1 : 0.20.1 (OK)
psutil >=0.3 : 5.2.1 (OK)
pycodestyle >=0.6 : 2.3.1 (OK)
pyflakes >=0.6.0 : 1.5.0 (OK)
pygments >=2.0 : 2.2.0 (OK)
pylint >=0.25 : 1.7.2 (OK)
qtconsole >=4.2.0 : 4.3.0 (OK)
rope >=0.9.4 : 0.10.5 (OK)
sphinx >=0.6.6 : 1.6.3 (OK)
sympy >=0.7.3 : 1.1 (OK)

Thanks for reporting. We'll fix this error in Spyder 3.2.
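A minimal sketch of the kind of guard that avoids this class of error, assuming PyQt5 semantics; it is illustrative only and not the actual fix that shipped in Spyder 3.2:

    def is_live_ancestor(plugin, widget):
        """Return True only if *widget* still exists and descends from *plugin*."""
        try:
            return widget is not None and plugin.isAncestorOf(widget)
        except RuntimeError:
            # The wrapped C/C++ object was already deleted by Qt during
            # shutdown, so the Python wrapper is stale -- treat it as "no".
            return False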
gharchive/issue
2017-07-11T11:36:02
2025-04-01T06:40:27.732537
{ "authors": [ "ccordoba12", "j4321" ], "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/4725", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
333944442
Error while starting spyder through anaconda navigator

Description

What steps will reproduce the problem?

Starting spyder

What is the expected output? What do you see instead?

Expected output: Spyder should start
What happens: It throws an error

Please provide any additional information below

File "/home/ankk98/anaconda3/lib/python3.6/site-packages/spyder/plugins/__init__.py", line 511, in <lambda>
    toggled=lambda checked: self.toggle_view(checked),
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/spyder/plugins/ipythonconsole.py", line 725, in toggle_view
    self.create_new_client(give_focus=False)
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/spyder/plugins/ipythonconsole.py", line 1033, in create_new_client
    self.connect_client_to_kernel(client)
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/spyder/plugins/ipythonconsole.py", line 1059, in connect_client_to_kernel
    stderr_file)
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/spyder/plugins/ipythonconsole.py", line 1477, in create_kernel_manager_and_kernel_client
    config=None, autorestart=True)
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 958, in __new__
    inst.setup_instance(*args, **kwargs)
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 986, in setup_instance
    super(HasTraits, self).setup_instance(*args, **kwargs)
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 977, in setup_instance
    value.instance_init(self)
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 1691, in instance_init
    self._resolve_classes()
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 1696, in _resolve_classes
    self.klass = self._resolve_string(self.klass)
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 1507, in _resolve_string
    return import_item(string)
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/utils/importstring.py", line 34, in import_item
    module = __import__(package, fromlist=[obj])
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/jupyter_client/session.py", line 61, in <module>
    from jupyter_client.jsonutil import extract_dates, squash_dates, date_default
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/jupyter_client/jsonutil.py", line 11, in <module>
    from dateutil.parser import parse as _dateutil_parse
File "/home/ankk98/anaconda3/lib/python3.6/site-packages/dateutil/parser.py", line 158
    l.append("%s=%s" % (attr, value))
    ^
SyntaxError: invalid syntax

Version and main components

Spyder Version: 3.2.6
Python Version: 3.6.4
Qt Versions: 5.6.2, PyQt5 5.6 on Linux

Dependencies

pyflakes >=0.6.0 : 1.6.0 (OK)
pycodestyle >=2.3 : 2.3.1 (OK)
pygments >=2.0 : 2.2.0 (OK)
pandas >=0.13.1 : None (NOK)
numpy >=1.7 : 1.14.0 (OK)
sphinx >=0.6.6 : 1.6.6 (OK)
rope >=0.9.4 : 0.10.7 (OK)
jedi >=0.9.0 : 0.11.1 (OK)
psutil >=0.3 : 5.4.3 (OK)
nbconvert >=4.0 : 5.3.1 (OK)
sympy >=0.7.3 : 1.1.1 (OK)
cython >=0.21 : 0.27.3 (OK)
qtconsole >=4.2.0 : 4.3.1 (OK)
IPython >=4.0 : 6.2.1 (OK)
pylint >=0.25 : 1.8.2 (OK)

Reinstalling Anaconda fixed the issue. :)

Thanks for letting us know about it!
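Reinstalling all of Anaconda is a heavy hammer; since the traceback points at a broken dateutil install, a lighter alternative that might have sufficed (untested here, purely an assumption) would be to force-reinstall just that package:

    pip install --force-reinstall python-dateutil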
gharchive/issue
2018-06-20T06:47:44
2025-04-01T06:40:27.743524
{ "authors": [ "Ankk98", "ccordoba12" ], "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/7314", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
358600648
An error occurred while starting the kernel

Description

What steps will reproduce the problem?

Traceback (most recent call last):
  File "C:\Users\wufangzhao\AppData\Local\Programs\Python\Python36-32\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\wufangzhao\AppData\Local\Programs\Python\Python36-32\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\wufangzhao\AppData\Local\Programs\Python\Python36-32\lib\site-packages\spyder_kernels\console\__main__.py", line 11, in <module>
    start.main()
  File "C:\Users\wufangzhao\AppData\Local\Programs\Python\Python36-32\lib\site-packages\spyder_kernels\console\start.py", line 284, in main
    kernel.initialize()
  File "", line 2, in initialize
  File "C:\Users\wufangzhao\AppData\Local\Programs\Python\Python36-32\lib\site-packages\traitlets\config\application.py", line 87, in catch_config_error
    return method(app, *args, **kwargs)
  File "C:\Users\wufangzhao\AppData\Local\Programs\Python\Python36-32\lib\site-packages\ipykernel\kernelapp.py", line 474, in initialize
    self.init_io()
  File "C:\Users\wufangzhao\AppData\Local\Programs\Python\Python36-32\lib\site-packages\ipykernel\kernelapp.py", line 326, in init_io
    sys.stdout.flush()
AttributeError: 'NoneType' object has no attribute 'flush'

The same traceback is printed several more times.

Versions

Spyder version: 4.0.0.dev0 7e4c213
Python version: 3.6.6
Qt version: 5.9.3
PyQt5 version: 5.9.2
Operating System: Windows 10

Dependencies

pygments >=2.0 : 2.2.0 (OK)
sphinx >=0.6.6 : 1.7.8 (OK)
pyls >=0.19.0 : 0.21.0 (OK)
nbconvert >=4.0 : 5.3.1 (OK)
pandas >=0.13.1 : None (NOK)
numpy >=1.7 : None (NOK)
sympy >=0.7.3 : None (NOK)
cython >=0.21 : None (NOK)
qtconsole >=4.2.0 : 4.4.1 (OK)
IPython >=4.0 : 6.5.0 (OK)
matplotlib >=2.0.0 : None (NOK)
pylint >=0.25 : 2.1.1 (OK)

This is fixed by running in a system terminal (cmd.exe):

    pip install ipykernel==4.8.2
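To confirm which ipykernel version is active after the downgrade (plain Python, no Spyder-specific assumptions):

    python -c "import ipykernel; print(ipykernel.__version__)"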
gharchive/issue
2018-09-10T12:28:23
2025-04-01T06:40:27.748761
{ "authors": [ "ccordoba12", "wufangzhao" ], "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/7871", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
565024741
PR: Change variable explorer title for NumPy object arrays

Description of Changes

[x] Wrote at least one-line docstrings (for any new functions)
[x] Added unit test(s) covering the changes (if testable)
[x] Included a screenshot or animation (if affecting the UI, see Licecap)

Change the title for NumPy arrays in the variable explorer.

Issue(s) Resolved

Fixes #11331

Affirmation

By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under Spyder's MIT (Expat) license.

I certify the above statement is true and correct: Steff456

Please post an image

@steff456, please make your branch derive from our 4.x branch (instead of master), with the following commands:

    git checkout 4.x
    git checkout fix-11331
    git rebase --onto 4.x master fix-11331
    git push -f origin fix-11331
gharchive/pull-request
2020-02-14T00:11:24
2025-04-01T06:40:27.753948
{ "authors": [ "ccordoba12", "goanpeca", "steff456" ], "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/pull/11555", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1073179508
PR: Limit the number of flags in the editor

Description of Changes

If there are too many flags in the document, they slow down painting. This change disables the flags of a given type if there are more than 10000 of them.

To reproduce, create a file with:

    with open("test.py", "w") as f:
        for i in range(10000):
            f.write("aaaaaa\n")

This will create 10000 error flags, which slows down Spyder significantly.

[ ] Wrote at least one-line docstrings (for any new functions)
[ ] Added unit test(s) covering the changes (if testable)
[ ] Included a screenshot or animation (if affecting the UI, see Licecap)

Issue(s) Resolved

Fixes #

Affirmation

By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under Spyder's MIT (Expat) license.

I certify the above statement is true and correct:

@impact27, one thing I forgot to mention: could you add a test for this? Just to check that we're not trying to display more than MAX_FLAGS in the scrollflag area.

I am not sure how to do that. The paint code does a bunch of "painter.drawRect" calls, but I think the result is just an image. I wouldn't know how to check how many rectangles are drawn.

What if you check the length of self._dict_flag_list[flag_type] for a certain flag type that goes over MAX_FLAGS in a file?

The filtering is applied at the paint stage, so the dictionary would contain more than the limit of elements.

Ok, no problem then. Creating a test for this is way harder than I thought.
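For readers retracing the change, a rough sketch of the capping idea the PR describes; MAX_FLAGS and the flag dictionary are names taken from this discussion, while rect_for_flag is a hypothetical helper standing in for the real geometry code:

    MAX_FLAGS = 10000  # past this, drawing every marker visibly slows painting

    def paint_flag_type(painter, dict_flag_list, flag_type, rect_for_flag):
        """Draw all flags of one type, unless there are too many of them."""
        flags = dict_flag_list.get(flag_type, [])
        if len(flags) > MAX_FLAGS:
            # Too many flags of this type: skip them rather than freeze the UI.
            return
        for flag in flags:
            painter.drawRect(rect_for_flag(flag))  # rect_for_flag is hypothetical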
gharchive/pull-request
2021-12-07T10:27:04
2025-04-01T06:40:27.759687
{ "authors": [ "ccordoba12", "impact27" ], "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/pull/16974", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1370134789
Fiscal Week

Using the latest default PBIX file without modifications and creating a table visual that shows 'Fiscal Week Year', 'FW Year', 'FW StartOfWeek', and 'FW EndOfWeek', it appears that the values are duplicated for the 'Fiscal Week Year' values "FW53-2021" and "FW01-2022". Is that expected behavior?

If I also add the 'Date' field, I can see the following, where 1/1/2022 is the only date in "FW01-2022" but its 'FW StartOfWeek' value is not valid.

In my case, the report requester is looking for FW53-2021 to look as it does, but for FW01-2022 to start on 2022-01-02 instead of 2022-01-01 as it does above. Do I need to tweak settings to achieve that, or does the above situation reflect a potential bug?

A sample PBIX showing this issue is attached. DAX Date Template - Fiscal Week Issue.zip

Thanks for reporting the issue. There is certainly a problem, and the worst news is that Bravo for Power BI also has an issue, generating another wrong result for the ISO calendar. I will try to fix the template in Bravo first; then, if I have time, I'll backport the changes to this template, even though I think we'll deprecate this template because Bravo provides much more flexibility. However, I'll keep you posted.

Thanks for your help! Using the latest version of Bravo does seem to give us what we're looking for now. The 53 weeks piece was probably not an accurate requirement.

Thanks - I'll check whether I can fix the calculation in the DateTemplate, but it's not a high priority now.

I finally realized that in your report you mixed a Fiscal Week column (which is not FW prefixed) with other columns that are FW prefixed. This explains the inconsistency. I can close the issue.
gharchive/issue
2022-09-12T15:52:49
2025-04-01T06:40:27.768681
{ "authors": [ "jasonhendrix", "marcosqlbi" ], "repo": "sql-bi/DaxDateTemplate", "url": "https://github.com/sql-bi/DaxDateTemplate/issues/71", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1743785620
Sporadic SyntaxException w/ Python3.11

We use mako for rendering CICD pipeline definitions (YAML files). It is a rather complex template w/ a lot of Mako function definitions and nesting + multithreading. The resulting documents vary in size (depending on input parameters) and are typically between ~300 kiB and ~1.2 MiB.

The template was used w/ Python versions 3.6 .. 3.10. When upgrading to 3.11, we saw (and still see) sporadic SyntaxExceptions, which occur roughly 5% of the time (w/ unchanged template parameters, of course!).

I started working on a minimalised reproducer. If instantiating the same template w/ the same parameters 64 times using 2 threads, I almost always see at least one exception stacktrace. The incriminated lines vary, whereas the Mako part of the stacktrace seems to always be the same. The error does not seem to occur when limiting concurrency to just one thread. Thus, I suspect a race condition, probably within Mako's codebase.

The error occurs for the latest versions of python3 on alpine (3.11.3-r11) when running inside a virtualisation container, and on archlinux (3.11.3-1) when running natively.

Example Stacktrace

Traceback (most recent call last):
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/pyparser.py", line 36, in parse
    return _ast_util.parse(code, "<unknown>", mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/_ast_util.py", line 91, in parse
    return compile(expr, filename, mode, PyCF_ONLY_AST)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SystemError: AST constructor recursion depth mismatch (before=63, after=65)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/mnt/shared_profile/src/sap/makobug-reproducer/concourse/replicator.py", line 140, in render
    definition_descriptor = self._render(definition_descriptor)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/shared_profile/src/sap/makobug-reproducer/concourse/replicator.py", line 211, in _render
    t = mako.template.Template(template_contents, lookup=self.lookup)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/template.py", line 300, in __init__
    (code, module) = _compile_text(self, text, filename)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/template.py", line 677, in _compile_text
    source, lexer = _compile(
                    ^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/template.py", line 657, in _compile
    node = lexer.parse()
           ^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/lexer.py", line 248, in parse
    if self.match_python_block():
       ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/lexer.py", line 392, in match_python_block
    self.append_node(
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/lexer.py", line 129, in append_node
    node = nodecls(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/parsetree.py", line 158, in __init__
    self.code = ast.PythonCode(text, **self.exception_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/ast.py", line 42, in __init__
    expr = pyparser.parse(code.lstrip(), "exec", **exception_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/pyparser.py", line 38, in parse
    raise exceptions.SyntaxException(
mako.exceptions.SyntaxException: (SystemError) AST constructor recursion depth mismatch (before=63, after=65) ('import os\n\nimport oci.auth as oa\nimport model.cont') at line: 2 char: 1

It is probably worth mentioning that by decreasing the template's output size, the likelihood of this error seems to become smaller.

I could share a copy of my somewhat slimmed-down reproducer; it still contains most of the code from the repository I referenced above, if this is considered helpful.

it could be a bug in py3.11 itself. I haven't dealt with that code in probably more than 10 years so, sure, a real reproducer, as small as possible (it really should just be a single template) is an absolute minimum to do anything here.

I created a reproducer (makobug-reproducer.tar.gz). To use it, you need (obviously) python3.11 + the packages from requirements.txt installed. To run, simply execute run-me.py.

For your convenience, I also built an OCI Container Image (aka Docker Image); the Dockerfile is included in the uploaded tar file. So as an alternative to installing and running locally, you might run:

    docker pull eu.gcr.io/gardener-project/cc/makobug-reproducer:1
    docker run eu.gcr.io/gardener-project/cc/makobug-reproducer:1 cc-utils/run-me.py

I hard-coded parameters such as the amount of worker threads (line 106, max_workers) and the amount of renderings to do (line 124, range). In my experience, this script will yield exceptions like the one I pasted above almost always; I did see occasional runs where no stacktraces were printed. Re-running at most twice always gave stacktraces again.

hi that .tar.gz is 283 source files. it's an entire application. unfortunately I can't run an unknown application of enormous complexity and unknown origin within my own environment; instead, please supply a single, self-contained mako template that reproduces the problem when compiled. if this is not feasible, I'm sure that if this issue is widespread it will soon enough be reported in other forms as python 3.11 gains adoption.

or print statements, or dump to a file, etc... The best option is probably to try a custom version of mako that has

    try:
        t = mako.template.Template(template_contents, lookup=self.lookup)
    except SyntaxException:
        print(template_contents)
        raise

@zzzeek maybe doing something like this, probably logging to a logger, may make sense regardless of this issue though

Mako already does this, poster could also turn this on, see https://docs.makotemplates.org/en/latest/usage.html#handling-exceptions

@zzzeek : sorry for replying thus late. I dumped the contents of template_contents into a (gzipped) file. I created this using (as suggested) the following code block:

    try:
        t = mako.template.Template(template_contents, lookup=self.lookup)
    except:
        with open('/src/sap/makobug-reproducer/template_contents.dump', 'w') as f:
            f.write(template_contents)

it is probably worth mentioning that using a bare except was the only way I could actually handle this exception. Neither of SystemError, mako.exceptions.SyntaxException, SyntaxError worked.

hi and thanks for this. Unfortunately no error is reproduced, I can load this template under Python 3.11.3 and compile it without issue. what happens if you use the given template in a program like this (assume you put the content into foo.mako)?

    from mako.template import Template

    t = Template(filename="foo.mako")
    print(t.code)

what's the exact Python 3.11 version you are using?
> Unfortunately no error is reproduced, I can load this template under Python 3.11.3 and compile it without issue.

did you try this w/ multithreading (at least two threads) and multiple executions? As I explained initially, this error occurs sporadically, and only (according to my observations) when concurrency is involved. If using e.g. four threads, the issue occurs almost always at least once if doing the template instantiation ~256 times.

As stated above, I can reproduce this error in the following environments:

- python3 on alpine (3.11.3-r11) - running within a virtualisation container (aka docker container)
- python (3.11.3-2 and 3.11.3-2) from arch linux (no virtualisation involved)

If running this just a couple of times, or running it hundreds / thousands of times but single-threaded, this error never occurs. However, it occurs quite frequently if doing multithreading (but only as of python3.11). I will try whether I can reproduce the error using the approach you shared and will write another update to this issue.

you can try adding threads to the POC but I can't see any way that threads would have an effect here, the Mako compilation process works on a local datastructure that is not accessible to other functions. I had assumed the error was sporadic only because this particular template was not getting compiled every time. it's also not clear why, if you are using a file-based TemplateLookup, that this template would be getting recompiled at all. Mako writes a module file to disk and reuses that.

@zzzeek : originally, it was planned to have a multitude of templates. considering that we actually have just one (and might do some caching anyhow), I might change this and cache the template. In the reproducer I uploaded, the multithreading is done in the run-me.py script.

> Mako writes a module file to disk and reuses that.

@zzzeek : would you be so kind as to give me a hint to the code doing that? That might be a very good explanation for the race condition I assume. Although not a good explanation of why this seems to only affect python3.11, admittedly.

hi - You use a TemplateLookup and give it a file path to store modules, and then use the TemplateLookup to retrieve .mako files from the filesystem as compiled templates. This works best when you have .mako files that you are loading and rendering in your application. the second example at https://docs.makotemplates.org/en/latest/usage.html#using-templatelookup illustrates how to configure TemplateLookup with a file path.

The module caching thing is not readily available for on-the-fly templates; what you could do for on-the-fly templates is write them out to .mako files, then use TemplateLookup to access them.

Otherwise, for local in-memory Template objects, Mako does not make use of global state when compiling, although there is a global set of compiled template modules (after the compilation is done) that are indirectly linked to template URLs or in-memory identifiers; I can see here there is a potential for key conflicts if you have anonymously-identified Template objects, but this map isn't used when compilation proceeds. The only thing I can see that could conceivably be some kind of "global" would be when we use the compile() Python builtin: we pass a module identifier to it, and for an anonymous Template like you have, that identifier will be hash(id(template)), so there could be re-use of the same id with different template contents. That would be very unusual if compile() somehow held onto state from a previous call.
There are many ways to fix your code here. One is to put the lock in your code:

    template_mutex = threading.Lock()

    def step_template(name):
        step_file = ci.util.existing_file(os.path.join(steps_dir, name + '.mako'))
        with template_mutex:
            return mako.template.Template(filename=step_file)

Another, better and much more idiomatic way is to use TemplateLookup as mentioned, since these are file-based templates:

    lookup = TemplateLookup(directories=[steps_dir])

    def step_template(name):
        step_file = ci.util.existing_file(os.path.join(steps_dir, name + '.mako'))
        return lookup.get_template(name + ".mako")

TemplateLookup uses a mutex for its compilation, so that would eliminate the problem. Then, you will get a lot fewer compile calls if you give your lookup a module directory:

    lookup = TemplateLookup(directories=[steps_dir], module_directory='/tmp/mako_modules')

    def step_template(name):
        step_file = ci.util.existing_file(os.path.join(steps_dir, name + '.mako'))
        return lookup.get_template(name + ".mako")

your program will put .py files into /tmp/mako_modules that get reused.

I could further narrow the issue down. If I just lock Template._compile_text and Template._compile_from_file against parallel execution, the issue also does not appear:

    # if plain text, compile code in memory only
    if text is not None:
        with lock:
            (code, module) = _compile_text(self, text, filename)
        self._code = code
        self._source = text
        ModuleInfo(module, None, self, filename, code, text, uri)
    elif filename is not None:
        # if template filename and a module directory, load
        # a filesystem-based module file, generating if needed
        if module_filename is not None:
            path = module_filename
            print(f'{module_filename=}')
        elif module_directory is not None:
            print(f'{module_directory=}')
            path = os.path.abspath(
                os.path.join(
                    os.path.normpath(module_directory), u_norm + ".py"
                )
            )
        else:
            path = None
        with lock:
            module = self._compile_from_file(path, filename)
    else:
        raise exceptions.RuntimeException(
            "Template requires text or filename"
        )

_compile_from_file() calls _compile_text() so that code would deadlock if _compile_from_file() does not have a path

well I just successfully executed it w/o issues :-)

@zzzeek : switching to TemplateLookup sounds like a good idea, too. However, I still think there is a bug in template.Template. Instantiating multiple instances of a class and calling their methods should not run into race conditions, I think.

At this point you should have enough information to create a single short script that demonstrates the problem, take my script at https://github.com/sqlalchemy/mako/issues/378#issuecomment-1600811371 and adjust

> well I just successfully executed it w/o issues :-)

which means it's being called with a path, which seems to indicate there are other calls to TemplateLookup against the same file with different arguments

as far as I understand the code, _compile_from_file calls _compile_module_file, so no deadlock :-)

yeah, I think this should be feasible

My code calls Template at two locations: one time using a fpath, one time passing a string. I also changed it to always pass a string (in this case, the race condition still occurs) - so I think there is a race condition involved in the "_compile" method.

technically, you sometimes do raise an exception already ;-)

> as far as I understand the code, _compile_from_file calls _compile_module_file, so no deadlock :-)

take a look.
there's a conditional, so it can go either way

in my case, path is always None (I checked this by adding a print..)

I am not saying adding a lock is a good idea for a fix. this is how far I came in finding the root cause

OK in your code you are locking outside _compile_text(), so that's why that's OK

I started (after observing that adding some caching would reduce the likelihood of the error) by adding a lock to the full __init__, then started to reduce the lines of code I had to lock while still not getting an error.

anyhow: using this knowledge, I can certainly fix my code. Still, I think this is a bug in mako - albeit one that might not affect many users besides me

it may very well be a bug in py3.11 itself. since you can reproduce, work to iteratively reduce your program one step at a time, ensuring reproduction each time, down to a script that looks like this one

@zzzeek : interestingly, switching to mako.lookup.TemplateLookup as you suggested did seem to fix the issue (after removing the caching I added earlier, and of course, after removing again the lock I added to mako's code). It will still be an interesting task to add a reproducer for sure.

I have also been encountering the same issue in my app. It also occurs intermittently and only with Python 3.11. https://www.reddit.com/r/Tautulli/comments/1042t13/error_syntaxexception_systemerror_ast_constructor/ I am already using TemplateLookup. https://github.com/Tautulli/Tautulli/blob/ea6c6078df410f333a060016dfce18c21ad134c9/plexpy/webserve.py#L126

I think I solved my issue. I was initializing a new TemplateLookup every single time I served a template. Re-factoring my code so that I only initialize it once seems to have fixed it.

OK but we want to figure out why concurrent calls to compile() are causing this problem (And also why my test script above does not seem to have this problem)

I have been trying to reproduce the error with a small test script but have been unsuccessful.

I thus far also did not succeed in creating a minified reproducer. Will update the issue once I do find some more time.

I am also seeing this occasionally pop up in a production application since updating to python 3.11. It's very rare, relative to the number of template renders. We are using a customised TemplateLookup that inherits from the mako TemplateLookup. Thanks for all the information provided on this issue - I think that will really help narrow this down for us.

Just chiming in that I've also seen this behavior sporadically with Python 3.11. It occurs when using TemplateLookup.get_template. It's happening very rarely, I'd say about once a week in a nightly job that calls hundreds of renders via an API that uses mako. Each API call handles a single render; the lookup looks roughly like:

    lookup = TemplateLookup(directories=[templates_path])
    try:
        template = lookup.get_template(specific_template)
    except TemplateLookupException:
        template = lookup.get_template(common_template)
    return template

Exception observed is:

    API Exception: SyntaxException('(SystemError) AST constructor recursion depth mismatch (before=93, after=85) (\'if <condition>:pass\') in file <file_path>

This seems more an issue with python: this pytest issue reports the same problem without mako involvement https://github.com/pytest-dev/pytest/issues/10874

wow howd you find that?

someone in that pytest issue actually linked this one, and github links it if you scroll up
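Pulling the thread's observations together (multiple threads, repeated compilation of an anonymous in-memory template), here is a minimal sketch of the kind of reproducer being asked for. It is hedged: the failure is sporadic and was never reliably minimised in this thread, so this may well run clean.

    import threading
    from mako.template import Template

    # "<% ... %>" blocks force Mako to run the Python AST parser on each compile.
    SOURCE = "<% x = 1 %>value: ${x}\n" * 200

    def compile_repeatedly(n=64):
        for _ in range(n):
            Template(SOURCE)  # compile a fresh, anonymous template every time

    threads = [threading.Thread(target=compile_repeatedly) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("no SyntaxException raised on this run")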
gharchive/issue
2023-06-06T12:38:25
2025-04-01T06:40:27.821014
{ "authors": [ "CaselIT", "JonnyWong16", "Mark-Hetherington", "ccwienk", "zsblevins", "zzzeek" ], "repo": "sqlalchemy/mako", "url": "https://github.com/sqlalchemy/mako/issues/378", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
357129707
Restore-DbaDatabase apply log backups with -FileMapping

Version 0.9.399

Have an existing database in full recovery. Using the latest full backup, restore it as a copy of that database. Use -FileMapping to provide the new physical files. Leave it in no recovery to restore transaction logs from the existing database to the newly restored database. Like log shipping.

The issue is that -FileMapping does not seem to work with the -Continue parameter, as the error message says it cannot use the physical files of the existing database.

Steps to Reproduce

    <#
    # Restore a full backup of a database
    # Say for instance your database is called MyDB
    # Restore it as MyDB2, to the same drives

    # these 4 variables must be changed to suit
    $db = 'MyDB2'
    $filemap = @{'MyDB_Data'='D:\SQLData\MyDB2.mdf';'MyDB_log'='L:\SQLLogs\MyDB2_log.ldf'}
    $pathFull = 'X:\SQLBackup\pathToFullBackup.bak'
    $pathLogs = 'X:\SQLBackup\pathToLogBackups'

    # This should restore the full backup as a second database in norecovery
    Restore-DbaDatabase -SqlInstance localhost -Path $pathFull -DatabaseName $db -FileMapping $filemap -NoRecovery -WithReplace -MaintenanceSolutionBackup -Verbose # -OutputScriptOnly

    # Now to restore transaction logs via -Continue
    Restore-DbaDatabase -SqlInstance localhost -Path $pathLogs -DatabaseName $db -FileMapping $filemap -NoRecovery -MaintenanceSolutionBackup -Verbose -Continue # -OutputScriptOnly

    # Cannot restore any log backups as the physical files of MyDB are in use (which we are not trying to use)
    #>

Expected Behavior

Transaction log backups should restore.

Actual Behavior

Error throws as if we were trying to restore the physical files of the existing database.

Environmental data

    PSVersion                 4.0
    WSManStackVersion         3.0
    SerializationVersion      1.1.0.1
    CLRVersion                4.0.30319.42000
    BuildVersion              6.3.9600.18968
    PSCompatibleVersions      {1.0, 2.0, 3.0, 4.0}
    PSRemotingProtocolVersion 2.2

Microsoft SQL Server 2016 (SP1-CU10-GDR) (KB4293808) - 13.0.4522.0 (X64) Jul 17 2018 22:41:29 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows Server 2012 R2 Standard 6.3 (Build 9600: ) (Hypervisor)

Having problems replicating this one.

    $filemap = @{RestoreTimeClean = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2SP2\MSSQL\DATA\MapClean.mdf'; RestoreTimeClean_log = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2SP2\MSSQL\DATA\mapclean_log.ldf'}
    $null = Restore-DbaDatabase -SqlInstance localhost\sql2008r2sp2 -Path C:\github\appveyor-lab\RestoreTimeClean\RestoreTimeClean.bak -WithReplace
    Get-DbaDatabaseFile -SqlInstance localhost\sql2008r2sp2 -Database restoretimeclean | select LogicalName, PhysicalName
    $null = Restore-DbaDatabase -SqlInstance localhost\sql2008r2sp2 -Path C:\github\appveyor-lab\RestoreTimeClean\RestoreTimeClean.bak -DatabaseName rt2 -NoRecovery -WithReplace -FileMapping $filemap
    $null = Restore-DbaDatabase -SqlInstance localhost\sql2008r2sp2 -Path C:\github\appveyor-lab\RestoreTimeClean\ -DatabaseName rt2 -Continue -FileMapping $filemap
    Get-DbaDatabaseFile -SqlInstance localhost\sql2008r2sp2 -Database rt2 | select LogicalName, PhysicalName

Which I'm pretty sure mirrors your logic. And it's working as expected. That's in PS V4; I just need to try and find a PS4 box to try it on as well to confirm.

Hi @Stuart-Moore, I think I'm doing the exact same as you, I can't quite believe it! I'm on dbatools version 0.9.381. I wouldn't rule out that I'm doing something stupid though.

No feedback from user since September, and I've never managed to replicate this even on PS4.
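For anyone retracing this later, one hedged diagnostic step (using the -OutputScriptOnly switch already hinted at in the reproduction script) is to capture the generated T-SQL instead of executing it, which shows exactly which physical file names the -Continue pass is trying to use; the paths and database name below are the hypothetical ones from the original report:

    $filemap = @{'MyDB_Data'='D:\SQLData\MyDB2.mdf';'MyDB_log'='L:\SQLLogs\MyDB2_log.ldf'}
    Restore-DbaDatabase -SqlInstance localhost -Path 'X:\SQLBackup\pathToLogBackups' -DatabaseName MyDB2 -FileMapping $filemap -NoRecovery -Continue -MaintenanceSolutionBackup -OutputScriptOnly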
gharchive/issue
2018-09-05T08:55:49
2025-04-01T06:40:27.859612
{ "authors": [ "Stuart-Moore", "labyrinthsystems" ], "repo": "sqlcollaborative/dbatools", "url": "https://github.com/sqlcollaborative/dbatools/issues/3972", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
488905295
Fix Repair-DbaOrphanUser skipping contained databases

Type of Change

[ ] Bug fix (non-breaking change, fixes #)
[x] New feature (non-breaking change, adds functionality)
[ ] Breaking change (affects multiple commands or functionality)
[x] Ran manual Pester test and it has passed (.\tests\manual.pester.ps1)
[ ] Adding code coverage to existing functionality
[ ] Pester test is included
[ ] If new file reference added for test, has it been added to github.com/sqlcollaborative/appveyor-lab?
[ ] Nunit test is included
[ ] Documentation
[ ] Build system

Purpose

Fix for issue #5887 so that contained databases are not ignored when attempting to fix orphaned users.

Approach

Eliminates the code that looks to see if the database is contained. The existing db-level checks cover handling contained users correctly (see the SQL below for the DB creation used in testing this).

Commands to test

Create the database, logins, users, contained users, and orphaned users using the SQL code at https://gist.github.com/sirsql/760c0c052fbd3d7ef6be5c0f9db09887

Then run Repair-DbaDbOrphanUser -SqlInstance localhost

Screenshots

Learning

thanks @sirsql - while waiting for claudio's review, i reformatted it to OTSB standard using Invoke-DbatoolsFormatter. I'm not sure why this one in particular got reformatted, but wanted to let you know so that you can sync the branch before making any changes.

Hi @sirsql can you please also remove it from Remove-DbaDbOrphanUser and Get-DbaDbOrphanUser to be equal on all commands

@sirsql - will you have time to update Remove and Get?

I'll get it done this weekend

Don't know what the deal is with the formatting. Ran:

    invoke-dbatoolsformatter "C:\Users\nic\OneDrive\Documents\GitHub\dbatools\functions\Get-DbaDbOrphanUser.ps1"
    invoke-dbatoolsformatter "C:\Users\nic\OneDrive\Documents\GitHub\dbatools\functions\Repair-DbaDbOrphanUser.ps1"
    Invoke-DbatoolsFormatter "C:\Users\nic\OneDrive\Documents\GitHub\dbatools\functions\Remove-DbaDbOrphanUser.ps1"

Also did a pull prior to doing anything else and there's the merge conflict (no conflict on my machine, and I am able to pull from upstream without issues). Are there possible problems with the workspace setup that I am using in VS Code?

I think your local branch is extremely out of date. I'll reformat the files then remove the changes to Remove- would you be able to update then resubmit a PR just for the single file?

The formatting issue may be a problem with the version of psscriptanalyzer which I've found to cause problems. @niphlod is there any way you can look into that? I have not been able to update psscriptanalyzer in months.

I just wanted to check in on this

Please comment on your issue whether this PR fixed your problem. The fix was merged 3 months ago.
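A hedged usage sketch of the scenario this PR targets (assuming the gist above has created a contained database with orphaned users; the database name here is hypothetical):

    # After the fix, users in contained databases are repaired instead of skipped.
    Repair-DbaDbOrphanUser -SqlInstance localhost

    # Or scope to a single contained database to verify the fix in isolation:
    Repair-DbaDbOrphanUser -SqlInstance localhost -Database MyContainedDb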
gharchive/pull-request
2019-09-04T02:21:50
2025-04-01T06:40:27.869447
{ "authors": [ "ClaudioESSilva", "imyourdba", "potatoqualitee", "sirsql", "wsmelton" ], "repo": "sqlcollaborative/dbatools", "url": "https://github.com/sqlcollaborative/dbatools/pull/6013", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }