id | text | source | created | added | metadata
---|---|---|---|---|---|
1352421425
|
🛑 Vinos Divertidos is down
In 5783ed1, Vinos Divertidos (https://vinosdivertidos.es) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Vinos Divertidos is back up in e1925a1.
|
gharchive/issue
| 2022-08-26T15:29:20 |
2025-04-01T04:34:46.062122
|
{
"authors": [
"kitos9112"
],
"repo": "kitos9112/uptime",
"url": "https://github.com/kitos9112/uptime/issues/304",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
378686444
|
Be able to control redirect behavior per request / carry cookies when redirecting
Feature Request
Description
I'm using Fuel as my HTTP client to simulate login on a site in an automated script, where I found the lack of control over redirect behavior a bit awkward. Since Fuel.get / Fuel.post is stateless and has no built-in cookie management, when I sign in the server sends back a 302 redirect; Fuel follows the redirect and sends a new request, but does not include the cookies, so that request gets another 302 because it is unauthorized. In the end I land back on the login page again, making all three requests pointless and the client unusable without the remove-all-interceptors hack.
Proposed Solution
As #104 suggests,
Fuel.get(url) // Follows redirects (behaviour unchanged).
Fuel.get(url, follow = true) // Explicitly follow redirects.
Fuel.get(url, follow = false) // Does not follow redirects.
or
Fuel
.get(url)
.followRedirect(false) //disable redirect
or even
Fuel.get(url, forwardCookieWhenRedirect = true) // process cookies from Set-Cookie / Set-Cookie2 and forward them to the next request
Alternatives I've considered
Fuel.get(url, redirectHandler = { request, response ->
    /* epic magic that returns a request config or null to stop redirection */
})
Additional context
Nop
Hi there!
In the current master and released versions, the redirection you ask for is available through allowRedirects
request.allowRedirects(false)
// For example
Fuel.get(url).allowRedirects(false)
Please note that this will act as an "error", but as long as you use a Result, you can then inspect the response[Headers.LOCATION] and handle the request yourself.
Your other request is handled by #263 (and #105, #458), which are coming after version 2.0.0.
Since this is a duplicate I am closing this issue, but feel free to respond here!
Thanks for the reply; throwing an exception is understandable here. Just remember to document .allowRedirects in the documentation and examples.
Thanks!
@wusatosi you are absolutely right. We are updating the docs for 2.0.0 <3
Thanks for the excellent support! loving your work!
Oh, a bug report on the redirect part: when redirected, the Response object returned does not contain any information about the actual response, nor the response headers, and the status code (which is -1) is not an empty value.
@SleeplessByte
In 2.0.0 (master branch) response errors should have full response values.
For now, I think the bug is that you still need to add 300...399 as valid statuses in order to get the whole response, but I'm not sure off the top of my head. You do this by removing all the interceptors first and then adding back validStatusInterceptor(100...399).
The response variable should not be affected by the validStatusInterceptor
interceptor, so your comment is on point! Also has been fixed in master :)
On Fri, 30 Nov 2018, 06:23 wusatosi, notifications@github.com wrote:
Thanks for the reply. This is a weird design choice: I think there should not be a "valid status code" or "valid response" filter applied to what the response variable returns, since it is reasonable to argue that every response from the remote that follows the HTTP schema is "a valid response"; it is just not "appropriate" to feed it to the deserializer.
I know that this might be caused by the internal mechanics of Fuel (and is a bug), but from a user perspective it is weird to see this.
Thanks for the reminder, I completely forgot that I can get the response object from the exception. And it's nice to see that 2.0.0 will bring so many improvements. ❤️
Tell me if you need any help, I might be able to contribute! 😃
|
gharchive/issue
| 2018-11-08T11:16:54 |
2025-04-01T04:34:46.074630
|
{
"authors": [
"SleeplessByte",
"wusatosi"
],
"repo": "kittinunf/Fuel",
"url": "https://github.com/kittinunf/Fuel/issues/514",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
816283389
|
JNI RegisterNatives called with pending exception java.lang.ClassNotFoundException: Didn't find class "org.audiostream.AudioIn"
Hi, I've created a Google colab so you can simply recreate the error.
My code is:
from audiostream import get_input
from android.permissions import request_permissions, Permission
request_permissions([Permission.RECORD_AUDIO])
def mic_callback(buf):
print("got", len(buf))
print("KEYWORD_TO_SEARCH_IN_ADB_LOGCAT")
mic = get_input(callback=mic_callback)
but in adb logcat I read:
02-25 10:53:48.359 1299 1568 F org.test.myapp: java_vm_ext.cc:570] JNI DETECTED ERROR IN APPLICATION: JNI RegisterNatives called with pending exception java.lang.ClassNotFoundException: Didn't find class "org.audiostream.AudioIn" on path: DexPathList[[zip file "/data/app/org.test.myapp-VZcDiTQAVptYQlU1RG8YjQ==/base.apk"],nativeLibraryDirectories=[/data/app/org.test.myapp-VZcDiTQAVptYQlU1RG8YjQ==/lib/arm, /data/app/org.test.myapp-VZcDiTQAVptYQlU1RG8YjQ==/base.apk!/lib/armeabi-v7a, /system/lib, /system/product/lib]]
and then, the application crashed.
Thanks for your help. I believe that if we create a working google colab it can also be integrated into the homepage to be used as a starting point.
getting the same error 😔
can someone please help us ?
hi, did you use buildozer for android application?
I solved this issue by copying the Java source directory of the audiostream package to dists/myapp/src/main/java/org
(it can be found in audiostream/audiostream/platform/android/org)
I don't know why buildozer didn't copy it to the dists directory.
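For reference, here is a rough sketch of that manual copy. Both paths are taken from the comment above (the dist name "myapp" will differ per project), and this is only an illustration of the workaround, not part of audiostream or buildozer:
import shutil
from pathlib import Path

# Paths assumed from the comment above; adjust to your checkout and dist name.
src = Path("audiostream/audiostream/platform/android/org")
dst = Path("dists/myapp/src/main/java/org")

# Merge the Java sources into the dist tree buildozer generated.
# dirs_exist_ok requires Python 3.8+.
shutil.copytree(src, dst, dirs_exist_ok=True)
print(f"copied {src} -> {dst}")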
thanks bro, it worked.
Now that I'm thinking about it, it looks like I should have thought of this.
|
gharchive/issue
| 2021-02-25T10:04:21 |
2025-04-01T04:34:46.081433
|
{
"authors": [
"adarsh1783",
"aod1310",
"iacoposk8"
],
"repo": "kivy/audiostream",
"url": "https://github.com/kivy/audiostream/issues/37",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
675586512
|
Documentation issue with on_ref_press
In kivy/uix/label.py L1008, there is a documentation issue with on_ref_press.
widget.on_ref_press(print_it) should be widget.bind(on_ref_press=print_it).
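For context, a minimal sketch of the corrected usage, assuming a Label with [ref] markup; the handler receives the widget and the ref value that was clicked:
from kivy.uix.label import Label

def print_it(instance, value):
    # value is the text inside the [ref=...] tag that was clicked
    print("User clicked on", value)

widget = Label(text="[ref=world]Hello World[/ref]", markup=True)
# Correct form: bind the event instead of calling on_ref_press directly.
widget.bind(on_ref_press=print_it)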
Indeed, feel free to edit directly in the page you linked, that'll create a pull request and you'll be officially a contributor :)
|
gharchive/issue
| 2020-08-08T20:50:10 |
2025-04-01T04:34:46.083042
|
{
"authors": [
"Kateba72",
"tshirtman"
],
"repo": "kivy/kivy",
"url": "https://github.com/kivy/kivy/issues/7037",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
16727242
|
Methods stubbed using receive:withArguments: are still called if arguments do not match.
When a selector is stubbed on an object using receive:withArguments:, the selector on the object should not be invoked. This works fine in the case where the arguments specified match the arguments that the selector is called with by the production code.
However, when the arguments do not match (i.e. the spec fails), the selector that has been stubbed is still invoked. While it could be argued that this isn't really a big deal, it can still be quite confusing if you're not expecting it. I just spent rather a long time thinking that I hadn't stubbed the selector correctly when it was actually a very subtle bug in my production code that was causing the wrong arguments to be passed.
For receive:withArguments:, if the argument is a scalar, remember to box it with theValue; using the @ literal does not work. For example:
[[car should] receive:@selector(changeToGear:) withArguments: theValue(3)];
|
gharchive/issue
| 2013-07-14T12:17:30 |
2025-04-01T04:34:46.085092
|
{
"authors": [
"goonzoid",
"ios122"
],
"repo": "kiwi-bdd/Kiwi",
"url": "https://github.com/kiwi-bdd/Kiwi/issues/328",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
408037828
|
Upgrade react-native to v0.58.3
Upgrade other native dependencies as well
Write to readme how to test on android when you do react-native upgrade
closes #1414
@RobinCsl 🚀
|
gharchive/pull-request
| 2019-02-08T07:37:58 |
2025-04-01T04:34:46.086250
|
{
"authors": [
"tbergq"
],
"repo": "kiwicom/mobile",
"url": "https://github.com/kiwicom/mobile/pull/1470",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
610925973
|
Get operations by operationId
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.3.md#fixed-fields-8
Tools and libraries MAY use the operationId to uniquely identify an operation
Sometimes we need to retest a specific API operation, and using operationId will be more intuitive and efficient than using filtering.
for example:
/todo:
  get:
    tags:
      - Todo
    summary: Todo List
    operationId: todo_list_todo_get
    responses:
      '200':
        description: Successful Response
      '404':
        description: Not Found
  post:
    tags:
      - Todo
    summary: Create Todo
    operationId: todo_post_todo_post
Something like:
@schema.parametrize(method="post", endpoint="/todo")
def test_no_server_errors(case):
# do something with the case
pass
VS
@schema.parametrize(operationId="todo_post_todo_post")
def test_no_server_errors(case):
# do something with the case
pass
Hi @dongfangtianyu
It is a great idea :) I'll work on it
Hi @Stranger6667
I am trying to work on this issue, but I have some problems and need help.
First, I added a parameter to parametrize():
def parametrize(
self,
method: Optional[Filter] = NOT_SET,
endpoint: Optional[Filter] = NOT_SET,
tag: Optional[Filter] = NOT_SET,
operationId: Optional[Filter] = NOT_SET, ############# there
validate_schema: Union[bool, NotSet] = NOT_SET,
)
When I run pre-commit, I see the message: too many arguments (6 / 5) (too many arguments)
Maybe I did something wrong. Do you have any hints for me?
Hi @dongfangtianyu,
Sure! You need to add # pylint: disable=too-many-arguments on the first line. Probably we'll need to reconsider this pylint rule. It will look something like this:
def parametrize( # pylint: disable=too-many-arguments
self,
method: Optional[Filter] = NOT_SET,
endpoint: Optional[Filter] = NOT_SET,
tag: Optional[Filter] = NOT_SET,
operationId: Optional[Filter] = NOT_SET, ############# there
validate_schema: Union[bool, NotSet] = NOT_SET,
)
Thanks @Stranger6667, it worked!
I can continue coding.
Resolved via #554 & #558
This feature will land in the next release, today or tomorrow
|
gharchive/issue
| 2020-05-01T19:28:51 |
2025-04-01T04:34:46.091445
|
{
"authors": [
"Stranger6667",
"dongfangtianyu"
],
"repo": "kiwicom/schemathesis",
"url": "https://github.com/kiwicom/schemathesis/issues/546",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2381838964
|
build: update swift-syntax to swiftlang/swift-syntax
apple/swift-syntax has been moved to swiftlang/swift-syntax. So I'm updating its URLs in Package.swift and Package.resolved.
I need to merge #96 first to fix CI failing.
|
gharchive/pull-request
| 2024-06-29T16:21:57 |
2025-04-01T04:34:46.128745
|
{
"authors": [
"kkebo"
],
"repo": "kkebo/zyphy",
"url": "https://github.com/kkebo/zyphy/pull/94",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1823703892
|
demonstrate kkrt handling the get_caller_address zero case w/ error
Time spent on this PR: 2 hrs
Resolves: #
addresses: #216
Pull Request type
Please check the type of change your PR introduces:
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no API changes)
[ ] Build-related changes
[ ] Documentation content changes
[X] Testing
What is the new behavior?
This small PR introduces a potential pattern where pre-alpha testers can open PRs with Solidity contracts. It currently uses testsequencer, which is a little involved; the idea is to have a solidity_contracts directory to which pre-alpha testers can add Solidity code that isn't working and have those contracts run against the CI.
Does this introduce a breaking change?
[ ] Yes
[X] No
As discussed this morning, I don't feel like we need to add this kind of test here. For the RPC I'd like us to focus first on the conformance tests.
Regarding this CALLER (and ORIGIN) opcode issue, we made some tests with @danilowhk to make sure that we understood correctly the expected behavior of the EVM, and tl;dr added an issue for adding from and origin to the ExecutionContext.
I'd like to close this issue here and focus on this task on the main kakarot repo side.
|
gharchive/pull-request
| 2023-07-27T06:27:46 |
2025-04-01T04:34:46.153022
|
{
"authors": [
"ClementWalter",
"jobez"
],
"repo": "kkrt-labs/kakarot-rpc",
"url": "https://github.com/kkrt-labs/kakarot-rpc/pull/359",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
181438793
|
@role('admin')
@role('admin')
get ErrorException
Parse error: syntax error, unexpected ''admin'' (T_CONSTANT_ENCAPSED_STRING) (View: ...
To do this we have to use @role(('admin'))
But this construction does not work @role(('moderator|admin'))
It's fixed. Thank you.
Can you please update to the latest version and let me know if it works for you?
sure
There are three Blade directives in total.
Are you asking about that or something else?
@role('admin')
<p>This is visible to users with the admin role. Gets translated to
\Ntrust::role('admin')</p>
@endrole
@permission('manage-admins')
<p>This is visible to users with the given permissions. Gets translated to
\Ntrust::can('manage-admins'). The @can directive is already taken by core
laravel authorization package, hence the @permission directive instead.</p>
@endpermission
@ability('admin,owner', 'create-post,edit-user')
<p>This is visible to users with the given abilities. Gets translated to
\Ntrust::ability('admin,owner', 'create-post,edit-user')</p>
@endability
Also let me know if you think any feature is missing.
I mean, is the use of multiple roles available for the @role directive in Blade? (via an explode() function)
For example @role('admin|owner|moderator')
You can use roles with an OR condition like below.
@ability('admin,owner,moderator', '')
This user has at least one role from admin, owner or moderator.
@endability
Thanks, sorry for my carelessness.
@fearrr
You can use like below.
@role(['superadmin', 'user'])
This should have role 'superadmin' or 'user'.
@endrole
@role(['superadmin', 'user'], true)
This should have both role 'superadmin' and 'user'.
@endrole
|
gharchive/issue
| 2016-10-06T14:57:09 |
2025-04-01T04:34:46.163846
|
{
"authors": [
"fearrr",
"klaravel"
],
"repo": "klaravel/ntrust",
"url": "https://github.com/klaravel/ntrust/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
360527309
|
Included features from issue #56 - Global callback when log type is called
This fixes issue #56: you can now set global callbacks for logs, which will be called once the logger function is called. The callback function will get the exact same data that the user passed in.
const {Signale} = require('signale');
const options = {
disabled: false,
interactive: false,
stream: process.stdout,
scope: 'custom',
types: {
remind: {
badge: '**',
color: 'yellow',
label: 'reminder',
done: (...msg) => {
// Do something with the logged message(s)
}
},
santa: {
badge: '🎅',
color: 'red',
label: 'santa'
}
}
};
const custom = new Signale(options);
custom.remind('Improve documentation.');
custom.santa('Hoho! You have an unused variable on L45.');
Calling custom.remind('Hello', ', I love cookies') will pass ['Hello', ', I love cookies'] to the done callback. You can then use it for whatever logging purposes you like.
Any update on this?
|
gharchive/pull-request
| 2018-09-15T10:13:04 |
2025-04-01T04:34:46.170796
|
{
"authors": [
"Vimiso",
"tiehm"
],
"repo": "klaussinani/signale",
"url": "https://github.com/klaussinani/signale/pull/62",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2044762498
|
Line 3031 occultism:white_candle new log error since last update. Unknown registry key
Describe the bug
Unknown registry key
To Reproduce
Steps to reproduce the behavior:
use all the mods in the log or download modpack and run game
Expected behavior
No error line in the log; this has only occurred since the last update.
Screenshots
N/A
System (please complete the following information):
Occultism Version: 1.89.1
OS: [e.g. Windows] 10
Minecraft Version: 1.20.1
Modpack Link and Version, or list of mods:
https://www.curseforge.com/minecraft/modpacks/orbit-cycle/files/4870317
latest (7).log
Additional context
Add any other context about the problem here.
Sorry this escaped my notice because it was posted in theurgy, not occultism.
This was fixed in #1003
|
gharchive/issue
| 2023-11-18T16:03:07 |
2025-04-01T04:34:46.246086
|
{
"authors": [
"Sultia",
"klikli-dev"
],
"repo": "klikli-dev/occultism",
"url": "https://github.com/klikli-dev/occultism/issues/1012",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1309628265
|
Change cmake command to include Release type
Fix the README so it does not generate a debug executable; otherwise the generated test example is extremely slow, roughly 14 times slower.
Thanks so much for this cleanup. Merged.
|
gharchive/pull-request
| 2022-07-19T14:42:18 |
2025-04-01T04:34:46.288589
|
{
"authors": [
"jratcliff63367",
"themasterlink"
],
"repo": "kmammou/v-hacd",
"url": "https://github.com/kmammou/v-hacd/pull/119",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
910831312
|
Cannot install hypno? The pyinjector installed successfully.
This error occurred after I did: "pip install hypno".
`ERROR: Command errored out with exit status 1:
command: 'c:\users\user\appdata\local\programs\python\python37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\user\AppData\Local\Temp\pip-install-kvu296dw\hypno_e95e0e11b75f4dfd899bc3900de3c056\setup.py'"'"'; file='"'"'C:\Users\user\AppData\Local\Temp\pip-install-kvu296dw\hypno_e95e0e11b75f4dfd899bc3900de3c056\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\user\AppData\Local\Temp\pip-wheel-a32_s8k5'
cwd: C:\Users\user\AppData\Local\Temp\pip-install-kvu296dw\hypno_e95e0e11b75f4dfd899bc3900de3c056
Complete output (18 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\hypno
copying hypno\hypno.py -> build\lib.win-amd64-3.7\hypno
copying hypno\__init__.py -> build\lib.win-amd64-3.7\hypno
copying hypno\__main__.py -> build\lib.win-amd64-3.7\hypno
running build_ext
building 'hypno.injection' extension
creating build\temp.win-amd64-3.7
creating build\temp.win-amd64-3.7\Release
creating build\temp.win-amd64-3.7\Release\hypno
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\user\appdata\local\programs\python\python37\include -Ic:\users\user\appdata\local\programs\python\python37\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /Tchypno/injection.c /Fobuild\temp.win-amd64-3.7\Release\hypno/injection.obj
injection.c
hypno/injection.c(3): fatal error C1083: Cannot open include file: 'dlfcn.h': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit status 2
ERROR: Failed building wheel for hypno
Running setup.py clean for hypno
Failed to build hypno
Installing collected packages: hypno
Running setup.py install for hypno ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\user\appdata\local\programs\python\python37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\user\AppData\Local\Temp\pip-install-kvu296dw\hypno_e95e0e11b75f4dfd899bc3900de3c056\setup.py'"'"'; file='"'"'C:\Users\user\AppData\Local\Temp\pip-install-kvu296dw\hypno_e95e0e11b75f4dfd899bc3900de3c056\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\user\AppData\Local\Temp\pip-record-zqsrahm6\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\user\appdata\local\programs\python\python37\Include\hypno'
cwd: C:\Users\user\AppData\Local\Temp\pip-install-kvu296dw\hypno_e95e0e11b75f4dfd899bc3900de3c056
Complete output (18 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\hypno
copying hypno\hypno.py -> build\lib.win-amd64-3.7\hypno
copying hypno\__init__.py -> build\lib.win-amd64-3.7\hypno
copying hypno\__main__.py -> build\lib.win-amd64-3.7\hypno
running build_ext
building 'hypno.injection' extension
creating build\temp.win-amd64-3.7
creating build\temp.win-amd64-3.7\Release
creating build\temp.win-amd64-3.7\Release\hypno
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\user\appdata\local\programs\python\python37\include -Ic:\users\user\appdata\local\programs\python\python37\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /Tchypno/injection.c /Fobuild\temp.win-amd64-3.7\Release\hypno/injection.obj
injection.c
hypno/injection.c(3): fatal error C1083: Cannot open include file: 'dlfcn.h': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit status 2
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\user\appdata\local\programs\python\python37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\user\AppData\Local\Temp\pip-install-kvu296dw\hypno_e95e0e11b75f4dfd899bc3900de3c056\setup.py'"'"'; file='"'"'C:\Users\user\AppData\Local\Temp\pip-install-kvu296dw\hypno_e95e0e11b75f4dfd899bc3900de3c056\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\user\AppData\Local\Temp\pip-record-zqsrahm6\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\user\appdata\local\programs\python\python37\Include\hypno' Check the logs for full command output.`
Did some weird shit and fixed it.
|
gharchive/issue
| 2021-06-03T20:30:59 |
2025-04-01T04:34:46.309856
|
{
"authors": [
"Spindermoon1"
],
"repo": "kmaork/hypno",
"url": "https://github.com/kmaork/hypno/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1306918510
|
When the page is compressed the menu toggle no longer works
data-toggle on the compressed nav-bar is no longer working - could be a BootStrap update problem
Bootstrap 5 - needed data-bs-* attribute updates.
|
gharchive/issue
| 2022-07-16T22:17:41 |
2025-04-01T04:34:46.314960
|
{
"authors": [
"kmcluskey"
],
"repo": "kmcluskey/FlyMet",
"url": "https://github.com/kmcluskey/FlyMet/issues/127",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1242496255
|
[main] Format markdown
Cron -knative-prow-robot
/cc knative-sandbox/eventing-writers
/assign knative-sandbox/eventing-writers
Produced by: knative-sandbox/knobots/actions/update-markdown
Details:
CODE-OF-CONDUCT.md 49ms
/retest
|
gharchive/pull-request
| 2022-05-20T01:37:07 |
2025-04-01T04:34:46.353443
|
{
"authors": [
"gab-satchi",
"knative-automation"
],
"repo": "knative-sandbox/control-protocol",
"url": "https://github.com/knative-sandbox/control-protocol/pull/181",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
674600494
|
Reconciler first pass
Adding first pass reconciler. Will refactor and add tests in a followup.
(╯°□°)╯︵ kubectl get clusterducktype.discovery.knative.dev/podspecables.duck.knative.dev -oyaml
apiVersion: discovery.knative.dev/v1alpha1
kind: ClusterDuckType
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"discovery.knative.dev/v1alpha1","kind":"ClusterDuckType","metadata":{"annotations":{},"labels":{"discovery.knative.dev/release":"devel"},"name":"podspecables.duck.knative.dev"},"spec":{"group":"duck.knative.dev","names":{"name":"PodSpecable","plural":"podspecables","singular":"podspecable"},"selectors":[{"labelSelector":"duck.knative.dev/podspecable=true"}],"versions":[{"name":"v1","refs":[{"group":"extensions","kind":"Deployment","version":"v1beta1"},{"group":"apps","kind":"ReplicaSet","version":"v1"},{"group":"apps","kind":"DaemonSet","version":"v1"},{"group":"apps","kind":"StatefulSet","version":"v1"},{"group":"batch","kind":"Job","version":"v1"}]}]}}
creationTimestamp: "2020-08-06T21:02:18Z"
generation: 1
labels:
discovery.knative.dev/release: devel
name: podspecables.duck.knative.dev
resourceVersion: "131737"
selfLink: /apis/discovery.knative.dev/v1alpha1/clusterducktypes/podspecables.duck.knative.dev
uid: 44c2b669-6a9e-4efd-a43d-5814c1466fa0
spec:
group: duck.knative.dev
names:
name: PodSpecable
plural: podspecables
singular: podspecable
selectors:
- labelSelector: duck.knative.dev/podspecable=true
versions:
- name: v1
refs:
- group: extensions
kind: Deployment
version: v1beta1
- group: apps
kind: ReplicaSet
version: v1
- group: apps
kind: DaemonSet
version: v1
- group: apps
kind: StatefulSet
version: v1
- group: batch
kind: Job
version: v1
status:
conditions:
- lastTransitionTime: "2020-08-06T21:02:18Z"
status: "True"
type: Ready
duckCount: 5
ducks:
- duckVersion: v1
ref:
group: apps
kind: ReplicaSet
version: v1
- duckVersion: v1
ref:
group: apps
kind: DaemonSet
version: v1
- duckVersion: v1
ref:
group: apps
kind: StatefulSet
version: v1
- duckVersion: v1
ref:
group: batch
kind: Job
version: v1
- duckVersion: v1
ref:
group: extensions
kind: Deployment
version: v1beta1
observedGeneration: 1
This is blocked on https://github.com/knative/pkg/issues/1590
/lgtm
/approve
|
gharchive/pull-request
| 2020-08-06T21:04:33 |
2025-04-01T04:34:46.356823
|
{
"authors": [
"n3wscott",
"nachocano"
],
"repo": "knative-sandbox/discovery",
"url": "https://github.com/knative-sandbox/discovery/pull/36",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
619269533
|
Go modules conversion
Fixes #6
Convert to go modules (from dep)
Update to latest (as of 5/15/20) eventing-contrib (and eventing / pkg /etc)
Fix breakages from update (SubscribableStatus moved to v1beta1, and kncontroller.StartAll() signature change)
Re-sort imports from initial repo migration.
@googlebot rescan
/check-cla
Somehow the PR has a commit git email using users.noreply.github.com from some intermediary enterprise GitHub login. Recreating a new branch/PR with clean commits...
|
gharchive/pull-request
| 2020-05-15T21:31:52 |
2025-04-01T04:34:46.358987
|
{
"authors": [
"chaodaiG",
"travis-minke-sap"
],
"repo": "knative-sandbox/eventing-kafka",
"url": "https://github.com/knative-sandbox/eventing-kafka/pull/13",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
970445648
|
Check for cluster readiness in cluster creation step
In the current cluster creation process, we sleep for a few seconds to ensure that the cluster has been created (for both Kind and Minikube clusters) before running any kubectl commands against it.
However, this is a hacky way to wait for the cluster. It would be much nicer if we were able to test for cluster readiness somehow rather than using sleep.
kind actually has a way, see https://github.com/knative-sandbox/kn-plugin-quickstart/pull/84
Thanks @markusthoemmes !!
|
gharchive/issue
| 2021-08-13T14:21:25 |
2025-04-01T04:34:46.361296
|
{
"authors": [
"markusthoemmes",
"psschwei"
],
"repo": "knative-sandbox/kn-plugin-quickstart",
"url": "https://github.com/knative-sandbox/kn-plugin-quickstart/issues/75",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
730940373
|
use the new hack repo for scripts
Proposed Changes
test-infra/scripts -> hack
ref: https://github.com/knative/hack/pull/2
/lgtm
/approve
|
gharchive/pull-request
| 2020-10-27T23:14:18 |
2025-04-01T04:34:46.362741
|
{
"authors": [
"ZhiminXiang",
"n3wscott"
],
"repo": "knative-sandbox/net-certmanager",
"url": "https://github.com/knative-sandbox/net-certmanager/pull/114",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
817798690
|
re work vocabulary csv functionality
posting from slack:
if you want to make "bind list to csv" less cursed, make a utility function that loads a csv and returns it as a dict, then write this instead:
ctx = Context()
ctx.lists['user.vocabulary'] = load_wordlist_csv('vocabulary.csv')
stop using a lock and punching lists into another file
if you use resource.open or resource.read to load the csv, it will just reload your current file when the resource changes, so you don't need to implement a reloader.
so I checked out what the current list loader does
a lot of the logic is around creating the file if it doesn't exist yet
so, that should be what the utility function does
load a dictionary from a file using resource.open()
create the file using the default if necessary
return the result
then to actually put it in a context, vocabulary.py can just do that
also unless there are multiple users of this function, it should probably just live in vocabulary.py?
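A rough sketch of what such a utility could look like, following the steps above; load_wordlist_csv is a hypothetical name and the exact CSV layout in the repo may differ, but it shows the create-if-missing plus resource.open pattern:
import csv
import os
from talon import resource

def load_wordlist_csv(filename: str, default: dict = None) -> dict:
    """Load a spoken-form -> written-form dict from a CSV next to this file,
    creating the file from `default` if it does not exist yet."""
    path = os.path.join(os.path.dirname(__file__), filename)

    # Create the file with the default contents if necessary.
    if not os.path.isfile(path):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            for spoken, written in (default or {}).items():
                writer.writerow([spoken, written])

    # resource.open makes Talon reload this file when the CSV changes on disk,
    # so no separate reloader is needed.
    mapping = {}
    with resource.open(path, "r") as f:
        for row in csv.reader(f):
            if not row or row[0].startswith("#"):
                continue
            # Rows are assumed to be "spoken,written"; single-column rows
            # map a word to itself.
            mapping[row[0]] = row[1] if len(row) > 1 else row[0]
    return mapping
vocabulary.py (or any other consumer) can then just assign the returned dict to ctx.lists as in the snippet at the top of this issue.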
I am using bind_list_to_csv in places other than vocabulary.py in my own config. But maybe if it were simpler I could just duplicate the code and not worry about it.
As long as we're rewriting this code we might as well fix #339 as well.
Planning on taking a shot at this on my live stream on Sunday.
I've already got this locally, just need to merge.
Note to self: need #356 merged first.
|
gharchive/issue
| 2021-02-27T02:27:17 |
2025-04-01T04:34:46.373021
|
{
"authors": [
"knausj85",
"rntz"
],
"repo": "knausj85/knausj_talon",
"url": "https://github.com/knausj85/knausj_talon/issues/367",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1274413160
|
Add <number_small> to wheel commands
Adds fine-grained control to directional scroll wheel commands. Allows the user to keep the default scroll value settings the same.
No need to change these settings:
# The amount to scroll up/down (equivalent to mouse wheel on Windows by default)
user.mouse_wheel_down_amount = 120
# The amount to scroll left/right
user.mouse_wheel_horizontal_amount = 40
example usage:
🔈 wheel up 🔈=> scroll up by setting_mouse_wheel_down_amount.get()
🔈 wheel up seven 🔈=> scroll up by setting_mouse_wheel_down_amount.get() * 7
🔈 wheel down 🔈=> scroll down by setting_mouse_wheel_down_amount.get()
🔈 wheel down three 🔈=> scroll down by setting_mouse_wheel_down_amount.get() * 3
🔈 wheel right 🔈=> scroll right by setting_mouse_wheel_horizontal_amount.get()
🔈 wheel right four 🔈=> scroll right by setting_mouse_wheel_horizontal_amount.get() * 4
🔈 wheel left 🔈=> scroll left by setting_mouse_wheel_horizontal_amount.get()
🔈 wheel left nine 🔈=> scroll left by setting_mouse_wheel_horizontal_amount.get() * 9
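Under the hood the multiplier simply scales the configured wheel amount before scrolling. A minimal standalone sketch of that idea (not the PR's actual diff; the action names are made up and the setting declarations mirror the defaults quoted above):
from talon import Module, actions

mod = Module()
setting_mouse_wheel_down_amount = mod.setting(
    "mouse_wheel_down_amount", type=int, default=120,
    desc="The amount to scroll up/down",
)
setting_mouse_wheel_horizontal_amount = mod.setting(
    "mouse_wheel_horizontal_amount", type=int, default=40,
    desc="The amount to scroll left/right",
)

@mod.action_class
class Actions:
    def mouse_scroll_down_times(multiplier: int = 1):
        """Scroll down by the configured wheel amount, scaled by multiplier."""
        actions.mouse_scroll(y=setting_mouse_wheel_down_amount.get() * multiplier)

    def mouse_scroll_right_times(multiplier: int = 1):
        """Scroll right by the configured horizontal amount, scaled by multiplier."""
        actions.mouse_scroll(x=setting_mouse_wheel_horizontal_amount.get() * multiplier)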
This is quite close to the options 'wheel down five times' or 'wheel down fifth'. Do you think there's a substantial benefit over those options?
Fair point. Might be confusing to people new to the repo.
For me it allows for improved recognition (similar to having "paste that" and "pace that"), and economy of vocalization for some frequently used commands. The "--th" repeater at the end of a long day is quite stressful on my throat.
No worries if you decide this isn't a good addition to the repo.
Thanks!
not that exciting of an update after all :)
|
gharchive/pull-request
| 2022-06-17T02:25:55 |
2025-04-01T04:34:46.377232
|
{
"authors": [
"lexjacobs",
"splondike"
],
"repo": "knausj85/knausj_talon",
"url": "https://github.com/knausj85/knausj_talon/pull/883",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
26907211
|
http://knexjs.org/ is down
404 - There isn't a GitHub Page here.
It's down again!!
|
gharchive/issue
| 2014-02-04T19:13:05 |
2025-04-01T04:34:46.380297
|
{
"authors": [
"HelloKashif",
"MajorBreakfast"
],
"repo": "knex/knex",
"url": "https://github.com/knex/knex/issues/175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
124339300
|
Disconf: hard-coded path in com.baidu.disconf.core.common.restful.impl.RestfulMgrImpl
com.baidu.disconf.core.common.restful.impl.RestfulMgrImpl
private File retryDownload(String fileName, RemoteUrl remoteUrl, int retryTimes, int retrySleepSeconds)
throws Exception {
String tmpFileDir = "./disconf/download"; // Why hard-code this path? Has it been considered that this could cause problems?
String tmpFilePath = OsUtil.pathJoin(tmpFileDir, fileName);
String tmpFilePathUnique = MyStringUtils.getRandomName(tmpFilePath);
File tmpFilePathUniqueFile = new File(tmpFilePathUnique);
retry4ConfDownload(remoteUrl, tmpFilePathUniqueFile, retryTimes, retrySleepSeconds);
return tmpFilePathUniqueFile;
}
fix
|
gharchive/issue
| 2015-12-30T11:59:35 |
2025-04-01T04:34:46.382753
|
{
"authors": [
"knightliao",
"wanglei1598"
],
"repo": "knightliao/disconf",
"url": "https://github.com/knightliao/disconf/issues/53",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
211466788
|
Build fails when using Swift Package Manager: Tests/Evergreen has an invalid name
Swift Package Manager build fails with error: the module at Tests/Evergreen has an invalid name ('Evergreen').
cat Package.swift
import PackageDescription
let package = Package(
name: "linuxlog",
dependencies: [
.Package(url: "https://github.com/knly/Evergreen.git", majorVersion: 0),
// ...
]
)
swift build
Cloning https://github.com/knly/Evergreen.git
HEAD is now at 2eefe6f removed initial info loggin for now, improved documentation
Resolved version: 0.8.2
error: the module at Tests/Evergreen has an invalid name ('Evergreen'): the name of a test module has no ‘Tests’ suffix
fix: rename the module at ‘Tests/Evergreen’ to have a ‘Tests’ suffix
swift --version
Apple Swift version 3.1 (swiftlang-802.0.31.3 clang-802.0.30.2)
Target: x86_64-apple-macosx10.9
Hello @maximveksler, the issue is just an empty folder Tests/Evergreen that has been removed from the git repository a while ago, or rather moved to Tests/EvergreenTests. You can simply delete the folder or perform a git clean:
git clean -fdn # dry run to make sure only `Tests/Evergreen` will be deleted
git clean -fd
|
gharchive/issue
| 2017-03-02T17:47:41 |
2025-04-01T04:34:46.386062
|
{
"authors": [
"knly",
"maximveksler"
],
"repo": "knly/Evergreen",
"url": "https://github.com/knly/Evergreen/issues/21",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2432822287
|
feat: add validate partial command
Description
** Describe what, why and how of the changes clearly and concisely. Add any additional useful context or info, as necessary. **
Todos
** List any todo items necessary before merging, if any. Delete if none. **
[ ] Sample todo item 1
[ ] Sample todo item 2
Tasks
** Link to task(s) or issue(s) which this PR corresponds to. Example: KNO-54 **
Screenshots
** Attach any screenshots or recordings to visually illustrate the changes, as necessary. Delete if not relevant. **
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#388 👈
#387
main
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @connorlindsey and the rest of your teammates on Graphite
Merge activity
Jul 30, 1:15 PM EDT: @connorlindsey started a stack merge that includes this pull request via Graphite.
|
gharchive/pull-request
| 2024-07-26T19:07:14 |
2025-04-01T04:34:46.392813
|
{
"authors": [
"connorlindsey"
],
"repo": "knocklabs/knock-cli",
"url": "https://github.com/knocklabs/knock-cli/pull/388",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
607728425
|
ESP8266 example error
Many apologies if this is not an issue with the code in the PubSubClient examples folder in this repo, and is a misunderstanding or board selection issue on my part, but when I compile this .ino file in the repo I get the following error, pasted below.
Additional info:
IDE version 1.8.9, Windows 10
I have changed nothing in the example except of course the SSID and key.
I have added the additional boards manager URL and installed the ESP8266 boards in Boards Manager, and selected NodeMCU 0.9 which I believe is the correct version for my board (either way it doesn't compile).
Arduino: 1.8.9 (Windows 10), Board: "NodeMCU 0.9 (ESP-12 Module), 80 MHz, Flash, Legacy (new can return nullptr), All SSL ciphers (most compatible), 4MB (FS:2MB OTA:~1019KB), v2 Lower Memory, Disabled, None, Only Sketch, 115200"
C:\Users\Mat\.....\Sketches\NodeMCU0.9-Wificonnect\NodeMCU0.9-Wificonnect.ino: In function 'void reconnect()':
NodeMCU0.9-Wificonnect:88:39: error: invalid conversion from 'const char*' to 'char*' [-fpermissive]
if (client.connect(clientId.c_str())) {
^
In file included from C:\Users\Mat\.....\Sketches\NodeMCU0.9-Wificonnect\NodeMCU0.9-Wificonnect.ino:22:0:
C:\Users\Mat\.....\Sketches\libraries\PubSubClient/PubSubClient.h:62:12: error: initializing argument 1 of 'boolean PubSubClient::connect(char*)' [-fpermissive]
boolean connect(char *);
^
NodeMCU0.9-Wificonnect:96:27: error: 'class PubSubClient' has no member named 'state'
Serial.print(client.state());
^
C:\Users\Mat\.....\Sketches\NodeMCU0.9-Wificonnect\NodeMCU0.9-Wificonnect.ino: In function 'void setup()':
NodeMCU0.9-Wificonnect:108:10: error: 'class PubSubClient' has no member named 'setServer'
client.setServer(mqtt_server, 1883);
^
NodeMCU0.9-Wificonnect:109:10: error: 'class PubSubClient' has no member named 'setCallback'
client.setCallback(callback);
^
Multiple libraries were found for "PubSubClient.h"
Used: C:\Users\Mat\.....\Sketches\libraries\PubSubClient
Not used: C:\Users\Mat\.....\Sketches\libraries\arduino_385171
Not used: C:\Users\Mat\.....\Sketches\libraries\ESP8266_Microgear
exit status 1
invalid conversion from 'const char*' to 'char*' [-fpermissive]
Apologies again, the installed library I was using must have been old, I have removed it from my libraries folder and it now compiles okay. I'll close this.
|
gharchive/issue
| 2020-04-27T17:39:15 |
2025-04-01T04:34:46.398963
|
{
"authors": [
"hazymat"
],
"repo": "knolleary/pubsubclient",
"url": "https://github.com/knolleary/pubsubclient/issues/732",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
398281159
|
Making HTTPS calls with Hyper + Hyper-TLS
I originally posted this question here, but the kind folks over at hyper-tls have convinced me that the issue might be with Daemonize. All of this code is being run on MacOS 10.14.2
I'm using Hyper + Hyper-TLS with the Daemonize crate to make the following HTTPS call from within a daemon my subprocess runs. Interestingly, if I make an HTTP call things work fine and the request gets POST'ed to my server. However, if I change the BASE_URL to be HTTPS, then I get the following error printed out:
Error {
kind: Connect,
cause: Custom {
kind: Other,
error: Error {
code: -909
The folks at hyper-tls seem to think it might have something to do with hyper-tls depending on the security-framework crate which interacts with the MacOS keychain, and Daemonize not being able to give that crate/process access to the keychain. Error 909 seems to correspond to a security-framework error code.
Code:
My daemon is created and run from inside the main file of my program:
fn main() {
let job_daemon = Daemonize::new()
.pid_file(job_daemon_pid_file_path) // Every method except `new` and `start`
.chown_pid_file(true) // is optional, see `Daemonize` documentation
.stdout(job_stdout)
.stderr(job_stderr)
.working_directory("/tmp") // for default behaviour.
.privileged_action(move || {
krephis::new_log(&ujn, &s); // This executes the HTTPS POST request shown above
});
match job_daemon.start() {
Ok(_) => {
},
Err(e) => eprintln!("{}", e),
}
}
That krephis::new_log function calls another function which returns a future:
pub fn new_log(job_id: &str, line: &str) {
let fut = make_https_request(&token, job_id, line).map(|_response| {
}).map_err(|e| {
// An error occurs when sending HTTPS calls in a daemon... why?
match e {
kraken_utils::FetchError::Http(e) => {
eprintln!("\n============");
eprintln!("error: {:#?}", e); // nothing gets printed after this... wtf?
eprintln!("============\n");
eprintln!("http error: {}", e);
},
kraken_utils::FetchError::Json(e) => {
eprintln!("json parsing error: {}", e);
},
kraken_utils::FetchError::KrakenServerError(e) => {
eprintln!("Server error: {}", e.message);
},
kraken_utils::FetchError::Other(e) => {
eprintln!("Error: {}", e);
},
}
});
rt::run(fut);
}
make_https_request has the code shown above:
fn make_https_request(token: &str, job_name: &str, line: &str) -> impl Future<Item = StatusCode, Error = kraken_utils::FetchError> {
let url: hyper::Uri = format!("{}/logs/new", BASE_URL).parse().unwrap();
let https_connector = HttpsConnector::new(4).unwrap();
let client = Client::builder().build(https_connector);
let method = hyper::Method::POST;
let mut headers = HeaderMap::new();
let json_payload = json!({
"jobName": job_name,
"line": line,
});
let mut req = Request::new(Body::from(json_payload.to_string()));
headers.insert("x-access-token", HeaderValue::from_str(&token).unwrap());
headers.insert(
hyper::header::CONTENT_TYPE,
HeaderValue::from_static("application/json")
);
*req.method_mut() = method;
*req.uri_mut() = url;
*req.headers_mut() = headers;
client.request(req).and_then(|res| {
Ok(res.status())
}).from_err::<kraken_utils::FetchError>().from_err()
}
I did a few tests; it works now, at least if the tokio runtime is configured after daemonization.
|
gharchive/issue
| 2019-01-11T13:06:04 |
2025-04-01T04:34:46.435028
|
{
"authors": [
"grantgumina",
"knsd"
],
"repo": "knsd/daemonize",
"url": "https://github.com/knsd/daemonize/issues/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
342283902
|
Issue on Dynamic Creation in PHP
So I am "dynamically" creating a flow chart through PHP. I'm looping through a JSON object and then this is the input for ajax in JS. However when the flowchart is created I get a parse error and I don't know why.
Below is a summary of what I'm doing.
PHP
echo "graph TB", PHP_EOL;
echo "subgraph Hood One", PHP_EOL;
if ($number_of_devices > 0)
{
for ($i = 0; $i < $number_of_devices; $i++) {
echo "id" . $jsonEquipmentDecoded[$i]->array_pos . "[<center><img src='" . $jsonEquipmentDecoded[$i]->symbol_img_location . "' height='50'></br>" . $jsonEquipmentDecoded[$i]->display_name . ($jsonEquipmentDecoded[$i]->device_data->equipment_properties->has_location === true ? "</br>". $jsonEquipmentDecoded[$i]->location : "") . "</center>]", PHP_EOL;
}
}
echo "end", PHP_EOL;
JS (Used the example in here)
$.ajax({
type: "POST",
url: "templates/experiment-parameters/equipment-diagram-mermaid.php",
data: {current_selection: completeJsonObject},
success: function(data){
var needsUniqueId = "render" + (Math.floor(Math.random() * 10000)).toString(); //should be 10K attempts before repeat user finger stops working before then hopefully
mermaid.mermaidAPI.render(needsUniqueId, data, mermaidApiRenderCallback);
function mermaidApiRenderCallback(graph) {
$('#id_equipment_diagram').html(graph);
}
}
});
What the PHP creates
graph TB
subgraph Hood One
id2[<center><img src='img/symbols/Pump.svg' height='50'></br>Hitec-Zang SyrDos2</br>192.168.174.253:4002</center>]
id1[<center><img src='img/symbols/Pump.svg' height='50'></br>Hitec-Zang SyrDos2</br>192.168.174.253:4001</center>]
id4[<center><img src='img/symbols/microreactor.png' height='50'></br>Microreactor</center>]
id3[<center><img src='img/symbols/ir.svg' height='50'></br>Mettler IR</br>192.168.174.123</center>]
end
And the output is correct -
And the error -
Uncaught Error: Parse error on line 1:
^
Expecting 'NEWLINE', 'SPACE', 'GRAPH', got 'EOF'
at Parser.parseError (mermaid.js:65625)
at Parser.parse (mermaid.js:65716)
at Object.getClasses (mermaid.js:65014)
at Object.render (mermaid.js:71583)
at _loop (mermaid.js:71068)
at Object.init (mermaid.js:71078)
at contentLoaded (mermaid.js:71113)
at mermaid.js:71131
So the issue was that the div had its class set to mermaid with no content in it. Remove the class reference and let the JavaScript do the magic.
|
gharchive/issue
| 2018-07-18T11:14:03 |
2025-04-01T04:34:46.439494
|
{
"authors": [
"gar-syn"
],
"repo": "knsv/mermaid",
"url": "https://github.com/knsv/mermaid/issues/690",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
744136579
|
git drive without any navigators added could print the help
Error
Error:
0: Empty list of items given to `MultiSelect`
Metadata
key | value
---|---
version | 0.1.0
Fixed in https://github.com/knutwalker/git-drive/releases/tag/0.2.0
|
gharchive/issue
| 2020-11-16T19:56:50 |
2025-04-01T04:34:46.441877
|
{
"authors": [
"jjaderberg",
"knutwalker"
],
"repo": "knutwalker/git-drive",
"url": "https://github.com/knutwalker/git-drive/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
929852451
|
Display the column calculations above the column header ?
Thank you Table Filter, it helped me a lot
I have been using this for a month, and I think it would be great to have an option to see the column sum or average above the selected column's header.
If anyone has solved this or can solve it, sharing your views would be appreciated.
Thank you
@koalyptus please help me with this issue,
|
gharchive/issue
| 2021-06-25T05:50:59 |
2025-04-01T04:34:46.452154
|
{
"authors": [
"SaiPrabhu611"
],
"repo": "koalyptus/TableFilter",
"url": "https://github.com/koalyptus/TableFilter/issues/819",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1438077725
|
Cannot interpret nested call statements in return
local := R[70..80]
def func2(val, exp) : numreg
return(Mth::exp(exp * Mth::ln(val)))
end
power := R[20]
power = ns1::func2(4, 2)
gives error:
indirect_node.rb:70:in `id': undefined method `value' for #<TPPlus::Nodes::ArgumentNode:0x000001f424c699f8 @id=3, @comment=:ret> (NoMethodError)
@target.value\r
The current workaround is to put it outside of the return, using a local register:
local := R[70..80]
def func2(val, exp) : numreg
num := LR[]
num = Mth::exp(exp * Mth::ln(val))
return(num)
end
power := R[20]
power = ns1::func2(4, 2)
|
gharchive/issue
| 2022-11-07T09:58:02 |
2025-04-01T04:34:46.453520
|
{
"authors": [
"kobbled"
],
"repo": "kobbled/tp_plus",
"url": "https://github.com/kobbled/tp_plus/issues/43",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2724753301
|
Actors > Vehicles: Bonus Actions and Reactions need a generic text box like what is available in Actions
Like Actions, Reactions and Bonus Actions may have some flavor text. A text box should be added to contain such text, like in Actions, as an enhancement to the Vehicle actor.
Vehicles only have actions, not bonus actions or reactions. If you have any example vehicles that do otherwise feel free to show me.
Winged Dragon Vehicle has a reaction. Per KP, it is possible that future vehicles may have similar generic text to restrict the use of a bonus action.
fvtt-Actor-winged-dragon-WMYZewAervWnyDkL.json
|
gharchive/issue
| 2024-12-07T18:09:54 |
2025-04-01T04:34:46.455719
|
{
"authors": [
"arbron",
"secondworldpublishing"
],
"repo": "koboldpress/black-flag",
"url": "https://github.com/koboldpress/black-flag/issues/872",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1063734410
|
Make from SVGs 3d glb models
Let's add that easy switch/toggle 😄
which would help the user generate a 3D GLB model from an SVG in the browser for an NFT
the end goal would be exploring these models in AR/VR more easily :)
focus is generative art now
|
gharchive/issue
| 2021-11-25T15:43:40 |
2025-04-01T04:34:46.469375
|
{
"authors": [
"JustLuuuu",
"yangwao"
],
"repo": "kodadot/nft-gallery",
"url": "https://github.com/kodadot/nft-gallery/issues/1295",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1875333098
|
Notification sidebar won't close when clicking on a notification
What happened?
true for mobile and desktop:
https://github.com/kodadot/nft-gallery/assets/36627808/4a61324e-18da-421e-9703-820ca52e9bbb
👋
|
gharchive/issue
| 2023-08-31T11:24:16 |
2025-04-01T04:34:46.470587
|
{
"authors": [
"prury",
"xiiiAtCn"
],
"repo": "kodadot/nft-gallery",
"url": "https://github.com/kodadot/nft-gallery/issues/7071",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1308357176
|
disabling build_search_index breaks the theme mode toggle functionality
in config.toml, if you remove 'build_search_index = true' (for those who do not want to have search on their site, or at least do not want it in the middle of their header), it also breaks the theme mode switching functionality.
it's also unclear how to edit the search input text color. i've tried everything.
i've resorted to just allowing zola to build the search index, which i don't actually want, just because it breaks the theme otherwise. but if i allow the search index to be built and don't have a search form in my nav, i just have a vestigial element floating in the middle of my header nav anyway, and i cannot get rid of it. frustrating
the thing is, the search doesn't work. it doesn't even work on the demo website. so why have it at all?
ok, figured it out.
<script src="{{ get_url(path='mode-switch.js') | safe }}"></script>
needs to be loaded outside of the if config.build_search_index statement. that's all.
|
gharchive/issue
| 2022-07-18T18:29:28 |
2025-04-01T04:34:46.541756
|
{
"authors": [
"asyapluggedin"
],
"repo": "kogeletey/karzok",
"url": "https://github.com/kogeletey/karzok/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2000568909
|
RuntimeError: stack expects each tensor to be equal size, but got [4, 108, 148] at entry 0 and [4, 96, 168] at entry 1
I've been trying to train a LoRA, and I'm getting the following error:
steps: 3%|████▎ | 63/2175 [03:19<1:51:31, 3.17s/it, Average key norm=0.000497, Keys Scaled=0, avr_loss=0.0661]Traceback (most recent call last):
File "E:\github\kohya_ss\sdxl_train_network.py", line 185, in <module>
trainer.train(args)
File "E:\github\kohya_ss\train_network.py", line 755, in train
for step, batch in enumerate(train_dataloader):
File "e:\github\kohya_ss\venv\lib\site-packages\accelerate\data_loader.py", line 394, in __iter__
next_batch = next(dataloader_iter)
File "e:\github\kohya_ss\venv\lib\site-packages\torch\utils\data\dataloader.py", line 633, in __next__
data = self._next_data()
File "e:\github\kohya_ss\venv\lib\site-packages\torch\utils\data\dataloader.py", line 677, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "e:\github\kohya_ss\venv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "e:\github\kohya_ss\venv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 51, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "e:\github\kohya_ss\venv\lib\site-packages\torch\utils\data\dataset.py", line 243, in __getitem__
return self.datasets[dataset_idx][sample_idx]
File "E:\github\kohya_ss\library\train_util.py", line 1239, in __getitem__
example["latents"] = torch.stack(latents_list) if latents_list[0] is not None else None
RuntimeError: stack expects each tensor to be equal size, but got [4, 108, 148] at entry 0 and [4, 96, 168] at entry 1
steps: 3%|████▎ | 63/2175 [03:20<1:51:46, 3.18s/it, Average key norm=0.000497, Keys Scaled=0, avr_loss=0.0661]
Traceback (most recent call last):
File "C:\Users\Jared\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Jared\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "E:\github\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
File "e:\github\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 47, in main
args.func(args)
File "e:\github\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 986, in launch_command
simple_launcher(args)
File "e:\github\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 628, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['e:\\github\\kohya_ss\\venv\\Scripts\\python.exe', './sdxl_train_network.py', '--enable_bucket', '--min_bucket_reso=256', '--max_bucket_reso=2048', '--pretrained_model_name_or_path=E:\\github\\stable-diffusion-webui\\models\\Stable-diffusion\\sdxl\\sd_xl_base_1.0_0.9vae.safetensors', '--train_data_dir=C:\\training\\person-xl\\img', '--resolution=1024,1024', '--output_dir=C:\\training\\person-xl\\lora-output', '--logging_dir=C:\\training\\person-xl\\logging', '--network_alpha=128', '--save_model_as=safetensors', '--network_module=networks.lora', '--unet_lr=1.0', '--network_train_unet_only', '--network_dim=128', '--output_name=person-xl-2.0', '--lr_scheduler_num_cycles=100', '--scale_weight_norms=1', '--network_dropout=0.1', '--cache_text_encoder_outputs', '--no_half_vae', '--lr_scheduler=cosine', '--lr_warmup_steps=218', '--train_batch_size=4', '--max_train_steps=2175', '--save_every_n_epochs=10', '--mixed_precision=bf16', '--save_precision=bf16', '--caption_extension=.txt', '--cache_latents', '--cache_latents_to_disk', '--optimizer_type=Prodigy', '--optimizer_args', 'weight_decay=0.05', 'betas=0.9,0.98', '--max_data_loader_n_workers=0', '--keep_tokens=1', '--bucket_reso_steps=32', '--min_snr_gamma=5', '--gradient_checkpointing', '--xformers', '--noise_offset=0.0357', '--adaptive_noise_scale=0.00357', '--log_prefix=xl-lora', '--sample_sampler=euler_a', '--sample_prompts=C:\\training\\person-xl\\lora-output\\sample\\prompt.txt', '--sample_every_n_steps=25']' returned non-zero exit status 1.
I have a feeling one of the images is causing the issue. Is there a way to figure out which one?
In https://github.com/kohya-ss/sd-scripts/blob/main/library/train_util.py#L1146
if torch.Size([4, 96, 168]) == latents.size():
print(image_info.absolute_path)
Something like that.
It looks like it's happening immediately so maybe it's something related to cache_latents and bucket_no_upscale.
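If it helps to narrow it down before renaming anything, a quick standalone check is to group the training images by their pixel dimensions and look at the smallest groups. This is just a rough sketch (the directory path is copied from your command line, adjust as needed) and is not part of sd-scripts:
# Rough sketch: group training images by size so the odd one stands out.
from pathlib import Path
from PIL import Image

img_dir = Path(r"C:\training\person-xl\img")
sizes = {}
for path in sorted(img_dir.rglob("*")):
    if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    with Image.open(path) as im:
        sizes.setdefault(im.size, []).append(path)

# Sizes with only one or two images are the likely culprits.
for size, paths in sorted(sizes.items(), key=lambda kv: len(kv[1])):
    print(size, len(paths))
    for p in paths[:3]:
        print("   ", p)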
You guys were right. Just renamed my files and the training completed. Thank you!
It looks like there is now a problem with buckets for SDXL training: it doesn't work with batch > 1, and bucket_reso_steps is ignored when the buckets are loaded.
|
gharchive/issue
| 2023-11-18T20:28:47 |
2025-04-01T04:34:46.567031
|
{
"authors": [
"jndietz",
"rockerBOO",
"x-name"
],
"repo": "kohya-ss/sd-scripts",
"url": "https://github.com/kohya-ss/sd-scripts/issues/958",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
682099546
|
Fix string(int) issues for Go 1.15 compatibility
go vet in Go 1.15 flags conversions from int to string, because they yield a string of one rune rather than a string of decimal digits. More info in https://github.com/golang/go/issues/32479 and https://github.com/golang/go/issues/3939.
Example output:
$ make test
go vet ./...
# github.com/kolide/fleet/server/datastore
server/datastore/datastore_labels_test.go:22:29: conversion from int to string yields a string of one rune, not a string of digits (did you mean fmt.Sprint(x)?)
server/datastore/datastore_labels_test.go:22:40: conversion from int to string yields a string of one rune, not a string of digits (did you mean fmt.Sprint(x)?)
make: *** [lint-go] Error 2
Thank you!
|
gharchive/pull-request
| 2020-08-19T19:16:06 |
2025-04-01T04:34:46.628072
|
{
"authors": [
"jalseth",
"zwass"
],
"repo": "kolide/fleet",
"url": "https://github.com/kolide/fleet/pull/2286",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
198318973
|
Conflict with ActiveRecord::Enum
After installing the gem I started receiving this error:
ActionView::Template::Error (undefined method `where' for #<Module:0x000000018e5138>)
code example:
Model
class Payment < ApplicationRecord
enum payment_type: {dotpay: 1, bank_transfer: 4 }
end
then:
Payment.dotpay raises the exception above
Thanks for the info. I will take a look in the following days!
@PabloMD I didn't find any issue, but I am adding tests not to feat/tests branch just to be sure.
@vasilakisfil, I do not have time to investigate it further right now. Here's what I get on rails console just in case you are curious:-)
2.3.1 :001 > Payment.dotpay
NoMethodError: undefined method `where' for #<Module:0x000000064c8848>
from /home/pablo/.rvm/gems/ruby-2.3.1@glakso5/gems/activerecord-5.0.0.1/lib/active_record/enum.rb:193:in `block (4 levels) in enum'
from /home/pablo/.rvm/gems/ruby-2.3.1@glakso5/gems/active_hash_relation-1.1.0/lib/active_record/scope_names.rb:22:in `block (2 levels) in scope'
from /home/pablo/.rvm/gems/ruby-2.3.1@glakso5/gems/activerecord-5.0.0.1/lib/active_record/relation.rb:351:in `scoping'
from /home/pablo/.rvm/gems/ruby-2.3.1@glakso5/gems/active_hash_relation-1.1.0/lib/active_record/scope_names.rb:22:in `block in scope'
from (irb):1
from /home/pablo/.rvm/gems/ruby-2.3.1@glakso5/gems/railties-5.0.0.1/lib/rails/commands/console.rb:65:in `start'
from /home/pablo/.rvm/gems/ruby-2.3.1@glakso5/gems/railties-5.0.0.1/lib/rails/commands/console_helper.rb:9:in `start'
from /home/pablo/.rvm/gems/ruby-2.3.1@glakso5/gems/railties-5.0.0.1/lib/rails/commands/commands_tasks.rb:78:in `console'
from /home/pablo/.rvm/gems/ruby-2.3.1@glakso5/gems/railties-5.0.0.1/lib/rails/commands/commands_tasks.rb:49:in `run_command!'
from /home/pablo/.rvm/gems/ruby-2.3.1@glakso5/gems/railties-5.0.0.1/lib/rails/commands.rb:18:in `<top (required)>'
from bin/rails:9:in `require'
from bin/rails:9:in `<main>'
@PabloMD I added some test cases with enums, I don't see any conflict anywhere. Are you sure it comes from ActiveHashRelation ?
@vasilakisfil, hi, the previous reply is from my company account :-), and it looks like the problem lies in how ActiveHashRelation resolves scopes.
In your added test case you actually query the DB by column and value, which works (I have tested it too: Payment.where(payment_type: 1)), but when you use Payment.dotpay it should translate to the same query (Payment.where(payment_type: 1)).
...
from /home/pablo/.rvm/gems/ruby-2.3.1@glakso5/gems/active_hash_relation-1.1.0/lib/active_record/scope_names.rb:22:in `block (2 levels) in scope'
Thanks for reporting. I have fixed it on master. Some notes:
Filtering on scopes is now disabled by default. To enable it you need to create an initializer (check the documentation on master)
Filtering on scopes supports arguments
The monkey patch for filtering scopes has improved A LOT. Actually it's as gentle as it can be: it aliases the method, adds some sugar and executes it (unfortunately Rails does not provide a way to iterate through defined scopes)
Closing for now, feel free to reopen.
|
gharchive/issue
| 2017-01-02T10:45:52 |
2025-04-01T04:34:46.634118
|
{
"authors": [
"PabloMD",
"demural",
"vasilakisfil"
],
"repo": "kollegorna/active_hash_relation",
"url": "https://github.com/kollegorna/active_hash_relation/issues/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
807232711
|
feat: add Open Graph title, image and description
Noticed when sharing the link on Slack that it didn't expand the URL, due to missing OG metadata - added it here.
came up with a random description, feel free to change :)
took the best image I could find (logo from the header), but would probably be nicer with a custom one with a phone + app?
Facebook preview:
Slack preview (image missing for some reason?):
Fixed!
Thanks for this! It's merged now! ❤️
|
gharchive/pull-request
| 2021-02-12T13:25:05 |
2025-04-01T04:34:46.637031
|
{
"authors": [
"JCB-K",
"irony"
],
"repo": "kolplattformen/skolplattformen",
"url": "https://github.com/kolplattformen/skolplattformen/pull/84",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
807714359
|
Set app name to 'Skolplattformen'
Partial fix for #89
being fixed here as well - https://github.com/kolplattformen/skolplattformen/pull/98
|
gharchive/pull-request
| 2021-02-13T09:31:27 |
2025-04-01T04:34:46.638140
|
{
"authors": [
"devilbuddy"
],
"repo": "kolplattformen/skolplattformen",
"url": "https://github.com/kolplattformen/skolplattformen/pull/97",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1102456075
|
Updated name in toolbox and toolboxIcon of plugin
We have many similar plugins with the same name and icon in the toolbar.
@Akash187 Thanks for your pull request,
could you please check the SVG icon element? It is not being rendered correctly in the toolbar:
Best,
Mario
@Akash187 Thanks for the new commit, only one more thing, please add a newline at the end of the ToolboxIcon.svg file.
Best,
Mario
@Akash187 Thanks for contributing!
the Pull Request is accepted, it will be merged, and the new version will be released soon.
Best,
Mario
@MarioRodriguezS 🙂
@MarioRodriguezS updated my profile to Opensource contributor 🙂
|
gharchive/pull-request
| 2022-01-13T22:04:01 |
2025-04-01T04:34:46.654920
|
{
"authors": [
"Akash187",
"MarioRodriguezS"
],
"repo": "kommitters/editorjs-inline-image",
"url": "https://github.com/kommitters/editorjs-inline-image/pull/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1130680988
|
kubeval Project seems to be dead
seems that kubeval is dead:
https://github.com/instrumenta/kubernetes-json-schema/issues/32#issuecomment-1021133568
The latest commit is from Apr 2021:
https://github.com/instrumenta/kubeval/commit/062c99a2ad6554ca5798c07599fa6c06db975325
Thanks!
Do you know if https://github.com/yannh/kubeconform includes all relevant checks and is at parity with kubeval?
Kubeconform has feature parity and also updated API references
This issue is resolved in this commit eaf51639cc4a5b3b4d2d3a9c0940324f508b0fa5
|
gharchive/issue
| 2022-02-10T18:53:54 |
2025-04-01T04:34:46.657586
|
{
"authors": [
"itielshwartz",
"mstoetzer",
"nirsht",
"s3than"
],
"repo": "komodorio/validkube",
"url": "https://github.com/komodorio/validkube/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
356801244
|
output string parameter
Hello, thanks for the work, this is great!
I encountered an issue when trying to create a simple "String Value" node, based on the MathSample :
[Node("String Value", "Input", "Basic", "Allows to output a simple string value.", false)]
public void InputStringValue(string inValue, out string outValue)
{
outValue = inValue;
}
When creating the node, I have this exception :
System.MissingMethodException: 'Constructor on type 'System.String' not found.'
This breaks in NodeVisual.cs at line 193, because the tested out-parameter type is not 'String' but 'String&'.
I worked around it by replacing the test this way :
var p = output.ParameterType.Name.TrimEnd('&').ToLower() == "string"
I thought this would require your attention for a proper fix :)
@komorra I got the same problem, do you still maintain this project?
Just create your own custom nodes like this:
https://gist.github.com/sqrMin1/ee7fab1a584c6c63e8c32c90f7be7dc4
You will not run into issues like this anymore.
Hi, this has been fixed now, in 01dcc2903ce1718b9296f09b9a513dfd2d761ca3 , now it is possible to use string types directly as node input and output parameters.
|
gharchive/issue
| 2018-09-04T13:06:04 |
2025-04-01T04:34:46.662147
|
{
"authors": [
"arqtiq",
"komorra",
"sqrMin1",
"zhenyuan0502"
],
"repo": "komorra/NodeEditorWinforms",
"url": "https://github.com/komorra/NodeEditorWinforms/issues/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
841720319
|
Switch waker to newly stabilised std::task::Wake
Hopefully using Wake will make the futures part a bit leaner.
Well, that's a "no".
Can't make the dependency with Arc<dyn CoreContainer> work out quite right for Waker::from(Arc<_: Wake>).
Oh well, guess we don't mess with a running system, then^^
|
gharchive/issue
| 2021-03-26T08:35:21 |
2025-04-01T04:34:46.664105
|
{
"authors": [
"Bathtor"
],
"repo": "kompics/kompact",
"url": "https://github.com/kompics/kompact/issues/148",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1598889191
|
There is not implementation found for mac.HmacImpl
I have the above error when I try to use the mnemonicWords.toKey() function.
I have already added BouncyCastle, as it is in the walleth code.
Below is the full error:
Exception in thread "main" java.lang.RuntimeException: There is not implementation found for mac.HmacImpl - you need to either depend on crypto_impl_spongycastle or crypto_impl_bouncycastle
at org.kethereum.crypto.CryptoAPIKt.loadClass(CryptoAPI.kt:15)
at org.kethereum.crypto.CryptoAPI$hmac$2.invoke(CryptoAPI.kt:19)
at org.kethereum.crypto.CryptoAPI$hmac$2.invoke(CryptoAPI.kt:19)
at kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74)
at org.kethereum.crypto.CryptoAPI.getHmac(CryptoAPI.kt:19)
at org.kethereum.bip32.ConverterKt.toExtendedKey-oOkmR4Q(Converter.kt:23)
at org.kethereum.bip32.ConverterKt.toExtendedKey-oOkmR4Q$default(Converter.kt:21)
at org.kethereum.bip32.BIP32.toKey-oOkmR4Q(BIP32.kt:26)
at org.kethereum.bip32.BIP32.toKey-oOkmR4Q$default(BIP32.kt:25)
at org.kethereum.bip39.MnemonicKt.toKey-aHn7skU(Mnemonic.kt:78)
at org.kethereum.bip39.MnemonicKt.toKey-aHn7skU$default(Mnemonic.kt:77)
at com.example.rawkotlinproject.MainKt.main(Main.kt:30)
at com.example.rawkotlinproject.MainKt.main(Main.kt)
You need to share a full project (ideally reduced to the error) so there is even a chance to say what is going on there
it's working now
Initially I wanted to find my way around the library, as there is no documentation at this time, so I created a temporary IntelliJ project for the test; the error came from there. When I re-ran the project in Android Studio, it worked perfectly.
When I get my footing with the library, would it be OK if I documented it?
I'll close this now, thank you
sure thing - happy about every documentation attempt!
|
gharchive/issue
| 2023-02-24T15:55:37 |
2025-04-01T04:34:46.666558
|
{
"authors": [
"geofferyj",
"ligi"
],
"repo": "komputing/KEthereum",
"url": "https://github.com/komputing/KEthereum/issues/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
261829315
|
add better documentation
expound more in the readme
add better code comments (check it out at https://godoc.org/github.com/komuW/meli)
fixed by https://github.com/komuw/meli/pull/99
|
gharchive/issue
| 2017-09-30T08:40:03 |
2025-04-01T04:34:46.668230
|
{
"authors": [
"komuw"
],
"repo": "komuw/meli",
"url": "https://github.com/komuw/meli/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1998174969
|
This is for
https://github.com/konbraphat51/AnimatedWordCloud/issues/18
Referring: https://github.com/konbraphat51/AnimatedWordCloud/issues/18
|
gharchive/issue
| 2023-11-17T02:55:37 |
2025-04-01T04:34:46.670973
|
{
"authors": [
"konbraphat51"
],
"repo": "konbraphat51/Timelapse_Text",
"url": "https://github.com/konbraphat51/Timelapse_Text/issues/1",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
65748956
|
Empty ApiOperation.position parameter breaks generated page for static UI
File html.mustache contains the following line to generate tabs:
$("#tabs{{index}}{{apiIndex}}{{opIndex}}").tabs();
If ApiOperation.position is not defined for each REST API method, then it equals 0 for each method. As you can see, it will generate something like:
$("#tabs00").tabs();
$("#tabs00").tabs();
$("#tabs00").tabs();
and all div elements for tabs will be like this:
<div id="tabs00">
...
</div>
<div id="tabs00">
...
</div>
<div id="tabs00">
...
</div>
As a result, only the first <div id="tabs00"> will be processed as a tab ($("#tabs00").tabs();).
Thank you!
|
gharchive/issue
| 2015-04-01T18:26:52 |
2025-04-01T04:34:46.673157
|
{
"authors": [
"vbauer"
],
"repo": "kongchen/swagger-maven-plugin",
"url": "https://github.com/kongchen/swagger-maven-plugin/issues/128",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
473151449
|
[FEATURE] Have the ability to disable the init container (which requires privileged permissions)
I would like to have the ability to disable the init container via environment variable or annotation, since my cluster security settings do not allow privileged containers.
I will make sure that /proc/sys/net/ipv4/ip_forward = 1 myself.
Thanks in advance.
Yes I have the same issue. I think also it would be nice to see a PodSecurityPolicy that allows you to easily roll out the load balancer when you have a restrictive cluster.
|
gharchive/issue
| 2019-07-26T02:49:41 |
2025-04-01T04:34:46.689393
|
{
"authors": [
"HuxyUK",
"digger18"
],
"repo": "kontena/akrobateo",
"url": "https://github.com/kontena/akrobateo/issues/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
132700526
|
distribute cli via installer (instead of gem)
Just like vagrant... https://github.com/mitchellh/vagrant-installers
Now that Windows and OSX is going to have better Docker support (https://blog.docker.com/2016/03/docker-for-mac-windows-beta/) maybe we could package cli as a docker image and just install wrapper script to host?
There is work ongoing to create installers with https://github.com/chef/omnibus
see #1019
OSX installer now available
Omnibus installer is available for OSX, let's use new issues for tracking other missing platforms.
|
gharchive/issue
| 2016-02-10T13:31:26 |
2025-04-01T04:34:46.691981
|
{
"authors": [
"SpComb",
"hans-d",
"jakolehm",
"jnummelin",
"kke"
],
"repo": "kontena/kontena",
"url": "https://github.com/kontena/kontena/issues/511",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
235830816
|
Update test kommando to fix missing bash failures
Fixes #2468 by bumping the test/Gemfile kommando dep to 0.1.1, which falls back to /bin/sh if bash is missing. Updating the Gemfile.lock also bumped the kontena-cli deps.
Also fixes #1903 by using docker-compose rm -v during teardown
The service exec specs now pass with Kommando 0.1.2:
Fetching kommando 0.1.2 (was 0.1.1)
Installing kommando 0.1.2 (was 0.1.1)
service exec
runs a command inside a service
returns an error if command not found
runs a command inside a service on a given instances
runs a command inside a service with tty
runs a command with piped stdin
runs a command on every instance with --all
Finished in 1 minute 9.2 seconds (files took 0.29609 seconds to load)
6 examples, 0 failures
ping @kke - merge this for 1.3.1 so that I can also get the e2e specs to pass for the release?
|
gharchive/pull-request
| 2017-06-14T10:16:49 |
2025-04-01T04:34:46.694482
|
{
"authors": [
"SpComb"
],
"repo": "kontena/kontena",
"url": "https://github.com/kontena/kontena/pull/2475",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
254008213
|
Send & store logs in batches
It's more efficient to send logs from agent to master in batches.
@SpComb PTAL
|
gharchive/pull-request
| 2017-08-30T13:56:09 |
2025-04-01T04:34:46.695341
|
{
"authors": [
"jakolehm"
],
"repo": "kontena/kontena",
"url": "https://github.com/kontena/kontena/pull/2750",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
736033824
|
Consider supporting migration of databases.
There are a lot of existing databases that have been configured extensively for a particular application and enterprise.
Think about a way the database configuration can be migrated to other database engines automatically.
We should try to do it in a general way instead of hardcoding support each pair of database engines.
This is being handled as part of Tackle DiVA
|
gharchive/issue
| 2020-11-04T11:21:08 |
2025-04-01T04:34:46.699475
|
{
"authors": [
"HarikrishnanBalagopal",
"ashokponkumar"
],
"repo": "konveyor/move2kube",
"url": "https://github.com/konveyor/move2kube/issues/159",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2038117077
|
:seedling: If seed dir exists on the filesystem, don't clone it in the dockerfile
This was the simplest way to do it I think, without requiring Makefile updates to pass along build-args and dealing with various values. If you just drop tackle2-seed into the project it won't clone it anymore.
The failing test is from ruleset changes that have broken CI, unrelated to this PR (this PR is the first step to gating that repo so we stop getting breakages)
|
gharchive/pull-request
| 2023-12-12T16:18:58 |
2025-04-01T04:34:46.700553
|
{
"authors": [
"fabianvf"
],
"repo": "konveyor/tackle2-hub",
"url": "https://github.com/konveyor/tackle2-hub/pull/576",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1484000053
|
runtime-proxy: remove unused file
Ⅰ. Describe what this PR does
remove unused file
Ⅱ. Does this pull request fix one issue?
NONE
Ⅲ. Describe how to verify it
Ⅳ. Special notes for reviews
V. Checklist
[ ] I have written necessary docs and comments
[ ] I have added necessary unit tests and integration tests
[x] All checks passed in make test
/close
|
gharchive/pull-request
| 2022-12-08T08:05:23 |
2025-04-01T04:34:46.795073
|
{
"authors": [
"fengyehong",
"honpey"
],
"repo": "koordinator-sh/koordinator",
"url": "https://github.com/koordinator-sh/koordinator/pull/870",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1118750792
|
Added reference to XenoTactic youtube channel & add documentation on alignment/centering.
I formatted the file using Intellij's "Reformat File" which might explain some of the formatting changes.
Done
done
Thanks!
|
gharchive/pull-request
| 2022-01-30T23:17:05 |
2025-04-01T04:34:46.902079
|
{
"authors": [
"Kietyo",
"soywiz"
],
"repo": "korlibs/docs.korge.org",
"url": "https://github.com/korlibs/docs.korge.org/pull/83",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2001650699
|
Kosmos needs to synchronize the label information of the namespace to the sub-cluster
What happened:
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
Kosmos version:
Others:
/assign @yuleichun-striving
|
gharchive/issue
| 2023-11-20T08:43:01 |
2025-04-01T04:34:46.920952
|
{
"authors": [
"yuleichun-striving"
],
"repo": "kosmos-io/kosmos",
"url": "https://github.com/kosmos-io/kosmos/issues/268",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
95300297
|
event being fired 3x even with correct js
I have this JS on my app
$(document).on('page:before-change', function(){
if($('.chat-text-field').val())
return confirm("Your message hasn't been sent yet. Do you still want to leave?");
});
turns out that this message is being fired 3 times. Is there something else missing?
Also, my data-confirm messages are being fired multiple times.
I'm using Ruby 2.2.2 with Rails 4.2.3 and Turbolinks 2.5.3
is it inside $(function() { ... }) ?
nope...it is just as I pasted above
is the script loaded inside <head>?
it is placed before the closing </body> tag
Sorry for the long delay...that actually worked. Why?
+1
@jlerpscher here is the answer: https://github.com/kossnocorp/jquery.turbolinks/issues/51#issuecomment-121845953
Thank you @luizkowalski, but why do we have to place it in <head>?
I asked the same question above hahahaha
Putting the script in the body would defeat the purpose of turbolinks as turbolinks reloads the body but keeps the head to save time.
I think that turbolinks (or jquery-turbolinks) cancels previous handlers etc. on page change, as the page hasn't actually reloaded according to the browser. Scripts run in the body, though, will run each time the body is loaded and will stack on top of each other, causing events to fire multiple times.
|
gharchive/issue
| 2015-07-15T22:03:08 |
2025-04-01T04:34:46.925806
|
{
"authors": [
"Subtletree",
"jlerpscher",
"luizkowalski",
"rstacruz"
],
"repo": "kossnocorp/jquery.turbolinks",
"url": "https://github.com/kossnocorp/jquery.turbolinks/issues/51",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
511161662
|
Allow BusyBar to shrink
Returning getWidth() dynamically for getPrefWidth() allows
BusyBar to grow but not shrink when the parent layout resizes.
segmentWidth should be a sensible prefWidth for this widget.
This is to address #316
Lemme rebase properly and try again.
|
gharchive/pull-request
| 2019-10-23T08:27:59 |
2025-04-01T04:34:46.940117
|
{
"authors": [
"azurvii"
],
"repo": "kotcrab/vis-ui",
"url": "https://github.com/kotcrab/vis-ui/pull/317",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1706371241
|
Data discrepancy found!
invalid year 3003 for Lost in Translation. Should be 2003 :-)
Thanks for the info. The release year has been fixed with release 1.0.0.
|
gharchive/issue
| 2023-05-11T18:52:35 |
2025-04-01T04:34:47.529577
|
{
"authors": [
"kovacing"
],
"repo": "kovacing/flask-sherlock",
"url": "https://github.com/kovacing/flask-sherlock/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
451208210
|
Reexport ShortByteString
https://hackage.haskell.org/package/bytestring-0.10.8.2/docs/Data-ByteString-Short.html
Nice! This is a very useful data type. I think that to make this reexport more complete, we also need to add additional functions where possible:
One instance
Conversion functions
readFile/writeFile
Maybe something else...
|
gharchive/issue
| 2019-06-02T16:15:39 |
2025-04-01T04:34:47.642951
|
{
"authors": [
"chshersh",
"vrom911"
],
"repo": "kowainik/relude",
"url": "https://github.com/kowainik/relude/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
638242646
|
Create poetry environment when starting a new project
When creating new project, there are only three options to create new environment:
Virtualenv
Pipenv
Conda
Is it possible to add Poetry to this list, or would it require an additional API on the PyCharm side?
Temporary workaround
Create project with existing python interpreter and then create poetry environment
Hi @MrMrRobat
The feature can not be implemented by the restrictions of PyCharm.
https://github.com/koxudaxi/poetry-pycharm-plugin#feature-restrictions
Create a new environment when creating a new project.
He is going to start to add APIs (a.k.a extension point) into PyCharm 2020.2 EAP.
The detail is in this issue.
https://github.com/koxudaxi/poetry-pycharm-plugin/issues/58#issuecomment-643668766
Oh, sorry, somehow I missed it.
I’m glad to see such a great plugin finally working on PyCharm and PyCharm devs are willing to provide full integration for it. Great work, @koxudaxi, thank you!
@MrMrRobat
I have released a new version as 0.5.0 that supports this feature.
This version requires PyCharm 2020.2.2 or later.
Related PR
https://github.com/koxudaxi/poetry-pycharm-plugin/pull/104
|
gharchive/issue
| 2020-06-13T22:19:40 |
2025-04-01T04:34:47.648262
|
{
"authors": [
"MrMrRobat",
"koxudaxi"
],
"repo": "koxudaxi/poetry-pycharm-plugin",
"url": "https://github.com/koxudaxi/poetry-pycharm-plugin/issues/63",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
692656645
|
set up operator monitoring
People we don't know inquire about being operators, sometimes they seem trustworthy. We should be able to evaluate their work, then we could add them and remove them if they are not appropriate.
Log or metric when operator accepts call, including number
this is the MVP
then we could check the operator log, see how they describe calls or ask why they haven't logged
Record operator calls
stretch goal
notify both sides or at least the caller
MixMonitor to record?
use the b argument to followme to gosub to a context that starts MixMonitor?
use the b argument to followme to gosub to a context that logs operator?
|
gharchive/issue
| 2020-09-04T03:25:13 |
2025-04-01T04:34:47.673403
|
{
"authors": [
"kra"
],
"repo": "kra/futel-installation",
"url": "https://github.com/kra/futel-installation/issues/393",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2008876208
|
MSG100 delay sensors unavailable in V4.4.0 (also in the V4.4.x alphas; had to revert to the main stream release)
Issues without a description (using the header is not good enough) will be closed.
The open and close delay sensors are not supported, and the default of 35 seconds for opening is now used.
Device MSG100, Firmware 3.2.7
From changelog
"added support for newer garageDoor (msgXXX) firmwares with open/close delay x channel (fixing also #301)"
Issues without debug logging will be closed.
irrelevant
Issues without configuration will be closed
Standard config I.e no change to my config apart from meross lan update
HA core 2023.11.3, OS 11.1, Supervisor 2023.11.3, frontend 20231030.2
Version of the custom_component
V4.3.0, but the issue is in V4.4.0 (it only started in alpha V4.4.03? (latest alpha))
Reverting back to V4.3.0 solves the issue.
Hello @Duke-Box,
My fault, nevertheless!
It looks like your fw 3.2.7 falls into an edge case not covered by the latest 4.4.0 implementation.
I knew this could happen since I had a trace log in the past with exactly this garageDoor fw version.
That device fw is particularly 'wrong' since it doesn't support interacting with the device configuration for open/close duration
Previous meross_lan version was aware of this and built a total mess (behind the scenes) in order to 'emulate' the device open/close timeouts in HA with various issues in between (but finally resolved through various patches)
Now, for this release I've totally removed all of this 'emulation' since I thought that fw should have been superseded by now with something newer (and less faulty), but either Meross didn't update the fw for that particular device hw version or you don't have the option to upgrade it, so we're left with the issue.
I'll restore the 'emulation' behavior in the next update!
A possible coincidence but about a week ago, Alexa stopped communicating with the MSG100 and a powercycle of the device brought it back
I wonder if the firmware was trying to update?
Anyway glad you know about it and I'll wait for the update.
Thanks
Mike
@krahabb
Hi
I notice that this 'emulation' behaviour has not been implemented in the latest release v4.4.1. Is this correct?
I still have to use v4.3.0 to make my MSG100 work properly with my automations.
Can you give me an expected timeline for this fix?
Many thanks
Mike
Yeah, I've quickly restored the behavior but the dev branch has now gone too far away and I'm far from being able to quickly release the fix in the stable releases :(
Since I'm releasing a small patch in the meantime I'll see if I can merge this fix into the main branch together.
I'll let you know
The pre-release channel has (or should have) the fix. You have to enable HACS pre-releases download in order to be prompted for the update (by default HACS only notifies and installs stable releases)
Thanks
You mean beta releases?
Yeah..kind of..I always struggle to identify where HACS asks for this and it is in the 'redownload' page for the repository in HACS: there you can instruct HACS to show beta versions (which are the github pre-releases despite the fact that I usually mark them as 'alpha' ;)
Many Thanks - That's fixed it for me.
|
gharchive/issue
| 2023-11-23T23:01:27 |
2025-04-01T04:34:47.686600
|
{
"authors": [
"Duke-Box",
"krahabb"
],
"repo": "krahabb/meross_lan",
"url": "https://github.com/krahabb/meross_lan/issues/338",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
200197532
|
Mojo::URL->new decodes ampersand in fragment which is not encoded in to_string
Mojolicious version: 777f24f14 (current master at time of writing)
Perl version: 5.20.2
Operating system: Debian Linux Jessie (8)
Steps to reproduce the behavior
$ perl -MMojo::URL -E 'say Mojo::URL->new("/?a=b&c=A%26B")'
/?a=b&c=A%26B
$ perl -MMojo::URL -E 'say Mojo::URL->new("/#a=b&c=A%26B")'
/#a=b&c=A&B
Expected behavior
Encoded ampersand %26 is always preserved and URL is stringified to the original string.
Actual behavior
For the fragment part the percent-encoding is decoded and the contained ampersand & is printed. This makes the resulting URL being unequal to the input URL.
Please quote the section of the spec requiring this behavior.
Closing this issue until we have more information.
|
gharchive/issue
| 2017-01-11T20:22:37 |
2025-04-01T04:34:47.689678
|
{
"authors": [
"dboehmer",
"kraih"
],
"repo": "kraih/mojo",
"url": "https://github.com/kraih/mojo/issues/1034",
"license": "artistic-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1293457235
|
validate schema definitions and reuse schema
About
With the current implementation, krakend will throw a runtime error at request time and return a 500 if the JSON schema is invalid. I think it would be helpful if krakend would throw an error on startup instead (but ignoring the configuration seems more in line with how configuration errors are usually handled in krakend, so that's what I implemented here).
Also, the current implementation could be more efficient if the schema was only loaded and compiled once, rather than on every request.
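For reference, the compile-once pattern is roughly the following (sketched here in Python with the jsonschema package purely as an illustration of the idea; the actual component is written in Go):
# Validate and compile the schema once at startup, then reuse it per request.
from jsonschema import Draft7Validator, SchemaError

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"],
}

try:
    Draft7Validator.check_schema(schema)  # fail fast on an invalid schema
except SchemaError as err:
    raise SystemExit(f"invalid JSON schema, ignoring component: {err}")

validator = Draft7Validator(schema)  # compiled once

def handle(body: dict) -> bool:
    # per request: only run the already-compiled validator
    return validator.is_valid(body)

print(handle({"name": "krakend"}))  # True
print(handle({}))                   # False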
thanks for the contribution!
|
gharchive/pull-request
| 2022-07-04T18:08:33 |
2025-04-01T04:34:47.691360
|
{
"authors": [
"kpacha",
"moritzploss"
],
"repo": "krakendio/krakend-jsonschema",
"url": "https://github.com/krakendio/krakend-jsonschema/pull/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1380982973
|
How can get enviroment variable in lua script
My endpoint inside endpoints in krakend.json:
{
"endpoint": "/v1/api-biotech/extraction/{biometricMethod}",
"method": "POST",
"output_encoding": "no-op",
"extra_config": {
"documentation/openapi": {
"version": "1.0"
}
},
"backend": [{
"url_pattern": "/api-biotech/extraction/{biometricMethod}",
"extra_config": {
"modifier/lua-backend": {
"sources": ["rbac.lua"],
"pre": "access_check()",
"allow_open_libs": true
}
},
"encoding": "no-op",
"sd": "static",
"method": "POST",
"host": ["http://host.docker.internal:3000"],
"disable_host_sanitize": false
}],
"input_headers": ["*"]
}
Inside my docker-compose.yml I set the environment variable rbac_global_server:
krakend_ce:
image: krakend
container_name: krakend_ce
ports:
- "8080:8080"
environment:
rbac_global_server: http://host.docker.internal:12180
command: ['run', '-c', '/etc/krakend/krakend.json']
Inside my rbac.lua script I'm trying to get the environment variable like this:
function access_check()
rbac_global_server = os.getenv("rbac_global_server")
print(rbac_global_server )
end
But when it executes I receive this error:
2022/09/21 13:31:03 [Recovery] 2022/09/21 - 13:31:03 panic recovered:
krakend_ce | runtime error: slice bounds out of range [-1:]
krakend_ce | /usr/local/go/src/runtime/panic.go:118 (0xeb2f74)
krakend_ce | /go/pkg/mod/github.com/alexeyco/binder@v0.0.0-20180729220023-2a21303f588a/error.go:140 (0x1813e16)
krakend_ce | /go/pkg/mod/github.com/alexeyco/binder@v0.0.0-20180729220023-2a21303f588a/binder.go:27 (0x1812886)
krakend_ce | /go/pkg/mod/github.com/alexeyco/binder@v0.0.0-20180729220023-2a21303f588a/error.go:201 (0x181430f)
krakend_ce | /go/pkg/mod/github.com/alexeyco/binder@v0.0.0-20180729220023-2a21303f588a/binder.go:43 (0x1812828)
krakend_ce | /go/pkg/mod/github.com/alexeyco/binder@v0.0.0-20180729220023-2a21303f588a/binder.go:26 (0x18127b8)
krakend_ce | /go/pkg/mod/github.com/krakendio/krakend-lua/v2@v2.0.1/proxy/proxy.go:74 (0x181a5d3)
krakend_ce | /go/pkg/mod/github.com/luraproject/lura/v2@v2.0.5/proxy/balancing.go:77 (0x120d210)
krakend_ce | /go/pkg/mod/github.com/luraproject/lura/v2@v2.0.5/proxy/http.go:113 (0x12124c1)
krakend_ce | /go/pkg/mod/github.com/luraproject/lura/v2@v2.0.5/router/gin/endpoint.go:42 (0x19b0b93)
krakend_ce | /go/pkg/mod/github.com/gin-gonic/gin@v1.7.7/context.go:168 (0x19a64c1)
krakend_ce | /go/pkg/mod/github.com/gin-gonic/gin@v1.7.7/recovery.go:99 (0x19a64ac)
krakend_ce | /go/pkg/mod/github.com/gin-gonic/gin@v1.7.7/context.go:168 (0x19a5726)
krakend_ce | /go/pkg/mod/github.com/gin-gonic/gin@v1.7.7/logger.go:241 (0x19a5709)
krakend_ce | /go/pkg/mod/github.com/gin-gonic/gin@v1.7.7/context.go:168 (0x19a4a90)
krakend_ce | /go/pkg/mod/github.com/gin-gonic/gin@v1.7.7/gin.go:555 (0x19a46f8)
krakend_ce | /go/pkg/mod/github.com/gin-gonic/gin@v1.7.7/gin.go:511 (0x19a4231)
krakend_ce | /go/pkg/mod/github.com/rs/cors@v1.6.0/cors.go:207 (0x210a437)
krakend_ce | /usr/local/go/src/net/http/server.go:2047 (0x11ad90e)
krakend_ce | /go/pkg/mod/github.com/rs/cors@v1.6.0/cors.go:207 (0x210a437)
krakend_ce | /usr/local/go/src/net/http/server.go:2047 (0x11ad90e)
krakend_ce | /usr/local/go/src/net/http/server.go:2879 (0x11b13ba)
krakend_ce | /usr/local/go/src/net/http/server.go:1930 (0x11ac9e7)
krakend_ce | /usr/local/go/src/runtime/asm_amd64.s:1581 (0xee91e0
In KrakenD, the Lua VM runs in a vacuum and has no access to the system itself, so the os and io libraries are unavailable.
But you can achieve your goal if you use flexible configuration and then pass the value of the envar as an argument to your function:
{
"endpoint": "/lua",
"backend":[
{
"url_pattern": "/__debug/lua",
"host": ["http://127.0.0.1:8080"]
}
],
"extra_config": {
"modifier/lua-proxy": {
"pre": "your_function('{{ env "YOUR_ENVAR" }}')"
}
}
},
Cheers!
|
gharchive/issue
| 2022-09-21T13:58:34 |
2025-04-01T04:34:47.706079
|
{
"authors": [
"kpacha",
"luanromeu"
],
"repo": "krakendio/krakend-lua",
"url": "https://github.com/krakendio/krakend-lua/issues/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
111942945
|
Option to set preview frame image and avoid loading image until clicked on?
I would like to use something like this on a page with many GIFs, so I need a way to have my GIFs downloaded only once they are clicked on, instead of all being loaded on page load. If I have 100 MB of GIFs on the page, the point of having a play button for me is to allow the page to still be fast, but currently it downloads all images on page load to get the first-frame image for the preview.
Any chance of modifying to make it optional to provide a URL of preview image and have it only download image on demand when play is clicked?
Well, Gifffer is based on the onload event of the images. Without that we can't create the preview. I guess you should go with the old-fashioned way of:
generating a preview of the gif on the server and loading that instead
loading a static image placeholder and writing some JavaScript that progressively loads all the gifs
|
gharchive/issue
| 2015-10-17T04:12:29 |
2025-04-01T04:34:47.717354
|
{
"authors": [
"jasondavis",
"krasimir"
],
"repo": "krasimir/gifffer",
"url": "https://github.com/krasimir/gifffer/issues/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1447080641
|
Authentication pages + LayoutRoute
Requirements:
[ ] Update sign up & login pages according to the new design
[ ] Create LayoutRoute component to adjust redirects inside the app
This was under discussion for over a month. Decided to remove it because of irrelevance.
|
gharchive/issue
| 2022-11-13T20:56:16 |
2025-04-01T04:34:47.718695
|
{
"authors": [
"krau5"
],
"repo": "krau5/pomo",
"url": "https://github.com/krau5/pomo/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2363246503
|
Initialize FirebaseApp if it is not already initilized
Bug:
Trying to re-initialize FirebaseApp throws an exception
Solution:
Check if it is already initialized
Hey, thanks for another contribution! I made another PR that would make it more idiomatic.
|
gharchive/pull-request
| 2024-06-19T23:33:36 |
2025-04-01T04:34:47.754518
|
{
"authors": [
"ArnauKokoro",
"krizzu"
],
"repo": "krizzu/firebase-auth-provider",
"url": "https://github.com/krizzu/firebase-auth-provider/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1589178439
|
Improve/document AddressMapper API
An action from #99 is to improve and document the AddressMapper API.
I think it warrants a discussion.
Should be closed by #254
Closed by work that ended with #271.
|
gharchive/issue
| 2023-02-17T11:01:33 |
2025-04-01T04:34:47.757249
|
{
"authors": [
"k-wall"
],
"repo": "kroxylicious/kroxylicious",
"url": "https://github.com/kroxylicious/kroxylicious/issues/139",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1750881297
|
YAML Test Suite tests
Runs tests using YAML Test Suite data
TODO
[x] use the file-based test data, not the YAML-based data (which requires parsing and is not well documented)
[x] IntelliJ doesn't seem to support EditorConfig very well any more. I propose committing .idea/codeStyles/codeStyleConfig.xml to the repo in a separate PR. #64
[x] split off some of the code tidying/refactoring into a separate PR
#58
#60
#61
#65
#66
#91
@aSemy this PR looks like it contains several kinds of changes that are not really related. It would be much easier for me to review and check for correctness if we split it into e. g.:
kotlinizing code
fixing code
running tests from the YAML test suite
Agreed, I'll split them out.
Thanks! 🙇
|
gharchive/pull-request
| 2023-06-10T08:57:38 |
2025-04-01T04:34:47.763495
|
{
"authors": [
"aSemy",
"krzema12"
],
"repo": "krzema12/snakeyaml-engine-kmp",
"url": "https://github.com/krzema12/snakeyaml-engine-kmp/pull/59",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2386211220
|
Update README.md for using custom service names
Description:
Small README.md correction for the service names.
@krzko 👋
any chance you want to merge this one or should I close it? thanks!
|
gharchive/pull-request
| 2024-07-02T13:07:50 |
2025-04-01T04:34:47.765087
|
{
"authors": [
"dsotirakis"
],
"repo": "krzko/opentelemetry-collector-contrib",
"url": "https://github.com/krzko/opentelemetry-collector-contrib/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
414394079
|
Metamorphic pathfinder
Before I describe this in detail, which will take some time, are you guys crazy enough (you'd have to be!) to attempt writing a new path finder from scratch if the concept is good enough?
Very brief summary
Note: I'm describing this just as roads, but it would also be ped paths, ferry lines, etc.
Summarise the detailed information (lanes, restrictions, etc) at the segment level
Summarise the line of segments between two junctions in to a single "road"
At each junction, summarise the lane connections in to routes between each road at junction
Result is a 'zoomed out' view of the transport network - basically a bunch of junctions with 'roads' connecting them.
Pathfinder chooses route from A to B at the macroscopic level
Once a route is chosen, the detailed path is worked out for that route
There's a huge bunch of stuff that the current pathfinder wades through at the microscopic (lane) level that it just doesn't need to do. The metamorphic pathfinder finds a route at the macroscopic level, then fills in the details for that route at the microscopic level.
The metamorphic pathfinder also has the advantage that most of the mods' features can be "always on" because they won't impact performance any more.
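To make the idea concrete, here's a minimal sketch of the two-level routing in plain Python (nothing to do with the game's API; the graph shape, names and costs are made up purely for illustration):
import heapq

# Macroscopic view: junction -> {neighbour junction: (road_id, cost)}
macro_graph = {
    "A": {"B": ("road_1", 4.0), "C": ("road_2", 2.0)},
    "B": {"D": ("road_3", 5.0)},
    "C": {"D": ("road_4", 8.0)},
    "D": {},
}

# Microscopic detail, filled in only for the chosen route: road_id -> lane-level path
road_detail = {
    "road_1": ["seg1/lane0", "seg2/lane0"],
    "road_2": ["seg3/lane1"],
    "road_3": ["seg4/lane0", "seg5/lane1"],
    "road_4": ["seg6/lane0"],
}

def macro_route(start, goal):
    # Dijkstra over the junction graph only
    queue = [(0.0, start, [])]
    seen = set()
    while queue:
        cost, node, roads = heapq.heappop(queue)
        if node in seen:
            continue
        seen.add(node)
        if node == goal:
            return roads
        for nxt, (road, edge_cost) in macro_graph[node].items():
            heapq.heappush(queue, (cost + edge_cost, nxt, roads + [road]))
    return None

def full_path(start, goal):
    # Expand the chosen macro route into lane-level detail afterwards
    roads = macro_route(start, goal)
    return None if roads is None else [lane for road in roads for lane in road_detail[road]]

print(full_path("A", "D"))  # lanes of road_1 then road_3, the cheaper macro route
The hard part, of course, is making the macro edge costs honest about lane changes and congestion rather than a naive static number.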
This is a good idea, I had a similar one as well. But still, you probably don't need all vehicles to have the paths calculated in this way, only a portion.
Of course there are some catches when coding this; for example, there can be a way from segment A to B and from B to C, but no way from A to C because the vehicle cannot change lanes.
Before creating a new pathfinder we should see how the emergency vehicles behavior and all this will affect it.
Yes, I considered the 'lack of lane change' issue that will arise from the approach taken in the OP. Thankfully, you've provided a very concise clarification of the problem. I don't know, yet, how to fully handle that issue. And there are similar issues, for example dealing with congestion. How do we encode these things into the macroscopic summary of the transport network?
Ultimately, it's a math problem. It's almost like we'd need some way to encode information pertaining to the upcoming junction(s) in to the routing at the previous junction. At lest with the lane change issue, it's predominantly a one-time calculation when networks (roads, etc) are updated. But how to deal with the congestion issue which changes due to a multitude of reasons (disasters, football matches, concerts, 'rush hour' if using RealTime mod, etc).
Those two issues, lane routing (taking in to account lane changes) and congestion, are the things that cause my brain to melt then pour out of my ears and scamper across the floor in a bid to escape to the nearest mountains in the hopes of peacefully herding goats for the rest of its life so as not to be traumatised by such ghoulish maths.
Oh, as for emergency vehicles, they route normally but they can do fancy stuff on the way. Although not as fancy as these garbage trucks: https://github.com/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition/issues/64#issuecomment-468088631
As ambitious of a project this is, I'd be in for it if you and @krzychu124 are. This is probably super duper long-term, though, and won't probably be gotten to until like a year from now due to all the changes we're making in terms of the UI and other logic (ex. emergency vehicle)
I think your suggested method is called "Contraction hierarchies" (https://en.wikipedia.org/wiki/Contraction_hierarchies). I have seen an implementation of it in an OSM routing framework called "GraphHopper" (https://github.com/graphhopper/graphhopper).
It requires heavy precomputations and the graphhopper implementation is quite inflexible, e.g. no dynamic costs are supported. However I have not looked further into it.
Is the current pathfinder GPU accelerated? Could this one be?
I think that would give a more realistic approach if done correctly. I'm not sure but I think there is an issue or a Steam discussion on the topic that the AI always chooses the "perfect" route or shortest path and not the route you would normally choose in real life. You would prefer highways and ring roads just because they have fewer "obstacles", they are more comfortable to drive, have fewer junctions and traffic lights, you rather follow the street signs.
This concept could be projected to a multilayered graph.
Outer Zone { Street { Segment { Lane } } }
Districts { District { Street { Segment { Lane } } } }
I have not mentioned junctions because I see them just as segments that also belong to streets and have multiple lanes with a source and a sink and prohibited lane change.
Assuming we need to reach a certain building, it would be sufficient to know the nearest lane to the building, numbers are not necessary. From there you can then look for a parking lot.
So, for instance, this could be our target:
Residental District -> Beach Avenue -> Segment 15 -> Lane 2
Concerning the pathfinding, I see two scenarios. The obvious one is to predetermine the complete or partial macroscopic path as a subgraph for every car. That has the advantage that lane changing is more realistic given that subgraph and cars can gauge the lane changes considering traffic density and where they are heading.
But there are many cars and considering #316 you could approach it from another perspective. Assuming that there are fewer junctions than cars, it would be more memory efficient to let the junctions know "their street signs", or where the cars need to go to get to the desired destination. Of course, "normal" connections between two segments should not count as junctions, where lane numbers match and are not modified via TM:PE UI.
When a car starts driving, it asks the next "real" junction upfront giving its desired destination object. Technically, it won't even ask, it just adds itself to the next segment's lane, and the segment knows the next junctions and does all the work. Getting the request, the junction then looks into its KV-Store or cache and either returns the matched target lanes or it lazily uses the pathfinder to find the appropriate lanes if there is no match. The target lanes identify the lanes before the junction to get to the required lanes on the junction itself that lead to the target. The car then stores the target lanes and eventually switch the lane on its way to the junction. Once the car has reached the junction via the lanes, the junction directs it via one of the possible junction lanes that lead to the target. After that, the car repeats the process and asks the next junction.
Junctions would have a small caches for every layer and also know their own macroscopic position.
Position: Workers District -> Busy Street
Districts:
Outer Zone -> target lane 1, 2; junction lane 1, 2, 3
Residental District -> target lane 2; junction lane 3
Streets of this District: ...
Segments of this Street: ...
It will return target lane 2 for the example above.
The next junction is inside the Residental District, so it will now skip to the street lookup.
Position: Residental District -> Town Street
Districts: ...
Streets of this District:
Beach Avenue -> target lane 1, junction lane 1, 2
Lonely Hills -> target lane 1, junction lane 3
Segments of this Street: ...
Return: target lane 1
What do you think about it thus far?
This article on pathfinding https://arxiv.org/ftp/arxiv/papers/1810/1810.01776.pdf
mentions some ways to optimize searches by defining key points and some dividing lines between the regions of the map. And there's a theorem that the solution would cross the divider between the two closest points: from A to the dividing line, and from B to the dividing line.
Not read the pdf yet, but I'd pondered something like that should we get a node/segment grid up and running. Draw straight line between source and target, then half way draw perpendicular line across it. Look for routes crossing the perpendicular line and do some sort of quick path find (as pondered in earlier comments above) to find if any of those routes can reach target, then check if source can reach those routes. Using the node/segment grid we could create a rectangle to define the width of the area to examine, and if no routes are found just move that rectangle left or right (from perspective of source facing target) and try again, possibly caching that "going form here to there needs to take a detour to the left" if possible.
|
gharchive/issue
| 2019-02-26T02:02:11 |
2025-04-01T04:34:47.779130
|
{
"authors": [
"FireController1847",
"RenaKunisaki",
"Strdate",
"VictorPhilipp",
"aubergine10",
"kvakvs",
"matfax"
],
"repo": "krzychu124/Cities-Skylines-Traffic-Manager-President-Edition",
"url": "https://github.com/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition/issues/114",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1077368159
|
Fix group duplication while linking
Depending on the project setup, the children of the project main group can have either path property, or name property, or both together.
For example, the Pods projects generated by Cocoapods would not add path property to the groups, but only names.
Previously, sourcery was checking only path property to find out if the group exists or not. This PR adds check with the name of groups, but only if there is no existing group with path.
1 Error
:no_entry_sign:
Any changes to library code need a summary in the Changelog.
Generated by :no_entry_sign: Danger
|
gharchive/pull-request
| 2021-12-11T02:56:28 |
2025-04-01T04:34:47.788437
|
{
"authors": [
"SourceryBot",
"rubensamsonyan"
],
"repo": "krzysztofzablocki/Sourcery",
"url": "https://github.com/krzysztofzablocki/Sourcery/pull/1017",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1463250534
|
quick install does not work correctly with v0.10.0-rc0
/kind bug
What steps did you take and what happened:
Quick install is broken with the new version because of the way the version is parsed.
What did you expect to happen:
The correct version has to be parsed and installed.
Environment:
KServe Version: 0.10.0-rc0
/assign andyi2it
|
gharchive/issue
| 2022-11-24T12:10:36 |
2025-04-01T04:34:47.793179
|
{
"authors": [
"andyi2it"
],
"repo": "kserve/kserve",
"url": "https://github.com/kserve/kserve/issues/2561",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1382976348
|
feat: TorchServe support
Motivation
The Triton runtime can be used with model-mesh to serve PyTorch torchscript models, but it does not support arbitrary PyTorch models i.e. eager mode. KServe "classic" has integration with TorchServe but it would be good to have integration with model-mesh too so that these kinds of models can be used in distributed multi-model serving contexts.
Modifications
Add adapter logic to implement the modelmesh management SPI using the torchserve gRPC management API
Build and include new adapter binary in the docker image
Add mock server and basic unit tests
Implementation notes:
Model size (mem usage) is not returned from the LoadModel RPC but rather done separately in the ModelSize rpc (so that the model is available for use slightly sooner)
TorchServe's DescribeModel RPC is used to determine the model's memory usage. If that isn't successful it falls back to using a multiple of the model size on disk (similar to other runtimes)
The adapter writes the config file for TorchServe to consume
TorchServe does not yet support the KServe V2 gRPC prediction API (only REST) which means that can't currently be used with model-mesh. The native TorchServe gRPC inference interface can be used instead for the time being.
A smaller PR to the main modelmesh-serving controller repo will be opened to enable use of TorchServe, which will include the ServingRuntime specification.
Result
TorchServe can be used seamlessly with ModelMesh Serving to serve PyTorch models, including eager mode.
Resolves https://github.com/kserve/modelmesh-runtime-adapter/issues/4
Contributes to https://github.com/kserve/modelmesh-serving/issues/63
Looks like the new adapter is not part of unit tests, https://github.com/kserve/modelmesh-runtime-adapter/blob/main/scripts/run_tests.sh. Wonder if it is intentional.
@chinhuang007 that links to the main branch - you can see that it's added as part of this PR here.
/lgtm
|
gharchive/pull-request
| 2022-09-22T20:26:55 |
2025-04-01T04:34:47.798926
|
{
"authors": [
"chinhuang007",
"njhill"
],
"repo": "kserve/modelmesh-runtime-adapter",
"url": "https://github.com/kserve/modelmesh-runtime-adapter/pull/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1109431411
|
Is it memcached compatible?
I'm testing this gem with memcached with this simple script
#!/usr/bin/env ruby
require 'dalli-elasticache'
endpoint = "localhost:11211"
elasticache = Dalli::ElastiCache.new(endpoint)
#puts elasticache.public_methods
elasticache.set('test_key', 'test_value')
puts elasticache.get('test_key')
I got this...
vagrant@ror:~$ ./test.rb
Traceback (most recent call last):
./test.rb:11:in `<main>': undefined method `set' for #<Dalli::ElastiCache:0x00005556b58a7990> (NoMethodError)
Did you mean? send
Am I doing something wrong?
I'm using those versions:
dalli (3.2.0)
dalli-elasticache (0.2.0)
Thanks.
@icalvete In the situation you describe dalli-elasticache is not required or advised. This gem is used to resolve the individual node addresses for an ElastiCache memcached cluster, so it can be used by dalli (or in theory anything else that needs those addresses). It is not a proxy for the Dalli::Client class.
For the example you give above, you should just use Dalli:
#!/usr/bin/env ruby
require 'dalli'
endpoint = "localhost:11211"
client = Dalli::Client.new(endpoint)
client.set('test_key', 'test_value')
puts client.get('test_key')
@petergoldstein I want my code to be compatible with both memcached and ElastiCache.
That's why I'm trying this. The decision to use ElastiCache has not been made yet.
@icalvete If you want to support both local (or just unclustered remote) and Elasticache, you will need to control that using some sort of variable. You will not be able to do this transparently because you're working with two different types of servers (cluster configuration endpoints vs memcached servers).
For example:
#!/usr/bin/env ruby
require 'dalli'
servers =
if ENV['USE_ELASTICACHE']
require 'dalli-elasticache'
config_endpoint = "example.com:1234"
Dalli::ElastiCache.new(config_endpoint).servers
else
["localhost:11211"]
end
client = Dalli::Client.new(servers)
client.set('test_key', 'test_value')
puts client.get('test_key')
|
gharchive/issue
| 2022-01-20T15:19:57 |
2025-04-01T04:34:47.813541
|
{
"authors": [
"icalvete",
"petergoldstein"
],
"repo": "ktheory/dalli-elasticache",
"url": "https://github.com/ktheory/dalli-elasticache/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1926815822
|
Feature Request: Advanced Warning Time
Calculate AWT following methodology in previous version of validation code.
Can you outline the methodology here?
Advanced Warning Time has been added to the code. AWT is calculated if models provide forecasts for:
All Clear
Threshold Crossing Time
Start Time
Peak Intensity
Peak Intensity Max
AWT is calculated for each of the forecast types, so if a model only produces one type of forecast, the AWT will always be reported with respect to:
AWT to Observed Threshold Crossing Time
AWT to Observed Start Time (in case somehow a modeler applies different thresholds to calculate start time and threshold crossing time?)
If a peak flux forecast, then Peak Intensity Time or Peak Intensity Max Time
AWT is calculated by selecting the first in a continuous series of forecasts leading up to an SEP event that predict an SEP event will occur.
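For illustration, the "first of a continuous series" rule works roughly like this (a rough sketch; the field names and times are made up and don't reflect sphinxval's actual data structures):
from datetime import datetime

forecasts = [  # (forecast issue time, does it predict an SEP event?)
    (datetime(2023, 10, 1, 0, 0), False),
    (datetime(2023, 10, 1, 6, 0), True),   # continuous "yes" series starts here
    (datetime(2023, 10, 1, 12, 0), True),
    (datetime(2023, 10, 1, 18, 0), True),
]
observed_threshold_crossing = datetime(2023, 10, 2, 3, 0)

def advanced_warning_time(forecasts, observed_time):
    # Hours between the observed time and the issue time of the first
    # forecast in the last unbroken run of positive forecasts before it.
    first_positive = None
    for issue_time, predicts_event in sorted(forecasts):
        if issue_time >= observed_time:
            break
        if predicts_event:
            if first_positive is None:
                first_positive = issue_time
        else:
            first_positive = None  # a "no" breaks the continuous series
    if first_positive is None:
        return None
    return (observed_time - first_positive).total_seconds() / 3600.0

print(advanced_warning_time(forecasts, observed_threshold_crossing))  # 21.0 hours
The same calculation is then repeated against each of the observed reference times (threshold crossing, start time, peak intensity time, peak intensity max time) that the model forecasts.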
|
gharchive/issue
| 2023-10-04T19:07:34 |
2025-04-01T04:34:47.816404
|
{
"authors": [
"ktindiana",
"rickyegeland"
],
"repo": "ktindiana/sphinxval",
"url": "https://github.com/ktindiana/sphinxval/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1196620651
|
Testing Dispose Directly With Usage Of "using" IDisposable
I'm wondering whether, for tests that verify Dispose on types that do implement IDisposable, we'd want to avoid the using syntax, since the compiler generates a good ol' newable wrapped in a try-finally and issues the Dispose call at the end of the block as well.
It likely won't hurt anything, but there are potential side effects from the double call.
I thought of that, but ultimately I figured it was better to continue doing "best practices", instead of throwing in a mix of when to do it "right" and when to do it "wrong".
|
gharchive/issue
| 2022-04-07T22:13:06 |
2025-04-01T04:34:47.817997
|
{
"authors": [
"Frueber",
"ktmitton"
],
"repo": "ktmitton/Mittons.Fixtures",
"url": "https://github.com/ktmitton/Mittons.Fixtures/issues/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
416697731
|
Error on npm run dev
import VueExtendLayouts from 'vue-extend-layout'; // in APP.vue
npm run dev
==>
ERROR in ./node_modules/vue-extend-layout/vue-extend-layout.vue?vue&type=script&lang=js& (./node_modules/vue-loader/lib??vue-loader-options!./node_modules/vue-extend-layout/vue-extend-layout.vue?vue&type=script&lang=js&) 50:19
Module parse failed: Unexpected token (50:19)
You may need an appropriate loader to handle this file type.
| if (!this.layoutName) return
| const ln = this.prefix + this.layoutName
return () => import(/* webpackChunkName: "layout-[request]" */ `@/${this.path}/${ln}.vue`)
| }
| }
@ ./node_modules/vue-extend-layout/vue-extend-layout.vue?vue&type=script&lang=js& 1:0-116 1:132-135 1:137-250 1:137-250
@ ./node_modules/vue-extend-layout/vue-extend-layout.vue
@ ./node_modules/babel-loader/lib!./node_modules/vue-loader/lib??vue-loader-options!./src/js/App.vue?vue&type=script&lang=js&
@ ./src/js/App.vue?vue&type=script&lang=js&
@ ./src/js/App.vue
@ ./src/js/main.js
@ multi (webpack)-dev-server/client?http://0.0.0.0:3000 (webpack)/hot/dev-server.js ./src/js/main.js
update "vue": "^2.5.16" ==> "vue": "^2.5.17"
|
gharchive/issue
| 2019-03-04T09:05:11 |
2025-04-01T04:34:47.844068
|
{
"authors": [
"wwweojin"
],
"repo": "ktquez/vue-extend-layout",
"url": "https://github.com/ktquez/vue-extend-layout/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1848045491
|
allow configuring controller's concurrency
Motivation
It's often desirable to control the maximum number of reconciliations that the controller can run at a given moment, to see more predictable behavior or to better utilize the host machine's resources.
Fixes #1248
Solution
Add concurrency to controller::Config, which defines a limit on the number of concurrent reconciliations that the controller can execute at any given moment. It defaults to 0, which lets the controller run with unbounded concurrency.
If users set concurrency: 1 (controller runtime's default btw) in a highly parallel controller, you might end up in a bad failure mode: throttling the controller, but continuing to fill the scheduler's pending set (afaict). Maybe we need a configurable max limit on the number of pending reconciliations and just return a new error type after this?
pending is largely the best waiting lot for that backpressure, IMO. If we propagate the backpressure beyond the scheduler then we end up with 1) stale caches (because the reflector would also be backpressured), 2) less deduplication, 3) more memory usage (for modified objects we'd store both the old version in the reflector cache and the new version in the queue).
The memory cost of the pending queue (a few strings per object) is also trivial compared to the reflector cache itself (a full copy of each object, regardless of whether it's even queued at all).
Thanks @nightkr, that is reassuring. I'll resolve the comments related to congestion or backpressure.
|
gharchive/pull-request
| 2023-08-12T14:32:34 |
2025-04-01T04:34:47.851851
|
{
"authors": [
"aryan9600",
"clux",
"nightkr"
],
"repo": "kube-rs/kube",
"url": "https://github.com/kube-rs/kube/pull/1277",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
861442625
|
[knative] Issue when building knative 0.22.0 and kustomize namespace patch
During KFServing installation, I am trying to upgrade KNative by following https://knative.dev/docs/install/install-serving-with-yaml/#install-the-serving-component.
Since we want to patch knative-serving namespace with ASM label, I use kustomize to patch the knative resource.
After the comment from Yuan and David: https://github.com/kubeflow/gcp-blueprints/pull/212#discussion_r615369195, I realized that serving-core.yaml has definitions duplicated from serving-crd.yaml. Therefore my attempt to include both files will fail in kustomization.yaml. After the adjustment, this is the structure I have:
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./knative-0-22-0/serving-core.yaml
- ./knative-0-22-0/net-istio.yaml
namespace: knative-serving
patches:
- path: patches/namespace-patch.yaml
namespace-patch.yaml
apiVersion: v1
kind: Namespace
metadata:
name: knative-serving
labels:
istio.io/rev: asm-192-1
istio-injection: null
However, the build failed with the following message:
Error: error marshaling into JSON: json: unsupported type: map[interface {}]interface {}
I found that this issue seems similar https://github.com/mikefarah/yq/issues/519, but I am already using yq v3.4.1 from my end.
I also applied the customization of the kfserving-core.yaml file from https://github.com/argoflow/argoflow/commit/ce115f089736b8e2988fa31029eb1d3411fc1be4, but it didn't resolve the issue. I think that customization might resolve an issue I haven't encountered yet.
This is caused by a bug in kyaml as described in this comment in the following issue: https://github.com/kubernetes-sigs/kustomize/issues/3446. For now I think the easiest way forward is to simply expand the manifest.
This is caused by a bug in kyaml as described in this comment in the following issue: kubernetes-sigs/kustomize#3446. For now I think the easiest way forward is to simply expand the manifest.
Thank you David for the reference! It makes sense and I expanded net-istio.yaml and serving-core.yaml. The kustomize build works now!
One caveat is that I need to run the same command multiple times, because the namespace was not ready on the first run. I will do some customization to apply the namespace resource first...
You might also want to consider adding eventing but commented out as it is optional, so users can easily enable it if they want to. https://github.com/argoflow/argoflow/tree/master/knative might be handy as a reference (also for the image digests).
|
gharchive/issue
| 2021-04-19T15:33:36 |
2025-04-01T04:34:47.920294
|
{
"authors": [
"DavidSpek",
"zijianjoy"
],
"repo": "kubeflow/gcp-blueprints",
"url": "https://github.com/kubeflow/gcp-blueprints/issues/217",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
319581081
|
Add example/tutorial for distributed tf-cnn benchmark
This should provide an e2e tutorial to show how to create a tf-cnn benchmark with kubebench, and how to get the results.
@jlewi yes this is in scope of 0.3, we will run some benchmarks once code is fully done, and can share some example results if needed.
One question: I'm not sure what the proper way to share the results is; would you like to keep them with the repo or separately?
What are the results in this context? Are they models? Or just metrics, e.g. accuracy after so many steps?
Large data files (e.g. models) can be published via GCS but should not be checked into source control. Small files, e.g. a text or markdown file, can be checked into source control.
The results are metrics, e.g. images/sec, or time needed to reach x accuracy, etc.
/priority p1
@xyhuang What's the status of this? If its not done we should probably move this to 0.4.0 because we are finalizing 0.3.
@jlewi i think we can close this now. We have a user guide that can serve as a generic tutorial (https://github.com/kubeflow/kubebench/blob/master/doc/user_guide.md). We also have an example config that can be easily customized (https://github.com/kubeflow/kubebench/blob/master/examples/config/tf-cnn/tf-cnn-dummy.yaml).
/close
|
gharchive/issue
| 2018-05-02T14:40:21 |
2025-04-01T04:34:47.924802
|
{
"authors": [
"jlewi",
"xyhuang"
],
"repo": "kubeflow/kubebench",
"url": "https://github.com/kubeflow/kubebench/issues/20",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
549187781
|
update formatting of code blocks
See issue #1505.
This change is
/assign
This is available to preview at:
https://deploy-preview-1526--competent-brattain-de2d6d.netlify.com/docs/pipelines/sdk/python-based-visualizations/
/ok-to-test
Thanks for making these changes! This looks good to me on a desktop and a mobile phone.
/lgtm
/approve
|
gharchive/pull-request
| 2020-01-13T21:19:41 |
2025-04-01T04:34:47.942415
|
{
"authors": [
"joeliedtke",
"kbhawkey"
],
"repo": "kubeflow/website",
"url": "https://github.com/kubeflow/website/pull/1526",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
280183724
|
kn should be refactored in Go
The kn CLI should be written in Go for better portability and versioning.
closing for now.
|
gharchive/issue
| 2017-12-07T16:02:01 |
2025-04-01T04:34:47.943559
|
{
"authors": [
"mcapuccini"
],
"repo": "kubenow/KubeNow",
"url": "https://github.com/kubenow/KubeNow/issues/298",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1117324243
|
Support for container log max files in logging config
Signed-off-by: Waleed Malik ahmedwaleedmalik@gmail.com
What this PR does / why we need it:
Adds support for ContainerLogMaxFiles in LoggingConfiguration for KubeOne Clusters. ContainerLogMaxFiles configures the maximum number of container log files that can be present for a container.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Support for ContainerLogMaxFiles in LoggingConfig
/test pull-kubeone-e2e-aws-upgrade-1.20-1.21
|
gharchive/pull-request
| 2022-01-28T12:00:35 |
2025-04-01T04:34:47.949240
|
{
"authors": [
"ahmedwaleedmalik"
],
"repo": "kubermatic/kubeone",
"url": "https://github.com/kubermatic/kubeone/pull/1759",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
901763096
|
[watch/iter_resp_lines] If the pod log contains blank lines, blank lines are ignored
https://github.com/kubernetes-client/python-base/blob/b87a5fe119b820e9196f39343b0f5a1546d7465b/watch/watch.py#L44-L58
it seems blank lines are ignored in pod logs.
when the pod logs look like below, this code ignores the blank lines
foobar
foobar
foobar
yield "foobar"
yield "foobar"
yield "foobar"
but it should be like below, not ignoring blank lines
yield "foobar"
yield ""
yield "foobar"
yield ""
yield "foobar"
/assign @FuZer
|
gharchive/issue
| 2021-05-26T04:51:33 |
2025-04-01T04:34:48.029070
|
{
"authors": [
"FuZer",
"roycaihw"
],
"repo": "kubernetes-client/python-base",
"url": "https://github.com/kubernetes-client/python-base/issues/239",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
795824441
|
fix: default boilerplate path
Adding csi-release-tools to a project and running verify-boilerplate does not work, because of a default boilerplate dir error.
/assign @msau42
/assign @msau42
/ok-to-test
Do you have a test PR in some other repo with this commit?
/ok-to-test
Do you have a test PR in some other repo with this commit?
I tested it in my project after I modified it
I verified this change in csi-driver-host-path.
/lgtm
/approve
/release-note-none
|
gharchive/pull-request
| 2021-01-28T09:23:02 |
2025-04-01T04:34:48.031835
|
{
"authors": [
"pohly",
"yiyang5055"
],
"repo": "kubernetes-csi/csi-release-tools",
"url": "https://github.com/kubernetes-csi/csi-release-tools/pull/132",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
266292912
|
docker-compose build for ui ends in error
docker-compose up for ui components ends in the following error:
...
/bin/sh: 1: curl: not found
gpg: no valid OpenPGP data found.
ERROR: Service 'ui' failed to build: The command '/bin/sh -c install_packages apt-transport-https && curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && install_packages yarn' returned a non-zero code: 2
...
Looks like this issue has been fixed upstream. Doing a docker pull bitnami/node:8 fixed my issue.
|
gharchive/issue
| 2017-10-17T21:57:36 |
2025-04-01T04:34:48.035619
|
{
"authors": [
"ritazh"
],
"repo": "kubernetes-helm/monocular",
"url": "https://github.com/kubernetes-helm/monocular/issues/375",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
369273671
|
checkpointer: ignore Affinity within podspec
Kubernetes 1.12.x introduced new logic for Affinity [1]. In addition to
new logic, the Pod contains a default affinity. The new default affinity
gets serialized into the checkpoint file, and the 1.12.x kubelet does
not restore the pod due to the affinity.
This PR removes the affinity from the spec and documents that affinities are not supported.
"affinity": {
"nodeAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": null
}
]
}
}
},
/cc @aaronlevy @dghubble
/fixes #1001
[1] https://github.com/kubernetes/kubernetes/pull/68173
[2] https://github.com/kubernetes/kubernetes/blob/e39b510726113581c6f6a9c2db1753d794aa9cce/pkg/controller/daemon/util/daemonset_util.go#L183-L196
This looks fine to me - but don't want to lgtm until we have tests hooked up (Also would be good to get prow to respect the tests passing or not so an lgtm doesn't auto merge something that didn't pass - see: https://github.com/kubernetes-incubator/bootkube/pull/1000#issuecomment-426810011)
I tested this out by building a pod-checkpointer image and using it as the checkpointer in a v1.12.1 cluster. It solves this issue for me. In steady-state, there are two pod-checkpoint pods running like before (one from DaemonSet, one from checkpointed pod) and power cycling the cluster is tolerated.
Thanks! :raised_hands:
I'm working on the builders today to get PR testing working again.
coreosbot run e2e
closing in favor of a PR with tests. https://github.com/kubernetes-incubator/bootkube/pull/1007
|
gharchive/pull-request
| 2018-10-11T19:35:30 |
2025-04-01T04:34:48.041229
|
{
"authors": [
"aaronlevy",
"dghubble",
"rphillips"
],
"repo": "kubernetes-incubator/bootkube",
"url": "https://github.com/kubernetes-incubator/bootkube/pull/1004",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
194252845
|
Download OpenAPI spec directly from k8s release branch
To always be valid and up to date, we should download the spec from the release branch directly. I've also put all release-relevant constants into a constant.py file for easier management.
This PR also updates the client with the latest release-1.5, which fixes #55.
Fixes #49
Current coverage is 93.91% (diff: 100%)
Merging #59 into master will not change coverage
@@ master #59 diff @@
==========================================
Files 9 9
Lines 592 592
Methods 0 0
Messages 0 0
Branches 0 0
==========================================
Hits 556 556
Misses 36 36
Partials 0 0
Powered by Codecov. Last update baba523...4ee330b
/lgtm. Thank, @mbohlool.
|
gharchive/pull-request
| 2016-12-08T05:50:08 |
2025-04-01T04:34:48.045361
|
{
"authors": [
"caesarxuchao",
"codecov-io",
"mbohlool"
],
"repo": "kubernetes-incubator/client-python",
"url": "https://github.com/kubernetes-incubator/client-python/pull/59",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
242753091
|
cmd/kpod/images.go: Add JSON output option
For kpod images, we need to output in JSON format so that programmatic consumers have structured input to work with.
Signed-off-by: baude bbaude@redhat.com
Can we add a test? The test could parse the output using jq or python.
Tests added.
rebase attempt
@baude failed make lint.
@mrunalp should be ready for merge now after rebase and cleanup.
@baude Looks like you have some whitespace in your patches, and the validation does not like it.
LGTM
@mrunalp @runcom PTAL
Needs rebase, but LGTM (with a nit)
@mrunalp ptal
@baude Can you fix the alignment in the test? Otherwise LGTM.
@mrunalp @runcom @rhatdan ptal ... this is the rework to make the formatting usable by other commands. when approved, ill work in the other commands
You did not remove --json from kpod history or in the tests directory.
@rhatdan I had planned to clean up the other commands in a different PR. Is that ok?
@runcom I think this is ready to merge can you PTAL?
@mrunalp PTAL
LGTM
LGTM
|
gharchive/pull-request
| 2017-07-13T16:07:20 |
2025-04-01T04:34:48.050288
|
{
"authors": [
"baude",
"mrunalp",
"rhatdan",
"runcom"
],
"repo": "kubernetes-incubator/cri-o",
"url": "https://github.com/kubernetes-incubator/cri-o/pull/653",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
195540627
|
Flannel network not working on CoreOS
On coreos I tried both cases with
kube_network_plugin: flannel
and
kube_network_plugin: canal
Using flannel, the pod IP is based on the docker0 subnet, and not on kube_pods_subnet.
This is a big problem for me right now; I need some help to figure out how to fix it.
Using canal, the pod IP is correct and pods are able to communicate with each other, but the host can't connect to the pod.
By default the docker service uses these envs:
ExecStart=/usr/lib/coreos/dockerd daemon --host=fd:// $DOCKER_OPTS $DOCKER_CGROUPS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
$DOCKER_NETWORK_OPTIONS is not one of them,
so a solution here is to modify /etc/systemd/system/docker.service.d/docker-options.conf
accordingly, for example modifying only $DOCKER_OPTS:
[Service]
Environment="DOCKER_OPTS=--insecure-registry=10.233.0.0/18 --graph=/var/lib/docker --bip=10.233.116.1/24 --mtu=1450"
After that docker is able to use --bip=10.233.116.1/24 --mtu=1450 too.
What do you think ?
A simpler solution here is to change DOCKER_NETWORK_OPTIONS into DOCKER_OPT_MTU in network_plugin/templates/flannel-options.j2
This is only used on CoreOS AFAIK and will not break anything else.
|
gharchive/issue
| 2016-12-14T14:11:50 |
2025-04-01T04:34:48.054261
|
{
"authors": [
"genti-t"
],
"repo": "kubernetes-incubator/kargo",
"url": "https://github.com/kubernetes-incubator/kargo/issues/748",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1282068246
|
VirtualCluster should support to sync specific conditions of PodStatus from tenant to super
User Story
As a developer, I would like VirtualCluster to support to sync specific conditions of PodStatus from tenant to super, for some user-defined condition types are designed to be added by users or controllers from tenant cluster.
For example, OpenKruise provides some workloads that support in-place update for Pods, so we define a InPlaceUpdateReady type readinessGate and add the condition of it into Pod status. However, now syncer will only sync Pod status from super to tenant, which will overwrite the custom condition and lead to Pod always be NotReady because Kubelet can not see this condition in super cluster.
Detailed Description
VirtualCluster syncer should support to sync specific conditions of PodStatus from tenant to super.
/kind feature
Interesting idea. I'm trying to think of how we could accomplish this; it leads us a little into a more split-brain setup where both super & tenant have context that matters, vs. making the tenant authoritative for the .spec and the super for the .status. How does this function under the hood when an update actually happens? Would the syncer need to push patches/updates to the pod specs for that in-place upgrade to be fulfilled? I don't believe we support that as of today at all.
Thanks for filing the issue, I understand the request and I think we should address it. One problem is that both tenant control plane and super cluster may update the conditions specified in the readiness gate. The bigger problem is that the status now has two sources of truth in two etcds, which makes it difficult to avoid overwriting under races.
@FillZpp, a workaround is to add a Pod label by Kruise controller when the readiness gate is ready, to indicate the syncer to set the condition in the super cluster, which I admit is definitely a hack. I cannot think of a more elegant solution at the moment.
Thanks for all your replying.
Would the syncer need to push patches/updates to the pod specs for that in-place upgrade to be fulfilled? I don't believe we support that as of today at all.
Alright, forget about the in-place upgrade, which is just one case we noticed. Actually this is a common request, since Kubernetes provides the readinessGate feature to let users control whether a Pod should be ready or not.
kind: Pod
...
spec:
readinessGates:
- conditionType: "www.example.com/feature-1"
status:
conditions:
- type: Ready # a built in PodCondition
status: "False"
lastProbeTime: null
lastTransitionTime: 2018-01-01T00:00:00Z
- type: "www.example.com/feature-1" # an extra PodCondition
status: "False"
lastProbeTime: null
lastTransitionTime: 2018-01-01T00:00:00Z
containerStatuses:
- containerID: docker://abcd...
ready: true
...
When we set a custom conditionType in spec.readinessGates, kubelet will set the Pod as ready only if the custom condition with status True has been added to status.conditions.
However, if the syncer only pushes the whole status from super to tenant, the custom condition which the user adds in the tenant cluster will never be synced to super, which means the Pod will never be ready.
Since this is a basic feature provided by Kubernetes, it would be nice if VirtualCluster could support it.
a workaround is to add a Pod label by Kruise controller when the readiness gate is ready, to indicate the syncer to set the condition in the super cluster, which I admit is definitely a hack. I cannot think of a more elegant solution at the moment.
Yeah, it is a workaround that we can deploy two controllers in both super and tenant clusters. The one in tenant adds a specific label into Pod, then the other one in super watches the label and adds condition into status.
I don't know much about the implementation of VirtualCluster, so maybe this is impossible... I'm wondering if we can have an optional whitelist to let users choose which condition types should be synced from tenant to super?
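For the two-controller workaround described above, the super-cluster half could look roughly like the sketch below, using the kubernetes Python client; the condition type is taken from the example manifest earlier, and everything else (label handling, controller wiring) is an assumption, not actual VirtualCluster or Kruise code:

from kubernetes import client, config

config.load_kube_config()  # kubeconfig pointing at the super cluster
v1 = client.CoreV1Api()

def set_readiness_gate_condition(pod_name, namespace):
    # Called once the tenant-side controller has labeled the Pod as ready;
    # patches the custom readiness-gate condition into the super-cluster Pod status.
    body = {
        "status": {
            "conditions": [
                {"type": "www.example.com/feature-1", "status": "True"}
            ]
        }
    }
    v1.patch_namespaced_pod_status(pod_name, namespace, body)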
Maybe we can use some tricks like protectedMetaPrefixes to set up a list to determine which status conditions should be synced from the tenant cluster to the super cluster?
@Fei-Guo @FillZpp I agree there should be a mechanism to configure the down/up syncing behavior, but can we have a default configuration that supports a quite common scenario? The most common one is: Kruise is only installed in the Tenant Cluster; let it work there just as it works in a Super Cluster, without Kruise needing any change.
For other scenarios, for example where some controller runs in the Super cluster, or even in both Tenant and Super, we can support them by configuring some special label to guide the more complex behavior.
already supported in this pr https://github.com/kubernetes-sigs/multi-tenancy/pull/1294
did you meet some bug in testing?
already supported in this pr kubernetes-sigs/multi-tenancy#1294 did you meet some bug in testing?
This implementation is straightforward but there is a very tiny race window in theory such that the tenant condition can be overwritten. If the readiness condition is updated by the tenant between CheckUWPodStatusEquality and vPod.Update(), the readiness condition can be overwritten by the vPod.Update() call. Syncer was not designed to handle all kinds of races; it uses periodic scanning to find the mismatches and fix them. The problem here is that once the readiness condition is removed accidentally, no one is going to reconcile and recover it, because none of the tenant controllers expect the condition to be removed once it is set. To eliminate the race, the tenant controller can notify the syncer and let the syncer update the readiness conditions in the super cluster and reconcile for them.
Not sure if Kruise controller hits this race though.
|
gharchive/issue
| 2022-06-23T09:05:53 |
2025-04-01T04:34:48.125943
|
{
"authors": [
"Fei-Guo",
"FillZpp",
"ThunderYe",
"christopherhein",
"wondywang",
"zhuangqh"
],
"repo": "kubernetes-sigs/cluster-api-provider-nested",
"url": "https://github.com/kubernetes-sigs/cluster-api-provider-nested/issues/281",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
616179183
|
Fix server connection timeout
This PR handles the following issues found with the timeout setting in the tools:
The default server connection timeout is stated in documentation as 10s but in code it is set to 2s. This doc is now changed to 2s.
It allows a negative timeout value which is probably not a very good setting as the system will give it an indeterminate value. This means an unknown or indefinite timeout. This now uses the default value when set to negative.
Finally, there seemed to be some bugs in the way the timeout value is retrieved and handled from the config file and CLI. It doesn't seem to be converted to seconds as expected.
Partial #605
[EDITED]
/assign @yujuhong
/assign @feiskyer
F0511 21:19:23.550924 21939 server.go:152] unknown flag: --experimental-dockershim
sorry @hickeyma I missed this the first time I skimmed the logs.. doing too many things at once
so there's more than one issue... the timeout value was one.. but unrelated to the travis fail.
https://github.com/kubernetes/kubernetes/commit/53adde65ce000c4d90ee8f807e90658426733a52#diff-41db1a9454b65c65b4c44937886b5880
@mikebrow Thanks for the review. Updated and ready for review again. Description also updated and logic changed from initial commit.
see https://github.com/kubernetes-sigs/cri-tools/blob/master/hack/run-critest.sh#L33 looks like k8s finally removed that flag..
Thanks for heads up @mikebrow. PR #607 now opened for that issue.
CI now passing. PR is good for review again.
Thanks for review and comments @saschagrunert and @mikebrow. Updated and ready for review again.
Thanks for the review comments @mikebrow. I have updated the debug message as suggested.
also in testing there's an odd issue here where the config timeout format must be int and the --timeout option must be int+s for example 2s not 2.
This is a tricky one because if I change this so that both the CLI flag and the config file property use the same data type then I would break backwards compatibility. I added some extra text to flag and doc to be more explicit on it. Hope this helps.
Thanks for review @mikebrow @saschagrunert @feiskyer and for merging @feiskyer
|
gharchive/pull-request
| 2020-05-11T20:51:56 |
2025-04-01T04:34:48.204757
|
{
"authors": [
"hickeyma",
"mikebrow"
],
"repo": "kubernetes-sigs/cri-tools",
"url": "https://github.com/kubernetes-sigs/cri-tools/pull/606",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
919045667
|
Revert PR template labels
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Reverts PR template labels. It appears that this feature only works for issues. (Example: https://github.com/kubernetes-sigs/gateway-api/pull/692)
Does this PR introduce a user-facing change?:
NONE
/cc @hbagdi
Looks like this is a common feature request: https://github.com/isaacs/github/issues/1252.
My bad, apologies for the inconvenience here.
/lgtm
/approve
|
gharchive/pull-request
| 2021-06-11T18:01:34 |
2025-04-01T04:34:48.215310
|
{
"authors": [
"hbagdi",
"robscott"
],
"repo": "kubernetes-sigs/gateway-api",
"url": "https://github.com/kubernetes-sigs/gateway-api/pull/693",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
458386277
|
[Cherry-Pick #26] Ignore nodes if out of sync.
Signed-off-by: Da K. Ma klaus1982.cn@gmail.com
/cc @k82cn
/ok-to-test
/lgtm
/approve
|
gharchive/pull-request
| 2019-06-20T07:03:27 |
2025-04-01T04:34:48.237862
|
{
"authors": [
"asifdxtreme",
"k82cn"
],
"repo": "kubernetes-sigs/kube-batch",
"url": "https://github.com/kubernetes-sigs/kube-batch/pull/863",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1206367550
|
Support ReplicaSet
Let's support ReplicaSet.
The simulator supports only resources related to scheduling, and ReplicaSet hasn't been supported so far because of that.
But creating a high-priority Pod causes two actions on the scheduler in a real cluster when Pods are managed by a ReplicaSet and preempted by that high-priority Pod: the high-priority Pod is scheduled, and the preempted Pod managed by the ReplicaSet is re-created and scheduled.
Of course, the current simulator can simulate this behavior by manually creating the preempted pod again, but it can be difficult when the user is automating the creation of the resource by some scripts. This is because users need to see which Pods have been preempted and need to be re-created in every resource operation.
For example, suppose you create five different low-priority Pods (managed by a ReplicaSet in your real cluster) and then one high-priority Pod in your script. You need to check which low-priority Pod is preempted by the high-priority Pod and re-create that Pod to simulate the behavior of ReplicaSet. It is very annoying to do that.
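To illustrate the bookkeeping such a script has to do by hand today, here is a rough Python sketch (it assumes the simulator is reachable through a normal kubeconfig and that pod manifests are available as dicts; the names and helper are hypothetical):

import time
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def create_and_recreate_preempted(high_priority_pod, low_priority_manifests, namespace="default"):
    before = {p.metadata.name for p in v1.list_namespaced_pod(namespace).items}
    v1.create_namespaced_pod(namespace, high_priority_pod)
    time.sleep(5)  # crude wait for scheduling and preemption to settle
    after = {p.metadata.name for p in v1.list_namespaced_pod(namespace).items}
    for manifest in low_priority_manifests:
        if manifest["metadata"]["name"] in before - after:
            # this low-priority Pod was preempted; re-create it as a ReplicaSet would
            v1.create_namespaced_pod(namespace, manifest)

Supporting ReplicaSet in the simulator would remove the need for this kind of manual diffing.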
/kind feature
/triage accepted
Not to mark as stale.
/assign
/area simulator
/priority next-release
/remove-lifecycle rotten
/lifecycle frozen
|
gharchive/issue
| 2022-04-17T12:53:47 |
2025-04-01T04:34:48.241094
|
{
"authors": [
"196Ikuchil",
"sanposhiho"
],
"repo": "kubernetes-sigs/kube-scheduler-simulator",
"url": "https://github.com/kubernetes-sigs/kube-scheduler-simulator/issues/144",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|