id
stringlengths 4-10
| text
stringlengths 4-2.14M
| source
stringclasses 2 values
| created
timestamp[s] 2001-05-16 21:05:09 - 2025-01-01 03:38:30
| added
stringdate 2025-04-01 04:05:38 - 2025-04-01 07:14:06
| metadata
dict |
---|---|---|---|---|---|
927497869
|
Iosevka ligatures messed up
Looks like our handling of iosevka is wrong.
Determine if this issue is upstream using the swash testing app
Do some investigation about how ligatures are formed with iosevka
Take this learning and adjust the glyph positioning system to handle this edge case
https://github.com/be5invis/Iosevka/issues/1007#issuecomment-842871090
Pretty sure this would be of help
Iosevka Term TTF v7.2.4 ligatures display perfectly on my machine.
On the other hand, I am having a different kind of problem with Iosevka Super TTC variants.
All apps on Windows see Iosevka properly, except for Neovide. Neovide uses the super-light weight. It is understandable, because the first font in the TTC is the lightest one.
Perhaps this is because of the way the underlying lib handles fonts? It doesn't yet understand TTC?
Maybe the problem is that I am using a custom build?
@mjoork is that the latest version???
@PyGamer0 Why yes, Iosevka 7.2.4 and Neovide 0.5.0. Perhaps I wasn't clear enough... latest release.
Downloaded pre-release 0.7.0 just now and here's how it looks. Renders perfect ligatures.
The only thing that bothers me is the missing Windows logo. This is not a big deal, but why would it suddenly disappear?
@mjoork That release is old; neovide has updated a lot since then, try getting a build from here
We're working on newer releases but there's some bugs that should be fixed first
@Kethku, @PyGamer0, I see, I will try it. Thank you for being nice to me btw 😊
Fira Code ligatures display fine
Cascadia Code ligatures display fine too
Iosevka ligatures don't display at all, is there some kind of guard added to not display them for now?
I figured out it was upstream
Awesome. I created an upstream issue here: https://github.com/dfrg/swash/issues/15
Apologies all. This one's on me. I pushed a fix that seems to address it.
I will update the swash version shortly
Done. Iosevka should be fixed
Looks like the glyphs are selected correctly now, but there is some rendering weirdness...
I think what's going on is that the anti-aliasing is being done per glyph, and these longer ligatures are made up of multiple glyphs. This means that the line of pixels at the edge of each glyph is rendered strangely (see the sketch after this list).
Some possible solutions:
When we detect that a string of characters is being positioned off of the normal grid, take the suggested positioning rather than rounding to the nearest grid position
Expose a setting for turning off the anti aliasing (this is a bandaid)
Think harder about how to render on a grid without accumulating pixel errors over the course of a line...
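To make the rounding-error idea concrete, here is a tiny JavaScript sketch (purely illustrative; Neovide is not JavaScript, and the fractional cell width is an assumed value) contrasting per-glyph grid rounding with keeping the shaper's suggested fractional positions:
// Illustrative only: per-glyph rounding vs. keeping fractional advances.
const cellWidth = 9.6; // assumed fractional cell width in pixels
// Round every glyph to the nearest pixel column independently.
const rounded = [0, 1, 2, 3].map(i => Math.round(i * cellWidth));
// -> [0, 10, 19, 29]: gaps of 10, 9, 10 px, so glyph seams drift per glyph
// Keep the suggested fractional positions; round only at the final raster pass.
const suggested = [0, 1, 2, 3].map(i => i * cellWidth);
// -> [0, 9.6, 19.2, 28.8]: even spacing, no accumulated per-glyph error
console.log(rounded, suggested);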
And extended ligatures should now be working in Fira Code...
Closing as this is fixed on my machine. Thanks again @dfrg
|
gharchive/issue
| 2021-06-22T18:00:23 |
2025-04-01T04:55:15.041881
|
{
"authors": [
"Kethku",
"PyGamer0",
"dfrg",
"mjoork"
],
"repo": "Kethku/neovide",
"url": "https://github.com/Kethku/neovide/issues/742",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1664360245
|
When downloading using the batch downloader, chapters are missing.
There are chapters in the anime for the intro and outro which are present when downloading manually but missing when using the batch downloader.
Hey @xHaruke, I didn't notice this at all, interesting. Could you let me know which anime and which episode you downloaded? I can quickly check if this can be implemented in the script or not.
Hey @xHaruke, the file downloaded from the site is a single mp4 that contains chapter metadata; that's why you saw it in mpv. The script downloads an m3u8 playlist and segments, the same as streaming, and there is no such metadata there. The file sources are different. Sorry, cannot make it.
|
gharchive/issue
| 2023-04-12T11:15:52 |
2025-04-01T04:55:15.043654
|
{
"authors": [
"KevCui",
"xHaruke"
],
"repo": "KevCui/animepahe-dl",
"url": "https://github.com/KevCui/animepahe-dl/issues/87",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
248133325
|
fixes #36 - filename#id renders as #id only
I just updated bower_components for the demo of it working; by removing the modified code in bower_components, you should be able to see the issue of the icon not showing up.
Let me know if there is anything I can do to help further, thanks!
RE: https://github.com/Keyamoon/svgxuse/issues/36
TODO: minification / bowerification, I don't know the minification process or how to bower this.
I'm afraid this is not the right solution. By adding the base before #, you're making a relative URL, which is not the same as referencing an id in the same html page.
Hmm, I would like to learn more. Is the value supposed to be href="#id"?
Yes. If the value of xlink:href is like filename.svg#id and the browser fails to load it, svgxuse would fetch filename.svg and prepend it to body. It then updates the value of xlink:href to #id.
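A minimal JavaScript sketch of that fallback (assumed names; not the library's actual source):
// Sketch of the described fallback, assumed names only.
function fallbackLoad(useEl) {
  const href = useEl.getAttribute('xlink:href'); // e.g. "icons.svg#star"
  const hashIndex = href.indexOf('#');
  if (hashIndex <= 0) return; // nothing to fetch, or already "#id"
  const file = href.slice(0, hashIndex);
  const id = href.slice(hashIndex); // "#star"
  fetch(file)
    .then(res => res.text())
    .then(svgText => {
      // Prepend the fetched sprite to <body> so its ids become local,
      const holder = document.createElement('div');
      holder.innerHTML = svgText;
      document.body.insertBefore(holder, document.body.firstChild);
      // then rewrite the reference to point at the now-local id.
      useEl.setAttribute('xlink:href', id);
    });
}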
I think I'm following, let me dig in a bit more.
This PR was super off point, should probably close. If I figure something out I'll send over another.
|
gharchive/pull-request
| 2017-08-04T22:40:51 |
2025-04-01T04:55:15.049519
|
{
"authors": [
"Keyamoon",
"debugish"
],
"repo": "Keyamoon/svgxuse",
"url": "https://github.com/Keyamoon/svgxuse/pull/38",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
65482905
|
Support for ESSTIX font
Please consider adding esstix font (ttf).
Curious: why esstix instead of stix? Merging into #42.
ESSTIX is the font used in many old science books.
Trying to capture that.
BTW, could you re-open this issue?
#42 is quite a different issue: it's about alternative scripts (mathrm, mathcal etc.) of a certain font.
Like any other complete math font, ESSTIX font comes with its own mathrm, mathcal, etc.
Oops, I guess I was too hasty. #123 is similar enough though that I'll just track both there.
|
gharchive/issue
| 2015-03-31T15:50:12 |
2025-04-01T04:55:15.053533
|
{
"authors": [
"raichu",
"spicyj"
],
"repo": "Khan/KaTeX",
"url": "https://github.com/Khan/KaTeX/issues/216",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
83851508
|
timeout transition group TICK doesn't trigger animation in Safari?
Hi,
Have you noticed that the TICK of 17 ms in timeout transition group doesn't seem to be enough to trigger an animation in Safari (desktop or mobile)? Seems to work great in everything else.
Thanks!
Noah
I'm experiencing the same issue.
Safari does not seem to work properly with TimeoutTransitionGroup.
:+1:
I also appear to be seeing this... not sure if it has always been true or only recently.
Found this issue on ReactCSSTransitionGroup
https://github.com/facebook/react/issues/2104
I tried it, and moving the transition CSS from enter to enter-active fixes this issue on Safari.
Wow, that worked for me, too. I should add that I also had to remove the transition from enter, not just add it to enter-active (I had tried adding it to both, which didn't work). I'll close this, though it'd be nice if the interface mirrored CSSTransitionGroup (minus the timeouts, of course).
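For context, the tick pattern under discussion looks roughly like this (a JavaScript sketch with assumed names, not the library's exact source); the fix above amounts to keeping the transition CSS on the -active class only:
// Rough sketch of the tick pattern under discussion (assumed names).
// Per the fix above, Safari needs the `transition` CSS property on the
// enter-active class only; with it on enter, the class change below
// doesn't animate.
const TICK = 17; // ms, roughly one 60 fps frame

function animateEnter(node) {
  node.classList.add('enter');          // starting styles
  setTimeout(() => {
    node.classList.add('enter-active'); // target styles + transition
  }, TICK);
}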
@noahgrant Sorry, what would you like changed here?
|
gharchive/issue
| 2015-06-02T05:35:18 |
2025-04-01T04:55:15.552438
|
{
"authors": [
"devgeeks",
"idw111",
"noahgrant",
"spicyj"
],
"repo": "Khan/react-components",
"url": "https://github.com/Khan/react-components/issues/40",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
88033735
|
Apple Watch, iOS9. Light power state does not update
Power switches work fine, even when the iPhone is turned off. When another HomeKit app changes the light power state, it is not updated on the Watch, and I have to toggle it twice to make an action happen. I have no idea if the error is watchOS 2 related or if it's a bug in the app itself. It's updated correctly on the iPhone.
I'll see if it's fixed on the new HMWatch repo ;)
|
gharchive/issue
| 2015-06-13T16:58:28 |
2025-04-01T04:55:15.553603
|
{
"authors": [
"mikegapinski"
],
"repo": "KhaosT/HomeKit-Demo",
"url": "https://github.com/KhaosT/HomeKit-Demo/issues/34",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
301222440
|
Camera not found after 0.1.4 update
After the latest 0.1.4 update, the platform no longer finds the camera.
From the console, I can see the Camera-ffmpeg platform is initiated, but no camera is found/loaded, as I do not see the line "XXX camera is running on port XXXXX". As the accessory isn't found/started, I can't add it in the Home app.
Is there a way to install the previous version of Camera-ffmpeg? Thanks.
Sorry about that, please try v0.1.5. That should fix the issue.
I'm having the same issue :(
|
gharchive/issue
| 2018-02-28T23:22:42 |
2025-04-01T04:55:15.555808
|
{
"authors": [
"KhaosT",
"brianerdelyi",
"ryanchucks"
],
"repo": "KhaosT/homebridge-camera-ffmpeg",
"url": "https://github.com/KhaosT/homebridge-camera-ffmpeg/issues/148",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2243380007
|
[Potentially] Lab 6 for an automatic pass
The mandatory assignments are done, so we can try the extra one.
Lab 6 was not made just now, and not on top of our current application: I have been developing it for several years for different courses, so there is a lot of interesting stuff in it. Maybe it qualifies for an automatic pass.
The site is an online bookstore. It has sessions, a cart, purchases, and an admin panel. New resources (books) can be added to the site, and they become available for purchase. New authors can be added as well; they work like categories. Now, point by point:
There is a frontend, polished many times over: responsive both for mobile and for different scales/monitor sizes on PC, and the design is decent overall. It was inspired by the design concept of the Fandom hosting platform, where I keep some sites.
There is a backend in Java + Spring with extended CRUD, meaning pagination is in there too.
There is a home-grown mechanism for translating the site into different languages using JS. It is not part of the requirements for the automatic pass, just an interesting thing.
There is database integration, with login and registration handled there as well; all passwords are hashed. For the site to work, the database must be created first; it is included in the resources.
Unit tests are in place.
Excellent. That is quite enough for an automatic pass.
|
gharchive/pull-request
| 2024-04-15T11:04:41 |
2025-04-01T04:55:15.558722
|
{
"authors": [
"Hummel009",
"Khmelov"
],
"repo": "Khmelov/DC2024-01-27",
"url": "https://github.com/Khmelov/DC2024-01-27/pull/123",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1054125015
|
Add matte material with parameter "color"
Would you be willing to merge this PR, which adds a dedicated subclass for the matte material? It's mostly that currently, with matte being used as an alias for OBJ, one has to set the color parameter via "kd", not "color", and my own sample code then isn't compliant with the standard.
FWIW, the only use of matte that I can see in the SDK is found in anariTutorial.cpp, where the diffuse color parameter is not set on the material; i.e., the change set shouldn't break existing behavior.
Thanks, I don't see any issues with this PR and it makes the example device more compliant with the Materials section of the spec.
If you can update the default color to be {0.8,0.8,0.8} I'll merge your PR
Thanks! I just amended that to the original commit.
|
gharchive/pull-request
| 2021-11-15T21:40:23 |
2025-04-01T04:55:15.562840
|
{
"authors": [
"griffin28",
"szellmann"
],
"repo": "KhronosGroup/ANARI-SDK",
"url": "https://github.com/KhronosGroup/ANARI-SDK/pull/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1467315470
|
Improvements to the External Memory and Semaphores conformance tests
External Memory Sharing
a) Write a test that does external memory sharing only and is not dependent on semaphores being supported.
b) Confirm that EnqueueAcquire and EnqueueRelease calls are being made correctly for all tests.
Semaphores
a) Do cross OpenCL context tests for multiple semaphore handle types such as sync_fd and opaque_fd.
b) Add a multiple signal - multiple wait test for in-order queues.
c) Test the following queries - CL_DEVICE_HANDLE_LIST_KHR, CL_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR
My comment on semaphore PR https://github.com/KhronosGroup/OpenCL-CTS/pull/1587#issuecomment-1331013946 was mentioned on the 06/12/22 WG teleconference. I think it would make sense before starting any of this semaphore work to have a first step which moves the existing cl_khr_semaphore tests from test_api into its own directory in test_conformance/extensions/
Some comments -
Agree with Ewan that we should move these tests to test_conformance. Filed #1624 to track the same so that it doesn't get lost in comments.
In terms of tests suggested on this issue -
I have updated the description for the first item from "External sharing" to "External memory sharing" for better clarity.
I think the tests requested as part of 1a) and 1b) can be developed independently of the cl_khr_semaphore tests under test_conformance/api and need not be blocked by #1624.
Typically, external memory and semaphore sharing tests can be broadly categorized as below:
1. Core semaphore behavior described by cl_khr_semaphore - this is what the cl_khr_semaphore tests under test_conformance/api/ test.
   The test scenario described by "2b) Add a multiple signal - multiple wait test for in-order queues." is a good example of this category and can be an extension of the tests under test_conformance/api/. These tests currently rely on semaphores created by OpenCL and used within OpenCL (largely the same context).
   cl_khr_external_semaphore provides another mechanism to create semaphores - by importing semaphores exported by other APIs. See below for details. The same tests under test_conformance/api/ can be extended to cover external semaphores as well.
   We have some changes to address this to consume semaphores imported from test_vulkan. Will update here once the PR is created.
2. Core external memory/semaphore behavior described by cl_khr_external_memory/cl_khr_external_semaphore. These tests, along with one or more handle-specific extensions (cl_khr_external_memory_opaque_fd, cl_khr_external_semaphore_win32, etc.), can be written in the following different settings:
   - OpenCL as producer and OpenCL as consumer (same API / different contexts): create memory/semaphores directly in one OpenCL context/process, export them as an fd/win32 handle, and then import them in another OpenCL context/process using this handle. This is what Balaji is referring to as part of 2.a in the context of semaphores. This will again require the ability to create semaphores within OpenCL as well as export them from OpenCL. We can extend existing tests under test_conformance/api/ to cover this request. 1a) is the equivalent example for cl_khr_external_memory. However, this setting will require the ability to export memory, which is not covered by the current specs. So, not sure how we can cover this under this setting. Maybe @bcalidas has some ideas around this.
   - Different APIs: create semaphores in one API (e.g. Vulkan or OpenCL) as the producer, export them as an opaque_fd/sync_fd/win32 handle, and then import them in another API (e.g. OpenCL or Vulkan) using this handle. This again can be subdivided into two:
   a. With Vulkan or some other API as producer and OpenCL as consumer: see test_vulkan at https://github.com/KhronosGroup/OpenCL-CTS/tree/main/test_conformance/vulkan, where Vulkan is the producer of memory and semaphores.
   b. OpenCL as producer and Vulkan or some other API as consumer. This is still missing coverage.
Also, it may be a good idea to review existing tests for missing coverage and file separate issues to track them.
@bcalidas Can we split this issue into separate issues - One to track external_memory test suggestions in (1) and another one to track external_semaphore test suggestions in (2)? I am assuming these will end up being separate tests and (1) need not be blocked on semaphore tests as such.
https://github.com/KhronosGroup/OpenCL-CTS/pull/1629 have been merged.
@bcalidas, @nikhiljnv: Friendly reminder about the pending review for 1a (https://github.com/KhronosGroup/OpenCL-CTS/commit/9e83f008f5469af2e13ff4f1c7602351485d14ea).
@bcalidas @nikhiljnv, @EwanC @bashbaug :
Created cl_khr_external_semaphore PR:
https://github.com/KhronosGroup/OpenCL-CTS/pull/1645
Created cl_khr_semaphore PR:
https://github.com/KhronosGroup/OpenCL-CTS/pull/1646
Please review.
@bcalidas, @nikhiljnv, @EwanC, @bashbaug
Created a draft PR for External Sharing 1a task:
https://github.com/KhronosGroup/OpenCL-CTS/pull/1648
In general it is just the same work done in:
https://github.com/KhronosGroup/OpenCL-CTS/compare/main...pj87:OpenCL-CTS:cl_vk_external_sharing?expand=1
but without the proposed changes for test_vulkan_interop_image.
For item 1 b), https://github.com/KhronosGroup/OpenCL-CTS/pull/1629 provides partial coverage. However, we need to follow up by re-reviewing the external memory and external semaphore tests and adding any missing clEnqueueAcquire, clEnqueueRelease calls.
2 a) needs more enhancements to existing tests.
2 b) is not needed after clarification to semaphore reset behavior (https://github.com/KhronosGroup/OpenCL-Docs/issues/883)
Qualcomm will follow up with PRs for 1 b) and 2 a ).
https://github.com/KhronosGroup/OpenCL-CTS/pull/1854 updates semaphore behavior to avoid export of imported semaphores and adds support for the SemaphoreReimportSyncFD command.
Qualcomm will follow up with PRs for 1 b) and 2 a ).
@bcalidas is it possible to sum up current status of this issue ? It looks to me like part of it is covered meanwhile remaining part is in progress, could you confirm that ? Thanks !
https://github.com/KhronosGroup/OpenCL-CTS/pull/1899 covers 1b,
https://github.com/KhronosGroup/OpenCL-CTS/pull/1886 covers 2a
This issue can now be closed.
Closing as discussed in memory subgroup call of June 4th, 2024.
|
gharchive/issue
| 2022-11-29T01:34:44 |
2025-04-01T04:55:15.580410
|
{
"authors": [
"EwanC",
"bcalidas",
"nikhiljnv",
"pj87",
"shajder"
],
"repo": "KhronosGroup/OpenCL-CTS",
"url": "https://github.com/KhronosGroup/OpenCL-CTS/issues/1588",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1190498050
|
Remove draft state for loader spec
The OpenXR-SDK-Source has implemented and used a new loader based on this spec for OpenXR apps. The loader implementation based on this spec has been released to normal users for multiple versions. IMO, it's time to remove the draft state from the spec title.
cc @rpavlik .
Looks like Azure CI has a problem when initializing build tools:
Generating script.
========================== Starting Command Output ===========================
"C:\Windows\system32\cmd.exe" /D /E:ON /V:OFF /S /C "CALL "D:\a\_temp\7b35feca-b8bd-4b2f-9bc9-31c773233dce.cmd""
CMake Error at CMakeLists.txt:25 (project):
Generator
Visual Studio 16 2019
could not find any instance of Visual Studio.
-- Configuring incomplete, errors occurred!
See also "D:/a/1/s/build/CMakeFiles/CMakeOutput.log".
##[error]Cmd.exe exited with code '1'.
Finishing: Generate build system
It's not related to this PR's changes.
Thanks for this! The group agrees this was an oversight. Approved to merge, I'll just have to figure out what happened to CI.
|
gharchive/pull-request
| 2022-04-02T04:51:32 |
2025-04-01T04:55:15.584283
|
{
"authors": [
"rpavlik",
"utzcoz"
],
"repo": "KhronosGroup/OpenXR-SDK-Source",
"url": "https://github.com/KhronosGroup/OpenXR-SDK-Source/pull/300",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
298700889
|
Texturing mapping U coordinates are wrong when importing into BabylonJS
@bghgary @blgrossMS This is in reference to #111. I see that the fix is to negate the X part of the coordinate. But doesn't this break the texture mapping U coordinates? When I import a glTF exported with this code, the texture mapping is flipped horizontally. Since BabylonJS doesn't reverse the X coordinate by negating it, but rather rotates the mesh by 180 degrees, is this conversion broken?
Either the export doesn't negate the X but does a 180 degree rotation, or BabylonJS negates the X coordinate and doesn't do the rotation.
Thoughts?
@blgrossMS Sorry for not adding additional info on why I closed it. It was actually a user error on my part. That said, the normal map and metallicRoughness texture are not working correctly through the exporter, which I will take a stab at fixing.
|
gharchive/issue
| 2018-02-20T18:30:21 |
2025-04-01T04:55:15.592298
|
{
"authors": [
"justinctlam"
],
"repo": "KhronosGroup/UnityGLTF",
"url": "https://github.com/KhronosGroup/UnityGLTF/issues/130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
195330497
|
layers: Update Valid Usage enums in log_msg
Update Valid Usage enums in core_validation and update
their status in the VU database.
This completes Jira task VL-65.
Change-Id: I51ed327ad65f3a5d1f64bba01ad576c6656f88df
Note: deleting a branch too quickly after pushing to master can cause a race with Travis-CI.
Travis-CI will begin its integration builds, but that can fail if the branch is deleted too quickly.
|
gharchive/pull-request
| 2016-12-13T18:25:09 |
2025-04-01T04:55:15.601939
|
{
"authors": [
"mikew-lunarg"
],
"repo": "KhronosGroup/Vulkan-LoaderAndValidationLayers",
"url": "https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers/pull/1263",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1200378756
|
FI: Move CLI Parsing Framework into a Separate Library
What is the Problem?
Not so much a problem at the moment, but if the platform class is removed with #438, then the CLI parsing should be moved to event handlers which take in references to all resources they modify. At the moment, the platform triages everything.
What is the proposed Solution?
Port Plugins to the new framework paradigm
Examples
int main() {
    EventBus bus;
    PluginSet plugins;
    plugins.add(SomePluginWhichChangesTheWindow(window));
    plugins.add(SomePluginWhichControlsLogging());
    bus.import(plugins.handlers());
}
Issues
The plugin system is in a good state to port, but it would mean that handling applications on desktop would need to change.
With the mentioned #438 having been closed some time ago, do we still need/want this?
Closing this for now. I don't think we're going to implement this.
|
gharchive/issue
| 2022-04-11T18:34:14 |
2025-04-01T04:55:15.604457
|
{
"authors": [
"SaschaWillems",
"TomAtkinsonArm"
],
"repo": "KhronosGroup/Vulkan-Samples",
"url": "https://github.com/KhronosGroup/Vulkan-Samples/issues/439",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
221920316
|
compile fixes for gltf cleanup pass
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. Gloria Kennickell seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
|
gharchive/pull-request
| 2017-04-15T02:07:26 |
2025-04-01T04:55:15.607178
|
{
"authors": [
"CLAassistant",
"gkennickell"
],
"repo": "KhronosGroup/Vulkan-Samples",
"url": "https://github.com/KhronosGroup/Vulkan-Samples/pull/115",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2514787917
|
Enable Address Sanitizer in Linux CI
Fixes #968
CI Vulkan-Tools build queued with queue ID 252917.
CI Vulkan-Tools build # 1523 running.
CI Vulkan-Tools build # 1523 passed.
|
gharchive/pull-request
| 2024-09-09T19:52:46 |
2025-04-01T04:55:15.608396
|
{
"authors": [
"charles-lunarg",
"ci-tester-lunarg"
],
"repo": "KhronosGroup/Vulkan-Tools",
"url": "https://github.com/KhronosGroup/Vulkan-Tools/pull/1030",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1435600968
|
controller update to V7.2.95
Hello,
Unfortunately, since the controller update to V7.2.95, the connection no longer works.
see log:
[04/11/2022, 06:57:32] [UniFiPoeControl] Removing 4 device(s)
[04/11/2022, 06:58:33] [UniFiPoeControl] Setting up new accessory: USL24P (MAC: xxxxx) / #1 (POE AP-EG)
[04/11/2022, 06:58:33] [UniFiPoeControl] Setting up new accessory: USL24P (MAC: xxxxx) / #2 (POE AP-OG)
[04/11/2022, 06:58:33] [UniFiPoeControl] Setting up new accessory: USL24P (MAC: xxxxx) / #3 (POE AP Aussen)
[04/11/2022, 06:58:33] [UniFiPoeControl] Setting up new accessory: USL24P (MAC: xxxxx) / #4 (POE AP -Keller)
[04/11/2022, 06:58:33] [UniFiPoeControl] Removing 4 device(s)
I find my PoE control still works with controller 7.2.95; however, the accessory is removed and reset every minute.
What that means is that the switch in Home.app keeps disappearing and being added back in the default room (which is a bit of a pain, as the device I am controlling is in another room!).
Also, Home.app automations don't work, as the PoE switch is removed and added back with another UUID, so it disappears from any automations.
Having upgraded to Unifi Network version 7.3.76, everything appears to be working properly now.
Hello,
unfortunately having the same issue even with 7.3.76:
[12/30/2022, 11:15:30 AM] [UniFiPoeControl] Setting up new accessory: US8P150 (MAC: f4:92:bf:70:be:ec) / #2 (AP Lisa)
[12/30/2022, 11:15:30 AM] [UniFiPoeControl] Setting up new accessory: US8P150 (MAC: f4:92:bf:70:be:ec) / #4 (AP OG)
[12/30/2022, 11:15:30 AM] [UniFiPoeControl] Removing 2 device(s)
@67jedi Do you still have this problem? I'm using 7.5.174 without any issues. Is something wrong with your cache?
|
gharchive/issue
| 2022-11-04T06:03:42 |
2025-04-01T04:55:15.772396
|
{
"authors": [
"67jedi",
"Kienz",
"lolleck1976",
"stevetrease"
],
"repo": "Kienz/homebridge-unifi-poe-control",
"url": "https://github.com/Kienz/homebridge-unifi-poe-control/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
780767393
|
new maps ported
| Q | A |
|---|---|
| Bug fix? | no |
| New feature? | no |
| Needs wipe? | no |
Description:
Content:
[x] Content part
ported the mission to the "Australia" map by "Aussie"
requires CUP Core, Maps and Apex DLC
ported the mission to "Virolahti - Valtatie 7" map by "furean"
requires CUP Core, Maps
Successfully tested on:
[yes] Local MP
[yes] Dedicated MP
australia map needs some proper work to create realistic towns and cities to fill out some of the empty space
|
gharchive/pull-request
| 2021-01-06T18:43:17 |
2025-04-01T04:55:15.783415
|
{
"authors": [
"stutpip123"
],
"repo": "KillahPotatoes/KP-Liberation",
"url": "https://github.com/KillahPotatoes/KP-Liberation/pull/852",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2760379235
|
Fix: Add ChanMeng666's Portfolio
Closes #257
@all-contributors please add @ChanMeng666 for code
|
gharchive/pull-request
| 2024-12-27T04:32:46 |
2025-04-01T04:55:15.820240
|
{
"authors": [
"ChanMeng666",
"Kiran1689"
],
"repo": "Kiran1689/Awesome-Dev-Portfolios",
"url": "https://github.com/Kiran1689/Awesome-Dev-Portfolios/pull/258",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1944625106
|
Use of icons in consistent format and style
🎇 Description
We can use Font Awesome icons across all pages rather than using SVG or PNG icons from different sources. This will enhance the UI of the website.
🖼 Screenshots
@KiranAminPanjwani please assign me this issue
Can I work on this issue?
@avdhendra I am already working on it.
|
gharchive/issue
| 2023-10-16T08:07:03 |
2025-04-01T04:55:15.822741
|
{
"authors": [
"avdhendra",
"mdtausifiqbal"
],
"repo": "KiranAminPanjwani/MedStats",
"url": "https://github.com/KiranAminPanjwani/MedStats/issues/104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
196664397
|
Tooltip component
Closes #270
Note for whoever merges: remember to update package.json with react-tooltip. We can go over doing that together.
The line-height seems a bit too large compared to the mockup. And the spacing around the text should be larger at the top and bottom.
Mockup:
Closing of the tooltip when clicking outside its area is missing.
|
gharchive/pull-request
| 2016-12-20T12:47:07 |
2025-04-01T04:55:15.828255
|
{
"authors": [
"LanF3usT",
"Ynote",
"camilledel"
],
"repo": "KissKissBankBank/kitten",
"url": "https://github.com/KissKissBankBank/kitten/pull/303",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
529987470
|
Introduce request title
Adds option to name test requests and show that request title in final documentation.
Usage: |> BlueBird.ConnLogger.save(title: "Without params") (also added to readme).
Defaults to the response status when no title is supplied.
Inspired by https://github.com/api-hogs/bureaucrat test helpers.
That title makes HTML output much more readable!
Hi @elvanja, thanks a lot for this pr! :)
It looks good to me, so IMHO we can merge it.
|
gharchive/pull-request
| 2019-11-28T15:55:40 |
2025-04-01T04:55:15.834844
|
{
"authors": [
"elvanja",
"rhazdon",
"tomekowal"
],
"repo": "KittyHeaven/blue_bird",
"url": "https://github.com/KittyHeaven/blue_bird/pull/54",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
172510627
|
Fixing the linking errors for tests when default modules are not built
This should prevent failures of automatic build tests on dashboards
:100:
|
gharchive/pull-request
| 2016-08-22T17:37:11 |
2025-04-01T04:55:15.851659
|
{
"authors": [
"dzenanz",
"thewtex"
],
"repo": "KitwareMedical/ITKRLEImage",
"url": "https://github.com/KitwareMedical/ITKRLEImage/pull/10",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
572060072
|
Translation error in 「终结」 (The End)
Version: 2.0.8.4QF
「隔热机械方块」 (heat-insulating machine casing) should be 「防冻机械方块」 (frost-proof machine casing)
Thanks for the feedback, this has been fixed.
|
gharchive/issue
| 2020-02-27T12:46:13 |
2025-04-01T04:55:15.852851
|
{
"authors": [
"Kiwi233",
"balthild"
],
"repo": "Kiwi233/Translation-of-GTNH",
"url": "https://github.com/Kiwi233/Translation-of-GTNH/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1176630499
|
KNP translatable and doctrine inheritance
Hello !
I am trying to combine KNP translatable and Doctrine inheritance. When I try to build the database, it returns an error:
There is no column with name translatable_id on table file_translation
To reproduce the bug :
// Parent class
// File.php
#[ORM\Table('file')]
#[ORM\InheritanceType('JOINED')]
#[ORM\DiscriminatorColumn(name: 'type', type: 'string', length: 255)]
#[ORM\DiscriminatorMap([
'pdf' => Pdf::class,
'video' => Video::class
])]
class File implements TranslatableInterface
{
use TranslatableTrait;
#[ORM\Id]
#[ORM\GeneratedValue]
#[ORM\Column(type: 'integer')]
private ?int $id = null;
#[ORM\Column(type: 'boolean')]
private bool $is_active;
}
// Parent class
// FileTranslation.php
#[ORM\Entity]
#[ORM\Table('file_translation')]
#[ORM\InheritanceType('JOINED')]
#[ORM\DiscriminatorColumn(name: 'type', type: 'string', length: 255)]
#[ORM\DiscriminatorMap([
'pdf' => PdfTranslation::class,
'video' => VideoTranslation::class
])]
class FileTranslation implements TranslationInterface
{
use TranslationTrait;
#[ORM\Id]
#[ORM\GeneratedValue]
#[ORM\Column(type: 'integer')]
private ?int $id = null;
#[ORM\Column(type: 'string', length: 255)]
private string $foo;
#[ORM\Column(type: 'string', length: 255)]
private string $bar;
}
And the child classes are similar and look like:
#[ORM\Entity(repositoryClass: VideoRepository::class)]
class Video extends File
{
#[ORM\Column(type: 'integer')]
private int $duration;
}
And
#[ORM\Entity(repositoryClass: VideoTranslationRepository::class)]
class VideoTranslation extends FileTranslation
{
#[ORM\Column(type: 'string', length: 255)]
private string $subject;
}
A solution I found is to change the inheritance type to SINGLE_TABLE, but I do not want to have all data in a single table.
If you want to have a complete example, I let a complete git project to reproduce the bug https://github.com/fauVictor/knp_translation_bug
Thank you for your time and help !
I have seen that PR, but the problem is still here. I suppose I have a problem in my configuration, but I don't know where. Does someone have an idea?
I patched the bug using the latest commit's content while waiting for a new version.
|
gharchive/issue
| 2022-03-22T11:14:47 |
2025-04-01T04:55:15.883101
|
{
"authors": [
"fauVictor"
],
"repo": "KnpLabs/DoctrineBehaviors",
"url": "https://github.com/KnpLabs/DoctrineBehaviors/issues/692",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
357921273
|
Where does the L object come from?
When I import some components like
import {
LMap,
LTileLayer,
LMarker,
LPolygon,
LPolyline,
LPopup
} from "vue2-leaflet";
I'm able to access an L object, with methods like DomEvent, control, and Icon. Is this somehow made global when importing LMap? I'm using ESLint and it's not liking the lack of definition for it. I could write in an exception, but I was curious myself how I'm able to access it. I only know of it from copying examples in the repo.
Vue2Leaflet instantiates Leaflet, which, at the version that is used, attaches itself to the window object. Thus you are able to use it.
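If ESLint complains about that implicit global, one standard way to declare it (a sketch of plain ESLint configuration, not something vue2-leaflet ships) is:
// .eslintrc.js - declare the L global that Leaflet attaches to window
module.exports = {
  globals: {
    L: 'readonly',
  },
};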
@hpo14 We work with Nuxt as well; to run it on Nuxt you need 2 things:
1)
A js file in your plugins folder (say the name is vue-leaflet.js).
Add this in your file:
import Vue from 'vue';
import Vue2Leaflet from 'vue2-leaflet';
import L from 'leaflet';
import Vue2LeafletMarkerCluster from 'vue2-leaflet-markercluster'; // missing from the original snippet; assumed source for Vue2LeafletMarkerCluster
delete L.Icon.Default.prototype._getIconUrl;
L.Icon.Default.mergeOptions({
iconRetinaUrl: require('leaflet/dist/images/marker-icon-2x.png'),
iconUrl: require('leaflet/dist/images/marker-icon.png'),
shadowUrl: require('leaflet/dist/images/marker-shadow.png')
});
Vue.component('l-map', Vue2Leaflet.LMap);
Vue.component('l-tilelayer', Vue2Leaflet.LTileLayer);
Vue.component('l-marker', Vue2Leaflet.LMarker);
Vue.component('l-tooltip', Vue2Leaflet.LTooltip);
Vue.component('l-popup', Vue2Leaflet.LPopup);
Vue.component('l-control-zoom', Vue2Leaflet.LControlZoom);
Vue.component('v-marker-cluster', Vue2LeafletMarkerCluster);
Vue.component('l-geo-json', Vue2Leaflet.LGeoJson);
Vue.component('l-feature-group', Vue2Leaflet.LFeatureGroup);
@lordfuoco thank you
I'll try your method later when I get home and report back with the result.
https://github.com/schlunsen/nuxt-leaflet
I also found this GitHub repo and followed the steps, and the result is a blank page (?)
Looking at Chrome DevTools, it does actually generate the map (it looks like it DOES really generate it).
But from DevTools' Network tab, my page doesn't send any requests to tile.osm.org; that's weird.
Am I doing anything wrong?
@hpo14 if this is resolved can you please close the issue ?
|
gharchive/issue
| 2018-09-07T05:07:16 |
2025-04-01T04:55:15.895515
|
{
"authors": [
"hpo14",
"lordfuoco",
"wishinghand"
],
"repo": "KoRiGaN/Vue2Leaflet",
"url": "https://github.com/KoRiGaN/Vue2Leaflet/issues/216",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2283381659
|
Bug?: Filter in MTB Query Portal
When applying the filter in one of the MTB Query Portal's tabs/pages, it is not applied when one switches to another tab/page, but has to be re-set and re-applied. Could that be fixed?
@lucienclin I can't reproduce the issue.
|
gharchive/issue
| 2024-05-07T13:38:58 |
2025-04-01T04:55:16.012720
|
{
"authors": [
"lucienclin",
"tada5hi"
],
"repo": "KohlbacherLab/dnpm-dip-portal",
"url": "https://github.com/KohlbacherLab/dnpm-dip-portal/issues/436",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1895059282
|
Clickable navigation box
Description
The current implementation offers clickable navigation boxes.
However, clicking on one will not redirect us to the desired path. Only if we hover directly over the words will it redirect us.
Expectation
Change this behavior so that the box will redirect us upon clicking.
I changed the transition effect of the buttons from low opacity to a fully transparent background. The color of the button's border is changed to "peru" (a kind of dark yellow) and is only shown when we hover right over the words.
|
gharchive/issue
| 2023-09-13T18:40:48 |
2025-04-01T04:55:16.022222
|
{
"authors": [
"kevinnguyen20"
],
"repo": "Kollektives-Plagiieren/innovative-commercial-market",
"url": "https://github.com/Kollektives-Plagiieren/innovative-commercial-market/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
390355597
|
Download KV electrum servers list error
The Download KV electrum servers list button is not working and throws the below error in dev tools.
Version: 0.3.1
OS: Win 10
Issue no longer present, closing
|
gharchive/issue
| 2018-12-12T18:36:51 |
2025-04-01T04:55:16.035612
|
{
"authors": [
"jdmarlow86"
],
"repo": "KomodoPlatform/Agama",
"url": "https://github.com/KomodoPlatform/Agama/issues/159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1862746711
|
Docs: add contributors
@all-contributors please add @KomoriDev for doc and content
@all-contributors please add @KomoriDev for doc and content
@all-contributors please add @KomoriDev for doc and content
@all-contributors please add @NCBM for doc and content
|
gharchive/issue
| 2023-08-23T07:35:35 |
2025-04-01T04:55:16.046402
|
{
"authors": [
"KomoriDev"
],
"repo": "KomoriDev/NoneBotX",
"url": "https://github.com/KomoriDev/NoneBotX/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
615555080
|
No such file or directory
When I run the script
konduit-init --chip cpu
I get the error below:
nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/output/adapter/ClassificationMultiOutputAdapter.java:[28,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/util/JsonSerdeUtils.java:[29,21] package org.nd4j.base does not exist
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :konduit-serving-api
Traceback (most recent call last):
File "/root/.konduit-serving/source/build_jar.py", line 116, in
os.path.join(args.source, args.target),
File "/usr/lib64/python2.7/shutil.py", line 82, in copyfile
with open(src, 'rb') as fsrc:
IOError: [Errno 2] No such file or directory: '/root/.konduit-serving/source/konduit-serving-uberjar/target/konduit-serving-uberjar-0.1.0-SNAPSHOT-all-linux-x86_64-cpu.jar'
[root@f8c7a778fa75 konduit-serving]# clear
[root@f8c7a778fa75 konduit-serving]# konduit-init --chip cpu
Unknown option: -C
usage: git [--version] [--help] [-c name=value]
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
<command> [<args>]
error: pathspec 'cli_base_5' did not match any file(s) known to git.
Unknown option: -C
usage: git [--version] [--help] [-c name=value]
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
<command> [<args>]
Running command: /root/.konduit-serving/source/mvnw -Puberjar,tensorflow clean install -Dmaven.test.skip=true -Djavacpp.platform=linux-x86_64 -Dchip=cpu -Ppython -Ppmml -Dspin.version=all
[INFO] Scanning for projects...
[INFO] Inspecting build with total of 11 modules...
[INFO] Installing Nexus Staging features:
[INFO] ... total of 11 executions of maven-deploy-plugin replaced with nexus-staging-maven-plugin
[INFO] ------------------------------------------------------------------------
[INFO] Detecting the operating system and CPU architecture
[INFO] ------------------------------------------------------------------------
[INFO] os.detected.name: linux
[INFO] os.detected.arch: x86_64
[INFO] os.detected.version: 3.10
[INFO] os.detected.version.major: 3
[INFO] os.detected.version.minor: 10
[INFO] os.detected.release: centos
[INFO] os.detected.release.version: 7
[INFO] os.detected.release.like.centos: true
[INFO] os.detected.release.like.rhel: true
[INFO] os.detected.release.like.fedora: true
[INFO] os.detected.classifier: linux-x86_64
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] konduit-serving [pom]
[INFO] konduit-serving-api [jar]
[INFO] konduit-serving-core [jar]
[INFO] konduit-serving-codegen [jar]
[INFO] konduit-serving-orchestration [jar]
[INFO] konduit-serving-pmml [jar]
[INFO] konduit-serving-tfjava [jar]
[INFO] konduit-serving-native [jar]
[INFO] konduit-serving-python [jar]
[INFO] konduit-serving-distro-bom [jar]
[INFO] konduit-serving-uberjar [jar]
[INFO]
[INFO] -----------------< ai.konduit.serving:konduit-serving >-----------------
[INFO] Building konduit-serving 0.1.0-SNAPSHOT [1/11]
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ konduit-serving ---
[INFO]
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-maven) @ konduit-serving ---
[INFO]
[INFO] --- os-maven-plugin:1.6.1:detect (default) @ konduit-serving ---
[INFO] ------------------------------------------------------------------------
[INFO] Detecting the operating system and CPU architecture
[INFO] ------------------------------------------------------------------------
[INFO] os.detected.name: linux
[INFO] os.detected.arch: x86_64
[INFO] os.detected.version: 3.10
[INFO] os.detected.version.major: 3
[INFO] os.detected.version.minor: 10
[INFO] os.detected.release: centos
[INFO] os.detected.release.version: 7
[INFO] os.detected.release.like.centos: true
[INFO] os.detected.release.like.rhel: true
[INFO] os.detected.release.like.fedora: true
[INFO] os.detected.classifier: linux-x86_64
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ konduit-serving ---
[INFO] Installing /root/.konduit-serving/source/pom.xml to /root/.m2/repository/ai/konduit/serving/konduit-serving/0.1.0-SNAPSHOT/konduit-serving-0.1.0-SNAPSHOT.pom
[INFO]
[INFO] ---------------< ai.konduit.serving:konduit-serving-api >---------------
[INFO] Building konduit-serving-api 0.1.0-SNAPSHOT [2/11]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ konduit-serving-api ---
[INFO] Deleting /root/.konduit-serving/source/konduit-serving-api/target
[INFO]
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-maven) @ konduit-serving-api ---
[INFO]
[INFO] --- os-maven-plugin:1.6.1:detect (default) @ konduit-serving-api ---
[INFO] ------------------------------------------------------------------------
[INFO] Detecting the operating system and CPU architecture
[INFO] ------------------------------------------------------------------------
[INFO] os.detected.name: linux
[INFO] os.detected.arch: x86_64
[INFO] os.detected.version: 3.10
[INFO] os.detected.version.major: 3
[INFO] os.detected.version.minor: 10
[INFO] os.detected.release: centos
[INFO] os.detected.release.version: 7
[INFO] os.detected.release.like.centos: true
[INFO] os.detected.release.like.rhel: true
[INFO] os.detected.release.like.fedora: true
[INFO] os.detected.classifier: linux-x86_64
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ konduit-serving-api ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /root/.konduit-serving/source/konduit-serving-api/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.7.0:compile (default-compile) @ konduit-serving-api ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 79 source files to /root/.konduit-serving/source/konduit-serving-api/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/pipeline/BasePipelineStep.java:[34,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/util/SchemaTypeUtils.java:[39,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/util/MetricRenderUtils.java:[25,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/input/conversion/BatchInputParser.java:[37,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/input/conversion/BatchInputParser.java:[39,34] package org.nd4j.linalg.primitives does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/input/conversion/BatchInputParser.java:[229,13] cannot find symbol
symbol: class Pair
location: class ai.konduit.serving.input.conversion.BatchInputParser
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/util/WritableValueRetriever.java:[26,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/output/adapter/ClassificationMultiOutputAdapter.java:[28,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/util/JsonSerdeUtils.java:[29,21] package org.nd4j.base does not exist
[INFO] 9 errors
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for konduit-serving 0.1.0-SNAPSHOT:
[INFO]
[INFO] konduit-serving .................................... SUCCESS [ 0.296 s]
[INFO] konduit-serving-api ................................ FAILURE [ 4.275 s]
[INFO] konduit-serving-core ............................... SKIPPED
[INFO] konduit-serving-codegen ............................ SKIPPED
[INFO] konduit-serving-orchestration ...................... SKIPPED
[INFO] konduit-serving-pmml ............................... SKIPPED
[INFO] konduit-serving-tfjava ............................. SKIPPED
[INFO] konduit-serving-native ............................. SKIPPED
[INFO] konduit-serving-python ............................. SKIPPED
[INFO] konduit-serving-distro-bom ......................... SKIPPED
[INFO] konduit-serving-uberjar ............................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.305 s
[INFO] Finished at: 2020-05-11T03:00:05Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.7.0:compile (default-compile) on project konduit-serving-api: Compilation failure: Compilation failure:
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/pipeline/BasePipelineStep.java:[34,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/util/SchemaTypeUtils.java:[39,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/util/MetricRenderUtils.java:[25,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/input/conversion/BatchInputParser.java:[37,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/input/conversion/BatchInputParser.java:[39,34] package org.nd4j.linalg.primitives does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/input/conversion/BatchInputParser.java:[229,13] cannot find symbol
[ERROR] symbol: class Pair
[ERROR] location: class ai.konduit.serving.input.conversion.BatchInputParser
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/util/WritableValueRetriever.java:[26,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/output/adapter/ClassificationMultiOutputAdapter.java:[28,21] package org.nd4j.base does not exist
[ERROR] /root/.konduit-serving/source/konduit-serving-api/src/main/java/ai/konduit/serving/util/JsonSerdeUtils.java:[29,21] package org.nd4j.base does not exist
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :konduit-serving-api
Traceback (most recent call last):
File "/root/.konduit-serving/source/build_jar.py", line 116, in
os.path.join(args.source, args.target),
File "/usr/lib64/python2.7/shutil.py", line 82, in copyfile
with open(src, 'rb') as fsrc:
IOError: [Errno 2] No such file or directory: '/root/.konduit-serving/source/konduit-serving-uberjar/target/konduit-serving-uberjar-0.1.0-SNAPSHOT-all-linux-x86_64-cpu.jar'
Can you find out the konduit pip module version that's installed? I've fixed this problem in the newer version. The problem was not fetching the branches and tags before checkout. Retrying konduit-init --chip gpu might fix this problem for you. Try it after upgrading the konduit package with pip install --upgrade konduit
|
gharchive/issue
| 2020-05-11T03:07:17 |
2025-04-01T04:55:16.092381
|
{
"authors": [
"ShamsUlAzeem",
"bewithme"
],
"repo": "KonduitAI/konduit-serving",
"url": "https://github.com/KonduitAI/konduit-serving/issues/315",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2226055278
|
Mesh security recommendations
Description
Testing instructions
Preview link:
Checklist
[x] Review label added
[x] Conditional version tags added, if applicable.
For example, if this change is for an upcoming 3.6 release, enclose your content in {% if_version gte:3.6.x %} <content> {% endif_version %} tags (or if_plugin_version tags for plugins).
Use any of the following keys:
gte:<version> - greater than or equal to a specific version
lte:<version> - less than or equal to a specific version
eq:<version> - exactly equal to a specific version
You can do the same for older versions.
07:40:26 vite.1 | TypeError: Invalid URL
07:40:26 vite.1 | at new URL (node:internal/url:775:36)
07:40:26 vite.1 | at setHostHeader (/home/runner/work/docs.konghq.com/docs.konghq.com/vite.config.ts:50:16)
07:40:26 vite.1 | at Object.configure (/home/runner/work/docs.konghq.com/docs.konghq.com/vite.config.ts:96:13)
07:40:26 vite.1 | at file:///home/runner/work/docs.konghq.com/docs.konghq.com/node_modules/vite/dist/node/chunks/dep-BBHrJRja.js:63284:18
07:40:26 vite.1 | at Array.forEach (<anonymous>)
07:40:26 vite.1 | at proxyMiddleware (file:///home/runner/work/docs.konghq.com/docs.konghq.com/node_modules/vite/dist/node/chunks/dep-BBHrJRja.js:63274:26)
07:40:26 vite.1 | at _createServer (file:///home/runner/work/docs.konghq.com/docs.konghq.com/node_modules/vite/dist/node/chunks/dep-BBHrJRja.js:64825:25)
07:40:26 vite.1 | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
07:40:26 vite.1 | at async CAC.<anonymous> (file:///home/runner/work/docs.konghq.com/docs.konghq.com/node_modules/vite/dist/node/cli.js:762:24)
@fabianrbz I guess this is due to my cross link to Konnect's docs. Do you know how I should write this to make the test pass?
@fabianrbz I ran this locally and all tests passed. What could be the issue?
The issue is that it comes from a fork, and the VITE_PORTAL_API_URL doesn't exist in the fork's env.
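A guard along these lines (an assumed shape, not necessarily the repo's actual fix) would keep new URL(...) from running when the variable is absent on forks:
// vite.config.ts sketch: skip host-header proxying when the env var is unset
const portalApiUrl = process.env.VITE_PORTAL_API_URL;

function setHostHeader(proxy) {
  if (!portalApiUrl) return; // fork CI: VITE_PORTAL_API_URL is not defined
  const url = new URL(portalApiUrl); // would throw "TypeError: Invalid URL" on undefined
  proxy.on('proxyReq', (proxyReq) => {
    proxyReq.setHeader('host', url.host);
  });
}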
|
gharchive/pull-request
| 2024-04-04T16:53:50 |
2025-04-01T04:55:16.098555
|
{
"authors": [
"fabianrbz",
"lahabana",
"mheap"
],
"repo": "Kong/docs.konghq.com",
"url": "https://github.com/Kong/docs.konghq.com/pull/7173",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
616712345
|
How to open API definition file in Insomnia Designer?
What is your question?
Insomnia Designer is a great tool, but I don't understand how I can use it to modify an existing API definition .yaml. I can import a file using Create -> Import from -> File, but after that no changes are synced to the original file.
My expected scenario is the following:
Run Insomnia Designer and open existing API definition file.
I modify API definition (debug, preview etc.)
All changes are saved to opened file
What is the context?
Insomnia Designer
Hi @rootext
Currently we create a local instance of the file in the app directory instead of modifying the original, is the ask here for the application to modify the opened file instead?
is the ask here for the application to modify the opened file instead?
Exactly.
If you maintain your definition file in git or something like that, the current behavior is very confusing, and the UI also doesn't explain that a copy is created.
|
gharchive/issue
| 2020-05-12T14:41:20 |
2025-04-01T04:55:16.101984
|
{
"authors": [
"languitar",
"nijikokun",
"rootext"
],
"repo": "Kong/insomnia",
"url": "https://github.com/Kong/insomnia/issues/2161",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1119731717
|
Plugin test response status
It was noticed that in a plugin, particularly in the access lifecycle method, when you exit the response with a status code you cannot unit test it, as the response status is always overridden by the hardcoded serviceResponse.
The examples commit shows this error in a simple example plugin and corresponding unit tests.
We have implemented a fix by conditionally merging the serviceResponse into the instance response only if the instance response.status is undefined. This works; however, I am not too sure why the instance response.status is undefined in the first place.
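The conditional merge described above could look roughly like this (assumed names, not the actual kong-js-pdk diff):
// Only fall back to the hardcoded service response when the plugin under
// test already exited with its own status during `access`.
function applyServiceResponse(instanceResponse, serviceResponse) {
  if (instanceResponse.status === undefined) {
    Object.assign(instanceResponse, serviceResponse);
  }
  return instanceResponse;
}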
@NeuralEvolution Good catch! However I think we should use this.existing (https://github.com/Kong/kong-js-pdk/blob/8295f0335711057d48d050052deb1f7512573704/plugin_test.js#L161) instead. Logically if this.existing is true,
all following phases are skipped.
@fffonion I added a commit with the update as suggested.
Thank you @NeuralEvolution for the contribution!
|
gharchive/pull-request
| 2022-01-31T17:36:06 |
2025-04-01T04:55:16.104824
|
{
"authors": [
"NeuralEvolution",
"fffonion"
],
"repo": "Kong/kong-js-pdk",
"url": "https://github.com/Kong/kong-js-pdk/pull/131",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
567758660
|
kong go plugins not executed in some requests
Summary
If you configure a go plugin (no matter if it's global or linked to a specific service) mixed with another Lua plugin, sometimes the go plugin is not executed even though it should be.
The plugin is executed during some requests, and then you get a bunch of responses where the plugin was not executed at all.
With an RPS of 7, approximately 50% of the requests are calling the go plugin, but the rest are not.
It's a strange behaviour that I will chase by looking at the plugin iterator code.
You don't have an error but if the plugin is not called, it's a failure.
Steps To Reproduce
Use docker-kong image 2.0.1 alpine and compile the go-hello plugin
Configure a service with a route (whatever)
Configure the go-hello globally or locally to the service
Do requests (e.g with jmeter) during 10 minutes
You will have the plugin being executed (and introducing the request header) in some requests, and later a bunch of requests not calling the plugin:
Additional Details & Logs
Kong version (docker alpine 2.0.1)
Kong debug-level startup logs ($ kong start --vv)
Kong error logs (<KONG_PREFIX>/logs/error.log)
Kong configuration (the output of a GET request to Kong's Admin port - see
https://docs.konghq.com/latest/admin-api/#retrieve-node-information)
Operating system docker
@ealogar Thanks for reporting. Do you see any errors on error logs?
@gszr I don't have any error, it's just that the plugin stops being called. I don't know if the RPC message is lost or whatever.
I include a copy of the logs:
172.27.0.1 - - [19/Feb/2020:18:44:25 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:26 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:26 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:27 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:27 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:27 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:28 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:28 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:29 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:29 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:30 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:30 [error] 134#0: *70881 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:30 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:30 [error] 134#0: *70881 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:30 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:31 [error] 134#0: *70881 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:31 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:31 [error] 134#0: *70881 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:31 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:32 [error] 134#0: *70881 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:32 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:32 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:33 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:33 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:33 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:34 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:34 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:35 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:35 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:36 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:36 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:36 [error] 134#0: *70971 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:37 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:37 [error] 134#0: *70971 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:37 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:37 [error] 134#0: *70971 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:37 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:38 [error] 134#0: *70971 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:38 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:38 [error] 134#0: *70971 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:38 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:39 [error] 134#0: *70997 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:39 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:39 [error] 134#0: *70997 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:39 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:40 [error] 134#0: *70997 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:40 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:40 [error] 134#0: *70997 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:40 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
2020/02/19 18:44:40 [error] 134#0: *70997 [kong] go.lua:338 [go-hello2] Access go-hello, client: 172.27.0.1, server: kong, request: "POST /api1 HTTP/1.1", host: "localhost:8000"
172.27.0.1 - - [19/Feb/2020:18:44:40 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:41 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:41 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
172.27.0.1 - - [19/Feb/2020:18:44:42 +0000] "POST /api1 HTTP/1.1" 200 14 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_192)" "-" "-"
Thanks @ealogar, we will do some digging on our side.
Hi @gszr, I just found (after 20 minutes of testing) that if you define the priority in the Go module, it works as expected and is always called.
var Priority = 1
I discovered that the pluginserver looks up this property, and I had always meant to set it.
This may be a workaround for me, but you may want to consider it in your digging; maybe the documentation should mention defining the priority.
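For anyone hitting the same thing, here is a hedged sketch of where that exported symbol lives, modeled on the go-hello example (the exact pdk calls may differ across go-pdk versions):

package main

import (
	"github.com/Kong/go-pdk"
)

type Config struct{}

// New is looked up by the go-pluginserver to instantiate the plugin config.
func New() interface{} {
	return &Config{}
}

// Priority and Version are exported symbols the pluginserver looks up;
// leaving Priority unset was the reported cause of the intermittent skips.
var Priority = 1
var Version = "0.1"

// Access runs in the access phase, like the go-hello example.
func (conf *Config) Access(kong *pdk.PDK) {
	kong.ServiceRequest.SetHeader("x-hello-from-go", "Go says hello")
}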
@gszr I have not been able to reproduce the issue in a couple of days. Could it be a memory-related issue?
@gszr I am still having this problem sometimes; I am afraid I will stop using Go plugins. Have you been able to do some investigation?
I had the same problem
Also, I wrote a plugin and let it act on two services with different configurations (using kong.yml). Then I requested the services and found that the configuration value is not stable; sometimes it prints the value of the plugin configuration that is bound to the other service.
One thing that I am doing now (but only with one global Go plugin and some more in Lua) is increasing the file descriptors of the nginx workers and reducing the worker connections to half that value, as I have seen that the go-pluginserver takes this value when it's spawned. The worst thing about this issue is that no logs appear; they would be very useful to help fix the issue.
Thanks for linking the issues, @ealogar! Both PRs are merged by now, so closing this.
Hey @ealogar, unfortunately for me the issue is still there even with the above fix. I am using Kong 2.0.3 on Heroku and just patched in go.lua and plugins_iterator.lua from the above PRs, but I still keep having this issue intermittently.
One pattern I have noticed: when it happens, it keeps happening more than 50% of the time until I restart the app, at which point it stops. Can you please suggest something to overcome this issue? Initially I thought that the go-pluginserver might be crashing, but locally I can see that whenever it crashes it brings itself back, and I assume that's what it should do in Heroku too.
I was able to patch the issue with this PR. Not sure if this is the best way, but it seems to work for me in Heroku.
PS: I am not that well versed with Lua. So please bear with me :-)
@primableatom For my use case both PRs fixed the issue when reloading the plugin. I think what you're suffering from is not the same thing...
@primableatom Thanks for submitting the PR. That does look like a different issue. The team and I will look into the PR. Thanks again!
I encountered a similar problem to @primableatom. The Kong version is 2.1.0. The request stopped halfway in the Go plugin: the rest was not executed, with no response and no log, and the request appeared directly at the next level of service. When I restart the Kong docker container, everything is normal; then, at some point, it goes wrong again.
I have spent 4 months writing a big authorization Go plugin; if this issue can't be fixed, I just don't know what I should do. 😿
|
gharchive/issue
| 2020-02-19T18:33:12 |
2025-04-01T04:55:16.121525
|
{
"authors": [
"ealogar",
"gszr",
"ifree321",
"lmqytz",
"primableatom"
],
"repo": "Kong/kong",
"url": "https://github.com/Kong/kong/issues/5586",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
292221043
|
check for nil value of KONG_HEADER_FILTER_STARTED_AT
Summary
Avoiding an error that occurs when ctx.KONG_HEADER_FILTER_STARTED_AT is nil.
attempt to perform arithmetic on field 'KONG_HEADER_FILTER_STARTED_AT' (a nil value)
This happens sometimes. See example issue.
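A minimal sketch of the guard this PR proposes, with the surrounding names assumed rather than taken from the actual Kong source:

local ctx = ngx.ctx
if ctx.KONG_HEADER_FILTER_STARTED_AT then
  -- only do the arithmetic when the phase actually recorded a start time
  local elapsed = ngx.now() * 1000 - ctx.KONG_HEADER_FILTER_STARTED_AT
  ngx.log(ngx.DEBUG, "header filter took ", elapsed, "ms")
end
-- without the guard, the subtraction raises:
-- attempt to perform arithmetic on field 'KONG_HEADER_FILTER_STARTED_AT' (a nil value)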
@thibaultcha it's kind of an edge case, which is a bit hard to write tests for. In my case, it happens when a custom plugin is used for making a proxy request to a remote server.
I'd be happy to write a test case for this, but I think I might need some guidance on the proper setup of such a test.
it's kind of an edge case, which is a bit hard to write tests for
Exactly, the goal of writing tests for this is to find the edge-case, and make sure it isn't one anymore, but properly expected and handled by Kong :)
@yaronpx You can find some fixture plugins we use in our tests in this directory. Don't forget to load it into the test instance via the custom_plugins property when starting the Kong test instance. You could then write a plugin that reproduces your error, in your own test suite.
Just from my experience working with custom Kong plugins, this appears to happen when you return from a custom plugin instead of using ngx.exit when a request is being proxied. Doing this causes the exact error above.
I am reluctant to merge this without a reproducible test case, especially since the underlying issue could have been fixed by now. From reading the proxy path, this seems to be a non-reachable branch, so we should make extra sure about the circumstances this happens under.
Until then, I am closing this. Feel free to reopen it with a reproducible test-case (even with a custom plugin in spec/fixtures as instructed above).
Thanks
|
gharchive/pull-request
| 2018-01-28T17:14:44 |
2025-04-01T04:55:16.127403
|
{
"authors": [
"thibaultcha",
"thomasgriffin",
"yaronpx"
],
"repo": "Kong/kong",
"url": "https://github.com/Kong/kong/pull/3185",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
520199772
|
Add a plugin on specific routes
Hi,
Is it possible to activate a Kong plugin on specific routes of the same service:
enable the plugin on myservice/private
disable the plugin on myservice/public
using KongIngress or Ingress.
Thank you in advance
You can create two Ingress resources that point to the same Service in Kubernetes but with two different paths and then have different plugins on those two Ingress resources.
See: https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/guides/using-kongplugin-resource.md#configuring-plugins-on-ingress-resource
Please close the issue if this is what you're looking for.
Thank you for your fast reply.
However, I still have a problem. Here are my Ingresses:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-service-public
spec:
  rules:
    - http:
        paths:
          - path: /service
            backend:
              serviceName: service
              servicePort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-service-private
  annotations:
    plugins.konghq.com: kong-jwt
spec:
  rules:
    - http:
        paths:
          - path: /service/docs/
            backend:
              serviceName: service
              servicePort: 3000
When I hit:
ip/service => good
ip/service/docs => bad, because it sends me to ip/service with the plugin
I suppose I need to add a proxy:
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: kong-ingress-service-private
proxy:
  path: /docs/
route:
  strip_path: true
with:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-service-private
  annotations:
    plugins.konghq.com: kong-jwt
    configuration.konghq.com: kong-ingress-service-private
but it doesn't seem to work.
Thank you
@joselegitan Try creating the Ingress instead with /service/docs (without the trailing /).
That should route traffic correctly.
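I.e., in the kong-service-private Ingress above, the only change would be:

paths:
  - path: /service/docs   # no trailing slash
    backend:
      serviceName: service
      servicePort: 3000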
Closing due to lack of activity.
|
gharchive/issue
| 2019-11-08T19:42:55 |
2025-04-01T04:55:16.135180
|
{
"authors": [
"hbagdi",
"joselegitan"
],
"repo": "Kong/kubernetes-ingress-controller",
"url": "https://github.com/Kong/kubernetes-ingress-controller/issues/452",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2052208366
|
Flaky integration test: TestValidationWebhook/should_fail_the_validation_if_the_secret_in_ConfigPatches_of_KongClusterPlugin_generates_invalid_configuration
Problem statement
During one of the kong image validation workflow runs, the TestValidationWebhook/should_fail_the_validation_if_the_secret_in_ConfigPatches_of_KongClusterPlugin_generates_invalid_configuration test failed, but it shouldn't have (it passed on re-run)
webhook_test.go:891:
Error Trace: /home/runner/work/kubernetes-ingress-controller/kubernetes-ingress-controller/test/integration/webhook_test.go:891
Error: An error is expected but got nil.
Test: TestValidationWebhook/should_fail_the_validation_if_the_secret_in_ConfigPatches_of_KongClusterPlugin_generates_invalid_configuration
https://github.com/Kong/kubernetes-ingress-controller/actions/runs/7286242198/job/19854637730
Still occurring https://github.com/Kong/kubernetes-ingress-controller/actions/runs/7444386286/job/20250735830
To fix this, let's move the flaky test to envtests that were scaffolded in https://github.com/Kong/kubernetes-ingress-controller/pull/5605. TestAdmissionWebhook_KongVault can be used as an example of the admission webhook test that deploys Kong in docker so it's possible to also test cases that require communication with DP.
Fun fact: TestAdmissionWebhook_KongVault failed in https://github.com/Kong/kubernetes-ingress-controller/actions/runs/8437912949/job/23108889362?pr=5751
admission_webhook_envtest_test.go:52:
Error Trace: /home/runner/work/kubernetes-ingress-controller/kubernetes-ingress-controller/test/envtest/admission_webhook_envtest_test.go:196
/home/runner/work/kubernetes-ingress-controller/kubernetes-ingress-controller/test/envtest/admission_webhook_envtest_test.go:52
Error: Received unexpected error:
Internal error occurred: failed calling webhook "kongvaults.validation.ingress-controller.konghq.com": failed to call webhook: an error on the server ("HTTP status 405 (message: \"Method not allowed\")") has prevented the request from succeeding
Test: TestAdmissionWebhook_KongVault
@pmalek It could fail because we mistakenly updated the Kong version that's used in envtests (https://github.com/Kong/kubernetes-ingress-controller/pull/5608/files). We have to stay with 3.4.x until 3.7.x is released with a fix for https://konghq.atlassian.net/browse/KAG-3699. https://github.com/Kong/kubernetes-ingress-controller/pull/5756 should address that.
😞 https://github.com/Kong/kubernetes-ingress-controller/commit/09f1e556e68442b87345879e18867e9b0226e3a7
Still failing: https://github.com/Kong/kubernetes-ingress-controller/actions/runs/8468297488/job/23201043133#step:7:4530
creating consumer background-noise-consumer-45
Warning: import/export of basic-authcredentials using decK doesn't work due to hashing of passwords in Kong.
creating basic-auth background-noise-consumer-31-credential-1 for consumer background-noise-consumer-31
creating basic-auth background-noise-consumer-32-credential-2 for consumer background-noise-consumer-32
creating basic-auth background-noise-consumer-31-credential-0 for consumer background-noise-consumer-31
creating basic-auth background-noise-consumer-32-credential-3 for consumer background-noise-consumer-32
creating basic-auth background-noise-consumer-31-credential-2 for consumer background-noise-consumer-31
creating basic-auth background-noise-consumer-31-credential-4 for consumer background-noise-consumer-31
creating basic-auth background-noise-consumer-33-credential-2 for consumer background-noise-consumer-33
creating basic-auth background-noise-consumer-33-credential-0 for consumer background-noise-consumer-33
creating basic-auth background-noise-consumer-32-credential-1 for consumer background-noise-consumer-32
creating basic-auth background-noise-consumer-31-credential-3 for consumer background-noise-consumer-31
creating basic-auth background-noise-consumer-33-credential-4 for consumer background-noise-consumer-33
creating basic-auth background-noise-consumer-33-credential-1 for consumer background-noise-consumer-33
creating basic-auth background-noise-consumer-32-credential-0 for consumer background-noise-consumer-32
creating basic-auth background-noise-consumer-33-credential-3 for consumer background-noise-consumer-33
creating basic-auth background-noise-consumer-32-credential-4 for consumer background-noise-consumer-32
webhook_test.go:901:
Error Trace: /home/runner/work/kubernetes-ingress-controller/kubernetes-ingress-controller/test/integration/webhook_test.go:901
Error: An error is expected but got nil.
Test: TestValidationWebhook/should_fail_the_validation_if_the_secret_in_ConfigPatches_of_KongClusterPlugin_generates_invalid_configuration
Warning: import/export of basic-authcredentials using decK doesn't work due to hashing of passwords in Kong.
creating basic-auth background-noise-consumer-35-credential-3 for consumer background-noise-consumer-35
creating basic-auth background-noise-consumer-36-credential-1 for consumer background-noise-consumer-36
creating basic-auth background-noise-consumer-34-credential-3 for consumer background-noise-consumer-34
creating basic-auth background-noise-consumer-35-credential-1 for consumer background-noise-consumer-35
creating basic-auth background-noise-consumer-34-credential-0 for consumer background-noise-consumer-34
creating basic-auth background-noise-consumer-34-credential-1 for consumer background-noise-consumer-34
Should be fixed by https://github.com/Kong/kubernetes-ingress-controller/pull/5797
|
gharchive/issue
| 2023-12-21T11:03:12 |
2025-04-01T04:55:16.144511
|
{
"authors": [
"czeslavo",
"pmalek",
"programmer04"
],
"repo": "Kong/kubernetes-ingress-controller",
"url": "https://github.com/Kong/kubernetes-ingress-controller/issues/5375",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
271418684
|
Balancer: getPeer fails
See Kong log below
2017/11/02 23:30:05 [warn] 19010#0: *28199029 [lua] balancer.lua:297: queryDns(): [ringbalancer] querying dns for xxx.host.name.xxx failed: dns query returned no results, client: 192.168.13.43, server: kong, request: "GET /mrhyde/dataservices/experiments/configuration/v1 HTTP/1.1", host: "gateway.mydomain.com"
2017/11/02 23:30:05 [warn] 19010#0: *28199029 [lua] balancer.lua:668: redistributeSlots(): [ringbalancer] redistributed slots, size=100, dropped=100, assigned=0, left unassigned=100, client: 192.168.13.43, server: kong, request: "GET /mrhyde/dataservices/experiments/configuration/v1 HTTP/1.1", host: "gateway.mydomain.com"
2017/11/02 23:30:05 [error] 19010#0: *28199029 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/resty/dns/balancer.lua:539: attempt to index field 'address' (a nil value)
stack traceback:
coroutine 0:
/usr/local/share/lua/5.1/resty/dns/balancer.lua: in function 'getPeer'
/usr/local/share/lua/5.1/kong/core/balancer.lua:281: in function 'balancer_execute'
/usr/local/share/lua/5.1/kong/core/handler.lua:126: in function 'before'
/usr/local/share/lua/5.1/kong.lua:292: in function 'access'
access_by_lua(nginx-kong.conf:96):2: in function <access_by_lua(nginx-kong.conf:96):1>, client: 192.168.13.43, server: kong, request: "GET /mrhyde/dataservices/experiments/configuration/v1 HTTP/1.1", host: "gateway.mydomain.com"
The error seems to be an edge case. The DNS record fails to resolve: see dns query returned no results in the log, hence the ring balancer releases all its slots;
[ringbalancer] redistributed slots, size=100, dropped=100, assigned=0, left unassigned=100
When, in this state, a getPeer is requested, it will check assignments and return a functional error, "No peers are available". The problem here occurs because the "redistribution" happens in this case after that check, inside getPeer. Due to this, it tries to access an address that is no longer available.
It does not matter for the final result, as it would have failed to proxy anyway, but it should have been a proper "No peers are available" instead of throwing an error which results in a 500.
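A rough sketch of the safer behaviour, with method and field names assumed rather than copied from the module:

function balancer:getPeer(hashValue)
  local slot = self:pickSlot(hashValue)  -- assumed helper; resolves a slot for the hash
  if not slot or not slot.address then
    -- the slot was just dropped by redistributeSlots(); return the functional
    -- error instead of indexing a nil address (which surfaced as a 500)
    return nil, "No peers are available"
  end
  return slot.address.ip, slot.address.port, slot.address.hostname
end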
As for the DNS results: the "dns query returned no results" error occurs when the server replies, but with an empty record. See https://github.com/Kong/lua-resty-dns-client/blob/0.4.x/src/resty/dns/client.lua#L706-L725
btw: this is on the 0.4.x branch, but the edge case probably also exists on the master branch
|
gharchive/issue
| 2017-11-06T10:03:51 |
2025-04-01T04:55:16.149687
|
{
"authors": [
"Tieske"
],
"repo": "Kong/lua-resty-dns-client",
"url": "https://github.com/Kong/lua-resty-dns-client/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2051845832
|
Handle fallback for online content
Describe the task
Handle fallback for online content
[x] Create folder data
[x] Improve function fetchStaticData with fallback data
Version test: http://14.224.158.246:8900/s/od8swfraBg4MiKG
Validation passed
|
gharchive/issue
| 2023-12-21T07:15:52 |
2025-04-01T04:55:16.152201
|
{
"authors": [
"Sokol142196"
],
"repo": "Koniverse/SubWallet-Extension",
"url": "https://github.com/Koniverse/SubWallet-Extension/issues/2391",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2055500063
|
Fix bug "Invalid character" on earning
Fee numbers can sometimes get incorrectly rounded, which causes errors in some libs
Version test: https://c6cb796b.subwallet-webapp.pages.dev/
Validation passed
Next step completed successfully
|
gharchive/issue
| 2023-12-25T08:12:02 |
2025-04-01T04:55:16.154264
|
{
"authors": [
"NamPhamc99",
"Sokol142196"
],
"repo": "Koniverse/SubWallet-Extension",
"url": "https://github.com/Koniverse/SubWallet-Extension/issues/2400",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
94316422
|
Anything happening with this project?
The last update was about 5 months back; are there any updates, and what is the progress on the next version of the CMS?
Clearly not. Is this CMS now dead in the water?
alanr3id,
Thank you for leaving this comment.
New Kooboo will be released in about a month; currently the project is in the final stage of development.
Due to various reasons, we haven't been able to answer your questions and requests very promptly, for which we sincerely apologize. However, after the release of New Kooboo, this situation will be fundamentally improved.
As you may have known, New Kooboo is being built with large-scale upgrade in many aspects and development of some innovative functions. Unfortunately, right now we can't provide you with a precise release date or more details. But we assure you that this project has always been our focus and is in good hands of our best engineers. We hope New Kooboo can be as amazing as we promised, or even more. We take this project very seriously and will try our best to bring it to you as soon as possible. And hopefully by then you will agree it is worth the wait.
Thank you for your support.
@koobooteam thanks for your feedback, I'm happy with your work
Thanks for the update. I look forward to seeing what has been done.
Any updates @koobooteam ?
Be really cool to see some movement on this.
Once again any updates @koobooteam ?
I'm curious too...
Is the project still in development? On hold? Dead?
Would be great to get some news about the project :+1:
I've been developing CMSes for a long time, and Kooboo is the best CMS solution ever.
I believe Kooboo will come back with new features, but it already works stably even without any changes.
One way or another, Kooboo should not die.
|
gharchive/issue
| 2015-07-10T14:22:26 |
2025-04-01T04:55:16.161839
|
{
"authors": [
"alanr3id",
"dstream",
"hyrmedia",
"koobooteam",
"nhaberl",
"r3id",
"rdonmez"
],
"repo": "Kooboo/CMS",
"url": "https://github.com/Kooboo/CMS/issues/326",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
526510594
|
Will be there support for random load in case of DockerCloudRetentionStrategy
https://github.com/KostyaSha/yet-another-docker-plugin/blob/e559f3d8fd78089e96f58e64d633a6e8885289cf/yet-another-docker-plugin/src/main/java/com/github/kostyasha/yad/DockerProvisioningStrategy.java#L148
Currently I have N slaves with the same templates to spread the load. If I use the DockerOnce retention strategy with RandomLeastLoadedDockerCloudOrder, then the load is randomly spread across two slaves.
But when using DockerCloudRetentionStrategy (to keep a slave alive for n minutes in order to avoid provisioning again), the RandomLeastLoadedDockerCloudOrder strategy is not used.
All the jobs get scheduled on slave1 until it gets full.
Can we add DockerCloudRetentionStrategy to the list of strategies supported by RandomLeastLoadedDockerCloudOrder?
You can try... If it's just a matter of uncommenting that line, then you can build the .hpi and upload it into your Jenkins.
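For reference, a hedged sketch of that flow; the Maven invocation is the standard one for Jenkins plugins and may need adjusting for this repo's module layout:

git clone https://github.com/KostyaSha/yet-another-docker-plugin
cd yet-another-docker-plugin
# edit yet-another-docker-plugin/src/main/java/.../DockerProvisioningStrategy.java
# to uncomment the DockerCloudRetentionStrategy case, then:
mvn package -DskipTests
# upload the resulting .hpi from target/ via
# Manage Jenkins -> Manage Plugins -> Advanced -> Upload Plugin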
|
gharchive/issue
| 2019-11-21T10:43:23 |
2025-04-01T04:55:16.183134
|
{
"authors": [
"33man",
"KostyaSha"
],
"repo": "KostyaSha/yet-another-docker-plugin",
"url": "https://github.com/KostyaSha/yet-another-docker-plugin/issues/281",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
67940764
|
custom views, view attrs, attrs
It doesn't support custom views well enough: custom attrs are not supported.
It also doesn't support all included attrs (like minHeight)
How do I set background=?primaryColor?
Maybe there will be an intention in the Anko plugin that will create new properties for Java get/set methods.
There is a property View.minimumHeight : Int.
backgroundResource = android.R.color.primary_text_dark.
Can we have something like an easy way to pass hardcoded attrs?
What hardcoded attrs do you mean?
custom attrs :)
Hi, how do I get the android:layout_height="?attr/actionBarSize" functionality, or any other attr?
Here is what I ended up doing for this...
fun Context.attribute(value: Int): TypedValue {
    val ret = TypedValue()
    theme.resolveAttribute(value, ret, true)
    return ret
}

fun Context.attrAsDimen(value: Int): Int {
    return TypedValue.complexToDimensionPixelSize(attribute(value).data, resources.displayMetrics)
}

// inside the DSL of an activity's onCreate
val toolbar = toolbarSupport {
    setTitle("hello")
    setElevationCompat(dip(4))
    backgroundColor = attribute(R.attr.colorPrimary).data
}.layoutParams(width = org.jetbrains.anko.matchParent, height = attrAsDimen(net.schwiz.koat.R.attr.actionBarSize))
Custom views: setBackgroundResource() and setBackgroundColor() both don't work, even though I override the onDraw method.
This is how I use attrs:
val a = context.obtainStyledAttributes(attrs, R.styleable.CustomTitleView)
// get the attributes
if (a.hasValue(R.styleable.CustomTitleView_center_title)) {
    centerTitle = a.getString(R.styleable.CustomTitleView_center_title)
}
if (a.hasValue(R.styleable.CustomTitleView_right_title)) {
    rightTitle = a.getString(R.styleable.CustomTitleView_right_title)
}
if (a.hasValue(R.styleable.CustomTitleView_bgcolor)) {
    bgcolor = a.getResourceId(R.styleable.CustomTitleView_bgcolor, R.color.colorAccent)
}
if (a.hasValue(R.styleable.CustomTitleView_leftsrc)) {
    leftsrc = a.getResourceId(R.styleable.CustomTitleView_leftsrc, R.drawable.abc_btn_radio_material)
}
a.recycle() // the TypedArray must be recycled after use
Or use it like this:
// get the attributes
for (i in 0 until a.indexCount) {
    val attr = a.getIndex(i)
    when (attr) {
        R.styleable.CustomTitleView_leftsrc ->
            leftsrc = a.getResourceId(attr, R.drawable.abc_btn_radio_material)
        R.styleable.CustomTitleView_right_title -> rightTitle = a.getString(attr)
        R.styleable.CustomTitleView_center_title -> centerTitle = a.getString(attr)
    }
}
|
gharchive/issue
| 2015-04-12T18:17:20 |
2025-04-01T04:55:16.205279
|
{
"authors": [
"schwiz",
"smallmouse2009",
"yanex",
"yoavst"
],
"repo": "Kotlin/anko",
"url": "https://github.com/Kotlin/anko/issues/19",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2331943513
|
Spring example
Having a Ktor example is nice, but how can kotlinx-rpc be used in a Spring app? Can you please provide a simple example? Since Spring is quite a popular server-side JVM framework, I believe it could be helpful for many people to have such a Kotlin-based example.
Hi!
You are right, a Spring example is needed here. Unfortunately, we don't have integration with it right now, unlike with Ktor. Once we do, we will definitely add one.
@Mr3zee thanks for the quick and informative reply.
Do you already have plans to provide such an integration? At the least, I am wondering whether it's a short-term or a long-term goal from your point of view.
I think right now it is a long-term goal. We are working on improving the kRPC protocol in general; that is the priority now.
|
gharchive/issue
| 2024-06-03T20:06:29 |
2025-04-01T04:55:16.207344
|
{
"authors": [
"AlexTrotsenko",
"Mr3zee"
],
"repo": "Kotlin/kotlinx-rpc",
"url": "https://github.com/Kotlin/kotlinx-rpc/issues/91",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1005217083
|
Invite URL button in the room-creation-complete modal
close #419
close #345
What was done
Filled in the invite URL value in the room-creation-complete modal
Created a copy button: copy icon → shows a ✅ for 2 seconds when pressed → reverts to the copy icon
Screenshots
Reference links / notes
Removed the event name input field
|
gharchive/pull-request
| 2021-09-23T09:39:06 |
2025-04-01T04:55:16.215575
|
{
"authors": [
"knknk98"
],
"repo": "KoukiNAGATA/sushi-chat",
"url": "https://github.com/KoukiNAGATA/sushi-chat/pull/422",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2178008165
|
Show outdated modules
Hi, Kristjan
maybe it's better not to check "Show outdated modules" by default?
Would it also be okay to work with a URL parameter? If you include the following URL, the outdated modules will not be displayed: https://kristjanesperanto.github.io/MagicMirror-3rd-Party-Modules/?showOutdated=false
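For readers curious how such a parameter can be honored client-side, a minimal sketch (the selectors are assumptions, not the site's actual code):

const params = new URLSearchParams(window.location.search);
const showOutdated = params.get("showOutdated") !== "false"; // default: show
for (const card of document.querySelectorAll("[data-outdated='true']")) {
  card.hidden = !showOutdated;
}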
Somehow I see advantages in both options (showing and not showing) and I don't know which is really better.
I imagine the following scenario, which is why I tend to show the outdated modules by default: a user is looking for a module; there is no "normal" module for it, but there is an outdated one. The user looks at the module, adapts it to his needs, and is happy. If we were to hide the outdated modules, the user might never find the outdated module that helped him.
Because of the URL parameter I think I can close this issue :slightly_smiling_face:
|
gharchive/issue
| 2024-03-11T00:55:04 |
2025-04-01T04:55:16.344908
|
{
"authors": [
"KristjanESPERANTO",
"bugsounet"
],
"repo": "KristjanESPERANTO/MagicMirror-3rd-Party-Modules",
"url": "https://github.com/KristjanESPERANTO/MagicMirror-3rd-Party-Modules/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
282705454
|
merge package template from arlac77/npm-package-template
package.json
chore(devDependencies): update nyc@^11.4.1 from template
chore(devDependencies): update rollup@^0.52.2 from template
Codecov Report
Merging #110 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #110 +/- ##
=====================================
Coverage 100% 100%
=====================================
Files 2 2
Lines 14 14
=====================================
Hits 14 14
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 72e7038...fcd6882. Read the comment docs.
Coverage remained the same at 100.0% when pulling fcd6882fb388bcc9083c8fb11ce32bb403b51847 on template-sync-1 into 72e7038c05b52b8332a6b9c85f9fc9cfedfecfa2 on master.
|
gharchive/pull-request
| 2017-12-17T16:27:07 |
2025-04-01T04:55:16.370184
|
{
"authors": [
"arlac77",
"codecov-io",
"coveralls"
],
"repo": "Kronos-Integration/kronos-test-interceptor",
"url": "https://github.com/Kronos-Integration/kronos-test-interceptor/pull/110",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
530141085
|
merge from arlac77/npm-package-template-esm-only
package.json
chore(scripts): cover@#overwrite c8 -x 'tests/**/*' --temp-directory build/tmp ava && c8 report -r lcov -o build/coverage --temp-directory build/tmp
chore(package): add sideEffects from template
:tada: This PR is included in version 4.1.1 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2019-11-29T03:39:25 |
2025-04-01T04:55:16.373518
|
{
"authors": [
"arlac77"
],
"repo": "Kronos-Integration/service-health-check",
"url": "https://github.com/Kronos-Integration/service-health-check/pull/539",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
}
|
335259436
|
Trouble Getting Activation Bytes
Hello, I'm out of my league trying to follow these instructions, but I believe I'm about 90% of the way there and I just need a little direction.
To set the table, my goal is to obtain my Audible Activation Bytes so that I can convert my .aax audio book files to .m4b while retaining the chapter info.
I installed Brew, FFMPEG, and Bash 4.4.23 en route to attempting to follow the instructions.
In an attempt to get my Audible Activation Bytes I ran the following lines in my macOS 10.13.5 Terminal
brew install chromedriver ffmpeg
sudo easy_install pip
pip install selenium requests
git clone https://github.com/inAudible-NG/audible-activator
cd audible-activator
sed -i '' 's,chromedriver_path = "./chromedriver",chromedriver_path = "/usr/local/bin/chromedriver",' audible-activator.py
./audible-activator.py
Everything seems to go fine until I get to the last command (./audible-activator.py). When I run that, I get the response: env: python2: No such file or directory
So I'm not sure what to do, but if someone could help I would greatly appreciate it. Please let me know if there are any other details I can provide. Thanks! - Ben
I am not sure that this is the right place to ask this. The audible-activator repo might be a better place.
The error message tells you that the env command cannot find python2. See the shebang in the first line:
#!/usr/bin/env python2
Check which versions are installed (one of the simplest ways is probably to just enter python in a terminal and use auto-completion to list all versions).
If you have a version of python 2 installed (maybe named python2.7), find all files which refer to python2 by using grep -rnw python2 in the root folder of the activator, and change python2 to whatever python 2 is called on your system.
If you don't have a version of python 2 installed, you can most likely install it via brew.
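Putting both steps together, a hedged one-liner (macOS/BSD sed; python2.7 is an assumed interpreter name, adjust to yours):

# rewrite the shebangs of all files that reference python2
grep -rlw python2 . | xargs sed -i '' 's|/usr/bin/env python2$|/usr/bin/env python2.7|'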
Well it looks like audible-activator updated a few days ago to support python 3. The shebang on the python script is either wrong or they didn't update that file to python3. You can try changing the shebang from
#!/usr/bin/env python2
to
#!/usr/bin/env python
and see if that works for your system. Otherwise follow b0wter's advice and see if the upstream project can help.
|
gharchive/issue
| 2018-06-25T04:56:21 |
2025-04-01T04:55:16.383081
|
{
"authors": [
"KrumpetPirate",
"b0wter",
"benyzboy8682"
],
"repo": "KrumpetPirate/AAXtoMP3",
"url": "https://github.com/KrumpetPirate/AAXtoMP3/issues/80",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
73001319
|
Failed to install core package via symlink
Steps to reproduce below:
Clone repo
build
try to install core package via symlink as described in https://github.com/Krzysztof-Cieslak/FSharp.Atom/issues/3#issuecomment-98683388
Atom Version: 0.196.0
System: GOODEWIND
Thrown From: core package, v0.0.1
Stack Trace
Failed to activate the core package
At path must be a string
TypeError: path must be a string
at TypeError (native)
at Object.fs.readdir (fs.js:800:11)
at Object.fs.readdir (ATOM_SHELL_ASAR.js:422:24)
at Core__projInit$ (D:\code\FSharp.Atom\src\core\lib\core.js:485:16)
at Core__initialize$ (D:\code\FSharp.Atom\src\core\lib\core.js:447:5)
at Core__activate$ (D:\code\FSharp.Atom\src\core\lib\core.js:433:5)
at D:\code\FSharp.Atom\src\core\lib\core.js:1632:14
at Object.module.exports.AtomFSharpCore.activate (D:\code\FSharp.Atom\src\core\lib\core.js:1652:26)
at Package.module.exports.Package.activateNow (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\package.js:222:19)
at C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\package.js:203:30
at Package.module.exports.Package.measure (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\package.js:147:15)
at Package.module.exports.Package.activate (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\package.js:195:14)
at PackageManager.module.exports.PackageManager.activatePackage (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\package-manager.js:434:21)
at C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\package-manager.js:418:29
at Config.module.exports.Config.transact (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\config.js:311:16)
at PackageManager.module.exports.PackageManager.activatePackages (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\package-manager.js:413:19)
at PackageManager.module.exports.PackageManager.activate (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\package-manager.js:394:46)
at Atom.module.exports.Atom.startEditorWindow (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\atom.js:623:21)
at Object.<anonymous> (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\window-bootstrap.js:12:8)
at Object.<anonymous> (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\window-bootstrap.js:23:4)
at Module._compile (module.js:452:26)
at Object.loadFile [as .js] (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\babel.js:162:21)
at Module.load (module.js:347:32)
at Function.Module._load (module.js:302:12)
at Module.require (module.js:357:17)
at require (module.js:376:17)
at setupWindow (file:///C:/Users/Steffen/AppData/Local/atom/app-0.196.0/resources/app.asar/static/index.js:86:23)
at window.onload (file:///C:/Users/Steffen/AppData/Local/atom/app-0.196.0/resources/app.asar/static/index.js:38:7)
Commands
Config
{
"core": {}
}
Installed Packages
# User
core, v0.0.1
paket, v0.0.1
# Dev
No dev packages
I forgot to install the packages from the readme.
Still happens after installing the other packages.
Atom Version: 0.196.0
System: GOODEWIND
Thrown From: autocomplete-plus package, v2.12.1
Stack Trace
Failed to activate the core package
At path must be a string
TypeError: path must be a string
at TypeError (native)
at Object.fs.readdir (fs.js:800:11)
at Object.fs.readdir (ATOM_SHELL_ASAR.js:422:24)
at Core__projInit$ (D:\code\FSharp.Atom\src\core\lib\core.js:485:16)
at Core__initialize$ (D:\code\FSharp.Atom\src\core\lib\core.js:447:5)
at Core__activate$ (D:\code\FSharp.Atom\src\core\lib\core.js:433:5)
at D:\code\FSharp.Atom\src\core\lib\core.js:1632:14
at Object.module.exports.AtomFSharpCore.activate (D:\code\FSharp.Atom\src\core\lib\core.js:1652:26)
at Package.module.exports.Package.activateNow (C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\package.js:222:19)
at C:\Users\Steffen\AppData\Local\atom\app-0.196.0\resources\app.asar\src\package.js:203:30
Commands
Config
{
"core": {}
}
Installed Packages
# User
autocomplete-plus, v2.12.1
core, v0.0.1
paket, v0.0.1
# Dev
No dev packages
During the "installation discussion" with @isaacabraham we got the same error on a Windows 8.1 machine while trying to open a standalone fsx file. Looks like the issue is not connected with the symlink and is also reproduced for a normal copy of the package in the Atom packages folder. Will investigate it more Soon™
|
gharchive/issue
| 2015-05-04T11:57:27 |
2025-04-01T04:55:16.393901
|
{
"authors": [
"Krzysztof-Cieslak",
"forki"
],
"repo": "Krzysztof-Cieslak/FSharp.Atom",
"url": "https://github.com/Krzysztof-Cieslak/FSharp.Atom/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
455792693
|
Need Assistance- Scoreboard not always loading
Hi,
I am currently using InfoBoardReborn on my server and have different scoreboards set up for different worlds. The only problem is that sometimes the scoreboard does not load in, and other times it takes a few minutes for it to appear.
I was just wondering if there was any way this could be fixed so that it doesn't take so long to load in.
Any help is much appreciated. Cheers.
This is a bug in the way the board's "whitelist" works. I take it that most boards have a different world in which they are allowed to be shown. The reason it takes a while for a scoreboard to be shown is that the server keeps one number that remembers which board it is on; if the player is not allowed to see that board, it doesn't show one until it ticks and moves on to the next board (which will force a board on the player).
This is something I changed in the new version of IBR: every player has a "ladder" which contains all the boards the player is allowed to see, and a number to keep track of which board they are on.
I'm still working on the animations of the new version, so it should be done soon.
|
gharchive/issue
| 2019-06-13T14:59:53 |
2025-04-01T04:55:16.396488
|
{
"authors": [
"IOTubbzy",
"pixar02"
],
"repo": "Ktar5/Info-Board",
"url": "https://github.com/Ktar5/Info-Board/issues/150",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
920310623
|
Configurable evaluator response strategies for the phases of the auth pipeline
Authorino implements 3 so-called "strategies" for the first 3 phases of the auth pipeline (wristband phase excluded). These strategies are:
at least one of the concurrent evaluators of the phase must resolve (at-least-one);
any of the concurrent evaluators of the phase must resolve (any);
all of the concurrent evaluators of the phase must evaluate (all).
These strategies currently map one-to-one with the first 3 phases of Authorino's auth pipeline, respectively: identity phase, metadata phase and authorization phase. I.e.,
if more than one identity source is specified, the supplied token or credential must always be validated with at least one the sources (i.e., at-least-one);
if more than one external source of metadata is specified, any number of them (from none, to all of them) can return a proper value and still Authorino will trigger the next phase (authorization) regardless (i.e., any);
if more than one authorization policy is specified, all of them must grant access, otherwise access will be denied altogether (i.e., all).
This issue is a request to make how the 3 strategies map to the phases configurable, with the default strategies still falling back to the ones currently enforced.
Example for a possible implementation
apiVersion: config.authorino.3scale.net/v1beta1
kind: Service
metadata:
  name: my-api-protection-1
spec:
  hosts:
    - myapi.io
  identity:
    - name: auth-server-1
      oidc:
        endpoint: https://auth-server-1/auth
    - name: auth-server-2
      oidc:
        endpoint: https://auth-server-2/auth
  metadata:
    - name: semaphore-1
      http:
        method: GET
        sharedSecretRef:
          name: myapi-protection-metadata-secrets
          key: semaphore-1-shared-auth
    - name: semaphore-2
      http:
        method: GET
        sharedSecretRef:
          name: myapi-protection-metadata-secrets
          key: semaphore-2-shared-auth
  authorization:
    - name: authz-policy-1
      json:
        rules:
          - selector: auth.metadata.semaphore-1.color
            operator: eq
            value: green
          - selector: auth.metadata.semaphore-2.color
            operator: eq
            value: green
          - selector: auth.identity.role
            operator: eq
            value: restricted-user
    - name: authz-policy-2
      opa:
        inlineRego: |
          semaphore1 { input.auth.metadata.semaphore-1.color == "red" }
          semaphore2 { input.auth.metadata.semaphore-2.color == "red" }
          sunday { time.weekday(time.now_ns()) == "Sunday" }
          allow { not semaphore1; not semaphore2; sunday }
  identityStrategy: any
  metadataStrategy: all
  authorizationStrategy: atLeastOne
The example above would be for a case where all the default strategies are overridden as follows:
any of the identity sources can help verifying the identity (including none, i.e., authenticating is optional);
all metadata sources must respond with data;
at least one of the authorization policies must grant access, i.e., either the user was authenticated and has the "restricted-user" role, or it's a Sunday; in both cases, neither semaphore 1's nor semaphore 2's color can be "red".
Special considerations
The cases for configurable strategies in the metadata and authorization phases are less likely to be needed than the one for the identity phase, due to the existing options to represent conditionals in Authorino authorization policies (both in OPA policies and in JSON pattern-matching policies). The configurability of the strategies would nonetheless provide another way to describe the authorization logic, at the very least improving readability in some cases.
Another thing to take into account is what happens when, in the metadata phase, the chosen strategy fails (e.g. atLeastOne or all enforced, but none of the sources respond)? Should Authorino reject the request as DENIED? Would this virtually be promoting the metadata sources to potential external authorizers?
Inspired by the OAS3 spec, the Authorino API could be something like this:
Thanks for the suggestion, @eguzki.
There are a few aspects of your suggestion that I'd like to comment on (if I even got them right):
Object linking, reusability and extension: the idea of having "identity schemes" that can be parametrized once used as "identity requirements"
Scope: what's expected with this issue vs. OAS
Use cases and implementation (regarding identity sources, but other types of evaluators as well): there are at least 4 now: "at least one evaluator is required to succeed", "all evaluators are required to succeed", "all evaluators are optional", "some (more than one) evaluators are required, some aren't"
1. Object linking, reusability and extension
In Authorino we don't use this pattern of having a base definition that can be extended when referred somewhere else in the CR. I know it's used in OAI specs, but I'd rather keep the way it is in Authorino for now. When we get to point where reusing definitions becomes a critical requirement, we'll probably be adding more specific CRDs and wiring things up through object refs instead.
Moreover, I'm afraid of people overusing those "identity scheme" definitions and later ending up with more schemes declared than actually used in "identity requirements". Again, I know this pattern is used a lot in OAI specs, but it is not so much the typical UX expected when dealing with K8s CRs, IMO. The only counterexample I can think of is volumes and volumeMounts in Deployment specs, and even that is not great. If an entire CR is not used ("garbage"), then it's easier to manage: delete it and done. But if the garbage is inside a still valid and used resource spec, then... sigh; it will stay there forever.
That said, maybe you were thinking that no "identity scheme" would become "garbage" inside a CR. If it's there, then it is being used. It is more about which schemes are optional and which ones are required... more on that further down in "3. Use cases"
2. Scope (of this issue)
In general, I'd say your suggestion goes beyond what was originally expected with this issue, which was making two things that already exist, and that are currently hard-linked to each other in Authorino, configurable. The arrays of evaluators (identity, metadata, authorization and response) are there, as well as the different strategies to move to the next phase in the auth pipeline (atLeastOne, all, any), and they already work as-is.
The only thing is that the mapping of the different types of arrays that correspond to each phase of the pipeline and their respective strategy is now always identity → atLeastOne, metadata → any, authorization → all, response → all. We wanted to make this a choice of the user, with a default to how it works now if not specified otherwise. Under that original plan, stating, e.g., that all applies to the list of identity sources instead of atLeastOne would be as simple as that: identity → all, done.
I was not at all expecting this to support any OAS-specific requirement. We're not going against it, of course, but there are things that we want to cover here that are unrelated to OAS (e.g. required/optional evaluators in phases other than identity verification), and at least one thing that you mentioned that is out of scope of this issue IMO (i.e. object linking, reusability and extension).
3. Use cases and implementation
The implementation proposed in the description of the issue should suffice to cover the use cases "at least one evaluator is required", "all evaluators are required" an "all evaluators are optional (but still try them all just in case – i.e. no cancelling of context)". I admit that, until now, I hadn't thought about the use case "some (more than one) are required, some are optional".
I guess for this other (yet uncovered) use case, a possible different implementation that not only satisfies it, but also makes it a superset of all other use cases, and yet avoids the whole linking/reusing/extending of object bits that I think are out of scope of this issue, is a simple flag optional: true that all evaluators would support (with default to optional: false).
At a glance, it's simple and intuitive, I think.
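A minimal sketch of the shape this could take; the optional field here is hypothetical, not an existing Authorino field:

identity:
  - name: auth-server-1
    oidc:
      endpoint: https://auth-server-1/auth
  - name: auth-server-2
    optional: true   # hypothetical: failure here would not abort the pipeline
    oidc:
      endpoint: https://auth-server-2/auth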
In terms of changes required in the code to achieve it, on the other hand, this solution would be a lot harder than what I had in mind originally, because of the cancelling of the Go contexts of concurrent evaluators running in the same phase.
Before triggering the phase, Authorino would have to scan the flags of all evaluators of the phase, to then monitor if all required ones have finished. (This "scan" is virtually equivalent to your proposed, more explicit, list of required ones.) If at least one among the required has failed, the pipeline can be aborted. However, if all required evaluators have finished successfully, but there are still some optional ones running, can the context be cancelled and the pipeline moved to the next phase? We'd need to think about that. Cancelling is one thing; letting the thread go all the way through and just ignore the result when it fails is a whole other thing.
Anyway, it's not as straightforward as just allowing the user to choose among the existing strategies which one to apply to which phase.
|
gharchive/issue
| 2021-06-14T11:18:53 |
2025-04-01T04:55:16.413163
|
{
"authors": [
"guicassolato"
],
"repo": "Kuadrant/authorino",
"url": "https://github.com/Kuadrant/authorino/issues/112",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1924513218
|
🛑 Kuerbis is down
In d00dcc0, Kuerbis (https://kuerbis.ovh) was down:
HTTP code: 502
Response time: 7466 ms
Resolved: Kuerbis is back up in 3a7533a after 9 minutes.
|
gharchive/issue
| 2023-10-03T16:28:56 |
2025-04-01T04:55:16.439662
|
{
"authors": [
"Kuerbis-HD"
],
"repo": "Kuerbis-HD/Kuerbis-Web-Status",
"url": "https://github.com/Kuerbis-HD/Kuerbis-Web-Status/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1525659485
|
Shell Sort In C
🛠️ Issue (Number)
Issue no #263
👨💻 Changes proposed
Added the program for sorting the elements using Shell Sort in C
✔️ Check List (Check all the applicable boxes)
[✔️] My code follows the code style of this project.
[✔️] This PR does not contain plagiarized content.
[✔️] The title of my pull request is a short description of the requested changes.
📷 Screenshots
I have added some more comment lines explaining the gap concept in the code.
@Sanjanabharadwaj25 Added the time and space complexity in the code.
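For context, a minimal shell sort sketch illustrating the gap idea and the stated complexities (not the PR's actual file):

#include <stdio.h>

/* Shell sort: gap-sort the array, halving the gap each pass.
   With gap == 1 the final pass is a plain insertion sort.
   Time: O(n^2) worst case for this gap sequence; space: O(1). */
void shell_sort(int a[], int n) {
    for (int gap = n / 2; gap > 0; gap /= 2) {
        for (int i = gap; i < n; i++) {
            int tmp = a[i];
            int j = i;
            /* shift earlier gap-spaced elements right until tmp fits */
            for (; j >= gap && a[j - gap] > tmp; j -= gap)
                a[j] = a[j - gap];
            a[j] = tmp;
        }
    }
}

int main(void) {
    int a[] = {23, 12, 1, 8, 34, 54, 2, 3};
    int n = (int)(sizeof a / sizeof a[0]);
    shell_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n"); /* prints: 1 2 3 8 12 23 34 54 */
    return 0;
}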
@Kumar-laxmi Please merge the pull request as it has got the approvals of all the reviewers
@Mansiuniyal Please don't add the same algorithm using two different PRs.
|
gharchive/pull-request
| 2023-01-09T14:07:55 |
2025-04-01T04:55:16.460156
|
{
"authors": [
"Kumar-laxmi",
"Mansiuniyal"
],
"repo": "Kumar-laxmi/Algorithms",
"url": "https://github.com/Kumar-laxmi/Algorithms/pull/276",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
232295417
|
support async/await function
Hello, when I try to change my function using async/await like this:
const login = async (req, res, next) => {
const user = await User.findOne({ email: req.body.email });
if (!user || !bcrypt.compareSync(req.body.password, user.password)) {
const err = new APIError('Authentication error', httpStatus.UNAUTHORIZED, true);
return next(err);
}
const token = jwt.sign({
email: user.email
}, config.jwtSecret);
return res.json({
token
});
};
I got this error when running yarn start
E:\Coding\beautypediavn\beautypediavn-backend-v2\dist\server\controllers\auth.controller.js:79
var _ref = _asyncToGenerator(regeneratorRuntime.mark(function _callee(req, res, next) {
^
ReferenceError: regeneratorRuntime is not defined
at E:\Coding\beautypediavn\beautypediavn-backend-v2\dist\server\controllers\auth.controller.js:79:32
at Object.<anonymous> (E:\Coding\beautypediavn\beautypediavn-backend-v2\dist\server\controllers\auth.controller.js:118:2)
at Module._compile (module.js:571:32)
at Object.Module._extensions..js (module.js:580:10)
at Module.load (module.js:488:32)
at tryModuleLoad (module.js:447:12)
at Function.Module._load (module.js:439:3)
at Module.require (module.js:498:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (E:\Coding\beautypediavn\beautypediavn-backend-v2\dist\server\routes\auth.route.js:23:13)
at Module._compile (module.js:571:32)
at Object.Module._extensions..js (module.js:580:10)
at Module.load (module.js:488:32)
at tryModuleLoad (module.js:447:12)
at Function.Module._load (module.js:439:3)
at Module.require (module.js:498:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (E:\Coding\beautypediavn\beautypediavn-backend-v2\dist\server\routes\index.route.js:15:13)
at Module._compile (module.js:571:32)
at Object.Module._extensions..js (module.js:580:10)
at Module.load (module.js:488:32)
at tryModuleLoad (module.js:447:12)
Any ideas on how to fix this or support this? Thank you, I'm not so used to these config things.
You need to install the plugin babel-plugin-syntax-async-functions
@osahner thank you but it doesn't work
@lednhatkhanh install babel-polyfill and import it before all other modules in index.js
import polyfill from 'babel-polyfill'; // eslint-disable-line
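For anyone hitting the same error: the polyfill (which defines regeneratorRuntime) must be loaded before any transpiled async code runs. A minimal sketch of the entry file, where the module paths are illustrative rather than this repo's exact layout:
// index.js - application entry point
import 'babel-polyfill'; // defines regeneratorRuntime globally; keep this first

import config from './config/config'; // illustrative path
import app from './config/express';   // illustrative path
// ...rest of the application imports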
|
gharchive/issue
| 2017-05-30T15:39:49 |
2025-04-01T04:55:16.463648
|
{
"authors": [
"fcpauldiaz",
"lednhatkhanh",
"osahner"
],
"repo": "KunalKapadia/express-mongoose-es6-rest-api",
"url": "https://github.com/KunalKapadia/express-mongoose-es6-rest-api/issues/405",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1571305098
|
Title: I wish to add Blockchain README file
Explain what you want to add/change in documentation.
As of now the README file is empty and I want to add content according to the format needed.
@muskan467 Add more description about it
I will add an image describing the repo and its three levels, with links to the projects in the respective folders, and end with a message to contributors about the contributing guidelines. I'll try to make it more informative and creative.
@muskan467 Sounds good.
|
gharchive/issue
| 2023-02-05T08:02:16 |
2025-04-01T04:55:16.704810
|
{
"authors": [
"Kushal997-das",
"muskan467"
],
"repo": "Kushal997-das/Project-Guidance",
"url": "https://github.com/Kushal997-das/Project-Guidance/issues/846",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1920336598
|
table Creator - Web design #12
https://images01.nicepagecdn.com/page/85/56/html-template-85569.jpg Design a table like the above image
Create a file "table_NUM_by_(YourName).html" in the modules/table folder.
Use simple HTML, CSS and JavaScript to create the table animation. You can use the interface builder at https://kwickerhub.com/ (this project) or you can write the lines of code yourself.
Step by Step Guide.
Fork this project(Use the 'fork' button in the top right corner) and Clone your Fork.
git clone https://github.com/YOUR_USERNAME/frontend
Open your code editor and create a file "table_NUM_YOUR_NAME_.html" in the "modules/table" folder of this project you just cloned. No need to create head, title and body tags; just add a div tag with some embedded style (i.e. use the style tag) and the script tag where necessary.
If you want to add an image resource, please add it in the folder
"modules/table/images_and_icons"
We recommend you use an svg for your image/icon.
Push your Code: You need to push your recent changes back to the cloud. Use the command below in the main directory of this Repository
git push origin dev
or use a GUI tool to avoid mistakes or complexity. LOL.
Then make your Pull Request...
Below is an example of the code structure we need:
'''
'''
For more examples:
modules->buttons
modules->cards
for examples of contributions or Pull request accepted in to the project.
And please do not wait for this issue to be assigned to you as we have limited hands but have a lot to do/cover. Please send in your PR.
Good-luck.
Can You please assign this task to me?
Please read the issue again.
ohh okay... missed that last line lol
Hey @NtemKenyor I will be on this issue and send the PR soon
okay 👍 well done. Would be waiting 😊
Hey @NtemKenyor I have fixed the issue and sent a PR. Can you please review and merge my PR?
Thank you
|
gharchive/issue
| 2023-09-30T17:13:52 |
2025-04-01T04:55:16.716774
|
{
"authors": [
"NtemKenyor",
"Shivansh175",
"rahulchanumolu"
],
"repo": "KwickerHub/frontend",
"url": "https://github.com/KwickerHub/frontend/issues/363",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1410115016
|
Save Project Automatically
Describe the bug
We do not want people to lose their work or projects, so we intend to automatically save their projects.
Algo 1:
You can use a time interval to always call the save function
Algo 2 & Better
We only want to save content when the user adds content or when the user is active. To achieve this, we intend to call the save function when a new item is added to the development dashboard or after 10 active clicks, as sketched below.
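A rough JavaScript sketch of Algo 2; saveProject and the 'dashboard:item-added' event are placeholders for whatever hooks the dashboard code actually exposes:
// Placeholders for the real dashboard hooks.
let clicksSinceSave = 0;

function autoSave() {
    clicksSinceSave = 0;
    saveProject(); // assumed existing save function
}

// Save whenever a new item is added to the development dashboard
// ('dashboard:item-added' is a hypothetical custom event)...
document.addEventListener('dashboard:item-added', autoSave);

// ...or after 10 active clicks anywhere on the page.
document.addEventListener('click', () => {
    if (++clicksSinceSave >= 10) {
        autoSave();
    }
});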
File/Files you will work on
dashboard.html; if your code looks good, you can make another pull request for the ui_dashboard page
Possible Lines of Code
You should not add above the possible lines of code. (if given)
Result or Final Looks
Projects will be saved automatically.
Can you assign it to me?
I was able to work on this. We can now save content automatically, and the content is recovered whenever the page is loaded. Though I plan on improving the code further.
|
gharchive/issue
| 2022-10-15T09:59:26 |
2025-04-01T04:55:16.719884
|
{
"authors": [
"NtemKenyor",
"aadi58002"
],
"repo": "KwickerHub/frontend",
"url": "https://github.com/KwickerHub/frontend/issues/94",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
292085008
|
Self Hosted Nadeko offline
After I finished installing it and following your guide on how to host it, it's not online once all the setup is complete. How do I get it up?
@Marko5689 Did Gremagol fix your issue or are you still dealing with it?
Ah yes he did
How about you close this issue? @Marko5689 ?
Nadeko is doing that on my server and on another server. I need help.
|
gharchive/issue
| 2018-01-27T03:15:26 |
2025-04-01T04:55:16.724272
|
{
"authors": [
"Macley-Kun",
"Marko5689",
"inkkey"
],
"repo": "Kwoth/NadekoBot",
"url": "https://github.com/Kwoth/NadekoBot/issues/2039",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
324651007
|
Deleted message log bug
It hasn't logged any deleted messages for a month or two.
They then told me it would be fixed in a future update, but still nothing...
Does anyone have an actual solution?
Re-update or re-install your bot. All logs are working for me on v2.20.0.
Ditto. Logs have been working since a few updates ago. All I did was disable and re-enable logging back when it was fixed and they've worked great since.
|
gharchive/issue
| 2018-05-19T18:41:02 |
2025-04-01T04:55:16.725873
|
{
"authors": [
"shivaco",
"weslyv8",
"xnaas"
],
"repo": "Kwoth/NadekoBot",
"url": "https://github.com/Kwoth/NadekoBot/issues/2311",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
493972439
|
Removed deprecated coffee-react dependency
I removed this deprecated dependency as a prelude to solving the memory leak issue #90. This alone seems to make the component more stable and my leak warnings have gone away independently. I will fix separately if they come back... Thank you - I would really appreciate if this can be rolled out to npm. Please let me know if I can help.
Actually, this might need more testing with different react versions
|
gharchive/pull-request
| 2019-09-16T10:41:51 |
2025-04-01T04:55:16.729752
|
{
"authors": [
"danlester"
],
"repo": "KyleAMathews/react-retina-image",
"url": "https://github.com/KyleAMathews/react-retina-image/pull/91",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1364909358
|
Incorrect behavior with editing multiple sections at the same time
I'm not sure how exactly to describe the behavior, so here's a gif:
Narrating: I create a LaTeX environment via an LSP snippet, then create another environment inside that one. I then cut the inner environment, paste it again, and when trying to edit the outer environment, I get this weird duplication thing. This example is a bit contrived, but I've run into similar issues constantly, and the only way to break out of the buggy state is to exit out of it.
The example is using mason, lspconfig, texlab, cmp, and LuaSnip/vsnip (MWE config at the bottom of the issue).
I've narrowed it down to being a LuaSnip issue, as when I swap over to using vsnip, I get this instead:
Here's my (admittedly) large MWE config (partially stolen from telescope):
default.init.lua
vim.cmd [[set runtimepath=$VIMRUNTIME]]
vim.cmd [[set packpath=/tmp/nvim/site]]
local package_root = '/tmp/nvim/site/pack'
local install_path = package_root .. '/packer/start/packer.nvim'
local function load_plugins()
require('packer').startup {
{
'wbthomason/packer.nvim',
{
'nvim-telescope/telescope.nvim',
requires = {
'nvim-lua/plenary.nvim',
{ 'nvim-telescope/telescope-fzf-native.nvim', run = 'make' },
},
},
{ 'neovim/nvim-lspconfig', config = function()
local capabilities = vim.lsp.protocol.make_client_capabilities()
capabilities.textDocument.completion.completionItem.snippetSupport = true
require('lspconfig')['texlab'].setup{
capabilities = capabilities,
}
end
},
{ "williamboman/mason-lspconfig.nvim" },
{ "williamboman/mason.nvim",
config = function()
require('mason').setup{}
end,
},
{ 'L3MON4D3/LuaSnip' },
{ 'hrsh7th/vim-vsnip' },
{ 'hrsh7th/cmp-nvim-lsp'},
{ 'hrsh7th/nvim-cmp', config = function()
local cmp = require "cmp"
cmp.setup {
snippet = {
expand = function(args)
vim.fn["vsnip#anonymous"](args.body)
end,
-- expand = function(args)
-- require('luasnip').lsp_expand(args.body) -- For `luasnip` users.
-- end,
},
mapping = {
['<CR>'] = cmp.mapping.confirm({ select = true }),
['<C-n>'] = cmp.mapping.select_next_item(),
['<C-p>'] = cmp.mapping.select_prev_item(),
},
sources = cmp.config.sources({
{ name = "nvim_lsp" },
{ name = "buffer" },
}),
}
end},
},
-- ADD PLUGINS THAT ARE _NECESSARY_ FOR REPRODUCING THE ISSUE
config = {
package_root = package_root,
compile_path = install_path .. '/plugin/packer_compiled.lua',
display = { non_interactive = true },
},
}
end
_G.load_config = function()
require('telescope').setup()
require('telescope').load_extension('fzf')
-- ADD INIT.LUA SETTINGS THAT ARE _NECESSARY_ FOR REPRODUCING THE ISSUE
end
if vim.fn.isdirectory(install_path) == 0 then
print("Installing Telescope and dependencies.")
vim.fn.system { 'git', 'clone', '--depth=1', 'https://github.com/wbthomason/packer.nvim', install_path }
end
load_plugins()
require('packer').sync()
vim.cmd [[autocmd User PackerComplete ++once echo "Ready!" | lua load_config()]]
vim.o.completeopt = 'menu,menuone,noselect'
Ah, try setting delete_check_events, it's described a bit in the README.
Basically, luasnip does not automatically remove snippets when they're deleted, but this option can be set to autocommands on which checks for deletion will be performed (and the snippet properly removed)
(the reason for the duplication is that (after deletion!) the region of the to-be-copied placeholder spans the entire latex-environment, and anything inside this region is copied, sooo we end up with... that :sweat_smile:)
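For anyone landing here, a minimal sketch of that setting in a LuaSnip config (the exact event choice is up to you):
require("luasnip").setup({
  -- check for (and clean up) deleted snippets on this autocommand event
  delete_check_events = "TextChanged",
})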
Yep, that worked. Thanks!
Any reason that's not set by default?
Actually, that doesn't completely solve the behaviors I'm seeing. I'll try and catch myself next time I run into it to see what I do to cause it. But this (possibly) different pathology has the side effect that I can't undo (u) out of it. In fact, pressing u just increases:
Here I'm pressing u, but the action counter only goes up and nothing changes.
Yep, that worked. Thanks!
Any reason that's not set by default?
Originally because I thought having some non-trivial function run on every TextChanged (which is the minimum required to have this work at all) would be annoying, but this issue is not uncommon, so it might be time to reevaluate that
Actually, that doesn't completely solve the behaviors I'm seeing. I'll try and catch myself next time I run into it to see what I do to cause it. But this (possibly) different pathology has the side effect that I can't undo (u) out of it. In fact, pressing u just increases:
Here I'm pressing u, but the action counter only goes up and nothing changes.
Okay that one I've never seen before, could you open a new issue if you can reproduce it?
Originally because I thought having some non-trivial function run on every TextChanged (which is the minimum required to have this work at all) would be annoying, but this issue is not uncommon, so it might be time to reevaluate that
I'm not super familiar with nvim backend stuff, but you could always have the function run async on every TextChanged event, with it blocking any LuaSnip command until it checks for deleted snippets.
Mhmmm, that's a nice approach for optimizing this... Do you know of some way to get mutexes in Lua? (I guess those would be necessary to reliably block other commands)
Do you know of some way to get mutexes in Lua?
Nope, haha. I'm just familiar with parallel programming, nothing Neovim-specific (other than I know it can launch tasks asynchronously).
|
gharchive/issue
| 2022-09-07T16:20:31 |
2025-04-01T04:55:16.764968
|
{
"authors": [
"L3MON4D3",
"jrwrigh"
],
"repo": "L3MON4D3/LuaSnip",
"url": "https://github.com/L3MON4D3/LuaSnip/issues/582",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2028013557
|
Notifications stop working
Hi,
My notifications from HA to my Windows 10 machine used to work fine.
I'm trying to figure out why they suddenly stopped working.
I've checked port reservations, computer name, firewall, MQTT, API; all looks fine.
But when I go into the HASS.Agent Windows application, under Configuration, Notifications, and click on "Show Test Notification", I get an error message:
https://pastebin.com/QeKYNtf8
Could you help me ?
It used to work fine on my Windows 11 PC, but suddenly the notifications don't work anymore either. The test message in the HASS.Agent Windows integration works for me, but using the notify service in HA does not send error messages nor make notifications pop up... Almost the same, except that only the test notifications work for me.
|
gharchive/issue
| 2023-12-06T08:29:25 |
2025-04-01T04:55:16.769646
|
{
"authors": [
"XalaTheShepard",
"ZorK766"
],
"repo": "LAB02-Research/HASS.Agent-Integration",
"url": "https://github.com/LAB02-Research/HASS.Agent-Integration/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
472563273
|
Adds error message when user enters a literal in InputURI.
closes #944
|
gharchive/pull-request
| 2019-07-24T22:05:59 |
2025-04-01T04:55:16.788948
|
{
"authors": [
"justinlittman"
],
"repo": "LD4P/sinopia_editor",
"url": "https://github.com/LD4P/sinopia_editor/pull/1050",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1584885646
|
🛑 [Web] Homepage is down
In eab4b2a, [Web] Homepage (https://rongyi.xyz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: [Web] Homepage is back up in 10d4308.
|
gharchive/issue
| 2023-02-14T21:57:55 |
2025-04-01T04:55:16.791464
|
{
"authors": [
"LER0ever"
],
"repo": "LER0ever/Status",
"url": "https://github.com/LER0ever/Status/issues/159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2445291383
|
Feature/lit 3748 js sdk setup local development environment for naga
Description
Updating several endpoints to V2
Closing this; created a new branch (https://github.com/LIT-Protocol/js-sdk/pull/653) that was checked out from the staging/v7 branch
|
gharchive/pull-request
| 2024-08-02T15:57:28 |
2025-04-01T04:55:16.816461
|
{
"authors": [
"Ansonhkg"
],
"repo": "LIT-Protocol/js-sdk",
"url": "https://github.com/LIT-Protocol/js-sdk/pull/575",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2066654754
|
The documentation link cannot be opened
Which Sakura version are you currently using?
2.0
Which Halo version are you currently using?
2.11.2
Suggestion/problem
Thanks! It indeed can't be opened anymore. The link probably changed after I migrated my blog to 2.x; I'll fix it.
|
gharchive/issue
| 2024-01-05T03:02:20 |
2025-04-01T04:55:16.818046
|
{
"authors": [
"LIlGG",
"dhe090"
],
"repo": "LIlGG/halo-theme-sakura",
"url": "https://github.com/LIlGG/halo-theme-sakura/issues/472",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1296092471
|
Handle overly long personal signatures and centering
This PR is related to #248 and #264
Its goals are:
Center the personal signature on display
Automatically wrap overly long signatures so they display in full
The result is shown below:
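A minimal sketch of the CSS idea; the selector is illustrative, not necessarily the theme's actual class name:
/* Center the signature and wrap long lines instead of overflowing */
.signature {
  text-align: center;
  overflow-wrap: break-word; /* word-wrap is the legacy alias */
  white-space: normal;
}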
Enlarge the payment QR code by 30px #265
|
gharchive/pull-request
| 2022-07-06T16:27:18 |
2025-04-01T04:55:16.819903
|
{
"authors": [
"parasomn1a"
],
"repo": "LIlGG/halo-theme-sakura",
"url": "https://github.com/LIlGG/halo-theme-sakura/pull/273",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1560433800
|
Feature/whitlock/improve heroic roses
This PR re-enables contours in the heroic_roses shaping dataset that made Axom crash on previous versions of C2C. Some of the contours have also been modified so they better overlap.
The updated C2C needs to be in place for all these contours to work in Axom.
|
gharchive/pull-request
| 2023-01-27T21:06:11 |
2025-04-01T04:55:16.857733
|
{
"authors": [
"BradWhitlock"
],
"repo": "LLNL/axom_data",
"url": "https://github.com/LLNL/axom_data/pull/13",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2384200304
|
Syntax error in setup.py
Syntax error (maybe misplaced parentheses)?
https://github.com/LLNL/zfp/blob/5c976d8da013988174f931845862b6f94119cade/setup.py#L14
% pip install --no-build-isolation 'git+https://github.com/LLNL/zfp.git@1.0.1'
Collecting git+https://github.com/LLNL/zfp.git@1.0.1
Cloning https://github.com/LLNL/zfp.git (to revision 1.0.1) to /tmp/pip-req-build-aycbhn7n
Running command git clone --filter=blob:none --quiet https://github.com/LLNL/zfp.git /tmp/pip-req-build-aycbhn7n
Running command git checkout -q f40868a6a1c190c802e7d8b5987064f044bf7812
Resolved https://github.com/LLNL/zfp.git to commit f40868a6a1c190c802e7d8b5987064f044bf7812
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-req-build-aycbhn7n/setup.py", line 14
libraries=["zfp"], library_dirs=["build/lib64", "build/lib/Release"]), language_level = "3"]
^^^^^^^^^^^^^^^^^^^^
SyntaxError: invalid syntax. Maybe you meant '==' or ':=' instead of '='?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
I'm using python 3.10.12 and numpy 2.0.0.
@frobnitzem Thanks for filing this issue. We're aware of this; see discussion here and more recently in #231. I'm trying to figure out the original intent and how to correct this. I have minimal Python expertise, so any suggestions would be welcome.
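For reference, the intended structure is presumably something like the sketch below: language_level is an argument of cythonize(), not of Extension(). The library names and directories are taken from the error output; the package name and source path are assumptions:
from setuptools import setup, Extension
from Cython.Build import cythonize

setup(
    name="zfpy",  # assumed package name
    ext_modules=cythonize(
        [Extension(
            "zfpy",
            sources=["python/zfpy.pyx"],  # assumed source path
            include_dirs=["include"],
            libraries=["zfp"],
            library_dirs=["build/lib64", "build/lib/Release"],
        )],
        compiler_directives={"language_level": "3"},  # belongs to cythonize(), not Extension()
    ),
)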
|
gharchive/issue
| 2024-07-01T15:57:51 |
2025-04-01T04:55:16.866230
|
{
"authors": [
"frobnitzem",
"lindstro"
],
"repo": "LLNL/zfp",
"url": "https://github.com/LLNL/zfp/issues/233",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1971393263
|
Add note in dev guide about hissw
In GitLab by @wtbarnes on Oct 8, 2019, 23:54
Some of the tests that compare to IDL results use the hissw package to make calls to sswidl from Python. If the user has not configured hissw, these tests will fail. There needs to be a note in the dev guide explaining this and how to configure hissw properly.
This is not a problem for normal users as hissw is not a hard dependency of the package and if it is missing, those tests will be skipped.
In GitLab by @wtbarnes on Oct 29, 2019, 15:07
mentioned in commit 59d6d707620d3ff181803f58eda26a535546a023
In GitLab by @wtbarnes on Oct 29, 2019, 15:07
closed via merge request !34
|
gharchive/issue
| 2019-10-09T06:54:02 |
2025-04-01T04:55:16.871754
|
{
"authors": [
"nabobalis"
],
"repo": "LM-SAL/aiapy",
"url": "https://github.com/LM-SAL/aiapy/issues/26",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
439665257
|
Error when trying to save a wallet without a name or alias
Description
Getting an error when trying to save a wallet without a name or alias.
Expected Behavior
Should save without error
Steps to Reproduce
Try to save a wallet without a name and/or alias.
Closed via https://github.com/LN-Zap/zap-desktop/pull/2158
|
gharchive/issue
| 2019-05-02T15:55:55 |
2025-04-01T04:55:16.891649
|
{
"authors": [
"mrfelton"
],
"repo": "LN-Zap/zap-desktop",
"url": "https://github.com/LN-Zap/zap-desktop/issues/2146",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2459716703
|
Missing German Translations
Steps to reproduce
Just missing in general at some places
Expected behavior
Expected German translations
Actual behavior
No translations provided
LNReader version
2.0.0-beta.2
Android version
Android 14
Device
Google Pixel 7
Other details
Reminder to check this out when available.
Acknowledgements
[X] I have searched the existing issues and this is a new ticket, NOT a duplicate or related to another open or closed issue.
[X] I have written a short but informative title.
[X] If this is an issue with a source, I should be opening an issue in the sources repository.
[X] I have updated the app to version 2.0.0.
[X] I will fill out all of the requested information in this form.
It is not a bug; you can raise a PR to add translations.
|
gharchive/issue
| 2024-08-11T18:50:43 |
2025-04-01T04:55:16.896347
|
{
"authors": [
"Batorian",
"nyagami"
],
"repo": "LNReader/lnreader",
"url": "https://github.com/LNReader/lnreader/issues/1204",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2368116502
|
Mobile login
Status
Add check on mobile to call app login as login method.
Add new route to capture access token within route.
Frontend has been moved to another repo
|
gharchive/pull-request
| 2024-06-23T01:29:23 |
2025-04-01T04:55:16.897497
|
{
"authors": [
"DarkDreizer",
"andres-javier-lopez"
],
"repo": "LOBICA/taskmaster-express",
"url": "https://github.com/LOBICA/taskmaster-express/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
399637205
|
LSST Verify
Create a simple notebook to demonstrate the lsst.verify package.
The project is done with https://github.com/LSSTScienceCollaborations/StackClub/pull/179 .
Thanks for the suggestions from @SimonKrughoff during the review.
|
gharchive/issue
| 2019-01-16T03:54:36 |
2025-04-01T04:55:17.014486
|
{
"authors": [
"bechtol"
],
"repo": "LSSTScienceCollaborations/StackClub",
"url": "https://github.com/LSSTScienceCollaborations/StackClub/issues/175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
733586408
|
Project/density maps/jeffcarlin
Hi Nushkia - since you expressed interest in reviewing my Stack Club project a few weeks ago, I have invited you to review. Let me know if you are unable to review this notebook, or if you have any questions about it.
Alex - your comments are welcome as well, but if you want to weigh in after Nushkia, I can ping you then.
Thanks Jeff. I'll weigh in after Nushkia (if she is available).
I see this cell
! cd ~/stackclub/StackClub/ && python setup.py -q develop --user && cd -
I don't think you can use an absolute path like that, since not everyone has the Stack Club notebooks installed in the same place.
In the FindingDocs notebook (under GettingStarted), Phil does something like this:
! cd .. && python setup.py -q develop --user && cd -
Good point - I updated it to the relative path.
I see this call to see if a data set exists. Does this actually "guarantee" that the data set is on disk?
butler.datasetExists('deepCoadd_forced_src', dataId=dataref14)
I thought there may have been some subtleties here that involved getting the URI
I added the following to the notebook to clarify:
"Note, however, that in the Gen2 butler, this doesn't guarantee that the catalog actually exists as a file on disk. It only tells us that some processing steps were executed that intended to create the catalog, and thus added its entry to the registry. This issue will be remedied in the Gen3 butler, so it is not important to dwell on it further."
What does immediate=True mean here? I hadn't seen that before. Is it worth noting?
image14 = butler.get('deepCoadd_calexp', immediate=True, dataId=dataref14)
Indeed, that was a holdover to ancient times, but unnecessary now. See this community post
@jeffcarlin want to merge? This notebook is a nice holiday present!
|
gharchive/pull-request
| 2020-10-31T00:20:23 |
2025-04-01T04:55:17.019357
|
{
"authors": [
"jeffcarlin",
"kadrlica"
],
"repo": "LSSTScienceCollaborations/StackClub",
"url": "https://github.com/LSSTScienceCollaborations/StackClub/pull/250",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1631385752
|
Switch to a referenceDownload dual-purpose getter/setter
This makes it easier for users to extract the getter and re-use it in their custom setter, if their customization is minor enough.
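A minimal sketch of the dual-purpose pattern; the names and the default implementation are illustrative, not necessarily gesel's actual code:
// Module-level default downloader.
let download = async (url) => {
    const res = await fetch(url);
    if (!res.ok) {
        throw new Error("failed to fetch " + url);
    }
    return new Uint8Array(await res.arrayBuffer());
};

// No argument: act as a getter. With a function: act as a setter and
// return the previous downloader so callers can wrap and re-use it.
export function referenceDownload(fun) {
    if (typeof fun === "undefined") {
        return download;
    }
    const previous = download;
    download = fun;
    return previous;
}
A caller can then grab the default with const original = referenceDownload(); and install a minor customization such as referenceDownload(url => myCache(original, url)); without reimplementing the whole getter.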
Closed by 996af00075cd8e4b2af5675863241b81b354bbb6.
|
gharchive/issue
| 2023-03-20T05:02:06 |
2025-04-01T04:55:17.030082
|
{
"authors": [
"LTLA"
],
"repo": "LTLA/gesel.js",
"url": "https://github.com/LTLA/gesel.js/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1221637113
|
Issue 45353: Cannot create a naming pattern reference (with walking the lookup) cross folder
Rationale
Name expression validation is looking for lookup fields using current folder, instead of lookup container, resulting in not found error for lookup properties.
Related Pull Requests
Changes
Use lookup container to get lookup table, not current container.
Re-targeting 22.03
https://github.com/LabKey/platform/pull/3307
|
gharchive/pull-request
| 2022-04-29T22:41:09 |
2025-04-01T04:55:17.037757
|
{
"authors": [
"XingY"
],
"repo": "LabKey/platform",
"url": "https://github.com/LabKey/platform/pull/3305",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
257192386
|
Where did our error definitions go?
We used to have good error descriptions for our error codes. Now DCAF errors are showing up as undefined:
What happened to our error definition files?
Marking this as prioritize, since this will be a support issue.
In at least the case screenshotted above, the error dialog happens here:
I believe this is part of the template for Delete Line provided.
Checking through the error files, I found a file that used to define the error @pollockm found in this issue - but has not changed in 3 years. It seems like this is more likely something we missed.
https://github.com/LabVIEW-DCAF/ModuleInterface/commits/master/source/TagBusModuleFramework-errors.txt
I ran into a similar issue with a different error code and thought they may be related:
I got this while running a main.vi.
My error seems to come from the class discovery singleton. Concerningly - I do have CEF-errors.txt installed into C:\Program Files (x86)\National Instruments\LabVIEW 2014\project\errors.
The undefined error only happens when running on the RT target - not the host. I talked to @dbendele about errors and he mentioned that error definitions usually need to be installed to the target to see them.
Is it generally the case that errors we custom define in DCAF are not defined on the rt target, and that we doom our users to generic messages on RT? Do we have a decent way of deploying these files?
I verified that this is a part of the template for both static and dynamic modules.
Closing this - moved to issues https://github.com/LabVIEW-DCAF/ModuleInterface/issues/32 and https://github.com/LabVIEW-DCAF/TagEditorCore/issues/352
...and yet the scan engine issue happens on a desktop target.
|
gharchive/issue
| 2017-09-12T21:39:43 |
2025-04-01T04:55:17.043844
|
{
"authors": [
"pollockm",
"theSloopJohnB"
],
"repo": "LabVIEW-DCAF/TagEditorCore",
"url": "https://github.com/LabVIEW-DCAF/TagEditorCore/issues/348",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
322500656
|
Make project Android agnostic
The plan is to make this library usable outside of Android so that other extensions can be added without having to manage another project.
Since the support annotations are already accessed via reflection there are no downsides. As part of making this a better Java library, the IntRange, FloatRange and Size annotations should be bundled within the library and operate the same way.
Considering this a rebrand of the library (with a new artifact name) the version number will be reset to 1.0.0 and the old library will no longer be supported.
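As a sketch, a bundled IntRange could mirror the Android original closely enough that the existing reflection-based lookups keep working; this is a rough outline, not the final API:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Bundled stand-in for the Android support annotation, so the library
 * can validate numeric ranges without an Android dependency.
 */
@Retention(RetentionPolicy.RUNTIME) // RUNTIME keeps it visible to reflection-based lookups
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER})
public @interface IntRange {
    long from() default Long.MIN_VALUE;
    long to() default Long.MAX_VALUE;
}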
This has been implemented
|
gharchive/issue
| 2018-05-12T10:30:36 |
2025-04-01T04:55:17.055720
|
{
"authors": [
"LachlanMcKee"
],
"repo": "LachlanMcKee/gsonpath-extensions",
"url": "https://github.com/LachlanMcKee/gsonpath-extensions/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2004456366
|
Games For Windows Live (GFWL) Support
Hello,
Is it possible to add support for Games For Windows Live games?
Thanks in advance.
Can you give examples?
Can you give examples?
Examples of games?
If yes, then you can look here:
https://www.pcgamingwiki.com/wiki/Games_for_Windows_-_LIVE
|
gharchive/issue
| 2023-11-21T14:51:14 |
2025-04-01T04:55:17.060918
|
{
"authors": [
"Lacro59",
"PatrikPepega"
],
"repo": "Lacro59/playnite-successstory-plugin",
"url": "https://github.com/Lacro59/playnite-successstory-plugin/issues/413",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2537363646
|
Assertion failed in AK/Vector.h
While visiting
https://picsim.js.org/board/1
After some 10 seconds, it crashes and this is shown in the terminal
VERIFICATION FAILED: i < m_size at /home/marco/ladd/ladybird/AK/Vector.h:139
/home/marco/ladd/ladybird/Build/ladybird/libexec/../lib/liblagom-ak.so.0(ak_verification_failed+0xef) [0x7fdab521036f]
/home/marco/ladd/ladybird/Build/ladybird/libexec/../lib/liblagom-js.so.0(JS::Bytecode::Interpreter::run_bytecode(unsigned long)+0x10120) [0x7fdab8496d00]
/home/marco/ladd/ladybird/Build/ladybird/libexec/../lib/liblagom-js.so.0(JS::Bytecode::Interpreter::run_executable(JS::Bytecode::Executable&, AK::Optional<unsigned long>, JS::Value)+0x203) [0x7fdab84868b3]
/home/marco/ladd/ladybird/Build/ladybird/libexec/../lib/liblagom-js.so.0(JS::ECMAScriptFunctionObject::ordinary_call_evaluate_body()+0x1f6) [0x7fdab859ba66]
/home/marco/ladd/ladybird/Build/ladybird/libexec/../lib/liblagom-js.so.0(JS::ECMAScriptFunctionObject::internal_call(JS::Value, AK::Span<JS::Value const>)+0x2d1) [0x7fdab859b2e1]
/home/marco/ladd/ladybird/Build/ladybird/libexec/../lib/liblagom-js.so.0(+0x1947f4) [0x7fdab84b17f4]
Looks like an array out-of-bounds access in the vector lib.
See attached file with full stack-trace (both JS and C++)
stacktrace-ak-vector.txt
I'm at latest commit, d19b31529f28e88aa691a6a9fbd0215c54c6e81c
I'm on 33507578e0d36b0ef90ddbdc44959efa4abed383 and it doesn't crash for me
Could you please retest?
|
gharchive/issue
| 2024-09-19T21:02:15 |
2025-04-01T04:55:17.064575
|
{
"authors": [
"shlyakpavel",
"teaalltr"
],
"repo": "LadybirdBrowser/ladybird",
"url": "https://github.com/LadybirdBrowser/ladybird/issues/1453",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2261302343
|
Build error: ‘uint32_t’ does not name a type in utf8.h
Trying to build this repo (in order to get NeMo running) I get the following error:
$ pip install "git+https://github.com/LahiLuk/YouTokenToMe"
Collecting git+https://github.com/LahiLuk/YouTokenToMe
Cloning https://github.com/LahiLuk/YouTokenToMe to /tmp/pip-req-build-pbvabuz3
Running command git clone --filter=blob:none --quiet https://github.com/LahiLuk/YouTokenToMe /tmp/pip-req-build-pbvabuz3
Resolved https://github.com/LahiLuk/YouTokenToMe to commit f9fe56e198e22d552d821a5f432d4e5ada1e81e8
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting Click>=7.0 (from youtokentome==1.0.6)
Obtaining dependency information for Click>=7.0 from https://files.pythonhosted.org/packages/00/2e/d53fa4befbf2cfa713304affc7ca780ce4fc1fd8710527771b58311a3229/click-8.1.7-py3-none-any.whl.metadata
Downloading click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Building wheels for collected packages: youtokentome
Building wheel for youtokentome (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for youtokentome (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [121 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-311
creating build/lib.linux-x86_64-cpython-311/youtokentome
copying youtokentome/youtokentome.py -> build/lib.linux-x86_64-cpython-311/youtokentome
copying youtokentome/__init__.py -> build/lib.linux-x86_64-cpython-311/youtokentome
copying youtokentome/yttm_cli.py -> build/lib.linux-x86_64-cpython-311/youtokentome
running build_ext
building '_youtokentome_cython' extension
creating build/temp.linux-x86_64-cpython-311
creating build/temp.linux-x86_64-cpython-311/youtokentome
creating build/temp.linux-x86_64-cpython-311/youtokentome/cpp
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -g -fwrapv -O2 -fPIC -Iyoutokentome/cpp -I/home/tejes/work/inmoment/parakeet/venv/include -I/usr/include/python3.11 -c youtokentome/cpp/bpe.cpp -o build/temp.linux-x86_64-cpython-311/youtokentome/cpp/bpe.o -std=c++11 -pthread -O3
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -g -fwrapv -O2 -fPIC -Iyoutokentome/cpp -I/home/tejes/work/inmoment/parakeet/venv/include -I/usr/include/python3.11 -c youtokentome/cpp/utf8.cpp -o build/temp.linux-x86_64-cpython-311/youtokentome/cpp/utf8.o -std=c++11 -pthread -O3
In file included from youtokentome/cpp/utf8.cpp:1:
youtokentome/cpp/utf8.h:9:18: error: ‘uint32_t’ does not name a type
9 | constexpr static uint32_t INVALID_UNICODE = 0x0fffffff;
| ^~~~~~~~
youtokentome/cpp/utf8.h:6:1: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
5 | #include <cassert>
+++ |+#include <cstdint>
6 |
youtokentome/cpp/utf8.h:11:1: error: ‘uint32_t’ does not name a type
11 | uint32_t chars_to_utf8(const char* begin, uint64_t size, uint64_t* utf8_len);
| ^~~~~~~~
youtokentome/cpp/utf8.h:11:1: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
youtokentome/cpp/utf8.h:13:6: error: variable or field ‘utf8_to_chars’ declared void
13 | void utf8_to_chars(uint32_t x, std::back_insert_iterator<std::string> it);
| ^~~~~~~~~~~~~
youtokentome/cpp/utf8.h:13:20: error: ‘uint32_t’ was not declared in this scope
13 | void utf8_to_chars(uint32_t x, std::back_insert_iterator<std::string> it);
| ^~~~~~~~
youtokentome/cpp/utf8.h:13:20: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
youtokentome/cpp/utf8.h:13:71: error: expected primary-expression before ‘it’
13 | void utf8_to_chars(uint32_t x, std::back_insert_iterator<std::string> it);
| ^~
youtokentome/cpp/utf8.h:15:43: error: ‘uint32_t’ was not declared in this scope
15 | std::string encode_utf8(const std::vector<uint32_t> &utext);
| ^~~~~~~~
youtokentome/cpp/utf8.h:15:43: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
youtokentome/cpp/utf8.h:15:51: error: template argument 1 is invalid
15 | std::string encode_utf8(const std::vector<uint32_t> &utext);
| ^
youtokentome/cpp/utf8.h:15:51: error: template argument 2 is invalid
youtokentome/cpp/utf8.h:17:13: error: ‘uint32_t’ was not declared in this scope
17 | std::vector<uint32_t> decode_utf8(const char *begin, const char *end);
| ^~~~~~~~
youtokentome/cpp/utf8.h:17:13: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
youtokentome/cpp/utf8.h:17:21: error: template argument 1 is invalid
17 | std::vector<uint32_t> decode_utf8(const char *begin, const char *end);
| ^
youtokentome/cpp/utf8.h:17:21: error: template argument 2 is invalid
youtokentome/cpp/utf8.h:19:13: error: ‘uint32_t’ was not declared in this scope
19 | std::vector<uint32_t> decode_utf8(const std::string &utf8_text);
| ^~~~~~~~
youtokentome/cpp/utf8.h:19:13: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
youtokentome/cpp/utf8.h:19:21: error: template argument 1 is invalid
19 | std::vector<uint32_t> decode_utf8(const std::string &utf8_text);
| ^
youtokentome/cpp/utf8.h:19:21: error: template argument 2 is invalid
youtokentome/cpp/utf8.h:33:3: error: ‘uint32_t’ does not name a type
33 | uint32_t operator*() {
| ^~~~~~~~
youtokentome/cpp/utf8.h:33:3: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
youtokentome/cpp/utf8.h:43:3: error: ‘uint64_t’ does not name a type
43 | uint64_t get_utf8_len() {
| ^~~~~~~~
youtokentome/cpp/utf8.h:43:3: note: ‘uint64_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
youtokentome/cpp/utf8.h:53:3: error: ‘uint32_t’ does not name a type
53 | uint32_t code_point = 0;
| ^~~~~~~~
youtokentome/cpp/utf8.h:53:3: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
youtokentome/cpp/utf8.h:54:3: error: ‘uint64_t’ does not name a type
54 | uint64_t utf8_len = 0;
| ^~~~~~~~
youtokentome/cpp/utf8.h:54:3: note: ‘uint64_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
youtokentome/cpp/utf8.h: In member function ‘vkcom::UTF8Iterator vkcom::UTF8Iterator::operator++()’:
youtokentome/cpp/utf8.h:28:14: error: ‘utf8_len’ was not declared in this scope
28 | begin += utf8_len;
| ^~~~~~~~
youtokentome/cpp/utf8.h: In member function ‘void vkcom::UTF8Iterator::parse()’:
youtokentome/cpp/utf8.h:61:5: error: ‘code_point’ was not declared in this scope
61 | code_point = chars_to_utf8(begin, end - begin, &utf8_len);
| ^~~~~~~~~~
youtokentome/cpp/utf8.h:61:53: error: ‘utf8_len’ was not declared in this scope
61 | code_point = chars_to_utf8(begin, end - begin, &utf8_len);
| ^~~~~~~~
youtokentome/cpp/utf8.h:61:18: error: ‘chars_to_utf8’ was not declared in this scope
61 | code_point = chars_to_utf8(begin, end - begin, &utf8_len);
| ^~~~~~~~~~~~~
youtokentome/cpp/utf8.cpp: In function ‘uint32_t vkcom::chars_to_utf8(const char*, uint64_t, uint64_t*)’:
youtokentome/cpp/utf8.cpp:73:10: error: ‘INVALID_UNICODE’ was not declared in this scope
73 | return INVALID_UNICODE;
| ^~~~~~~~~~~~~~~
youtokentome/cpp/utf8.cpp: At global scope:
youtokentome/cpp/utf8.cpp:111:18: error: ambiguating new declaration of ‘std::vector<unsigned int> vkcom::decode_utf8(const char*, const char*)’
111 | vector<uint32_t> decode_utf8(const char* begin, const char* end) {
| ^~~~~~~~~~~
youtokentome/cpp/utf8.h:17:23: note: old declaration ‘int vkcom::decode_utf8(const char*, const char*)’
17 | std::vector<uint32_t> decode_utf8(const char *begin, const char *end);
| ^~~~~~~~~~~
youtokentome/cpp/utf8.cpp: In function ‘std::vector<unsigned int> vkcom::decode_utf8(const char*, const char*)’:
youtokentome/cpp/utf8.cpp:117:23: error: ‘INVALID_UNICODE’ was not declared in this scope
117 | if (code_point != INVALID_UNICODE) {
| ^~~~~~~~~~~~~~~
youtokentome/cpp/utf8.cpp: At global scope:
youtokentome/cpp/utf8.cpp:130:18: error: ambiguating new declaration of ‘std::vector<unsigned int> vkcom::decode_utf8(const std::string&)’
130 | vector<uint32_t> decode_utf8(const string& utf8_text) {
| ^~~~~~~~~~~
youtokentome/cpp/utf8.h:19:23: note: old declaration ‘int vkcom::decode_utf8(const std::string&)’
19 | std::vector<uint32_t> decode_utf8(const std::string &utf8_text);
| ^~~~~~~~~~~
youtokentome/cpp/utf8.cpp: In function ‘std::vector<unsigned int> vkcom::decode_utf8(const std::string&)’:
youtokentome/cpp/utf8.cpp:131:21: error: could not convert ‘vkcom::decode_utf8((& utf8_text)->std::__cxx11::basic_string<char>::data(), ((& utf8_text)->std::__cxx11::basic_string<char>::data() + ((sizetype)(& utf8_text)->std::__cxx11::basic_string<char>::size())))’ from ‘int’ to ‘std::vector<unsigned int>’
131 | return decode_utf8(utf8_text.data(), utf8_text.data() + utf8_text.size());
| ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| |
| int
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for youtokentome
Failed to build youtokentome
ERROR: Could not build wheels for youtokentome, which is required to install pyproject.toml-based projects
OS: Ubuntu 23.10 x64
Python: 3.11.6
GCC: 13.2.0
Cython: 3.0.10 already installed in the venv
This commit works: https://github.com/cdbrendel/YouTokenToMe/commit/43774726008c2192556200797ba60a8e499b5e98
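The fix the compiler suggests is to include <cstdint> in youtokentome/cpp/utf8.h, and presumably that is essentially what the linked commit does. A sketch of the top of the header (the surrounding includes are assumptions); newer GCC releases no longer pull this header in transitively:
// youtokentome/cpp/utf8.h (sketch)
#pragma once

#include <cassert>
#include <cstdint>   // uint32_t / uint64_t live here; not included transitively on GCC 13+
#include <iterator>
#include <string>
#include <vector>

namespace vkcom {

constexpr static uint32_t INVALID_UNICODE = 0x0fffffff;

uint32_t chars_to_utf8(const char *begin, uint64_t size, uint64_t *utf8_len);

}  // namespace vkcom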
|
gharchive/issue
| 2024-04-24T13:26:03 |
2025-04-01T04:55:17.070296
|
{
"authors": [
"Tejes",
"WiegerWolf"
],
"repo": "LahiLuk/YouTokenToMe",
"url": "https://github.com/LahiLuk/YouTokenToMe/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
332605829
|
Added Appointments, Stylists models, and updated Readme
Description
Please include a summary of the change and a link to which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
Type of change
Please delete options that are not relevant.
[x] New feature (non-breaking change which adds functionality)
[x] This change requires a documentation update
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
[x] Ran every endpoint through Postman to make sure database was updated properly
Checklist:
[x] My code follows the style guidelines of this project
[x] I have performed a self-review of my own code
[x] My code has been reviewed by at least one peer
[x] I have commented my code, particularly in hard-to-understand areas
[x] I have made corresponding changes to the documentation
[x] My changes generate no new warnings
[x] I have added tests that prove my fix is effective or that my feature works
[x] New and existing unit tests pass locally with my changes
This pull request completes the API for now, giving us a User, Stylist, and Appointment model.
The Appointment object holds a User, Stylist and a Date
|
gharchive/pull-request
| 2018-06-15T00:13:57 |
2025-04-01T04:55:17.076146
|
{
"authors": [
"JohnJohnx4"
],
"repo": "Lambda-School-Labs/hairspray",
"url": "https://github.com/Lambda-School-Labs/hairspray/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
734963217
|
Tests incident table
Description
Updates database tables to reflect data being received from DS API
Tests getting all incidents from database
Type of change
Bug fix (non-breaking change which fixes an issue)
Change Status
Complete, tested, ready to review and merge
How Has This Been Tested?
Jest
All changes reflect the direction chosen for the new database structure. The following were done to solve issues left over from Labs27:
removed some unused variables
made updates to remove prettier errors
removed CodeClimate from dependencies since it was not being used and was causing errors
|
gharchive/pull-request
| 2020-11-03T02:12:27 |
2025-04-01T04:55:17.078824
|
{
"authors": [
"MaryamMosstoufi",
"jduell12"
],
"repo": "Lambda-School-Labs/human-rights-first-be-a",
"url": "https://github.com/Lambda-School-Labs/human-rights-first-be-a/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1020544445
|
Spell Check - gamefication FIXED
All instances of gameification => gamification. I noticed that in the gamefication pages file there is an index file for clean exporting. Is there a reason that only one of the files in that folder is exported through this location?
PR video submission https://youtu.be/eurZR28beY0
Just need to change "gamefication" to "gameplay" now 👍
all set and ready for merge
|
gharchive/pull-request
| 2021-10-07T23:33:59 |
2025-04-01T04:55:17.080940
|
{
"authors": [
"artofmayhem"
],
"repo": "Lambda-School-Labs/scribble-stadium-fe",
"url": "https://github.com/Lambda-School-Labs/scribble-stadium-fe/pull/182",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
380385095
|
Albert Wong - DS1_Sprint2_Storytelling_with_Data
Pull request for DS1 Sprint week 2 - Storytelling with data
Completed flights section of the assignment for Sprint 2 Day 1 Storytelling
No changes, added Sprint 2 Day 2 notebook to Github
Worked on creating fivethirtyeight chart for Day 2 notebook
No changes, added Day 3 notebook to Github
Worked on additional charts using the gapminder data
No changes, saved Sprint Challenge 2 copy on github
|
gharchive/pull-request
| 2018-11-13T19:25:24 |
2025-04-01T04:55:17.082921
|
{
"authors": [
"albert-h-wong"
],
"repo": "LambdaSchool/DS-Sprint-02-Storytelling-With-Data",
"url": "https://github.com/LambdaSchool/DS-Sprint-02-Storytelling-With-Data/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
453018212
|
lenses-cli export connectors
The following command works: lenses-cli connectors --cluster-name CDH_2
The following command does not export anything: lenses-cli export connectors --cluster-name CDH_2 --dir export.
fixed.
|
gharchive/issue
| 2019-06-06T13:02:25 |
2025-04-01T04:55:17.118425
|
{
"authors": [
"mactsouk",
"spirosoik"
],
"repo": "Landoop/lenses-go",
"url": "https://github.com/Landoop/lenses-go/issues/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2472705430
|
Register julia package
@JuliaRegistrator register()
Registration pull request created: JuliaRegistries/General/113391
Tip: Release Notes
Did you know you can add release notes too? Just add markdown formatted text underneath the comment after the text
"Release notes:" and it will be added to the registry PR, and if TagBot is installed it will also be added to the
release that TagBot creates. i.e.
@JuliaRegistrator register
Release notes:
## Breaking changes
- blah
To add them here just re-invoke and the PR will be updated.
Tagging
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.2.0 -m "<description of version>" 5463b92c6128a1f5106d8aa13f3aa4045456b6e8
git push origin v0.2.0
Also, note the warning: This looks like a new registration that registers version 0.2.0.
Ideally, you should register an initial release with 0.0.1, 0.1.0 or 1.0.0 version numbers
This can be safely ignored. However, if you want to fix this you can do so. Call register() again after making the fix. This will update the Pull request.
Registration pull request updated: JuliaRegistries/General/113391
Tagging
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.2.0 -m "<description of version>" c5738814bb8219edcf25c64f5656008263c941cf
git push origin v0.2.0
Registration pull request updated: JuliaRegistries/General/113391
Tagging
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.2.0 -m "<description of version>" a2f64fb5a39f717a14606fdda4306286f1626e16
git push origin v0.2.0
Registration pull request created: JuliaRegistries/General/116122
Tagging
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.2.2 -m "<description of version>" 0649e471659320b7a2279e9682a452fb1866a405
git push origin v0.2.2
Error while trying to register: Version 0.2.2 already exists
Registration pull request created: JuliaRegistries/General/116144
Tagging
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.2.3 -m "<description of version>" 0068ef537a9764b722814511c7c66810861d1092
git push origin v0.2.3
Registration pull request created: JuliaRegistries/General/117228
Tagging
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.3.0 -m "<description of version>" 48fcca8d08be1c3492fb1c101667c16d1b6f16bb
git push origin v0.3.0
Registration pull request created: JuliaRegistries/General/117967
Tagging
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.3.1 -m "<description of version>" 1c910350f61f8751d31774878a9061b334d0f52f
git push origin v0.3.1
Registration pull request updated: JuliaRegistries/General/117967
Tagging
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.3.1 -m "<description of version>" 6dbd34c4f69ffccafb2154f4cfbf2bad0147777c
git push origin v0.3.1
Registration pull request created: JuliaRegistries/General/118429
Tagging
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.3.2 -m "<description of version>" a271f863b8bc2159a1bd8a4bf50454118fd816ee
git push origin v0.3.2
Registration pull request created: JuliaRegistries/General/118934
Tagging
git tag -a v0.3.3 -m "<description of version>" eedb1e409543786f927431c0ac6ff824ec2b2808
git push origin v0.3.3
Registration pull request created: JuliaRegistries/General/119483
Tagging
git tag -a v0.3.4 -m "<description of version>" 4875135f623af410736af43a1bd7495a723f4a7f
git push origin v0.3.4
Registration pull request created: JuliaRegistries/General/121338
Tagging
git tag -a v0.3.5 -m "<description of version>" 4fadfedd5f8247ad395d1597dfd4662c759668df
git push origin v0.3.5
|
gharchive/issue
| 2024-08-19T07:57:01 |
2025-04-01T04:55:17.148206
|
{
"authors": [
"JuliaRegistrator",
"ZenanH"
],
"repo": "LandslideSIM/MaterialPointSolver.jl",
"url": "https://github.com/LandslideSIM/MaterialPointSolver.jl/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1205929244
|
[Bug] Backpack v5 breaks morphTo select field
Bug report
What I did
I have an n-1 polymorphic relationship between "Intros" and "Companies" or "People", meaning an intro can link a user to either a company or a person. I have an 'introable_id' column in the intros table that points to the related model. The relationship is defined on App\Models\Intro as:
public function introable() { return $this->morphTo(); }
I have been using the select2 (1-n) relationship field in Laravel Backpack v4 without issues by just hardcoding the model that I want to default to from Backpack:
$this->crud->addField([ // Select2
'label' => "Intro To",
'type' => 'select2',
'name' => 'introable_id',
'entity' => 'introable',
'model' => "App\Models\Company",
'attribute' => 'name',
]);
What I expected to happen
I thought I could update to Laravel Backpack v5 and this would still work.
What happened
After updating, creating a new Intro or updating an existing one throws an error.
On create, I get:
Field 'introable_id' doesn't have a default value even though I've double-checked that the request does indeed have introable_id set properly.
On update, I get:
Call to undefined method App\Models\Intro::introable_id()
The relationship is called 'introable' but the db column is called introable_id. This is thrown because line 99 in the Create trait is:
$relation = $item->{$relationMethod}();
What I've already tried to fix it
I tried to port over the v4 version of Select2 and that didn't work. I verified that the request has the proper data and searched around a lot.
Is it a bug in the latest version of Backpack?
Yes
Backpack, Laravel, PHP, DB version
When I run php artisan backpack:version the output is:
PHP VERSION:
PHP 8.1.1 (cli) (built: Dec 17 2021 22:38:05) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.1.1, Copyright (c) Zend Technologies
with Zend OPcache v8.1.1, Copyright (c), by Zend Technologies
LARAVEL VERSION:
v8.83.8@cf430301ad17656b3d918995bcdd0454c3c119b9
BACKPACK VERSION:
5.0.14@1e355c4c046a34423a1a3e3150120245a4bfd8e9
Hello @stevebeyatte
I am working on a core solution for the morph fields; I should have an MVP PR ready by tomorrow. In the meantime, use entity => false on your introable_id field.
Let me know if that solves it for you.
Cheers
@pxpm thanks so much for your help. When I do that I get
"Undefined array key "relation_type"
on
vendor/backpack/crud/src/app/Library/CrudPanel/Traits/Input.php:123
That's the thing - if you specify entity => false then you'll have to manually give the field everything it needs. But after you do that, it should work as it did before.
relation_type should be MorphTo in your case, right? (string)
@tabacitu @pxpm
When I specify relation_type I also get the same query builder error, screenshot attached.
$this->crud->addField([ // Select2
'label' => "Intro To",
'type' => 'select2',
'name' => 'introable_id',
'relation_type' => 'MorphTo',
'entity' => false,
'model' => "App\Models\Company",
'attribute' => 'name',
]);
It calls the function off of the name attribute above, e.g. $model->introable_id(). I changed the name to introable (which is how the polymorphic relationship is defined), but then the introable_id field doesn't get passed, so the database chokes without that value.
Thoughts on a quick fix for this @pxpm ?
@stevebeyatte It needs some core changes, since introable_id is detected as a relationship (MorphTo) when it shouldn't be; the relation is introable_id + introable_type.
As a workaround you can use other field name, like introable_input.
Then add a mutator and getter for your fake field that adds the real attribute in the model:
public function setIntroableInputAttribute($value) {
$this->attributes['introable_id'] = $value;
}
public function getIntroableInputAttribute() {
return $this->attributes['introable_id'];
}
If you are using $fillable you should also add the introable_input into the fillable array.
If you run into problems with the select2 field, I'd recommend the select2_from_array field instead, since you'd be populating the select with JavaScript depending on the type anyway: just define it with empty [] options and then populate it with JS.
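A minimal sketch of that approach, reusing the hypothetical introable_input field from the workaround above (the options array is intentionally empty and would be filled client-side):
$this->crud->addField([
'label' => "Intro To",
'type' => 'select2_from_array',
'name' => 'introable_input', // the fake field mapped to introable_id by the mutator above
'options' => [], // intentionally empty; populate with JS depending on the chosen type
'allows_null' => true,
]);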
I am sure I've already set morphTo relations working like this, you should be fine 👍
When the PR gets merged you just need CRUD::field('introable')->models(['\App\Models\Model1', '\App\Models\Model2']);
Let me know,
Cheers
@pxpm @tabacitu any progress on MorphTo (not MorphToMany) relationships being handled either this way or by the relationship field type? I really need plain MorphTo, one way or another.
@adrienne we're working on it right now. ETA: 2 weeks
friendly ping on this one :-)
@stevebeyatte I've just finished reviewing the new morphTo field inside repeatable. It is a BEAUTY:
https://github.com/Laravel-Backpack/docs/pull/368
So simple, yet so powerful.
--
It's going through one more round of testing, by @jorgetwgroup . We expect to have it merged in 1-2 weeks, max.
Thank you for your patience 🙏
@pxpm , I believe we can close this, since we'll soon be merging all PRs related to this?
I've just merged https://github.com/Laravel-Backpack/CRUD/pull/4579 (and the corresponding PRO PR) which add morphTo functionality to the relationship field 😱 It's been a multi-month effort by @pxpm , and it hasn't been easy at all, so all credits go to him 👏👏👏
It's currently on main, where we'll test it a little more until Monday... and on Monday we tag 3.4.0 and launch it 🎉
I hope you'll be as happy to have this as we are.
Thank you for your patience 🙏
Cheers!
|
gharchive/issue
| 2022-04-15T22:36:50 |
2025-04-01T04:55:17.205302
|
{
"authors": [
"adrienne",
"pxpm",
"stevebeyatte",
"tabacitu"
],
"repo": "Laravel-Backpack/CRUD",
"url": "https://github.com/Laravel-Backpack/CRUD/issues/4323",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
207226359
|
Filter 'active' by default?
I have searched docs, code and Google. Can't seem to see if this is currently possible; if not, perhaps it could be a new feature?
I would like to load the 'list' crud view, with a filter enabled by default. In my use case the users only really care about active entries, I have given them an active filter, but it's confusing to some users to have to enable it every time.
Alternatively, if there were a way to remove certain rows from the list view entirely, that would also suffice.
Thanks
I just have my navigation/sidebar link directly to the filter, seems easiest :D
Otherwise you can play with model scopes maybe
Sounds simple! What am I missing? I have AJAX pagination enabled so I don't appear to have a direct URL to the filter.
I have a query scope set up for the filter on the model. But I can't see where I would limit the initial :list view by that scope?
Sorry if I am being super dumb. Wouldnt be the first time!
You can use ?parameter=key
Hi @jdfx ,
I think the fastest way would be the one they suggested:
enter the page
activate the filters that you want active
copy the GET parameters and place them in your sidebar menu item
Result: whenever someone clicks on the sidebar item, they go directly to the filtered list.
Cheers!
I can't make this work. If I do as @tabacitu suggests, and create the proper GET route, I get the data, but not the view. What am I missing?
This is not working with the Ajax DataTables. Any way to do it?
@remipou - it should work with the current Ajax implementation too. Try yourapp/admin/yourentity?filtername=value. Just tried the demo with http://demo/admin/monster?checkbox=true and it worked for me.
There is a second closure when adding a filter via $this->crud->addFilter(..), the "if filter inactive" closure. This would be the best position to apply a default filter, I suppose, as you don't need to add the GET parameter everywhere :)
That's an excellent point, @OliverZiegler - very smart use. Something like this would make the filter active ALL THE TIME:
$this->crud->addFilter([ // add a "simple" filter called Checkbox
'type' => 'simple',
'name' => 'checkbox',
'label' => 'Simple',
],
false, // the simple filter has no values, just the label specified above
function () { // if the filter is active (the GET parameter "checkbox" exits)
$this->crud->addClause('where', 'checkbox', '1');
},
function () { // if the filter is NOT active (the GET parameter "checkbox" does not exit)
$this->crud->addClause('where', 'checkbox', '1');
$this->crud->request->request->add(['checkbox' => 1]); // to make the filter look active
});
The problem with both solutions is that, upon click, the filter will NOT get deactivated, unfortunately. Hmm... I can't see a way around this, tbh...
@tabacitu if some filter logic is applied every time, I suppose the corresponding filter should be renamed to tell the user that "turning it on" deactivates some filtering.
Your example though is kind of useless, as this is what can be achieved directly via addClause.
See my example down here. I implement a filter to show "only inactive items", and by default I show only active items.
That's the way we use this.
$this->crud->addFilter([ // add a "simple" filter to show inactive items
'type' => 'simple',
'name' => 'checkbox',
'label' => 'Show inactive Items',
],
false, // the simple filter has no values, just the label specified above
function () { // if the filter is active (the GET parameter "checkbox" exits)
$this->crud->addClause('where', 'active', '0');
},
function () { // if the filter is NOT active (the GET parameter "checkbox" does not exit)
$this->crud->addClause('where', 'active', '1');
});
@OliverZiegler yup, I agree with you. The best solution here is probably semantics :-) Renaming the filter :-)
thanks @OliverZiegler. very useful
In BP 4X, I replaced $this->crud->request->request->add(['checkbox' => 1]); with $this->crud->getRequest()->request->add(['checkbox' => 1]);
Also, to show the default selected value you can do this in your CRUD controller (my original code was for a multiple-values filter):
$filters = $this->crud->filters();
foreach($filters as $filter)
{
if($filter->name === "checkbox" && empty($filter->currentValue))
{
$filter->currentValue = json_encode(1);
}
}
|
gharchive/issue
| 2017-02-13T14:00:34 |
2025-04-01T04:55:17.215221
|
{
"authors": [
"OliverZiegler",
"OwenMelbz",
"automat64",
"jdfx",
"jrbecart",
"luizmanoelf",
"remipou",
"tabacitu"
],
"repo": "Laravel-Backpack/CRUD",
"url": "https://github.com/Laravel-Backpack/CRUD/issues/435",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1535212105
|
[Feature Request] Labels for checkboxes, radio buttons, and ranges should have cursor: pointer CSS property
Labels for checkboxes and radio buttons, and the input type=range element are clickable.
The mouse cursor should reflect this by turning into a pointer:
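A minimal sketch of the rule being requested (the selector list is an assumption about the markup, not taken from Backpack's actual stylesheet):
input[type="checkbox"],
input[type="radio"],
input[type="range"],
input[type="checkbox"] + label,
input[type="radio"] + label {
  cursor: pointer;
}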
Got a pull request to do this. It's my first pull request ever, so be gentle if I screw it up 😄
Awesome! Thanks for the PR. Let's close this in favor of https://github.com/Laravel-Backpack/CRUD/pull/4899
|
gharchive/issue
| 2023-01-16T16:35:04 |
2025-04-01T04:55:17.218039
|
{
"authors": [
"pekka",
"tabacitu"
],
"repo": "Laravel-Backpack/CRUD",
"url": "https://github.com/Laravel-Backpack/CRUD/issues/4898",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
618101431
|
Fixed details row icon
Plus icon wasn't switching to minus on details row open.
A new inspection was created.
Of course @promatik
Well spotted. Going to mark this as ready for merge.
Thank you very much for the contribution.
\o
The inspection completed: No new issues
|
gharchive/pull-request
| 2020-05-14T10:08:24 |
2025-04-01T04:55:17.220613
|
{
"authors": [
"promatik",
"pxpm",
"scrutinizer-notifier"
],
"repo": "Laravel-Backpack/CRUD",
"url": "https://github.com/Laravel-Backpack/CRUD/pull/2821",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1780314780
|
Provide a ThemeServiceProvider for themes
WHY
BEFORE - What was wrong? What was happening before this PR?
Two things:
each theme had its own AutomaticServiceProvider, with the same logic, and it was difficult to change the logic in all of them at once; more info in https://github.com/Laravel-Backpack/CRUD/issues/5147
for themes, we weren't loading the views if the theme wasn't active - more info in https://github.com/Laravel-Backpack/demo/pull/530#issuecomment-1611837126
AFTER - What is happening after this PR?
Backpack provides a ThemeServiceProvider and that loads the views even if the theme isn't active.
HOW
How did you achieve that, in technical terms?
moved AutomaticServiceProvider from all themes to CRUD;
renamed it to ThemeServiceProvider;
turned it into a class (instead of a trait);
made all AddonServiceProviders in themes extend this new ThemeServiceProvider;
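For illustration, a hypothetical sketch of what a theme's service provider could look like after this change (the namespace, class, and property names here are assumptions, not the actual Backpack API):
<?php

namespace Backpack\ThemeTabler;

use Backpack\CRUD\ThemeServiceProvider;

class AddonServiceProvider extends ThemeServiceProvider
{
    // illustrative only; the real base class may expose different properties
    protected string $packageName = 'theme-tabler';
    protected string $path = __DIR__ . '/..';
}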
Is it a breaking change?
Yes. Sort of. I mean it's a breaking change only if you're using a particular version of a theme without having the latest version of CRUD too. Which is something only our team members would do. So no, non-breaking for anybody testing Backpack, other than our team members.
After this PR is merged we should merge:
https://github.com/Laravel-Backpack/theme-coreuiv2/pull/15
https://github.com/Laravel-Backpack/theme-tabler/pull/86
https://github.com/Laravel-Backpack/theme-coreuiv4/pull/25
@pxpm let me know if doing this sounds like a bad idea to you. No testing needed. Otherwise give me the green light, I'm gonna go ahead and merge. It's all tested and working great.
|
gharchive/pull-request
| 2023-06-29T07:50:32 |
2025-04-01T04:55:17.226115
|
{
"authors": [
"tabacitu"
],
"repo": "Laravel-Backpack/CRUD",
"url": "https://github.com/Laravel-Backpack/CRUD/pull/5148",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2078492874
|
[Bug]: Honour mod missing from mod manager
Operating System
Windows 10
BG3 Mod Manager Version
1.0.10.0
BG3 Game Version
Full Release Patch 5 Hotfix 16
Bug Summary
It was there before, but I reinstalled the game somewhere else with more space, and now I have all my mods except the Honour mod, and I don't even know why it would be a mod in the first place.
But now that the game is installed and everything, the only mod missing is that one, and I cannot load up any saves without the game crashing.
I was told it could be a problem with the Mod Manager itself but what is the problem?
Links
No response
Hi there! I am having the same issues. Did you ever find a solution?
|
gharchive/issue
| 2024-01-12T10:08:14 |
2025-04-01T04:55:17.235927
|
{
"authors": [
"CaelumClamos",
"lumakirby"
],
"repo": "LaughingLeader/BG3ModManager",
"url": "https://github.com/LaughingLeader/BG3ModManager/issues/267",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
246579328
|
Show steam UID on profile
Is there an easy way to show the steam UID on the person's profile?
We make it mandatory that everyone logs in using your plugin; the reason is that we want it to automatically show the UID on their profile.
I have done this if anyone needs it.
@ChrisStark I could use this. If you could please contact me on steam at: http://steamcommunity.com/id/kerr/
Here is the release on how to put it on your profile! Hope this helps everyone.
https://pastebin.com/0QJNm3Tz
@ChrisStark how did you make it mandatory to register with Steam?
We don't approve applications unless they are, but you can make it the only option, which will therefore force them to do so.
|
gharchive/issue
| 2017-07-30T11:53:49 |
2025-04-01T04:55:17.247431
|
{
"authors": [
"ChrisStark",
"K3RRR",
"good-live"
],
"repo": "Lavoaster/IP.Board-Steam-Authentication-Method",
"url": "https://github.com/Lavoaster/IP.Board-Steam-Authentication-Method/issues/88",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
150513451
|
libember_slim shared library
This adds the required export declarations to build and use libember_slim as a shared library (DLL) from a third-party application. It also fixes some bugs that were discovered during testing. In particular, a possible runtime error during CRC generation is avoided, and the struct sizes are made the same for debug and release builds, so the hosting application will work with either library.
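For context, export declarations for a Windows DLL are conventionally added through a macro along these lines (the macro and function names here are illustrative, not the library's actual API):
/* LIBEMBER_SLIM_SHARED / LIBEMBER_SLIM_BUILDING would be defined by the build system */
#if defined(_WIN32) && defined(LIBEMBER_SLIM_SHARED)
#  ifdef LIBEMBER_SLIM_BUILDING
#    define LIBEMBER_API __declspec(dllexport)  /* building the DLL itself */
#  else
#    define LIBEMBER_API __declspec(dllimport)  /* consuming the DLL */
#  endif
#else
#  define LIBEMBER_API                          /* static or non-Windows build */
#endif

LIBEMBER_API void berEncoder_init(void);        /* hypothetical exported function */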
I just merged the Pull Request, thanks for contributing!
Thanks. I was happy to help.
|
gharchive/pull-request
| 2016-04-23T03:21:17 |
2025-04-01T04:55:17.249168
|
{
"authors": [
"KimonHoffmann",
"tweibert"
],
"repo": "Lawo/ember-plus",
"url": "https://github.com/Lawo/ember-plus/pull/46",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1939152368
|
adding mini.starter to lualine disabled files
hi, small modification :) to v.10
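Roughly, the change amounts to something like this in the lualine spec (the exact filetype name mini.starter sets is an assumption here):
{
  "nvim-lualine/lualine.nvim",
  opts = {
    options = {
      disabled_filetypes = { statusline = { "dashboard", "alpha", "starter" } },
    },
  },
}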
thanks
Thanks!
|
gharchive/pull-request
| 2023-10-12T04:27:50 |
2025-04-01T04:55:17.254054
|
{
"authors": [
"bassamsdata",
"folke"
],
"repo": "LazyVim/LazyVim",
"url": "https://github.com/LazyVim/LazyVim/pull/1667",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2420711016
|
feat(core): defer clipboard because xsel and pbcopy can be slow
Description
See this discussion
TLDR:
Startup time performance is affected quite significantly when the clipboard provider is xsel (Linux) or pbcopy (macOS). I expect an improvement in these cases, especially on older PCs.
This PR resets vim.opt.clipboard after the options are loaded. Then, on VeryLazy, the setting is restored.
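Conceptually, the deferral looks something like this (a minimal sketch rather than the exact PR code; vim.o.clipboard is used here because it yields the plain string value):
local clipboard = vim.o.clipboard -- whatever options.lua configured, e.g. "unnamedplus"
vim.o.clipboard = "" -- skip clipboard provider detection during startup
vim.api.nvim_create_autocmd("User", {
  pattern = "VeryLazy",
  once = true,
  callback = function()
    vim.o.clipboard = clipboard -- restore once lazy.nvim fires VeryLazy
  end,
})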
I also tested with yanky.
Relevant prints:
Before resetting vim.opt.clipboard in init, vim.print(vim.opt.clipboard) yields a table which will be captured:
--- fields
_name = "clipboard",
_value = "unnamedplus",
--- more fields
After setting vim.opt.clipboard = "", vim.print(vim.opt.clipboard) also yields a table:
--- fields
_name = "clipboard",
_value = "",
--- more fields
Related Issue(s)
Screenshots
Checklist
[x] I've read the CONTRIBUTING guidelines.
Tried it out locally and works as expected from my short-time test. Was also able to overwrite the value in options.lua user configuration. Also, vim.o.clipboard yields the value of the object table, so it's easier to check :stuck_out_tongue:
ty!
|
gharchive/pull-request
| 2024-07-20T08:09:13 |
2025-04-01T04:55:17.259281
|
{
"authors": [
"abeldekat",
"dpetka2001",
"folke"
],
"repo": "LazyVim/LazyVim",
"url": "https://github.com/LazyVim/LazyVim/pull/4120",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|