id | text | source | created | added | metadata
---|---|---|---|---|---|
867228352 | julia: add casts and converts types where necessary
Fixes https://github.com/adsharma/py2many/issues/78
Fixes https://github.com/adsharma/py2many/issues/80
This commit makes Julia the first language to pass all the tests!
cpp and rust are pretty close too.
| gharchive/pull-request | 2021-04-26T03:51:47 | 2025-04-01T06:37:45.031812 | {
"authors": [
"adsharma"
],
"repo": "adsharma/py2many",
"url": "https://github.com/adsharma/py2many/pull/126",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1043293980 | Add error check when type of arguments are missing
When writing e.g. all i { ... }, one intends to write all i : I { ... }, but this error is not reported.
One way to solve this is by enforcing it in the syntax.
| gharchive/issue | 2021-11-03T09:37:22 | 2025-04-01T06:37:45.053220 | {
"authors": [
"bvssvni"
],
"repo": "advancedresearch/last_order_logic",
"url": "https://github.com/advancedresearch/last_order_logic/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1696735682 | fix/x-axis labels
Currently, x-axis labels for the Line Charts on the Dashboard view are too closely bunched together
Two potential fixes include:
Removing the labels completely with:
xaxis: {
  labels: {
    show: false,
    formatter: function(value){
      return "Participant " + value;
    }
  }
...}
Result:
Have label contain "|" to create distinction and remove overlapping labels
xaxis: {
labels: {
rotate: 0,
hideOverlappingLabels: true,
formatter: function(value){
return value + " |";
}
}
...}
Result:
Both suggestions are not ideal. Instead, modify the categories value on the x-axis. The categories option expects an array, which renders the label for each item. By default, since we are passing nothing through, it will render each data point's index number.
It looks like the xaxis.categories length needs to be exactly the same as the series.data length; currently trying to find a way around this.
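One way around that constraint is sketched below. The helper name and the labelEvery parameter are assumptions for illustration, not ApexCharts API: the idea is to build a categories array of exactly the data's length, with blank strings for points that should not be labelled, so labels stop overlapping without breaking the length requirement.

```javascript
// Hypothetical helper: one xaxis.categories entry per data point, so the
// categories length always matches series.data length, but only every
// labelEvery-th point gets visible text.
function buildCategories(data, labelEvery) {
  return data.map(function (_, i) {
    return i % labelEvery === 0 ? "Participant " + (i + 1) : "";
  });
}

var series = [{ data: [3, 1, 4, 1, 5, 9, 2, 6] }];
var options = {
  xaxis: { categories: buildCategories(series[0].data, 4) },
};
```

Blank-string entries still count toward the array length, which is what the length check cares about.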
| gharchive/issue | 2023-05-04T21:36:01 | 2025-04-01T06:37:45.103068 | {
"authors": [
"LiamSingh64",
"rwx-yxu"
],
"repo": "advweb-grp1/advanced-web-final-year-project",
"url": "https://github.com/advweb-grp1/advanced-web-final-year-project/issues/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2659803873 | Need an option to ignore case of variables
I need an option to ignore case when matching/replacing variables.
PR #54
sure, I just thought it was probably a common use-case and it might deserve a simpler built-in option.
I understand, but that would make the ignoreCase option available only if the dev doesn't provide a getValue option, so it's not ideal for me right now. I will think about it. Thanks again for the PR and the tests.
| gharchive/issue | 2024-11-14T19:17:51 | 2025-04-01T06:37:45.129453 | {
"authors": [
"aegenet",
"jdoklovic"
],
"repo": "aegenet/belt",
"url": "https://github.com/aegenet/belt/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
569361813 | Compile error
Hi, I downloaded this tool and I get an error when I try "make install":
g++ -gdwarf-2 -shared -O2 -fPIC -std=gnu++11 layer.cpp -o libVkLayer_device_chooser.so
layer.cpp: In function ‘VkResult ChooseDevice(VkInstance, const VkLayerInstanceDispatchTable&, const char*, VkPhysicalDevice_T*&)’:
layer.cpp:65:23: error: ‘atoi’ was not declared in this scope
   65 |   int deviceIndex = atoi(env);
      |                     ^~~~
layer.cpp: In function ‘VkResult DeviceChooserLayer_EnumeratePhysicalDevices(VkInstance, uint32_t*, VkPhysicalDevice_T**)’:
layer.cpp:88:29: error: ‘getenv’ was not declared in this scope
   88 |   const char* const env = getenv(kEnvVariable);
      |                           ^~~~~~
layer.cpp: In function ‘VkResult DeviceChooserLayer_EnumeratePhysicalDeviceGroupsKHR(VkInstance, uint32_t*, VkPhysicalDeviceGroupPropertiesKHR*)’:
layer.cpp:125:29: error: ‘getenv’ was not declared in this scope
  125 |   const char* const env = getenv(kEnvVariable);
      |                           ^~~~~~
make: *** [Makefile:4: libVkLayer_device_chooser.so] Error 1
Solved: I downgraded gcc and g++ from version 10 to 9 and it compiles fine!
| gharchive/issue | 2020-02-22T16:59:23 | 2025-04-01T06:37:45.134158 | {
"authors": [
"juanro49"
],
"repo": "aejsmith/vkdevicechooser",
"url": "https://github.com/aejsmith/vkdevicechooser/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1221954182 | Implement WinchController
The winch controller controls the reel-out speed and tether force.
Generic control components
[x] Integrator
[x] Limiter
[x] Mixer_2D
[x] Mixer_3D
WinchController
[x] function calc_vro
[x] CalcVSetIn
Models for controller unit tests
[x] Winch
Controllers
[x] SpeedController
[x] LowerForceController
[x] UpperForceController
[x] WinchController
Test scripts
[x] test_speedcontroller1
[x] test_speedcontroller2
[x] test_force_speedcontroller1
[x] test_force_speedcontroller2
[x] test_winchcontroller
All tested and done. :)
| gharchive/issue | 2022-05-01T00:18:22 | 2025-04-01T06:37:45.143579 | {
"authors": [
"ufechner7"
],
"repo": "aenarete/KiteControllers.jl",
"url": "https://github.com/aenarete/KiteControllers.jl/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1052809188 | fix anya templates and images
Add template mapping for anya
The current images seem to have a smaller resolution, and anya can't be found with the default configuration.
PS. I'm doing a javazon gg gloves shopper. If there's interest I could make a PR later on
| gharchive/pull-request | 2021-11-14T00:21:07 | 2025-04-01T06:37:45.144989 | {
"authors": [
"reqyl"
],
"repo": "aeon0/botty",
"url": "https://github.com/aeon0/botty/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
100337746 | Config fix
Same fix, but against the 2.2.x branch.
Would you mind copying the Test from https://github.com/aerogear/aerogear-android-push/pull/51/?
sorry I was cherry picking, added the test
Can you flatten ca0908c...00f7c85? After that we will be good to go.
:+1:
| gharchive/pull-request | 2015-08-11T15:24:10 | 2025-04-01T06:37:45.146894 | {
"authors": [
"danielpassos",
"edewit",
"secondsun"
],
"repo": "aerogear/aerogear-android-push",
"url": "https://github.com/aerogear/aerogear-android-push/pull/52",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
476225107 | fix: only allow strings below 250 chars for title + desc
Description
Checklist
[ ] npm test passes
[ ] npm run build works
[ ] tests are included
[ ] documentation is changed or added
Verified! good work!
| gharchive/pull-request | 2019-08-02T15:02:56 | 2025-04-01T06:37:45.149186 | {
"authors": [
"StephenCoady",
"wtrocki"
],
"repo": "aerogear/ionic-showcase",
"url": "https://github.com/aerogear/ionic-showcase/pull/243",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
155220632 | Fix version number v3.4.8
Version number has not been updated in v3.4.8.
However, maybe this fix should be v3.4.8.1?
Same as https://github.com/aerospike/aerospike-client-php/issues/84 ?
You're right @sergeyklay! I missed this issue. Waiting for 3.4.9 then (or 3.4.8.1 for a rapid fix if committers agree).
This version number is important for me because my provisioning process (Ansible) installs the latest version IF the currently installed version is lower than "v3.4.9" in this case. This causes a new build each time I provision the machine(s)...
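The provisioning check described above can be sketched as follows (function names are illustrative, not the actual Ansible logic): because the package still reports v3.4.8, the "installed lower than wanted" comparison stays true and triggers a rebuild on every run.

```python
# Illustrative version comparison: reinstall only when the installed
# version is lower than the wanted one. A release that forgot to bump
# its reported version keeps triggering reinstalls forever.
def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.lstrip("v").split("."))

def needs_install(installed: str, wanted: str) -> bool:
    return parse_version(installed) < parse_version(wanted)
```

Under this scheme a hotfix numbered v3.4.8.1 would also compare lower than v3.4.9, so either number resolves the repeated-rebuild problem only once the reported version actually changes.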
It's now at 3.4.9, closing this PR. Thanks!
| gharchive/pull-request | 2016-05-17T09:58:14 | 2025-04-01T06:37:45.166052 | {
"authors": [
"koleo",
"rbotzer",
"sergeyklay"
],
"repo": "aerospike/aerospike-client-php",
"url": "https://github.com/aerospike/aerospike-client-php/pull/87",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2760691671 | feat: Changes + Test for 8.0 Schema
Points to schema 8.0.0
Added config changes in 8.0 yaml and conf for tests
Should be merged only after 8.0.0 schema is merged.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 79.04%. Comparing base (fbbdc07) to head (5ed5011).
Additional details and impacted files
@@ Coverage Diff @@
## main #57 +/- ##
=======================================
Coverage 79.04% 79.04%
=======================================
Files 14 14
Lines 1021 1021
=======================================
Hits 807 807
Misses 145 145
Partials 69 69
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
| gharchive/pull-request | 2024-12-27T10:18:38 | 2025-04-01T06:37:45.173731 | {
"authors": [
"a-spiker",
"codecov-commenter"
],
"repo": "aerospike/asconfig",
"url": "https://github.com/aerospike/asconfig/pull/57",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1997317286 | [as2_behaviors] Behaviors composable nodes
Basic Info
Info
Please fill out this column
Issue(s) this addresses
#57
ROS2 version tested on
Humble
Aerial platform tested on
Gazebo
Description of contribution in a few bullet points
Motion behaviors can be launched as composable nodes too. Changes were made without breaking stand-alone node execution, which is still available.
Execution and interaction with other nodes are exactly the same.
Added NodeOptions to behavior server to allow launching components. Default values are backward compatible.
Moved definitions of motion behaviors to CPP files.
Description of documentation updates required from your changes
How to launch composable nodes, even though it is pretty straightforward:
ros2 launch as2_behaviors_motion composable_motion_behaviors.launch.py \
    namespace:=drone0 \
    land_plugin_name:=land_plugin_speed \
    go_to_plugin_name:=go_to_plugin_position \
    follow_path_plugin_name:=follow_path_plugin_position \
    takeoff_plugin_name:=takeoff_plugin_speed
Notice that the only change is in the launcher file; the parameters are identical to the previous launcher.
Future work that may be required in bullet points
Takeoff behavior should be renamed: TakeOffBehavior --> TakeoffBehavior to fulfill CamelCase naming convention.
I'm working on moving other behaviors to composable nodes too
Codecov Report
Attention: 510 lines in your changes are missing coverage. Please review.
Comparison is base (638c8cc) 2.86% compared to head (bdd054b) 2.80%.
Report is 4 commits behind head on main.
:exclamation: Current head bdd054b differs from pull request most recent head 4b51394. Consider uploading reports for the commit 4b51394 to get more accurate results
Files
Patch %
Lines
...ference_behavior/src/follow_reference_behavior.cpp
0.00%
136 Missing :warning:
...s_motion/takeoff_behavior/src/takeoff_behavior.cpp
0.00%
92 Missing :warning:
.../follow_path_behavior/src/follow_path_behavior.cpp
0.00%
91 Missing :warning:
...haviors_motion/land_behavior/src/land_behavior.cpp
0.00%
85 Missing :warning:
...viors_motion/go_to_behavior/src/go_to_behavior.cpp
0.00%
82 Missing :warning:
as2_motion_controller/src/controller_handler.cpp
0.00%
16 Missing :warning:
as2_core/src/aerial_platform.cpp
0.00%
6 Missing :warning:
...lude/as2_behavior/__impl/behavior_server__impl.hpp
0.00%
2 Missing :warning:
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## main #358 +/- ##
========================================
- Coverage 2.86% 2.80% -0.07%
========================================
Files 103 100 -3
Lines 5614 5597 -17
Branches 472 491 +19
========================================
- Hits 161 157 -4
+ Misses 5315 5286 -29
- Partials 138 154 +16
Flag
Coverage Δ
unittests
2.80% <0.00%> (-0.07%)
:arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
Everything is fine. My only concern is not being able to use add_subdirectories() in the CMakeLists, as this requires modifying the main CMakeLists for each trajectory generator added, adding all the dependencies. Whereas, if they are kept separate, all dependencies remain isolated. Perhaps one option is for the individual CMakeLists to generate the library with its dependencies, and from the main one, simply generate the executable linked with its library and add it to the components.
I completely agree @RPS98, I'm taking another look at it.
Also, I've been testing deeply (more than 30 different flights with Gazebo and Crazyflies) and everything looks good, except for DroneInterface never finding the behaviors at init time. Calling them afterwards works well, but for some reason they are not visible during wait_for_service. I'm taking a further look at this too.
| gharchive/pull-request | 2023-11-16T16:58:08 | 2025-04-01T06:37:45.197922 | {
"authors": [
"RPS98",
"codecov-commenter",
"pariaspe"
],
"repo": "aerostack2/aerostack2",
"url": "https://github.com/aerostack2/aerostack2/pull/358",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
206109567 | error: /dev/stderr permission denied
error opening file: open /dev/stderr: permission denied
Hint: touch /dev/stderr, or chown/chmod it so that the cosgo process can access it.
I can only reproduce this when using a "system" user, one with no "shell".
sudo adduser --system cosgo-test
cd ~cosgo-test
sudo -u cosgo-test cosgo
Can successfully start server with --system user using the "log" flag.
sudo -u cosgo-test cosgo -log debug.log & sudo -u cosgo-test tail -f debug.log
| gharchive/issue | 2017-02-08T06:39:19 | 2025-04-01T06:37:45.200261 | {
"authors": [
"aerth"
],
"repo": "aerth/cosgo",
"url": "https://github.com/aerth/cosgo/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
113532951 | Working solve_field in astrometry.net
We need astrometry.net installed on the UAHPC server.
I installed it in my HPC home folder, but you installed it in our midterm folder, correct?
| gharchive/issue | 2015-10-27T08:49:10 | 2025-04-01T06:37:45.283232 | {
"authors": [
"dsidi",
"pbieberstein"
],
"repo": "aesoll/astrometriconf",
"url": "https://github.com/aesoll/astrometriconf/issues/7",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1812483554 | An unexpected error occurred
After a couple days of using the plugin I started to get this error:
Failed to execute yamllint. An unexpected error occurred:
Traceback (most recent call last):
  File "/opt/homebrew/bin/yamllint", line 8, in <module>
    sys.exit(run())
    ^^^^^
  File "/opt/homebrew/Cellar/yamllint/1.32.0/libexec/lib/python3.11/site-packages/yamllint/cli.py", line 249, in run
    prob_level = show_problems(problems, 'stdin', args_format=args.format,
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I thought you would want to know.
Hey, thanks for the report. I can't tell what this error is since it's an internal Yamllint error which is missing most of the stacktrace. I'll close this for now but if anyone else encounters it then I'd appreciate some more information.
| gharchive/issue | 2023-07-19T18:18:33 | 2025-04-01T06:37:45.285651 | {
"authors": [
"aesy",
"jon-strayer"
],
"repo": "aesy/yamllint-intellij",
"url": "https://github.com/aesy/yamllint-intellij/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1765123171 | oracle details: creator / operator is not displayed
Context information
application version: latest dev deployment
device:
browser:
operating system:
Steps to reproduce
visit: https://aescan.dev.aepps.com/oracles/ok_N7nzuDXwFHJtWKvVnV4AQtzZmUFSFvXFf79GsNdrA7XMEYxqi
What is expected?
I would expect the creator / operator to be displayed
What is actually happening?
currently "N/A" is shown
A solution for this would be to replace the ok_ oracle prefix with ak_.
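A minimal sketch of that suggestion (the helper name is hypothetical; it relies on the relationship implied above, where an oracle id is the operator's account key carrying an ok_ prefix instead of ak_):

```python
# Hypothetical helper: recover the operator account address (ak_...)
# from an oracle id (ok_...) by swapping the prefix.
def oracle_to_account(oracle_id: str) -> str:
    if not oracle_id.startswith("ok_"):
        raise ValueError("expected an ok_-prefixed oracle id")
    return "ak_" + oracle_id[len("ok_"):]
```

The resulting ak_ address could then be shown in the "creator / operator" field instead of "N/A".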
| gharchive/issue | 2023-06-20T10:37:33 | 2025-04-01T06:37:45.289010 | {
"authors": [
"marc0olo",
"michele-franchi"
],
"repo": "aeternity/aescan",
"url": "https://github.com/aeternity/aescan/issues/332",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
2132304136 | Need help for binary door sensor
Hello,
I need help configuring a binary sensor in order to watch the opening status of a door.
I use an esp01s as the "sensor" (transmitter) and a d1mini as the "hub" (receiver).
Reading the documentation, I found an example of binary sensor usage like this:
binary_sensor:
- id: user_button
on_click:
- beethowen_transmitter.send_event: button_click
- beethowen_transmitter.send_event:
device_type: button
event_type: double_click
- beethowen_transmitter.send_event:
device_type: dimmer
event_type: left
value: 3
I took a look at the available events for the "button" type:
But this doesn't match my use case.
I just need to send an event (ON/OFF) reflecting the state of a microswitch connected to my esp01s to a central hub, over ESP-NOW (because my sensors are running on battery, I need to take care of power consumption).
It seems there are a lot of "binary sensors" related to this in the BTHome format:
I tried with this config:
But: what are the available "event types" for a "door binary sensor" ?
Thank you very much for your help
Best regards
Anyone to help here please?
Hi,
I don't know if this is the right way to do it, but I was able to achieve it by doing this:
Transmitter:
beethowen_transmitter:
id: buanderie
connect_persistent: true
sensors:
- measurement_type: generic_boolean
sensor_id: fenetre01
auto_send: false
binary_sensor:
- platform: gpio
pin:
number: GPIO2
mode: INPUT_PULLUP
inverted: true
filters:
- delayed_on: 25ms
- delayed_off: 25ms
id: fenetre01
on_press:
- beethowen_transmitter.send: true
on_release:
- beethowen_transmitter.send: false
Receiver:
beethowen_receiver:
dump: unmatched
devices:
- mac_address: "***MAC_ADDRESS***"
name_prefix: buanderie
dump: all
binary_sensor:
- platform: beethowen_receiver
mac_address: "***MAC_ADDRESS***"
sensors:
- measurement_type: generic_boolean
id: fenetre01
name: fenetre01
I hope it will be useful for others :)
| gharchive/issue | 2024-02-13T13:16:54 | 2025-04-01T06:37:45.305371 | {
"authors": [
"jerome83136"
],
"repo": "afarago/esphome_component_bthome",
"url": "https://github.com/afarago/esphome_component_bthome/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
199807147 | Jelly effect
In some devices when the camera open can be seen a jelly effect.
Has anyone noticed this problem?
I haven't seen this. Would you mind giving more specific details? Which phones, MC version, etc?
Phone Moto G4 Plus, Android Marshmallow
| gharchive/issue | 2017-01-10T12:21:16 | 2025-04-01T06:37:45.336850 | {
"authors": [
"bkawakami",
"tylerkrupicka"
],
"repo": "afollestad/material-camera",
"url": "https://github.com/afollestad/material-camera/issues/157",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2424220724 | Can I add an indicator?
Thank you so much
What?
Similar to the indicator property of Tabbar
Hi @itMcdull !
I'm sorry for the long wait, I'm switching jobs at the moment.
Regarding the addition of the indicator property, I don't think it makes much sense in this package, since the package already exposes parameters to control the indicator style.
Please let me know what do you want to achieve that the package is not supporting.
Closing this issue. Feel free to reopen it if needed
| gharchive/issue | 2024-07-23T04:17:25 | 2025-04-01T06:37:45.338807 | {
"authors": [
"afonsocraposo",
"itMcdull"
],
"repo": "afonsocraposo/buttons_tabbar",
"url": "https://github.com/afonsocraposo/buttons_tabbar/issues/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1390725783 | Example cannot install mkdocs-material 8.5.3
Describe the bug
I imported the example pages.yml file for GitHub Actions into my documentation. It works locally, but fails when deploying.
To Reproduce
Paste the example workflow file and commit to main.
Log
I can share more if needed
2022-09-29T11:33:37.8353566Z ERROR: Could not find a version that satisfies the requirement mkdocs-material==8.5.3 (from versions: 0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.2.4, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.2.0, 1.3.0, 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.8.0, 1.8.1, 1.9.0, 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.11.0, 1.12.0, 1.12.1, 1.12.2, 2.0.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.2.6, 2.3.0, 2.4.0, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.5.4, 2.5.5, 2.6.0, 2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.6.6, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.8.0, 2.9.0, 2.9.1, 2.9.2, 2.9.3, 2.9.4, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.0.6, 3.1.0, 3.2.0, 3.3.0, 4.0.0, 4.0.1, 4.0.2, 4.1.0, 4.1.1, 4.1.2, 4.2.0, 4.3.0, 4.3.1, 4.4.0, 4.4.1, 4.4.2, 4.4.3, 4.5.0, 4.5.1, 4.6.0, 4.6.1, 4.6.2, 4.6.3, 5.0.0b1, 5.0.0b2, 5.0.0b2.post1, 5.0.0b3, 5.0.0b3.post1, 5.0.0b3.post2, 5.0.0rc1, 5.0.0rc2, 5.0.0rc3, 5.0.0rc4, 5.0.0, 5.0.1, 5.0.2, 5.1.0, 5.1.1, 5.1.2, 5.1.3, 5.1.4, 5.1.5, 5.1.6, 5.1.7, 5.2.0, 5.2.1, 5.2.2, 5.2.3, 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.4.0, 5.5.0, 5.5.1, 5.5.2, 5.5.3, 5.5.4, 5.5.5, 5.5.6, 5.5.7, 5.5.8, 5.5.9, 5.5.10, 5.5.11, 5.5.12, 5.5.13, 5.5.14, 6.0.0, 6.0.1, 6.0.2, 6.1.0, 6.1.1, 6.1.2, 6.1.3, 6.1.4, 6.1.5, 6.1.6, 6.1.7, 6.2.0, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8, 7.0.0b1, 7.0.0b2, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.1.9, 7.1.10, 7.1.11, 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.3.0, 7.3.1, 7.3.2, 7.3.3, 7.3.4, 7.3.5, 7.3.6, 8.0.0b1, 8.0.0b2, 8.0.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5, 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7, 8.1.8, 8.1.9, 8.1.10, 8.1.11, 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4, 8.2.5, 8.2.6, 8.2.7, 8.2.8, 8.2.9, 8.2.10, 8.2.11, 8.2.12)
2022-09-29T11:33:37.8355869Z ERROR: No matching distribution found for mkdocs-material==8.5.3
Can you please try it out with the latest version 8.5.9? Or use the main branch instead:
- name: Deploy docs
uses: afritzler/mkdocs-gh-pages-action@main
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
I removed my workaround, pushed and it works now.
| gharchive/issue | 2022-09-29T11:40:58 | 2025-04-01T06:37:45.378142 | {
"authors": [
"afritzler",
"blackshibe"
],
"repo": "afritzler/mkdocs-gh-pages-action",
"url": "https://github.com/afritzler/mkdocs-gh-pages-action/issues/125",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1772527817 | Navbar in join us page
What happened?
On the Join Us page, if anyone tries to go back to the home page, there is no back button or navbar to return to the home page.
How can we reproduce this bug?
To solve this bug I would like to add a navbar to the Join Us page.
Desktop Information (Optional)
No response
Urgency (Optional)
Medium priority
Record
[x] I agree to follow this project's Code of Conduct
[X] I have checked the existing issues
[X] I'm a GSSoC'23 contributor
[X] I want to work on this issue
Sure @BhartiNagpure add footer as well
Sure @BhartiNagpure add footer as well
sure
| gharchive/issue | 2023-06-24T08:12:12 | 2025-04-01T06:37:45.411630 | {
"authors": [
"BhartiNagpure",
"agamjotsingh18"
],
"repo": "agamjotsingh18/codesetgo",
"url": "https://github.com/agamjotsingh18/codesetgo/issues/512",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
411311190 | Clarification
You said:
If you plan to expose a local port on a remote machine (external interface) you need to enable the "GatewayPorts" option in your 'sshd_config'
You're talking about sshd_config on the remote machine, not the local one correct?
It has been a while, but yes, indeed that's right. An alternative is to use nginx as a proxy with the proxy_pass option.
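The nginx alternative mentioned above could look roughly like this (a sketch with illustrative values only; the port and server name are not from this thread):

```nginx
# Illustrative only: forward public HTTP traffic to a port that the
# reverse SSH tunnel exposes on localhost, instead of enabling
# GatewayPorts in sshd_config.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;  # local end of the reverse tunnel (example port)
    }
}
```

This keeps the tunnel bound to the loopback interface while nginx handles external exposure.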
| gharchive/issue | 2019-02-18T05:46:53 | 2025-04-01T06:37:45.455389 | {
"authors": [
"agebrock",
"monkeysuffrage"
],
"repo": "agebrock/reverse-tunnel-ssh",
"url": "https://github.com/agebrock/reverse-tunnel-ssh/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
470817679 | Create collection from REST API
Hello there,
First of all: I really like the project; however, it really suffers from the absence of proper documentation.
How would I go about creating a collection using the REST API? /api/collections/newCollection/, /api/collections/createCollection/, etc. all return a 404 error.
Just in case anyone still needs it
Sample field names:
name: TEXT
data: Object
const options = {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'api-key': 'account-xxxxxxxdea2ce008ef5ab239ab47'
},
body: '{"data":{"name":"hello","data":{"hello":"world"}}}'
};
fetch('https://yourdomain.com/api/collections/save/<collection-name>', options)
.then(response => response.json())
.then(response => console.log(response))
.catch(err => console.error(err));
| gharchive/issue | 2019-07-21T19:48:03 | 2025-04-01T06:37:45.511020 | {
"authors": [
"PatrickSachs",
"coderbuzz"
],
"repo": "agentejo/cockpit",
"url": "https://github.com/agentejo/cockpit/issues/1150",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1256598562 | Build and verify docker image locally
closes #39
Build docker image locally and verify that angular application is being served. Succesful CI run:
https://github.com/agera-edc/EDCDataDashboard/runs/6709384524
Secrets will be deleted after merging.
Secrets removed
| gharchive/pull-request | 2022-06-01T17:32:01 | 2025-04-01T06:37:45.512533 | {
"authors": [
"marcgs"
],
"repo": "agera-edc/EDCDataDashboard",
"url": "https://github.com/agera-edc/EDCDataDashboard/pull/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
930934993 | How do you save ResNet-34 that is implemented from pages 478-479?
Hi Mr. Geron;
I am a teen fan of yours reading this book, and I have experimented with building a custom Residual Unit:
However whenever I do model.save("test.h5") , I get this error:
P.S. Additional question: I am currently using this custom ResNet to do some behavioral cloning on the VizDoom game environment. My data, which consists of recordings of me playing, seems to be really noisy, and no matter how I train my model the lowest binary cross-entropy loss is 0.659. Can you please give me some advice?
THANK YOU SOO MUCH
Your Fan
Wait, I noticed that the loss of 0.659 is when I intentionally overfit my model on only 32 instances; if I train it on 10 entire videos the loss becomes 0.71. Sorry for the misunderstanding!
Hi @OrdinaryHacker101 ,
Thanks for your question (and for your kind words!).
The first issue might be a TensorFlow bug. Could you please provide a minimal code example that reproduces the bug? Also, perhaps try using the file extension .ckpt instead of .h5 and see if that solves the issue?
If I understand correctly, the second issue is solved, right?
@ageron THANK YOU MR. GERON, I solved both issues: I found out that it worked when I didn't save certain things, e.g. skip layers, etc.
| gharchive/issue | 2021-06-27T13:44:45 | 2025-04-01T06:37:45.517835 | {
"authors": [
"OrdinaryHacker101",
"ageron"
],
"repo": "ageron/handson-ml2",
"url": "https://github.com/ageron/handson-ml2/issues/450",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
762091839 | Windows 10 - Vagrant 2.2.14 - HostsUpdater.rb:152:in `+': no implicit conversion of nil into String
As in #190, when using Vagrant 2.2.14 it throws an error about the conversion of nil to String. It works with Vagrant 2.2.10; no info about the in-between versions at this time.
I can confirm the beta version working.
| gharchive/issue | 2020-12-11T08:51:19 | 2025-04-01T06:37:45.519541 | {
"authors": [
"rogerpfaff"
],
"repo": "agiledivider/vagrant-hostsupdater",
"url": "https://github.com/agiledivider/vagrant-hostsupdater/issues/192",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
202347572 | Documentation Error
In the tutorial a wrong import statement is used.
Found in the ModelDataSource section
from graphos.sources.models import ModelDataSource
should in fact be
from graphos.sources.model import ModelDataSource
@simonsmiley Its changed on master now. Thanks!
| gharchive/issue | 2017-01-21T23:41:55 | 2025-04-01T06:37:45.523422 | {
"authors": [
"akshar-raaj",
"simonsmiley"
],
"repo": "agiliq/django-graphos",
"url": "https://github.com/agiliq/django-graphos/issues/91",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
592960012 | Improve logging system
Add a logging system that has multiple features
Configured via environment variables
Implemented winston as default logging, and LogDNA for production logging.
Other transports can be implemented on demand.
| gharchive/issue | 2020-04-02T22:39:10 | 2025-04-01T06:37:45.529240 | {
"authors": [
"agnjunio"
],
"repo": "agnjunio/albion-killbot",
"url": "https://github.com/agnjunio/albion-killbot/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1909942880 | Setup vitepress
close #5
LGTM
| gharchive/pull-request | 2023-09-23T17:26:11 | 2025-04-01T06:37:45.529960 | {
"authors": [
"aruay99",
"yamyam263"
],
"repo": "agnyz/elysia-realworld-example-app",
"url": "https://github.com/agnyz/elysia-realworld-example-app/pull/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1966695001 | 🛑 SIHE Canvas is down
In ea6c7f8, SIHE Canvas (https://canvas.sheridan.edu.au) was down:
HTTP code: 503
Response time: 1778 ms
Resolved: SIHE Canvas is back up in 45d11d3 after 5 minutes.
| gharchive/issue | 2023-10-28T18:23:29 | 2025-04-01T06:37:45.541227 | {
"authors": [
"agrez"
],
"repo": "agrez/upptime",
"url": "https://github.com/agrez/upptime/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2590748287 | yourgov: additional steps setup
Move the current step 7, 'confirm and submit', to step 10 and create placeholder steps for 7, 8, 9. There are placeholders for state setters, schemas, validators, confirm steps, and pages.
View preview
Checklist
Preflight
[x] Prefix the PR title with the slug of the package or component - e.g. accordion: Updated padding or docs: Updated header links
[x] Describe the changes clearly in the PR description
[x] Read and check your code before tagging someone for review
[ ] Create a changeset file by running yarn changeset. Learn more about change management.
Testing
[ ] Manually test component in various modern browsers at various sizes (use Browserstack)
[ ] Manually test component in various devices (phone, tablet, desktop)
[ ] Manually test component using a keyboard
[ ] Manually test component using a screen reader
[ ] Manually tested in dark mode
[ ] Component meets Web Content Accessibility Guidelines (WCAG) 2.1 standards
[ ] Add any necessary unit tests (HTML validation, snapshots etc)
[ ] Run yarn test to ensure tests are passing. If required, run yarn test -u to update any generated snapshots.
Documentation
[ ] Create or update documentation on the website
[ ] Create or update stories for Storybook
[ ] Create or update stories for Playroom snippets
Creating new component
[ ] Document the component for the website (docs/overview.mdx and docs/code.mdx at a minimum)
[ ] Preconstruct entrypoint has been created (run yarn in the root of the repo to do this)
[ ] Changeset file includes a minor
[ ] Export components for docs site and Playroom (docs/components/designSystemComponents.tsx)
[ ] Add component to Kitchen Sink (.storybook/stories/KitchenSink.tsx)
[ ] Add snippets to Playroom (docs/playroom/snippets.ts)
[ ] Add pictogram to Docs (docs/components/pictograms/index.tsx)
Need to update references in 2 files so the last step is blocked:
if (step.formStateKey === 'step7' && !canConfirmAndSubmit) return 'blocked';
should be
if (step.formStateKey === 'step10' && !canConfirmAndSubmit) return 'blocked';
| gharchive/pull-request | 2024-10-16T06:08:02 | 2025-04-01T06:37:45.550306 | {
"authors": [
"ChrisLaneAU",
"stowball"
],
"repo": "agriculturegovau/agds-next",
"url": "https://github.com/agriculturegovau/agds-next/pull/1837",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1830394603 | Not in the app store ;)
I am just creating this so that I don't miss when/if it gets into the Apple appstore ;) :) :). Thanks for creating this app. It really looks amazing. The history graph will be very useful.
If you charge $ for it, I will certainly buy it just for the convenience of it being easily available without the need to sideload it in developer mode.... which I can't do on my company phone.
Subscribed! ;)
@agibson2 https://apps.apple.com/us/app/ironos-companion/id6469055544 😊
Thanks to Pine Store, they provided us their App Store account so we were able to publish it.
Now we just have the Play Store left!
Still waiting for the play store
@gamelaster any updates on getting it on the Play Store?
@gamelaster any updates on getting it on the Play Store?
In upcoming weeks, I will try to deploy the app to Google Play. Sorry, was too busy :(
Sorry, January was too busy, will try in upcoming days.
https://play.google.com/store/apps/details?id=dev.eduardom.ironos_companion it's finally published! Sorry for waiting.
Awesome, I'm going to close this issue now then!
having it on F-Droid would also be nice, I prefer installing open source apps from there whenever possible
https://github.com/aguilaair/IronOS_Companion/issues/11
| gharchive/issue | 2023-08-01T03:27:54 | 2025-04-01T06:37:45.569831 | {
"authors": [
"TheBest6337",
"agibson2",
"aguilaair",
"gamelaster",
"lunaneff"
],
"repo": "aguilaair/IronOS_Companion",
"url": "https://github.com/aguilaair/IronOS_Companion/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
715521519 | added c# recursion algorithm sample (#788)
Assigned Issue Number ?
#788
Language Opted ?
c#
Number of files added or updated ?
1
Name of these files ?
Algorithms => Recursion_Algorithm => Fibonacci.cs
example :
Algorithm => Search_Algorithm => Linear_Search => Linear_Search.py
Checklist
[x] I've read the contribution guidelines.
[x] I've referred the correct issue number.
[x] I've fill up this entire template correctly.
Invalid PR :warning:
| gharchive/pull-request | 2020-10-06T09:54:52 | 2025-04-01T06:37:45.578350 | {
"authors": [
"ahampriyanshu",
"ulieckstein"
],
"repo": "ahampriyanshu/algo_ds_101",
"url": "https://github.com/ahampriyanshu/algo_ds_101/pull/871",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
adjust viroconcom in the requirements.txt
In issue #41 the viroconcom import refers to a specific branch. These settings will probably be pushed to the master branch.
If that happens, we have to adjust the requirements.txt once @janlehmk has finished his work in the branch and pushed it to the master of viroconcom.
The requirements.txt is adjusted.
| gharchive/issue | 2018-04-16T17:45:03 | 2025-04-01T06:37:45.579570 | {
"authors": [
"topape"
],
"repo": "ahaselsteiner/virocon",
"url": "https://github.com/ahaselsteiner/virocon/issues/127",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2026502773 | How to link external library into zig_binary/library/test
Is #15 necessary to write something like:
zig_binary(
name = "my-bin",
main = "my-bin.zig",
deps = ["@FOO//:bar"],
)
where @FOO//:bar is an external C/C++ library ?
This seems indeed like a most welcome addition.
Is there any way to link in a C/C++ library into a zig_library/binary/test atm ?
In my trials I hit the error does not have mandatory providers: 'ZigPackageInfo'.
Thanks for the great work, looking forward to connecting my first toy programs with it!
Is #15 necessary to write something like:
Yes, that's correct. #15 was phrased as being about zig_binary specifically. But, by now the underlying implementation of zig_binary|library|test is the same. So, I've generalized the ticket to capture all of these.
This seems indeed like a most welcome addition.
I'd love to hear if you have some specific use-case and if you have any constraints about how this should work.
In particular, as #15 points out, Zig can link libraries in two different ways, either by filepath, or by -l flags. I.e. either some/path/to/libfoo.a, or -lfoo. Would your use-case have a preference for one or the other way of doing this?
Also, Bazel's cc_library targets capture both static and dynamic libraries at the same time and it is up to the downstream user to decide which to use. The cc_* rules offer control through a linkstatic attribute and otherwise prefer static linking over dynamic linking except for tests where it is the other way around. Now there is also cc_shared_library in Bazel.
Does your use-case have any constraints wrt a preference for static or dynamic linking?
At the moment my plan would be to start with a very simple implementation, exposing a cdeps attribute, passing libraries by filepath if possible, and preferring static linking.
zig_binary(
name = "my-bin",
main = "my-bin.zig",
cdeps = ["@FOO//:bar"],
)
Thanks for your reply, I don't have specific use cases yet beyond kicking the tires of C / zig interop under bazel; static library is the sensible thing for me to start with.
At the moment my plan would be to start with a very simple implementation, exposing a cdeps attribute, passing libraries by filepath if possible, and preferring static linking.
This would be great!
@persososo Thanks for sharing your thoughts on this! I'll close this as a duplicate of #15. Since #15 is already intended to cover this use-case.
| gharchive/issue | 2023-12-05T15:40:08 | 2025-04-01T06:37:45.626314 | {
"authors": [
"aherrmann",
"persososo"
],
"repo": "aherrmann/rules_zig",
"url": "https://github.com/aherrmann/rules_zig/issues/146",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
98232997 | Refactor common code.
Refactored repetitive settings loading into a utility function. I put it in a separate module to set up my next PR.
I did some basic tests to ensure that Run, RunAsPiped and Compile were still working.
Thanks
| gharchive/pull-request | 2015-07-30T18:34:58 | 2025-04-01T06:37:45.628009 | {
"authors": [
"aviaryan",
"selimb"
],
"repo": "ahkscript/SublimeAutoHotkey",
"url": "https://github.com/ahkscript/SublimeAutoHotkey/pull/22",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
command "corona -s cases -l 10" doesn't show Chinese data
The command corona -s cases -l 10 doesn't show Chinese data,
but corona china works fine.
Wow. Strange. Help solve this issue?
I think I fixed it, submitted a pull request!
thanks, it's ok now.
| gharchive/issue | 2020-04-03T01:19:30 | 2025-04-01T06:37:45.629454 | {
"authors": [
"Nezia1",
"ahmadawais",
"bingo8670"
],
"repo": "ahmadawais/corona-cli",
"url": "https://github.com/ahmadawais/corona-cli/issues/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
455173784 | Dynamic Blocks Support
Feature Request
Is your feature request related to a problem? Please describe.
Gutenberg allows the use of the 'render_callback' in php when registering a block type.
This isn't currently in use and makes it impossible to create dynamic blocks (like latest posts).
Describe the solution you'd like
Implement a way of having a render_callback for each block.
Using a render_callback function is very welcome here as well! I can not figure out how dynamic blocks with the REST API work, the withSelect and @wordpress/data package are barely documented (like almost the entire Gutenberg repo..) so the PHP fallback is something that I could really use :)
I guess for now a solution is to make separate plugins just for the blocks that need this. As long as CGB is only used for one block then the render_callback should work.
I can't find documentation on how to properly enqueue react to the save function, though I have seen references to it being possible.
Like here:
https://wordpress.stackexchange.com/questions/333532/how-to-add-javascript-function-in-the-save-of-gutenberg-block
https://www.youtube.com/watch?v=jauZCeLrGFA&t=1s
https://wp.zacgordon.com/2017/12/26/how-to-add-javascript-and-css-to-gutenberg-blocks-the-right-way-in-plugins-and-themes/
It would be great if this was included in create-guten-block, or at least added to documentation so it's clear on how to implement this. The php world is foreign to me, as I am more comfortable with react/javascript.
@Jebble I've started my own project to simplify WordPress development for myself. Not directly related to gutenberg block development, but it is also possible.
Until this dynamic block support is added to cgb, you might want to check out my wp-plugin-boilerplate.
You can simply add another entry file inside the wpds-scripts.config.js for your dynamic block and then enqueue the build script with a render_callback inside your plugin's PHP file.
This is not an advertisement, it's just to help those people who actually need to have access to the render callback, like myself 😉
I got a very simple solution that worked for me, you just need to add one more register_block_type with the same name you registered in your js and just put the render_callback option.
register_block_type(
    'custom-block/your-block', array(
        'render_callback' => 'block_render_callback'
    )
);

function block_render_callback($attributes, $content) {
    return 'Works!';
}
Did you try to do this with multiple blocks?
Did you try to do this with multiple blocks?
Yes, I have multiple blocks but just a few are dynamic.
Do you have a sample to show? I tried that but I still can't see the rendered code on the screen.
Did you put the same name in your javascript?
So... I just figured out the mistake that led to my previous question. Inside the plugin.php I can add the code that @rodrigowbazevedo showed before and, since the register_block_type has the same name as its JavaScript part, it works fine. My problem was a misconception about the way Create Guten Block works, I guess.
Did you try to do this with multiple blocks?
trying that now. It appears there is some extra support needed for render_callback with multiple blocks. Going to investigate this now and maybe I will figure it out and show an example.
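For the multiple-blocks case being discussed, a hypothetical sketch (block names and callback names are invented for illustration, not from CGB itself) is to keep a map of dynamic block names to callbacks and register them in a loop, so only the dynamic blocks get a render_callback:

```php
<?php
// Hypothetical map: block name (must match the name used in
// registerBlockType() on the JS side) => PHP render callback.
$dynamic_blocks = array(
    'custom-block/latest-posts' => 'render_latest_posts_block',
    'custom-block/author-box'   => 'render_author_box_block',
);

foreach ( $dynamic_blocks as $name => $callback ) {
    // Registering the same block name again on the PHP side
    // attaches the server-side render callback to it.
    register_block_type( $name, array(
        'render_callback' => $callback,
    ) );
}

function render_latest_posts_block( $attributes, $content ) {
    // Build markup dynamically, e.g. from a WP_Query.
    return '<p>Latest posts markup here</p>';
}

function render_author_box_block( $attributes, $content ) {
    return '<p>Author box markup here</p>';
}
```

Blocks that are fully static can stay JS-only; only the names in the map get server-side rendering.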
| gharchive/issue | 2019-06-12T12:03:51 | 2025-04-01T06:37:45.638523 | {
"authors": [
"Jebble",
"josias-r",
"megphillips91",
"noahneumark",
"olafghanizadeh",
"rhamses",
"rodrigowbazevedo"
],
"repo": "ahmadawais/create-guten-block",
"url": "https://github.com/ahmadawais/create-guten-block/issues/193",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
799427086 | The automated release is failing 🚨
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
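The fix the bot is asking for is to expose a valid npm token to the CI job. As one hedged illustration (the workflow step and secret names are assumptions for a GitHub Actions setup, not taken from this repo), the token is typically passed like this:

```yaml
# Hypothetical release step; NPM_TOKEN must hold an npm token with
# publish rights, and 2FA must be set to "Authorization only".
- name: Release
  run: npx semantic-release
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```

Other CI providers differ in syntax, but the same NPM_TOKEN environment variable is what semantic-release reads.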
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "
Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here is some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "Authorization and writes" level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
| gharchive/issue | 2021-02-02T16:16:17 | 2025-04-01T06:37:45.769870 | {
"authors": [
"ahmadnassri"
],
"repo": "ahmadnassri/node-glob-promise",
"url": "https://github.com/ahmadnassri/node-glob-promise/issues/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
416319064 | function kubectl() in README doesn't handle quoted arguments correctly
I posted on Stack Overflow about a difficult-to-diagnose error when working with kubectl here: https://stackoverflow.com/questions/54948471/jsonpath-range-not-working-when-using-kubectl
I traced the error down to having followed this repo's suggestion to overwrite kubectl with a function. It seems the function strips quotes from arguments when in some cases they are necessary.
Instead of
function kubectl() { echo "+ kubectl $@"; command kubectl $@; }
how about this?
function kubectl() { echo "+ kubectl $@"; command kubectl "$@"; }
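The difference the quotes make is easy to demonstrate with a small stand-in function (the names `demo`, `unquoted`, and `quoted` below are made up for illustration):

```shell
# Stand-in that prints each argument it receives on its own line
demo() { printf '<%s>\n' "$@"; }

unquoted() { demo $@; }     # unquoted $@: arguments get word-split
quoted()   { demo "$@"; }   # quoted "$@": arguments preserved exactly

unquoted 'a b' c   # prints <a>, <b>, <c> -- the quoted 'a b' was split
quoted   'a b' c   # prints <a b>, <c>    -- arguments kept intact
```

With the unquoted form, an argument containing spaces (such as a JSONPath expression) is split into several arguments before reaching kubectl, which explains the hard-to-diagnose errors.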
I recall that wasn't working under certain circumstances, but that's what shellcheck was recommending, too.
If you tested thoroughly, I’m happy to accept a PR.
The issue is further complicated with commands such as this -
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090
results in...
+ kubectl -n istio-system port-forward + kubectl -n istio-system get pod -l app=prometheus -o jsonpath={.items[0].metadata.name} prometheus-67599bf55b-sq822 9090:9090 Error: unknown shorthand flag: 'l' in -l
Not sure what the fix is. I'd recommend removing the function from the README.
BTW ahmetb. Great project. The aliases are really useful. Many thanks for your efforts.
Since it’s just in the readme and not in the code, I am ok with keeping it despite it half-works. I’d be interesting in figuring out what the actual fix would be though.
"$@" should be working. Have you tried it @RossWhitehead?
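For the nested command-substitution case above, one plausible explanation (an assumption, not verified against the original README) is that the wrapper's `echo` trace goes to stdout, so `$(...)` captures the trace text along with the pod name. Redirecting the trace to stderr avoids that. A runnable sketch, using a hypothetical `fake_kubectl` stand-in so no cluster is needed:

```shell
# Hypothetical stand-in for the real kubectl binary
fake_kubectl() { echo "prometheus-67599bf55b-sq822"; }

kubectl() {
  echo "+ kubectl $*" >&2   # trace to stderr, not stdout...
  fake_kubectl "$@"         # ...so $( ) captures only real output
}

pod=$(kubectl get pod -l app=prometheus)
echo "$pod"                 # the trace did not leak into $pod
```

In the real wrapper the last line of the function would be `command kubectl "$@"` instead of the stand-in.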
| gharchive/issue | 2019-03-01T23:25:32 | 2025-04-01T06:37:45.787853 | {
"authors": [
"RossWhitehead",
"ahmetb",
"axesilo"
],
"repo": "ahmetb/kubectl-aliases",
"url": "https://github.com/ahmetb/kubectl-aliases/issues/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
781102574 | add artnet protocol support
It's a brilliant piece of software and is well written, but I am using Art-Net in some of my devices; it would be fun to use LedFx with Art-Net. I have a setup with 3000 WS2811 LEDs.
+1 I want to use Resolume and it only supports Art-Net. So when I decide to use LedFx I always need to change the WLED config and restart. It gets annoying.
+1
| gharchive/issue | 2021-01-07T07:35:46 | 2025-04-01T06:37:45.790943 | {
"authors": [
"LoneWalkerWolf",
"alexandremix",
"johermohit"
],
"repo": "ahodges9/LedFx",
"url": "https://github.com/ahodges9/LedFx/issues/187",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1338339198 | 🛑 Dashboard is down
In 795cb53, Dashboard (https://guard-bot.ahq-alt.repl.co/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Dashboard is back up in 3c1320f.
| gharchive/issue | 2022-08-14T20:13:50 | 2025-04-01T06:37:45.801764 | {
"authors": [
"ahqsoftwares"
],
"repo": "ahqsoftwares/uptime-2",
"url": "https://github.com/ahqsoftwares/uptime-2/issues/383",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1188406229 | 🛑 Bot is down
In fa77ea2, Bot (https://ahq-miness.ahqsecret.repl.co/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Bot is back up in d6a46d8.
| gharchive/issue | 2022-03-31T17:25:12 | 2025-04-01T06:37:45.804172 | {
"authors": [
"ahqsoftwares"
],
"repo": "ahqsoftwares/uptime-2",
"url": "https://github.com/ahqsoftwares/uptime-2/issues/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1365890106 | 🛑 Dashboard is down
In b70f3b8, Dashboard (https://guard-bot.ahq-alt.repl.co/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Dashboard is back up in 12eb032.
| gharchive/issue | 2022-09-08T09:29:03 | 2025-04-01T06:37:45.806751 | {
"authors": [
"ahqsoftwares"
],
"repo": "ahqsoftwares/uptime-2",
"url": "https://github.com/ahqsoftwares/uptime-2/issues/623",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
269333756 | cat instead of echo
ahrasis,
line 54
cat "root" > /etc/incron.allow
should be
echo "root" > /etc/incron.allow
You are correct, it should be the latter. It is fixed now.
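For reference, the difference between the two: `echo` writes its argument string to the file, while `cat` treats `"root"` as the name of a file to read. A small runnable sketch (using a temp directory instead of `/etc`):

```shell
tmp=$(mktemp -d)

echo "root" > "$tmp/incron.allow"   # writes the literal word "root"
cat "$tmp/incron.allow"             # prints: root

# By contrast, `cat "root" > "$tmp/incron.allow"` would try to READ a
# file named "root" and fail with "No such file or directory" unless
# such a file happens to exist in the current directory.
```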
| gharchive/issue | 2017-10-28T17:12:54 | 2025-04-01T06:37:45.808030 | {
"authors": [
"ahrasis",
"pannet1"
],
"repo": "ahrasis/LE4ISPC",
"url": "https://github.com/ahrasis/LE4ISPC/issues/2",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1635053319 | Put new chat button same level as the send button
Describe the feature
As the title suggests, it would be a better user experience (otherwise I need to click more and then New Chat), and a reasonable implementation considering how the API works: it needs to append old messages, so I have to clear the conversation quite often.
Hi - having the Start a new chat button right next to the Send message button could easily be destructive and cause an unwanted user experience (i.e. accidentally clicking it while trying to send a message)
Did you see the controls up at the toolbar for Genie: Start a new chat command?
There is also a command for Genie: Start a new chat, for which you could assign a keyboard shortcut if you find yourself creating new chats very often, that'd be very useful.
I see. ty for the tip. Didn't notice that plus button upper there before.
I still think it could be better to put that feature beside the send button; it works like that in Bing chat. Maybe autofocus could be added to this also. I don't know; it's your product, it's just a personal opinion.
Anyway, gj on this extension.
| gharchive/issue | 2023-03-22T04:58:48 | 2025-04-01T06:37:45.828142 | {
"authors": [
"genieai-info",
"xhy279"
],
"repo": "ai-genie/chatgpt-vscode",
"url": "https://github.com/ai-genie/chatgpt-vscode/issues/8",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
836248300 | Merge stable back into master
We can merge this but avoid deleting the branch in order to apply the tidying up changes to master efficiently.
Unnecessary since the cleanup branch was created from stable.
| gharchive/pull-request | 2021-03-19T18:12:48 | 2025-04-01T06:37:45.829275 | {
"authors": [
"herbiebradley"
],
"repo": "ai4er-cdt/gtc-biodiversity",
"url": "https://github.com/ai4er-cdt/gtc-biodiversity/pull/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
717229976 | Contact details
Add contact details for any queries
Those who have not been provided with my contact details personally can use GitHub to post issues
| gharchive/issue | 2020-10-08T10:43:03 | 2025-04-01T06:37:45.862629 | {
"authors": [
"aidan-parkinson"
],
"repo": "aidan-parkinson/anonymous-feedback",
"url": "https://github.com/aidan-parkinson/anonymous-feedback/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
161989054 | [1.9.4] - Rendering block entity crash
The portable tank causes a crash when put into a tinker's construct crafting station.
We believe the issue is a rendering issue due to the fact that the crafting station renders items on top of the table.
https://gist.github.com/Lazarix/006ac9201aeffdd2cb1c31458b634734
What Mekanism release are you using?
latest code from today, 1.9.4
TiCon's crafting stations are known to crash with mod items; you should probably disable the render.
Fixed, thanks
| gharchive/issue | 2016-06-23T18:19:14 | 2025-04-01T06:37:45.864907 | {
"authors": [
"Lazarix",
"Xiaminou",
"aidancbrady"
],
"repo": "aidancbrady/Mekanism",
"url": "https://github.com/aidancbrady/Mekanism/issues/3399",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1823080913 | bug: Typescript files are not being compiled, when I add million.webpack() to my webpack 5 configuration
Hi @aidenybai
Filing this issue again. The original can be seen here. #476
I tried with both v2.5.0 and latest which installed 2.5.2-beta.0 and get the following errors.
using windows 11, VS Code and:
"million": "^2.5.2-beta.0",
"typescript": "^5.1.6",
"webpack": "^5.88.2",
"webpack-cli": "^5.1.4",
"webpack-dev-server": "^4.15.1"
Can we get a repro please @TerrySlack? It'll be easier to help us see the issue and then figure out possible solutions to it.
@tobySolutions
Here you go:
https://github.com/TerrySlack/millionjs-bug
@tobySolutions were you able to detect the issue(s) in the repo I provided?
@tobySolutions ?
@tobySolutions @TerrySlack @aidenybai any updates on this?
@norbertorok92 I've pinged them a few times since providing a repo, with no response.
I'd love to use this library, but at this point, I'm moving on. If they ever decide to put out an update, I'll revisit. I can't wait any longer
Hmm, sorry about this @TerrySlack and @norbertorok92.
Can you install the latest version please and then retry this. Also, I'd love to see your configuration file to see how you set up Million.
@tobySolutions
You can view it in the repo you requested, amd I provided in the comment from Aug 1
Really sorry for the late response. Norbert brought it up with the team too and I'm looking into it.
I'll keep you updated as I might not be able to look at this myself at the moment. I'm a bit under the weather.
Thanks for responding!
@tobySolutions How is it going? Any progress?
@tobySolutions How is it going? Any progress?
Hey Terry! Yep! Norbert was able to fix his bug, but I'm not sure if the same thing can apply to you.
I'll share it here soon so you can have a read through of his solution.
Hey Toby,
Well, that's ambiguous :)
So are you saying it doesn't work with Windows? Or Webpack? If it's a webpack thing, I can give it a try on Vite if necessary.
If it's my configuration in Webpack, I'm open to any suggestions your team has. I really like what I have read about million and can see a perfect use for it in an upcoming project.
I'm going to be hitting a hard deadline on whether to use Million or not.
Do you have a solution for windows, typescript and webpack 5?
If the repo I provided needs tweaks, that can happen. Is this a lib that works cross platform? If so, what tweaks need to be made to have it work with webpack 5 and typescript?
I love the concept of this lib and hope there is a solution.
@tobySolutions Well, when I get a github bot pointing out how stale this issue is, I realize that a solution isn't coming.
Too bad. Really love the concept.
Hey there @TerrySlack; that's just github actions being weird though. The issue is still actively under review
Hey @tobySolutions , culd I get an update on what is happening?
Are you still getting this @TerrySlack? 🤔🤔
I can't answer for @TerrySlack, but I'm noticing the same error in https://github.com/NDLANO/h5p-editor-topic-map/pull/641. By setting the transpileOnly: true option on ts-loader, the errors disappeared from the build, but the browser console is now saying that React is not defined.
Thanks for looking into this!
@tobySolutions
I haven't tried using your library since I filed this bug. I haven't heard of any uodates regarding it. Have things been fixed? Curious, did you use the repo I provided to test things out?
Terry
I'm having the same issue. It happens in both auto and manual mode. Without this plugin it runs fine.
I'm executing
"start": "NODE_ENV=development webpack-dev-server --mode development",
related packages
"webpack": "^5.60.0",
"webpack-bundle-analyzer": "^4.5.0",
"webpack-cli": "^4.9.1",
"webpack-dev-server": "^4.3.1",
"webpack-server": "^0.1.2",
"million": "^2.6.4",
"typescript": "^4.4.4",
webpack.config.js
const dotenv = require('dotenv').config()
const Dotenv = require('dotenv-webpack')
const HtmlWebpackPlugin = require('html-webpack-plugin')
const ModuleFederationPlugin = require('webpack/lib/container/ModuleFederationPlugin')
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer')
const MiniCssExtractPlugin = require('mini-css-extract-plugin')
const path = require('path')
const TerserPlugin = require('terser-webpack-plugin')
const webpack = require('webpack')
const million = require("million/compiler");
const APP_MODE = process.env.NODE_ENV || 'development'
const APP_PORT = process.env.APP_PORT || '3000'
const isProduction = process.env.NODE_ENV === 'production'
module.exports = {
entry: './src/index.tsx',
mode: APP_MODE,
output: {
filename: '[name].chunk.js',
path: path.join(__dirname, 'build'),
clean: true,
},
devServer: {
port: APP_PORT,
client: {
overlay: {
errors: true,
warnings: false,
},
},
allowedHosts: ['*', 'dev.cor.com', 'cor.com'],
},
module: {
rules: [
{
/* The following line to ask babel
to compile any file with extension
.js */
test: /\.js?$/,
/* exclude node_modules directory from babel.
Babel will not compile any files in this directory */
exclude: [/node_modules/],
// To Use babel Loader
loader: 'babel-loader',
options: {
presets: ['@babel/preset-env' /* to transform any advanced ES to ES5 */, '@babel/preset-react'], // to compile react to ES5
},
},
{
test: [/\.tsx?$/, /\.ts?$/],
use: 'ts-loader',
exclude: /node_modules/,
},
{
// The following rule is set to add all css compiler rules
test: /\.css$/i,
use: [isProduction ? MiniCssExtractPlugin.loader : 'style-loader', 'css-loader'],
},
{
test: /\.svg$/,
use: [
{
loader: 'svg-url-loader',
options: {
limit: 10000,
},
},
],
},
],
},
resolve: {
extensions: ['.jsx', '.js', '.ts', '.tsx', '.json', '.css', '.jpg', '.jpeg', '.png', '.svg'],
fallback: { 'process/browser': require.resolve('process/browser') },
alias: {
react: path.resolve('./node_modules/react'),
},
},
plugins: [
million.webpack(),
// Dot env plugin to get .env variables to npm process
new Dotenv(),
new webpack.ProvidePlugin({
Buffer: ['buffer', 'Buffer'],
process: 'process/browser',
}),
new MiniCssExtractPlugin({
filename: isProduction ? `static/css/[name].[contenthash].css` : '[name].css',
chunkFilename: `static/css/[id].[contenthash].css`,
}),
new HtmlWebpackPlugin({
template: './public/index.html',
}),
],
optimization: {
minimize: true,
minimizer: [new TerserPlugin({ parallel: true })],
splitChunks: {
// include all types of chunks
chunks: 'all',
cacheGroups: {
vendor: {
name: 'node_vendors',
test: /[\\/]node_modules[\\/]/,
chunks: 'all',
},
},
},
},
}
isProduction && (module.exports['devtool'] = 'source-map')
isProduction && module.exports['plugins'].push(new webpack.optimize.AggressiveMergingPlugin())
Hey there! Thanks for this, I think a reproduction will be much better actually. Thank you!
Forget it! just add transpileOnly: true, and it builds. Thanks ❤️
Hmm, thank you very much @9gustin!! I should definitely document this. I'll put a tab on this.
| gharchive/issue | 2023-07-26T20:03:54 | 2025-04-01T06:37:45.886725 | {
"authors": [
"9gustin",
"TerrySlack",
"boyum",
"norbertorok92",
"tobySolutions"
],
"repo": "aidenybai/million",
"url": "https://github.com/aidenybai/million/issues/496",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
400391468 | Asciidoc setup with docToolchain container
the docToolchain submodule has been replaced with a container
./gradlew generateDocs
will now generate the docs through the docToolchain container.
the project now also has its independent htmlSanityCheck-task back:
./gradlew htmlSanityCheck
PS: the docToolchain container is pulled from the docker hub - it is defined in this repository https://github.com/docToolchain/docker-image and build on the docker hub - on change.
It already contains all dependencies which makes the execution quite fast (when it is already downloaded)
I will look into this - but currently (April 2019) I don't find the time ... so please be patient...
since this is outdated, let's close it. ...and you can also remove the source branch to clean up the repo.
| gharchive/pull-request | 2019-01-17T17:47:44 | 2025-04-01T06:37:45.963513 | {
"authors": [
"gernotstarke",
"rdmueller"
],
"repo": "aim42/htmlSanityCheck",
"url": "https://github.com/aim42/htmlSanityCheck/pull/263",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
413437576 | 403 Error when aiohttp resources are not fully merged
Aiohttp merges resources, when they are added in the correct order:
When using aiohttp-cors, resources have to be fully merged, or you end up having two resources for the same path: one for the GET method and one for the POST. The latter will not be considered when aiohttp-cors answers the OPTIONS request. It seems that aiohttp-cors supports only a single resource per path.
It would be nice if a hint about this could be added to the docs, if the support of multiple resources for the same path is too complex.
Requirements:
aiohttp==3.0.6
aiohttp_cors==0.7.0
Replicate:
import aiohttp_cors
from aiohttp import web

async def handler(request):
    pass

app = web.Application()
app.router.add_route('POST', '/a', handler)
app.router.add_route('GET', '/b', handler)
app.router.add_route('PUT', '/a', handler)
# Configure default CORS settings.
cors = aiohttp_cors.setup(app, defaults={
"*": aiohttp_cors.ResourceOptions(
allow_credentials=True,
expose_headers="*",
allow_headers="*")
})
# Configure CORS on all routes.
for route in list(app.router.routes()):
cors.add(route)
@FabianElsmer we faced the same issue; allow_methods="*" could help you.
you could try this:
cors = aiohttp_cors.setup(app, defaults={
    "*": aiohttp_cors.ResourceOptions(
        allow_credentials=True,
        expose_headers="*",
        allow_headers="*",
        allow_methods="*"
    )
})
| gharchive/issue | 2019-02-22T14:51:41 | 2025-04-01T06:37:45.976662 | {
"authors": [
"FabianElsmer",
"IgorKuzmenko"
],
"repo": "aio-libs/aiohttp-cors",
"url": "https://github.com/aio-libs/aiohttp-cors/issues/226",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1159807581 | Broken connections are reused in connection pools
Describe the bug
If an application loses network connectivity, it accumulates broken connections to the Redis server and tries to reuse them to execute commands. This is due to the fact that the connect() methods do not check whether an exception is set in the StreamReaders:
class Connection:
async def connect(self):
"""Connects to the Redis server if not already connected"""
if self.is_connected: # <-- self._reader may contain an exception (e.g. OSError No route to host).
return
class SentinelManagedConnection:
async def connect(self):
if self._reader: # <-- self._reader may contain an exception (e.g. OSError No route to host).
return # already connected
These cases should be handled.
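A minimal, hypothetical sketch in Python of the kind of check this report asks for. The helper name (reader_is_usable) and its placement are my own illustration, not aioredis's actual code; it only shows how a StreamReader that already carries a transport error can be detected before the connection is treated as "already connected":

```python
import asyncio

def reader_is_usable(reader) -> bool:
    """Return False if the StreamReader can no longer serve requests.

    A reader whose exception is set (e.g. OSError: No route to host)
    belongs to a broken connection, so connect() should re-establish
    the connection instead of returning early.
    """
    if reader is None:
        return False
    if reader.exception() is not None:  # stored transport error
        return False
    return not reader.at_eof()  # peer already closed the stream
```

With a check like this, the early return would only fire while the reader is still healthy, for example `if self.is_connected and reader_is_usable(self._reader): return`. Again, this is only a sketch of the idea.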
To Reproduce
Initialize base Redis client or Sentinel client, connect to Redis server
Send pings periodically
Turn off wi-fi
Look at ping errors
Turn on wi-fi
Look at ping errors
Expected behavior
There should be no errors in step 6
Logs/tracebacks
https://gist.github.com/evgenymarkov/1af3a5cb12887068cb9c6a72c09168ea
Python Version
3.9.10
aioredis Version
2.0.1
Additional context
No response
Code of Conduct
[X] I agree to follow the aio-libs Code of Conduct
https://github.com/aio-libs/aioredis-py/pull/1313
My issue may be related to https://github.com/aio-libs/aioredis-py/issues/1174
| gharchive/issue | 2022-03-04T16:10:49 | 2025-04-01T06:37:45.988255 | {
"authors": [
"evgenymarkov"
],
"repo": "aio-libs/aioredis-py",
"url": "https://github.com/aio-libs/aioredis-py/issues/1314",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1605175939 | Hierarchical planning notebook
Colab link (works as long as the hierarchical-notebook branch exists)
https://colab.research.google.com/github/aiplan4eu/unified-planning/blob/hierarchical-notebook/notebooks/Hierarchical_Planning.ipynb
@alvalentini You can use the notebook here for the hands-on session. A colab link is provided in the initial comment. I will update this branch to fix a few remaining inconsistencies in the notebook.
@arbimo Perfect, thanks!
@alvalentini This is now ok to be merged I believe.
| gharchive/pull-request | 2023-03-01T15:22:17 | 2025-04-01T06:37:45.990307 | {
"authors": [
"alvalentini",
"arbimo"
],
"repo": "aiplan4eu/unified-planning",
"url": "https://github.com/aiplan4eu/unified-planning/pull/349",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
996994252 | Connecting to https://dapp.robonomics.network/ with the wrong network selected on Metamask
Issue:
When connecting to https://dapp.robonomics.network/ with the wrong network selected on metamask (such as the Moonriver network being selected), an error message pops up:
The issue is that many users do not notice / understand the current error message "For work, you need to switch the network", and people have been asking how to solve this simple issue.
I think that the error text should be changed to be a bit more descriptive, and maybe the error message text should be a larger font.
An example of how the text can be changed:
"Your wallet is currently connected to the wrong network, please switch to the Ethereum network to continue."
Thanks for the suggestion! We are working now on improving this error handling
Ready! Thanks a lot for your contribution 🤖🦾🙏
| gharchive/issue | 2021-09-15T11:55:41 | 2025-04-01T06:37:45.993416 | {
"authors": [
"Leemo94",
"positivecrash"
],
"repo": "airalab/dapp.robonomics.network",
"url": "https://github.com/airalab/dapp.robonomics.network/issues/60",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1129220413 | Project 1 Review (Scapsulators)
Just ran your project, It was not clear how to run the project, had to figure out the branch myself.
I'm running the project from feature/3-user-management-service.
On build getting this error : -
Error: Unable to access jarfile target/UserManagement-0.0.1-SNAPSHOT.jar
Do I need to have some prerequisite setup? Let me know if I am doing something wrong.
Reviewed by Team Scapsulators (https://github.com/airavata-courses/scapsulators)
Hello Shubham,
Thank you for the review. We haven't merged all the branches to main yet. Have you already tried the remote Docker images?
Let me know if you are able to run.
Hello, I tried to run your project just now. I also found it difficult to follow the wiki document https://github.com/airavata-courses/DCoders/wiki/local-runbook, as mentioned by @shubhpatr
Can you provide steps to run remote docker images ?
Thanks.
Team Garuda
The commands are given on the same page. You need to have Docker installed on your machine (that's the prereq).
https://github.com/airavata-courses/DCoders/wiki/local-runbook#follow-the-below-steps-to-run-the-application-locally
User Management: Replace {docker_host} with the docker hostname
Rest of the applications runs as it is (copy and paste the command).
Got it, thanks.
Maybe a note about which branch to check out for each module would be helpful on https://github.com/airavata-courses/DCoders/wiki/local-runbook
I had to refer to other parts of the wiki to figure it out.
Thanks,
Team Garuda
Got it. Thanks for the review. Will make the changes. 👍
Just ran your project, It was not clear how to run the project, had to figure out the branch myself.
I'm running the project from feature/3-user-management-service.
On build getting this error : -
Error: Unable to access jarfile target/UserManagement-0.0.1-SNAPSHOT.jar
Do I need to have some prerequisite setup? Let me know if I am doing something wrong.
Reviewed by Team Scapsulators (https://github.com/airavata-courses/scapsulators)
Fixed the docker-compose issue. Could you please check if it is working for you now?
Resolved the docker-compose and MacBook M1 chip Docker issues.
| gharchive/issue | 2022-02-09T23:59:47 | 2025-04-01T06:37:46.003165 | {
"authors": [
"pranavacharya",
"shubhpatr",
"vinayakasg18"
],
"repo": "airavata-courses/DCoders",
"url": "https://github.com/airavata-courses/DCoders/issues/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
217365712 | When enforcing version_limit, order the version records by created_at
The details of the issue are in #940.
I split the commits out into visible, obvious parts. If you checkout the middle commit, db4c586, you can see the failure by running rake spec. I'm more than happy to squash these down, if the maintainers would like.
Thanks Dan. Initial review:
Please add a changelog entry under Unreleased -> Fixed
I think that the SQL generated by sibling_versions should not change, should remain unordered.
Unless you feel strongly about keeping it, I don't think we need the "here comes the crazy part" spec. It's a bit hard to follow. If we can replace it with a simpler spec, perhaps by inserting specific values of created_at, that'd be fine.
I think that the SQL generated by sibling_versions should not change, should remain unordered.
So, move the order call to the call site, in enforce_version_limit!? I can do that.
...replace it with a simpler spec, perhaps by inserting specific values of created_at, that'd be fine.
I'll give that a try. Agreed, that'll be much simpler.
@jaredbeck, I made the changes - the CHANGELOG, and moving the order out of sibling_versions.
I also changed the spec to create fake Version records. It's definitely shorter, but I'm not sure if it's easier to understand. Let me know what you think.
Looks good, thanks Dan!
Squashed offline into 300a16c
Thank you @jaredbeck! When do you think this fix will reach rubygems?
We're very close to the next release, 7.0.0. It could happen this week if all goes well.
| gharchive/pull-request | 2017-03-27T20:29:31 | 2025-04-01T06:37:46.009599 | {
"authors": [
"danbernier",
"jaredbeck"
],
"repo": "airblade/paper_trail",
"url": "https://github.com/airblade/paper_trail/pull/941",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
168476890 | Inspectable WebContents
After a blank setup I receive this message. Does anyone else have the same?
Caravel Tag 0.10.0
Thanks Philipp
Which browser is this? It looks more like some remote debugging option of your browser than something related to caravel.
Seems invalid, not enough details, closing
| gharchive/issue | 2016-07-30T17:03:00 | 2025-04-01T06:37:46.025120 | {
"authors": [
"mistercrunch",
"philippfrenzel",
"xrmx"
],
"repo": "airbnb/caravel",
"url": "https://github.com/airbnb/caravel/issues/868",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
159266745 | Add spanish translation link
Hi! I've translated this to Spanish, could you add the link?
Absolutamente, gracias!
| gharchive/pull-request | 2016-06-08T20:45:36 | 2025-04-01T06:37:46.025950 | {
"authors": [
"ismamz",
"ljharb"
],
"repo": "airbnb/css",
"url": "https://github.com/airbnb/css/pull/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
539667472 | TypeError: Cannot read property 'Events' of undefined
setup.js
var Enzyme = require('enzyme')
var Adapter = require('enzyme-adapter-react-16')
Enzyme.configure({ adapter: new Adapter() })
When I run jest, it got this error:
TypeError: Cannot read property 'Events' of undefined
Version
"react": "~16.8.6",
"react-dom": "~16.8.6",
"enzyme": "^3.10.0"
"jest": "^24.5.0",
"enzyme-adapter-react-16": "^1.15.1"
"react-addons-test-utils": "^15.6.2",
Why are you using react-addons-test-utils? That shouldn't even be installed if you're using react 16.
Can you provide more info about the error? Is there a stack trace? Are you getting it running tests, or even with no tests at all?
The question is where is Events being used?
Hopefully the stack trace would reveal that.
Closing for now, happy to reopen if it can be reproduced or if more info is provided.
| gharchive/issue | 2019-12-18T13:03:15 | 2025-04-01T06:37:46.028736 | {
"authors": [
"SarpongAbasimi",
"ljharb",
"yanxiaosong0902"
],
"repo": "airbnb/enzyme",
"url": "https://github.com/airbnb/enzyme/issues/2308",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
675213227 | Pycocotools alternative
Pycocotools is causing a lot of problems. It's currently mainly used for calculating metrics, it would be nice to find an alternative.
This is a strong alternative. Please comment if you have a candidate =)
I have seen this. It is a good alternative but still lacks an API. See the readme for usage, there is no way we can actually compute metrics and add them.
Best alternatives against PyCoco are, lightning metrics, raise a feature request for metrics package in torch. They had this request before too.
pycocotools is currently also being used on Polygon.to_mask, we also need to find an alternative way of doing that.
Because of #510 masks are now heavily dependent on pycocotools because of the heavy use of "Encoded RLE"
We don't have any bandwidth to think about removing this dependency for the time being
| gharchive/issue | 2020-08-07T18:57:01 | 2025-04-01T06:37:46.170296 | {
"authors": [
"FraPochetti",
"lgvaz",
"oke-aditya"
],
"repo": "airctic/icevision",
"url": "https://github.com/airctic/icevision/issues/283",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
658775340 | 关于台湾的行政区划代码有重复的
"710300": { "710301": "仁爱区", "710302": "信义区", "710303": "中正区", "710304": "暖暖区", "710305": "安乐区", "710307": "七堵区" }, "710400": { "710301": "中区", "710302": "东区", "710303": "南区", "710304": "西区", "710305": "北区", "710306": "北屯区", "710307": "西屯区", "710308": "南屯区" },
Should the 中区 (Central District) below be 710401?
This is indeed a bug.
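As a quick illustration, here is a hedged Python sketch for spotting this kind of inconsistency programmatically. The data layout (a mapping from a city code to its district codes) is inferred from the snippet above, and the assumption that a district code shares its city's 4-digit prefix is mine:

```python
def find_mismatched_codes(area):
    """Return (city_code, district_code) pairs whose prefixes disagree.

    E.g. a '710301' district listed under city '710400' is reported,
    because it does not start with the parent's '7104' prefix.
    """
    bad = []
    for city, districts in area.items():
        prefix = city[:4]
        for district in districts:
            if not district.startswith(prefix):
                bad.append((city, district))
    return bad
```

Run against the snippet above, this flags every district under `710400` that still carries the `7103` prefix.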
"710300": { "710301": "仁爱区", "710302": "信义区", "710303": "中正区", "710304": "暖暖区", "710305": "安乐区", "710307": "七堵区" }, "710400": { "710301": "中区", "710302": "东区", "710303": "南区", "710304": "西区", "710305": "北区", "710306": "北屯区", "710307": "西屯区", "710308": "南屯区" },
Should the 中区 (Central District) below be 710401?
Bumping this. Any updates?
These Taiwan division entries seem problematic in general. For example, 南投县 (Nantou County) is not in the list.
Where does this data come from? Am I mistaken?
This is an automatic vacation reply from QQ Mail. Hello, I am currently on vacation and cannot reply to your email in person. I will reply as soon as possible after my vacation ends.
I checked with the Ministry of Civil Affairs. Data for Taiwan Province is currently unavailable. Understood; I'll check again after reunification.
| gharchive/issue | 2020-07-17T02:39:41 | 2025-04-01T06:37:46.223001 | {
"authors": [
"13251511962",
"Fox-54500",
"Libohan12",
"airyland",
"marvinking",
"shadyfan"
],
"repo": "airyland/china-area-data",
"url": "https://github.com/airyland/china-area-data/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1709064855 | ⚠️ Turnos 24x7 has degraded performance
In 767fa9a, Turnos 24x7 (https://reservas-dev.alternativasinteligentes.com/) experienced degraded performance:
HTTP code: 200
Response time: 968 ms
Resolved: Turnos 24x7 performance has improved in 37aab87.
| gharchive/issue | 2023-05-14T20:32:47 | 2025-04-01T06:37:46.225655 | {
"authors": [
"AlternativasInteligentes"
],
"repo": "aisa-status/status-upptime-dev",
"url": "https://github.com/aisa-status/status-upptime-dev/issues/2211",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1729195688 | ⚠️ Turnos 24x7 has degraded performance
In 1fb3273, Turnos 24x7 (https://reservas-dev.alternativasinteligentes.com/) experienced degraded performance:
HTTP code: 200
Response time: 957 ms
Resolved: Turnos 24x7 performance has improved in acd9d5a.
| gharchive/issue | 2023-05-28T05:32:32 | 2025-04-01T06:37:46.228181 | {
"authors": [
"AlternativasInteligentes"
],
"repo": "aisa-status/status-upptime-dev",
"url": "https://github.com/aisa-status/status-upptime-dev/issues/2564",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1461939804 | ⚠️ Turnos 24x7 has degraded performance
In 73dbf74, Turnos 24x7 (https://reservas-dev.alternativasinteligentes.com/) experienced degraded performance:
HTTP code: 200
Response time: 1423 ms
Resolved: Turnos 24x7 performance has improved in f356a05.
| gharchive/issue | 2022-11-23T15:13:33 | 2025-04-01T06:37:46.230870 | {
"authors": [
"AlternativasInteligentes"
],
"repo": "aisa-status/status-upptime-dev",
"url": "https://github.com/aisa-status/status-upptime-dev/issues/883",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1609096743 | Make function std::ping::ping concurrency safe
The function ping so far did not check the ident received. As a consequence the function ping might returned wrong results when used simultaneously from different threads, see https://github.com/aisk/ping/issues/6 for more details.
This commit fixes the issue by checking the ident of the reply and comparing it with the expected ident.
Changes:
Changed type of timeout from Option<Duration> to Duration. This avoids an additional error check in new code.
Receive echoes in a loop until either the correct ident is received or a timeout occurs.
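The receive loop described in the second change can be sketched generically. The following is an illustrative Python version of the pattern only, not the crate's actual Rust code; `recv` and the `ident` field stand in for whatever the real implementation uses:

```python
import time

def receive_matching(recv, expected_ident, timeout):
    """Read echo replies until one carries our ident or the deadline passes.

    Replies for other threads' pings (different ident) are discarded
    instead of being returned as our result.
    """
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError("no matching echo reply before timeout")
        reply = recv(remaining)  # blocks for at most `remaining` seconds
        if reply.ident == expected_ident:
            return reply
```

The key point of the pattern is that a mismatched reply neither returns nor resets the deadline; the loop keeps draining replies until time runs out.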
I have tested the changes and they seem to work for me. Let me know if something still needs to be changed.
Great work! I have only a small issue, which is pointed out up here. Also, it would be better to add a multiple-thread test case like what you posted in #6
Thanks for the review, sounds fine. I am currently a bit occupied, give me a couple of days to update the PR.
These commits are already merged in #9.
| gharchive/pull-request | 2023-03-03T18:43:52 | 2025-04-01T06:37:46.236861 | {
"authors": [
"aisk",
"michael-hartmann"
],
"repo": "aisk/ping",
"url": "https://github.com/aisk/ping/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
974361630 | Sending request headers
We have a use case where it is required to set certain request headers, based on a key or value from Kafka.
Would it be possible to add this feature? I can see several ways to do it.
Automatically use fields in the record key as headers. This may make sense but it is prone to sending unnecessary headers.
Specify a certain prefix for headers in the record key, e.g. http_header_*. When a field matches this pattern, send it as a request header, removing the http_header_ prefix.
Instead of sending the value as a request body, use nested fields "headers" and "body" to describe the payload.
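To make the second option concrete, here is a hedged sketch (in Python for illustration; the connector itself is written in Java). The `http_header_` prefix comes from the proposal above; the function name and record shape are my own assumptions:

```python
PREFIX = "http_header_"

def split_headers(record_key):
    """Split a record key's fields into HTTP headers and remaining fields.

    A field such as 'http_header_X-Trace-Id' becomes the request header
    'X-Trace-Id'; fields without the prefix are left untouched.
    """
    headers, rest = {}, {}
    for field, value in record_key.items():
        if field.startswith(PREFIX):
            headers[field[len(PREFIX):]] = value
        else:
            rest[field] = value
    return headers, rest
```

The same splitting idea would apply to the third option as well, just keyed on nested "headers"/"body" fields instead of a name prefix.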
Hi @Oduig
Thank you. Such functionality makes sense. I've added it to our backlog and we'll think about how best to implement it.
Hello @ivanyu,
I am also getting this issue when using an authentication type that is not STATIC. It appears that the content type is only being added in the AUTH_HTTP_REQUEST_BUILDER, which is used when the authentication type is STATIC.
I think the solution would be to add this to the DEFAULT_HTTP_REQUEST_BUILDER (this also requires confirmation for the OAuth one).
| gharchive/issue | 2021-08-19T07:30:32 | 2025-04-01T06:37:46.264580 | {
"authors": [
"Oduig",
"arcmmartins",
"ivanyu"
],
"repo": "aiven/http-connector-for-apache-kafka",
"url": "https://github.com/aiven/http-connector-for-apache-kafka/issues/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1678902793 | feat: add fetch chunk transform
Depends on #187
@jeqo can you rebase this one to fix the test failures?
| gharchive/pull-request | 2023-04-21T18:04:46 | 2025-04-01T06:37:46.265883 | {
"authors": [
"AnatolyPopov",
"jeqo"
],
"repo": "aiven/tiered-storage-for-apache-kafka",
"url": "https://github.com/aiven/tiered-storage-for-apache-kafka/pull/188",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2579082152 | Added algorithm for Doubly Linked List in C
📥 Pull Request
Description
Added a Doubly linked list in C Language
Fixes #358
Type of change
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[x] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Documentation update
Checklist:
[x] My code follows the style guidelines of this project
[x] I have performed a self-review of my code
[x] I have commented my code, particularly in hard-to-understand areas
[x] I have added tests that prove my fix is effective or that my feature works
[x] New and existing unit tests pass locally with my changes
[x] Any dependent changes have been merged and published in downstream modules
@ajay-dhangar done changes.
| gharchive/pull-request | 2024-10-10T14:45:11 | 2025-04-01T06:37:46.274327 | {
"authors": [
"Mansi07sharma"
],
"repo": "ajay-dhangar/algo",
"url": "https://github.com/ajay-dhangar/algo/pull/438",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1012931518 | add Hard Problem Merge K Sorted list
Submits a new problem, Merge K Sorted Lists, using as easy a method as possible.
Time Complexity O(n)
Space O(n)
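For illustration, here is a minimal Python sketch of one common approach to this problem, a min-heap over the k list heads. The PR's actual implementation may differ, and note that this particular approach runs in O(N log k) time:

```python
import heapq

def merge_k_sorted(lists):
    """Merge k sorted lists into one sorted list using a min-heap."""
    heap = []
    for i, lst in enumerate(lists):
        if lst:  # seed the heap with each list's head element
            heapq.heappush(heap, (lst[0], i, 0))
    merged = []
    while heap:
        value, i, j = heapq.heappop(heap)
        merged.append(value)
        if j + 1 < len(lists[i]):  # advance within list i
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return merged
```

At any moment the heap holds at most one entry per input list, which is what keeps the per-step cost at O(log k).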
Thank you for this question. Indeed, this is a hard question. Please share this repo with your friends so that they can also contribute to this repository.
Happy coding and learning!
LGTM
| gharchive/pull-request | 2021-10-01T06:00:35 | 2025-04-01T06:37:46.280441 | {
"authors": [
"Omkarjaiswal",
"ajeetjaiswal02"
],
"repo": "ajeetjaiswal02/Leetcode-Hard-Problems",
"url": "https://github.com/ajeetjaiswal02/Leetcode-Hard-Problems/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1566125626 | 🛑 MONGOLIA is down
In 4fde687, MONGOLIA (https://www.bricks4kidz.mn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MONGOLIA is back up in 6899514.
| gharchive/issue | 2023-02-01T13:32:56 | 2025-04-01T06:37:46.487017 | {
"authors": [
"ajmalalavi"
],
"repo": "ajmalalavi/B4K-Site-monitor",
"url": "https://github.com/ajmalalavi/B4K-Site-monitor/issues/324",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1263653165 | Primary/secondary color picker and background input
Category
Feature
Overview
Added two color selectors to the common settings page, which let you change the application's theme with the primary selector and the logo text's gradient with a combination of the primary and secondary selectors
This has been updated to edit the MantineProvider's theme property and works globally across the app without having to read the config for what colors to use in individual components
Added a background image input to the advanced settings (set to cover, centered, and no-repeat) to address #32
Issue Number
#32
New Vars
config.settings.primaryColor
config.settings.secondaryColor
config.settings.background
Screenshot
Code Quality Checklist
[x] All changes are backwards compatible
[x] There are no (new) build warnings or errors
[x] Attribute is outlined in the schema and documented
[ ] Package is essential, and has been checked out for security or performance
[ ] Bumps version, if new feature added
I was able to edit the theme prop of MantineProvider by using React's context API and have pushed a new commit that removes the need to check config.settings.primaryColor in all of the colored components.
I'll be adding a bit more like a color tone selector and tile opacity slider before I open the PR for review again.
If you guys have any other suggestions of things I can add, let me know.
Not sure how to fix this if I'm being honest. There are only a couple of ways to handle backgrounds in CSS. We can either use the original size which results in the image being cut off or scale it to the browser window which results in the image not filling the screen if it's a different resolution.
The image is 1920x1080, and so is my browser window. So we can definitely fix this.
I will try some things.
Can't get it to work. But the issue is occurring on every image I try, so the feature is not ready for release yet.
IMO, app opacity should change the opacity of Modals, Drawers, Item backgrounds, etc., not the icons (and text?).
See c0c816d
Can't get it to work. But the issue is occurring on every image I try, so the feature is not ready for release yet.
styles={{
body: {
backgroundImage: `url('${config.settings.background}')` || '',
backgroundPosition: 'top',
backgroundSize: 'cover',
backgroundRepeat: 'no-repeat',
},
}}
Changing the position to top makes it full width.
This should be fixed by giving the body min-height: 100vh.
I've made a PR from this PR 🤯 adding modules on the left.
I'm almost done; I just need to add a param on each module to know whether it's a full-width module or a "widget".
Finished; I just need @Aimsucks to merge my PR into his branch to get it there.
| gharchive/pull-request | 2022-06-07T17:41:23 | 2025-04-01T06:37:46.496847 | {
"authors": [
"Aimsucks",
"Darkham42",
"walkxcode"
],
"repo": "ajnart/homarr",
"url": "https://github.com/ajnart/homarr/pull/188",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1326531153 | run LINTUL3 for potato
Dear developer,
Thanks for the great documentation and the easy-to-use python packages.
I followed all the examples and they ran smoothly without any trouble.
So now I am trying to run the LINTUL3 model for a potato crop by following the 03 notebook.
But I got an error message about a parameter called DVSDR:
File c:\Users\cflfcl\.conda\envs\py3_pcse\lib\site-packages\pcse\engine.py:228, in Engine.run_till_terminate(self)
225 """Runs the system until a terminate signal is sent."""
227 while self.flag_terminate is False:
--> 228 self._run()
File c:\Users\cflfcl\.conda\envs\py3_pcse\lib\site-packages\pcse\engine.py:208, in Engine._run(self)
205 self.drv = self._get_driving_variables(self.day)
207 # Agromanagement decisions
--> 208 self.agromanager(self.day, self.drv)
210 # Rate calculation
211 self.calc_rates(self.day, self.drv)
File c:\Users\cflfcl\.conda\envs\py3_pcse\lib\site-packages\pcse\agromanager.py:916, in AgroManager.__call__(self, day, drv)
914 # call handlers for the crop calendar, timed and state events
915 if self.crop_calendars[0] is not None:
--> 916 self.crop_calendars[0](day)
918 if self.timed_event_dispatchers[0] is not None:
919 for ev_dsp in self.timed_event_dispatchers[0]:
...
64 value = parvalues[parname]
65 if isinstance(getattr(self, parname), (Afgen)):
66 # AFGEN table parameter
ParameterError: Value for parameter DVSDR missing.
What I did
I modified the data\agro\lintul3_springwheat.agro to a fake potato crop.
Version: 1.0
AgroManagement:
- 2021-10-01:
CropCalendar:
crop_name: 'potato'
variety_name: 'potato01'
crop_start_date: 2021-10-15
crop_start_type: emergence
crop_end_date: 2022-03-20
crop_end_type: harvest
max_duration: 300
TimedEvents: null
StateEvents: null
Downloaded weather data from the NASA Power project and modified it to the nl1.xlsx format with the right input value and units. The ExcelWeatherDataProvider function parsed the file correctly.
Used the example site and soil files. I had to fake a value for the parameter ROOTDI to make the soil file parser pass. Not sure what this parameter does.
crop = CABOFileReader(os.path.join(data_dir, "crop", "POT701.CAB"))
soil = PCSEFileReader(os.path.join(data_dir, "soil", "lintul3_springwheat.soil"))
site = PCSEFileReader(os.path.join(data_dir, "site", "lintul3_springwheat.site"))
Wondering what I've missed here.
Note: I've forked the repository and committed the changes I made back to my forked repository if you'd like to reproduce the error I got.
A bit of background about what I'm trying to achieve:
I'd like to run some simulations of potato growth and development in India to see if the LINTUL model can help me make decisions on the experiment design (e.g. cultivar selection) and on key dates for making observations of potato traits.
Thank you very much in advance for any feedback.
Dear Frank034,
sorry for the very late reply. Is this still relevant?
Allard
Hi Allard, I am able to use WOFOST potato, so I will close this now.
| gharchive/issue | 2022-08-03T00:20:04 | 2025-04-01T06:37:46.505286 | {
"authors": [
"ajwdewit",
"frank0434"
],
"repo": "ajwdewit/pcse_notebooks",
"url": "https://github.com/ajwdewit/pcse_notebooks/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
217731308 | Make the http.Client customizable
Needed to allow changing the http.Client; the zero value is missing defaults for timeouts and proxy config.
Dear, @japettyjohn
Thanks for your merge request !
| gharchive/pull-request | 2017-03-29T00:29:07 | 2025-04-01T06:37:46.513058 | {
"authors": [
"AstinCHOI",
"japettyjohn"
],
"repo": "akamai-open/NetStorageKit-Golang",
"url": "https://github.com/akamai-open/NetStorageKit-Golang/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
73761334 | Fix #44 Allow generation of native launcher for products
Hi
Please, can you review my work on generation of native launcher. At this stage, it is not complete, but I would like to know if I'm on the good way and what refactor you would like to see before merging.
Note :
windows launcher is ok (which was the primary request of issue #44 )
OSX launcher in still a WIP (I don't understand why it don't work)
I don't have a Linux box at hand to test it; I'll set up a VM this weekend to check it out.
Note change :
OSX launcher is ok
Linux launcher is tested
Hi Yann,
Thanks a lot for your great contributions!
There were some little bugs, which I fixed:
issue #63: incorrect passing of jvmargs in EquinoxProductConfigurer.customizeIniFile.
issue #64: getEclipseBundleSymbolicName is invoked "too early" for rcp apps.
issue #65: EquinoxProductConfigurer generates eclipse.ini, although it should be ${project.name}.ini
please update your fork and do your tests/builds. If everything works with you, I'll make a release.
As I said on twitter, It's a pleasure to try to improve Wuff.
I saw your fixes. Thanks to you for spotting 'my' bugs. Hard to think about all usecases.
I'll check with your version of the repository tomorrow evening and I come back to you.
Nope, there is a problem. (I tested your master plus my pending PR.) The "companion library" can't be found; the native counterpart of org.eclipse.equinox.launcher is not in the plugins dir after product creation. I'll investigate tonight.
I just tried to reproduce this, without success.
Could you, please try:
git clone git@github.com:akhikhl/wuff.git
cd wuff
gradle build
cd examples/RcpApp-1/MyRcpApp
gradle build
Expected result: MyRcpApp/build/output/MyRcpApp-1.0.0.0-xxx/plugins contains unpacked org.eclipse.equinox.launcher with os-specific part (dll, so, ...).
Ok, I reproduce my problem. If you add a name to the product :
products {
product name: "foo", platform: 'windows', arch: 'x86_64'
}
Then the platform specific launcher is not there.
I used the product name to build different "flavored" versions. Maybe it was not intended for this usage.
If it's a correct usage, it seems there is a mismatch in EquinoxProductConfigurer:60
configName = "${productConfigPrefix}${productNamePrefix}${platform}_${arch}${languageSuffix}"
and defaultConfig.groovy (which doesn't include productNamePrefix)
Good, I'll try that.
I confirm: adding the "name" attribute removed the native launcher. Will have a look.
I fixed the issue with native launchers and named products.
Could you, please, test the fix on snapshot?
Good news. I ran the test; it works for me :)
| gharchive/pull-request | 2015-05-06T22:49:50 | 2025-04-01T06:37:46.563578 | {
"authors": [
"akhikhl",
"ylemoigne"
],
"repo": "akhikhl/wuff",
"url": "https://github.com/akhikhl/wuff/pull/53",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
9110216 | support multiple
as player,it must be nice.
I don't understand your suggestion at all. What do you request?
| gharchive/issue | 2012-12-08T11:34:36 | 2025-04-01T06:37:46.580162 | {
"authors": [
"akjava",
"x-a-n-a-x"
],
"repo": "akjava/BVH-Motion-Creator",
"url": "https://github.com/akjava/BVH-Motion-Creator/issues/2",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
147688425 | Windows. eval $(ssh-agent)
Hello, I'm working on Windows and have to make in each git-bash session commands:
eval $(ssh-agent)
ssh-add
Just then I have access for repo.
How can I execute this commands in Atom?
Now I have no access to repo with git-plus :-(
I have solved it with .bash_profile
Put into that file (located in home directory ~/)
eval $(ssh-agent -s)
ssh-add
| gharchive/issue | 2016-04-12T09:20:44 | 2025-04-01T06:37:46.641045 | {
"authors": [
"artemhp"
],
"repo": "akonwi/git-plus",
"url": "https://github.com/akonwi/git-plus/issues/428",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
184875044 | Fix wrong test input data for dropping leading spaces
A test case in textformatter lacked the leading spaces it was supposed to test
for.
Coverage increased (+0.3%) to 30.598% when pulling 892078b7c6e5a94fd4f8ecc5c72a7a1a4c6e7363 on tsipinakis:test-drop-whitespace into 3241e10f3955fce9f48a6031799952fca1f50cc7 on akrennmair:master.
Coverage increased (+0.005%) to 30.42% when pulling e89b0e2804f64ab895347cc7a3c296fc15808a9f on tsipinakis:test-drop-whitespace into 1fa890123580556846e57dd2b8343faed1c7de96 on akrennmair:master.
For the record: me and @tsipinakis discussed this PR on IRC last night. The main takeaway is that test description could be worded better. What I meant when I wrote it is that after wrapping, lines don't start with space; i.e. if space doesn't fit onto a line and is going to be put at the beginning of a new line, it should just be ignored. The test case actually demonstrated that, but suboptimal wording confused people (including me—it's been only 3 months since I wrote it but on the first glance, I decided that the test is useless).
Yesterday I also voiced an idea: if first line of wrapped text ends up consisting solely of whitespace, it should be dropped. @tsipinakis kindly updated the code and the test to reflect this behaviour, but after some sleep I no longer believe it's really necessary ._.
A situation where whitespace takes up a whole line is extremely unlikely unless we're working on an extremely narrow screen (think a dozen columns wide). I don't believe it makes sense to complicate our code to make a marginal improvement in such an unlikely scenario.
So I'd like to change the scope of this PR to updating the test description. @tsipinakis, do you agree? Do you have any ideas about better wording for the test?
Agreed.
Perhaps
When wrapping, ignore leading whitespace wrapped to the next line from the end of the previous one
would be a better description for the current test.
Don't think we need "When wrapping" part at all, we're mentioning the function name (wrap_line) already. The rest of it seems a bit too complicated to me. Maybe "ignore whitespace that's going to be wrapped onto the next line"?
Or maybe we should use your wording and just write an extensive comment like one that I wrote in the previous post, explaining what naive function will do and how wrap_line intelligently deviates?
ignore whitespace that's going to be wrapped onto the next line
Indeed sounds a lot better. To avoid any confusion a comment with an example would be a good idea either way.
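For illustration, here is a pure-Python sketch of the behaviour under discussion (newsbeuter's actual wrap_line is C++; this character-level toy ignores word boundaries and is only meant to show the whitespace rule):

```python
def wrap_line(text, width):
    """Greedy character wrap; whitespace that would start a new line is dropped."""
    lines, current = [], ""
    for ch in text:
        if len(current) < width:
            current += ch
        else:
            lines.append(current)
            # ignore whitespace that's going to be wrapped onto the next line
            current = "" if ch == " " else ch
    if current:
        lines.append(current)
    return lines
```

With the space kept, "ab cd" at width 2 would wrap to ["ab", " c", "d"]; dropping it gives ["ab", "cd"] instead.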
So… you'll update the PR, right? :)
Coverage remained the same at 30.401% when pulling 289e054adcfedd2067d4ba7bfb750922d212d049 on tsipinakis:test-drop-whitespace into 765b053d7ed535244cfd093de04c6911611a4b86 on akrennmair:master.
That looks perfect, thank you!
| gharchive/pull-request | 2016-10-24T15:31:36 | 2025-04-01T06:37:46.658473 | {
"authors": [
"Minoru",
"coveralls",
"tsipinakis"
],
"repo": "akrennmair/newsbeuter",
"url": "https://github.com/akrennmair/newsbeuter/pull/388",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
276821446 | TextInput keyboard auto disappears in few seconds.
Version
react-native-router-flux v4.0.0-beta.24
react-native v0.50.4
This issue is similar to #2442, and still exists in the latest version.
Does someone have a workaround to fix this?
@syq7970 I got the same issue. How did u solve?
@munkhorgil Answered in #2442. In my case, IndicatorViewPager or something like a ViewPager with autoPlay will trigger this issue.
| gharchive/issue | 2017-11-26T12:34:06 | 2025-04-01T06:37:46.688151 | {
"authors": [
"munkhorgil",
"syq7970"
],
"repo": "aksonov/react-native-router-flux",
"url": "https://github.com/aksonov/react-native-router-flux/issues/2652",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
169309622 | Restore statem
Restore statem logic used with previous Switch
@sarovin could you please check if it still works for you :)
LGTM
| gharchive/pull-request | 2016-08-04T07:33:31 | 2025-04-01T06:37:46.689671 | {
"authors": [
"aksonov",
"sarovin"
],
"repo": "aksonov/react-native-router-flux",
"url": "https://github.com/aksonov/react-native-router-flux/pull/1013",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2176886479 | feat(webhook): detect duplicate warehouse subs
Fixes: #1583
This adds validation to ensure a Warehouse does not contain multiple subscription entries pointing to the same URL, or in the case of a HTTP/S Helm chart, to the same URL and chart name.
$ kubectl apply -f warehouse.yaml
The Warehouse "my-warehouse" is invalid:
* spec.subscriptions[2].image: Invalid value: "nginx": subscription for image repository already exists at "spec.subscriptions[0].image"
* spec.subscriptions[4].git: Invalid value: "https://github.com/example/kargo-demo.git": subscription for Git repository already exists at "spec.subscriptions[1].git"
* spec.subscriptions[5].chart: Invalid value: "https://example.com/charts/": subscription for chart "nginx" already exists at "spec.subscriptions[3].chart"
* spec.subscriptions[7].chart: Invalid value: "oci://foo": subscription for chart already exists at "spec.subscriptions[6].chart"
One possible issue and looks good otherwise.
| gharchive/pull-request | 2024-03-08T22:44:20 | 2025-04-01T06:37:46.691144 | {
"authors": [
"hiddeco",
"krancour"
],
"repo": "akuity/kargo",
"url": "https://github.com/akuity/kargo/pull/1593",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2728169424 | feat: custom PR title (#3063)
closes #3063
@krancour Please let me know your thoughts on testing here. My thinking here is that we can both assume expressions in promotion steps are evaluated properly (likely tested elsewhere), and that if I pass any title to a Git provider's CreatePullRequest, the pull request will be created with the given title.
@muenchdo correct on both counts. 😄
@muenchdo I think if you re-run codegen, this looks g2g.
| gharchive/pull-request | 2024-12-09T20:15:25 | 2025-04-01T06:37:46.692841 | {
"authors": [
"krancour",
"muenchdo"
],
"repo": "akuity/kargo",
"url": "https://github.com/akuity/kargo/pull/3107",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
353459932 | How edit boilerplate into Android Studio ?
Android 3.1.4
Win 7
Hi @alejandri ,
What do you mean with 'boilerplate'?
Hi Artyorsh, Thanks. I want import and use this PACKAGE into ANDROID STUDIO project.
| gharchive/issue | 2018-08-23T16:30:33 | 2025-04-01T06:37:46.697956 | {
"authors": [
"alejandri",
"artyorsh"
],
"repo": "akveo/kittenTricks",
"url": "https://github.com/akveo/kittenTricks/issues/64",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
389142338 | Error when try to follow the Getting Started Guide
Hi, I'm sorry for this noob question. When I try to follow the smart table Getting Started guide and integrate it with a basic Angular template (created with ng new exchange-ws) to understand how smart table works, I get this error when I run ng serve.
Here's my Angular version
Any help will be appreciated so much. Thanks
npm install ng2-completer --save
| gharchive/issue | 2018-12-10T05:08:00 | 2025-04-01T06:37:46.700636 | {
"authors": [
"ariebrainware",
"naveednazarali"
],
"repo": "akveo/ng2-smart-table",
"url": "https://github.com/akveo/ng2-smart-table/issues/917",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1376717457 | error in npm install ngx-admin 7.0.0
Issue type
I'm submitting a ... (check one with "x")
[x] bug report
[ ] feature request
[ ] question about the decisions made in the repository
Issue description
I'm having dependency errors in npm install. I deleted package-lock.json and node_modules before running npm install.
Current behavior:
Expected behavior:
Npm install will run without error
Steps to reproduce:
Npm install
Related code:
Other information:
Same problem with ngx-admin 8; lots of dependency resolution problems.
Use the node version 14.15.0 to resolve this issue
Use the node version 14.15.0 to resolve this issue
Thanks bro, it worked for me
Use the node version 14.15.0 to resolve this issue
Thank you so much!
Use the node version 14.15.0 to resolve this issue
Thanks bro, it worked for me @udhayakumar-yavar
It also works with 14.21.3.
It does not work with other LTS versions (Gallium and Hydrogen).
I'm using nvm.
| gharchive/issue | 2022-09-17T09:14:13 | 2025-04-01T06:37:46.707948 | {
"authors": [
"JohannesH1998",
"PYRO-DRANE",
"amielmendoza",
"biggosh",
"mahdiqnbi",
"reedlex98",
"udhayakumar-yavar"
],
"repo": "akveo/ngx-admin",
"url": "https://github.com/akveo/ngx-admin/issues/5954",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
935869292 | Mi Smart Power Plug 2 (chuangmi.plug.212a01) power consumption sensor wrong
The power consumption value should have two decimal places. In this case the actual power consumption is 78.25 W while it's being reported as 7825:
[{'did': 'electric_power', 'piid': 6, 'siid': 5, 'value': 7825, 'code': 0}]
verified with the app and independent power meter
[{'did': 'electric_current', 'piid': 2, 'siid': 5, 'value': 5, 'code': 0}] and [{'did': 'power_consumption', 'piid': 1, 'siid': 5, 'value': 100, 'code': 0}] are stuck at these values even when the plug is switched off.
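For anyone post-processing the raw miot response themselves, here is a hedged Python sketch; the scale of 100 is an assumption inferred from this issue's 7825 vs. 78.25 W observation, not from the device spec:

```python
def extract_power_w(miot_results, scale=100):
    """Pull electric_power from a miot get-properties result list.

    `scale=100` is an assumption based on this issue: the plug reports
    7825 for an actual draw of 78.25 W.
    """
    for item in miot_results:
        if item.get("did") == "electric_power" and item.get("code") == 0:
            return item["value"] / scale
    return None
```

Inside Home Assistant the proper route is the YAML sensor customization discussed in the linked duplicate issue.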
Duplicate https://github.com/al-one/hass-xiaomi-miot/issues/101#issuecomment-856406591
Wouldn't it make more sense for the device to work out of the box by customising the W sensor automatically instead of requiring manual .yaml editing?
This value and unit are from miot-spec. Under normal circumstances, the component will not be changed for a specific model of device, unless the entity is unavailable or affects the use.
| gharchive/issue | 2021-07-02T15:23:20 | 2025-04-01T06:37:46.724620 | {
"authors": [
"al-one",
"blakadder"
],
"repo": "al-one/hass-xiaomi-miot",
"url": "https://github.com/al-one/hass-xiaomi-miot/issues/118",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
The Yeelight voice assistant has no Play Control conversation entity; how can one be added?
Device model
yeelink.wifispeaker.v1
Component version
0.7.11
HA core version
2023.8.2
Integrated mode
Automatic
The problem
I have a Yeelight voice assistant, a third-party Xiao AI speaker, integrated into HA via miot auto. It has no Play Control conversation entity, i.e. the one that triggers automations from recognized speech text. But in the Mi Home product library this voice assistant, like the Xiao AI speaker, has a text-content property (SIID: 5, PIID: 1). How can I modify the integration to add this entity myself?
Entity attributes
Play Control conversation
Home Assistant Logs
No response
The text-content property is an input argument to the play-text and execute-text-directive methods of the intelligent-speaker service; its value cannot be read.
The conversation entity requires that the speaker's dialogue history be visible in the Xiao AI Speaker app.
After opening this speaker's device card in the Mi Home app there is a dialogue history, but it cannot be shown in the Xiao AI Speaker app.
Has this problem been solved? I have the same speaker.
| gharchive/issue | 2023-08-22T07:34:40 | 2025-04-01T06:37:46.729509 | {
"authors": [
"LiYefei",
"al-one",
"jasdkc"
],
"repo": "al-one/hass-xiaomi-miot",
"url": "https://github.com/al-one/hass-xiaomi-miot/issues/1271",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2223694179 | add get_secure_socket method
Add method for producing always-secured sockets and modify existing code to consume it.
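The diff itself is not shown here. As a generic, hedged sketch of the pattern (one factory that always applies security settings, so no caller can produce an unsecured socket by accident), here is a version using Python's stdlib ssl rather than whatever socket library the real dragonSockets.py uses; all names below are hypothetical:

```python
import socket
import ssl
from typing import Optional

def get_secure_socket(host: str,
                      context: Optional[ssl.SSLContext] = None) -> ssl.SSLSocket:
    """Return a TCP socket that is always TLS-wrapped.

    Centralizing creation in one factory means every caller gets
    certificate verification and hostname checking; no code path can
    hand out a plain socket by accident.
    """
    context = context or ssl.create_default_context()
    raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # wrap before connect: the TLS handshake runs when connect() is called
    return context.wrap_socket(raw, server_hostname=host)
```

Existing call sites then swap their ad-hoc socket construction for this one helper, which is what "modify existing code to consume it" refers to.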
Codecov Report
Attention: Patch coverage is 19.04762%, with 34 lines in your changes missing coverage. Please review.
:exclamation: No coverage uploaded for pull request base (protodrg@232d67f). Click here to learn what that means.
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## protodrg #16 +/- ##
===========================================
Coverage ? 76.66%
===========================================
Files ? 75
Lines ? 5339
Branches ? 0
===========================================
Hits ? 4093
Misses ? 1246
Partials ? 0
Files | Coverage Δ
smartsim/_core/launcher/dragon/dragonLauncher.py | 26.53% <19.04%> (ø)
smartsim/_core/launcher/dragon/dragonSockets.py | 34.48% <19.04%> (ø)
| gharchive/pull-request | 2024-04-03T19:01:22 | 2025-04-01T06:37:46.736531 | {
"authors": [
"ankona",
"codecov-commenter"
],
"repo": "al-rigazzi/SmartSim",
"url": "https://github.com/al-rigazzi/SmartSim/pull/16",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
826010590 | Earlgrey: yosys fails to parse aes_key_expand
Parsing aes_key_expand.sv file results in error in yosys:
aes_key_expand.sv:164: ERROR: Can't resolve function name `\aes_circ_byte_shift'.
Blocked by: https://github.com/alainmarcel/Surelog/issues/1109
Surelog problem is resolved, but now generated bitstream is not working on HW.
| gharchive/issue | 2021-03-09T14:24:48 | 2025-04-01T06:37:46.766606 | {
"authors": [
"kamilrakoczy"
],
"repo": "alainmarcel/uhdm-integration",
"url": "https://github.com/alainmarcel/uhdm-integration/issues/231",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1914222440 | MFA Evaluation sans Break Glass Accounts
Currently the controls for MFA evaluation validate that the Break Glass accounts are excluded from the MFA Conditional Access Policy.
Would you be able to exclude them from being evaluated entirely. i.e., if the Breakglass has MFA it's a pass or if the BreakGlass doesn't have MFA it's a pass? All accounts outside of the BG accounts require MFA?
The new ItemName would be MFA Enforcement (M).
CORRECTION / Adjusted Request
Two ideas to test:
1. Control Name: Global Admins MFA Enablement (M). Upload a list as a .txt to the storage account, following a specific format, listing the GAs that require MFA. Departments can add the BG accounts if they'd like those to also have MFA enabled; otherwise they would omit the BG accounts.
   - The check would take the UPNs listed and verify whether MFA is enabled. If yes, they pass the control. If not, they fail the control.
2. Or use similar logic to the BG account check and add multiple inputs for GAs in the config.json. Departments can add the BG accounts if they'd like those to also have MFA enabled; otherwise they would omit the BG accounts.
   - The check would take the UPNs listed and verify whether MFA is enabled. If yes, they pass the control. If not, they fail the control.
Short Term fix
Attestation for control Global Admins MFA Enablement (M) is uploading GlobalAdminsHaveMFAEnabled.txt to the storage account. If the file exists then it's a pass for the control. If not, it fails the control.
| gharchive/issue | 2023-09-26T20:09:19 | 2025-04-01T06:37:46.774469 | {
"authors": [
"MathesonSho"
],
"repo": "alalvi00/GuardrailsSolutionAccelerator",
"url": "https://github.com/alalvi00/GuardrailsSolutionAccelerator/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1042877604 | Update README.md
Good evening, I fixed the README.md, in which everything was written in English except the word "prépositions".
All the best,
Adsam39
Thanks @Adsam39!
| gharchive/pull-request | 2021-11-02T21:41:40 | 2025-04-01T06:37:46.783371 | {
"authors": [
"Adsam39",
"mininao"
],
"repo": "alan-eu/french-departments",
"url": "https://github.com/alan-eu/french-departments/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
441188445 | Design 2nd degree transformer, and related composition patterns for kernel learners and distance learners
As discussed in #5
Implemented here?
https://github.com/goastler/sktime-kernels/blob/master/sktime/transformers/kernels.py
Looks nice, @goastler !
ok, based on discussion on slack, here's a generic design and how it should interact with pipeline composition and tuning
slightly modified from #5, since we decided not to use xpandas
Part 3. Pairwise transformer aka degree 2 transformer.
should be a class with parameters, attributes, methods.
Does not use abstract base class. Inherit from sklearn.BaseEstimator, or transformer
Parameters - private variables. Correspond to "hyper-parameters" to be set or tuned. Like in supervised estimator.
Attributes - private variables. Correspond to "model parameters" set by the fit method. Not to be set by the user or via an interface.
Constructor _ _ init _ _
arguments: all parameters, explicitly, with sensible default setting.
behavior: Sets self.parameters to the values provided.
public method fit
arguments:
X - a pandas data frame. no default.
optional argument:
sample_weight - a vector of weights, equal length to X.
behavior:
fits model and stores it in attribute variables. May access but not modify parameters.
public method transform
arguments:
X - a pandas data frame. no default. Column headers and types should be the same as for fit's X argument
Xnew - a pandas data frame. no default. Column headers and types should be the same as for fit's X argument
optional argument:
sample_weight - a vector of weights, equal length to X.
behavior:
Returns a named 3D array K. First dimension is indexed by rows of X. Second dimension is indexed by rows of Xnew. Third dimension is indexed by transformed columns, with headers defined by the transformer.
May access attributes and hyper-parameters. May not modify attributes and hyper-parameters.
public method get_params
arguments:
deep - a boolean indicating whether parameters of nested estimators should be returned
behaviour:
returns string -> value dictionary of parameters, following the sklearn naming convention (nested estimators' parameters by < estimatorname >__< parametername >, returned if and only if deep = true)
public method set_params
arguments:
string -> value dictionary of parameters, following the sklearn naming convention (nested estimators' parameters by < estimatorname >__< parametername >)
behaviour:
sets parameters, and nested parameters if provided, to the values as defined by the dictionary mapping
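As a hedged, pure-Python sketch of this interface (no pandas; transform returns a plain 2D matrix rather than the named 3D array described above, and the pairwise function is just a toy scaled absolute difference):

```python
class PairwiseTransformer:
    """Degree-2 transformer skeleton; the pairwise function is illustrative only."""

    def __init__(self, gamma=1.0):
        self.gamma = gamma  # hyper-parameter, set at construction only

    def fit(self, X, sample_weight=None):
        # estimate model parameters from X and store them in attributes
        self.n_samples_ = len(X)
        return self

    def transform(self, X, Xnew, sample_weight=None):
        # len(X) x len(Xnew) matrix of pairwise values
        return [[self.gamma * abs(a - b) for b in Xnew] for a in X]

    def get_params(self, deep=True):
        return {"gamma": self.gamma}

    def set_params(self, **params):
        for name, value in params.items():
            setattr(self, name, value)
        return self
```

A real implementation would follow sklearn naming conventions for nested parameters in get_params/set_params; the flat version here is only a shape sketch.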
The counterpart of this, in composition, is a "distance method" or "kernel method".
Part II: kernel/distance methods
params etc are all the same as in the estimator design (as above)
public method fit_deg2
arguments:
K - a matrix or xarray of size (N x N), e.g., a kernel or distance matrix. no default.
optional argument:
sample_weight - a vector of weights, length N.
behavior:
fits model and stores it in attribute variables. May access but not modify parameters.
public method pred_deg2
kappa - a matrix or xarray of size (N x Nnew), e.g., a cross-kernel or cross-distance matrix. no default.
optional argument:
sample_weight - a vector of weights, length Nnew.
behavior:
returns model predictions, a data frame or vector of length Nnew
Composition pattern 1: pipeline
this would be realized by a class that inherits from estimator and behaves like whatever is at its end, usually a supervised kernel learner. I explain this below for the supervised learning case; the other cases are analogous (by dispatch)
Class pipeline_deg2
private variable trafo
private variable estim
Constructor __init__
arguments: an instance of a descendant of transformer_deg2, and and instance of an appropriate descendant of estimator (plus mixin). estim needs to be a kernel/distance learner
behavior: stores the transformer as self.trafo and estimator as self.estim
public method fit
arguments:
X - a pandas data frame. no default.
y - a pandas data frame, of equal length. no default.
optional argument:
sample_weight - a vector of weights, equal length to X.
behavior:
trafo.fit(X)
K = trafo.transform(X)
estim.fit_deg2(K,y)
public method predict
arguments:
Xnew - a pandas data frame.
optional argument:
sample_weight - a vector of weights, equal length to Xnew.
behavior:
returns estim.pred_deg2(trafo.transform(Xnew))
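Putting the degree-2 transformer and the kernel learner together, a toy end-to-end sketch (an RBF kernel as the transformer and a 1-nearest-neighbour kernel learner; plain Python lists stand in for data frames, and all class names are illustrative, not sktime API):

```python
import math

class RBFKernel:
    """Degree-2 transformer: maps samples to a kernel (Gram) matrix."""
    def __init__(self, gamma=1.0):
        self.gamma = gamma
    def fit(self, X, sample_weight=None):
        return self  # stateless kernel, nothing to estimate
    def transform(self, X, Xnew=None):
        Xnew = X if Xnew is None else Xnew
        return [[math.exp(-self.gamma * (a - b) ** 2) for b in Xnew] for a in X]

class KernelNN:
    """Kernel method: 1-NN by kernel similarity, via fit_deg2 / pred_deg2."""
    def fit_deg2(self, K, y, sample_weight=None):
        self._y = list(y)
        return self
    def pred_deg2(self, kappa):
        # kappa[i][j] = k(X_train[i], X_new[j]); predict label of most similar row
        n_new = len(kappa[0])
        return [self._y[max(range(len(kappa)), key=lambda i: kappa[i][j])]
                for j in range(n_new)]

class PipelineDeg2:
    """Composite that behaves like a plain supervised learner."""
    def __init__(self, trafo, estim):
        self.trafo, self.estim = trafo, estim
    def fit(self, X, y):
        self.trafo.fit(X)
        self._X = X  # training rows are needed for cross-kernels at predict time
        self.estim.fit_deg2(self.trafo.transform(X), y)
        return self
    def predict(self, Xnew):
        return self.estim.pred_deg2(self.trafo.transform(self._X, Xnew))
```

From the outside, PipelineDeg2 exposes plain fit/predict, so it composes like any supervised learner.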
Composition pattern 2: tuning
the composition
pipeline_deg2(mykernel(params),mysupkernellearner(moreparams))
is a supervised learner, with get_params/set_params accessing a joint parameter set - some coming from the kernel, some coming from the kernel learner.
This can now be passed to vanilla GridSearchCV, that is
GridSearchCV(pipeline_deg2(mykernel(params),mysupkernellearner(moreparams)), tuneparams)
being a tuned kernel learner that behaves like a supervised learner, e.g., classifier or regressor.
Composition pattern 3: kernel/distance learner composition
Factoring out the kernel as an object in its own right enables composition of such objects in their own right, such as:
multiple kernel learning
self-tuning kernels/distances
kernel reduction, i.e., building a kernel for sequences/series from a kernel for primitives, such as done in the string kernels, time warping kernels, etc
The generic pattern for such reduction is the compositional one:
myMKL([kernel1trafo(),kernel2trafo(), ..., kernelNtrafo()])
could be the degree 2 transformer that is the kernel mixture of the N component kernels and automatically fits the mixture parameters on the training data
Or, myDynamicTimeWarpingKernel(PrimitiveKernel(params),moreparams), and so on.
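A fixed-weight kernel mixture is the simplest instance of this pattern; as a hedged sketch (a real MKL would estimate the weights inside fit, and LinearKernel here is only a self-contained stand-in component):

```python
class LinearKernel:
    """Trivial component kernel for illustration."""
    def fit(self, X):
        return self
    def transform(self, X, Xnew=None):
        Xnew = X if Xnew is None else Xnew
        return [[a * b for b in Xnew] for a in X]

class KernelMixture:
    """Degree-2 transformer combining component kernels with fixed weights."""
    def __init__(self, kernels, weights=None):
        self.kernels = kernels
        self.weights = weights or [1.0 / len(kernels)] * len(kernels)
    def fit(self, X):
        for kernel in self.kernels:
            kernel.fit(X)
        return self
    def transform(self, X, Xnew=None):
        mats = [kernel.transform(X, Xnew) for kernel in self.kernels]
        n, m = len(mats[0]), len(mats[0][0])
        return [[sum(w * mat[i][j] for w, mat in zip(self.weights, mats))
                 for j in range(m)] for i in range(n)]
```

Since the mixture is itself a degree-2 transformer, it can be dropped into the same pipeline composition as any single kernel.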
Minor comment: composition operations become more complex, especially when used multiple times. Any opinions about operator overloading for pipeline/composite construction?
looking at @goastler's code, I think it's actually smart to combine the more heavy 2nd degree transformers with kernel functions and the current construction (or even a factory) pattern that makes transformers out of kernel functions (which already exists within @goastler's code).
I.e., have a class Kernel (aka "KernelFromKernelFunction") which is a 2nd degree transformer and which you construct with a kernel function. Of course not all 2nd degree transformers are of this type, but vanilla kernels will be.
Interesting. Tell me more about it
(continuing from #388 here since there it's off-topic)
@moradisten, many time series (classification) methods rely on a distance or kernel, in the way that it's a composite of the choice of kernel/distance and the method that computes distance/kernel matrices.
Therefore kernels and distances are natural encapsulation and abstraction points in the sense of a template pattern.
A concrete distance method would be a composite, following the sklearn estimator composition formalism.
Having a way to construct a method in the way Knn(MyFavouriteDistance(param1 = 2), k=4) with components all estimators would be great.
Seems pretty interesting. In fact, my research was focused on the DTW distance measure, but I wanted to explore more distance measures like WDTW, DDTW or LCSS. So that would be interesting to research and implement. But I'd have to study sklearn a little bit more so I can understand the composition formalisms.
For learning, I recommend:
implement some "simple" estimators, e.g., your favourite proximity forest, and make a pull request
study how composition works in sklearn, in particular BaseEstimator and why you can see parameters of components of a pipeline via get_params
understand the gaussian_process module in sklearn, particularly how the Kernel class and its children work
Alright, perfect. I'll see what I can do :)
@moradisten, great, let us know if/when/once you are interested to work on this.
Sure, I can start now that I'm on vacation. I just want to know how you work and the steps you follow, so that we can have good synchronization :)
| gharchive/issue | 2019-05-07T11:47:58 | 2025-04-01T06:37:46.817997 | {
"authors": [
"fkiraly",
"mloning",
"moradisten"
],
"repo": "alan-turing-institute/sktime",
"url": "https://github.com/alan-turing-institute/sktime/issues/52",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1096607986 | [ENH] add missing fit parameters to statsmodels Holt-Winters exponential smoothing interface
This PR adds parameters of statsmodels.tsa.holtwinters in fit as loopthrough to sktime's ExponentialSmoothing interface.
So far, only the constructor parameters were accessible, not the parameters in fit.
Solves #1845.
when pip installing sktime 0.9.0 I still don't see the new params in ExponentialSmoothing
Downloading sktime-0.9.0-cp38-cp38-macosx_10_15_x86_64.whl
class ExponentialSmoothing(_StatsModelsAdapter):
"""Holt-Winters exponential smoothing forecaster.
Default settings use simple exponential smoothing without trend and
seasonality components.
Parameters
----------
trend : {"add", "mul", "additive", "multiplicative", None}, default=None
Type of trend component.
damped_trend : bool, default=False
Should the trend component be damped.
seasonal : {"add", "mul", "additive", "multiplicative", None}, default=None
Type of seasonal component.Takes one of
sp : int or None, default=None
The number of seasonal periods to consider.
initial_level : float or None, default=None
The alpha value of the simple exponential smoothing, if the value
is set then this value will be used as the value.
initial_trend : float or None, default=None
The beta value of the Holt's trend method, if the value is
set then this value will be used as the value.
initial_seasonal : float or None, default=None
The gamma value of the holt winters seasonal method, if the value
is set then this value will be used as the value.
use_boxcox : {True, False, 'log', float}, default=None
Should the Box-Cox transform be applied to the data first?
If 'log' then apply the log. If float then use lambda equal to float.
initialization_method:{'estimated','heuristic','legacy-heuristic','known',None},
default='estimated'
Method for initialize the recursions.
If 'known' initialization is used, then `initial_level` must be
passed, as well as `initial_trend` and `initial_seasonal` if
applicable.
'heuristic' uses a heuristic based on the data to estimate initial
level, trend, and seasonal state. 'estimated' uses the same heuristic
as initial guesses, but then estimates the initial states as part of
the fitting process.
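The level recursion these smoothing parameters configure can be sketched in a few lines of plain Python (a hedged illustration of simple exponential smoothing, not sktime's or statsmodels' implementation; alpha is the smoothing level and initial_level the starting state l_0):

```python
def simple_exp_smoothing(y, alpha, initial_level):
    """One-step-ahead forecasts via l_t = alpha * y_t + (1 - alpha) * l_{t-1}."""
    level = initial_level
    forecasts = []
    for obs in y:
        forecasts.append(level)  # the forecast for step t is the previous level
        level = alpha * obs + (1 - alpha) * level
    return forecasts, level
```

Roughly, the initialization_method options in the quoted docstring control how the starting state is obtained before a recursion of this shape runs.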
@atsangarides, it's not in 0.9.0, only in the current development branch.
As such, it is scheduled to appear in the next MINOR release (0.10.0 probably), but won't ever be in 0.9.0.
You can use it right now if you follow the install guide for the most recent development version
https://www.sktime.org/en/stable/installation.html
of course there's always a higher risk for bugs compared to the release.
ah, how silly of me! apologies, and thanks for the quick response!
| gharchive/pull-request | 2022-01-07T19:30:39 | 2025-04-01T06:37:46.826331 | {
"authors": [
"atsangarides",
"fkiraly"
],
"repo": "alan-turing-institute/sktime",
"url": "https://github.com/alan-turing-institute/sktime/pull/1849",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1351490497 | [ENH] option to keep column names in Lag
This PR adds a parameter keep_column_names (default=False) to the Lag transformer, which enables Lag to return the same column names in transform as in the input. This avoids producing column names lag_x__varname1, lag_x__varname2, etc., if only one lag x is input.
The default is False and not True, following the reasoning in the recent https://github.com/alan-turing-institute/sktime/pull/3261 which made the defaults consistent.
The use case for avoiding overly long variable names arises when we lag twice, or when the lagging happens internally in another estimator, such as the reduction prototypes: https://github.com/alan-turing-institute/sktime/pull/3333
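To make the naming behaviour concrete, here is a hypothetical plain-Python sketch of the column-naming rule being discussed — illustrative only, not the actual Lag implementation; the helper name is made up:

```python
def lag_column_names(columns, lags, keep_column_names=False):
    """Return output column names for lagged features.

    Mimics the discussed behaviour: with a single lag and
    keep_column_names=True, the input names are returned unchanged;
    otherwise each name is prefixed with "lag_<x>__".
    """
    if keep_column_names and len(lags) == 1:
        return list(columns)
    return [f"lag_{lag}__{col}" for lag in lags for col in columns]
```

With a single lag and keep_column_names=True the input names pass through unchanged; in every other case the lag_x__ prefix disambiguates the lagged columns.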
FYI, what do you think, @KishManani?
Looks good to me!
| gharchive/pull-request | 2022-08-25T21:59:13 | 2025-04-01T06:37:46.830385 | {
"authors": [
"KishManani",
"fkiraly"
],
"repo": "alan-turing-institute/sktime",
"url": "https://github.com/alan-turing-institute/sktime/pull/3343",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2406301655 | No devices
I have an Electrolux washer and an Electrolux dryer, but the integration reports that there are "no devices or entities":
Washer model: EW8F8669Q9
Dryer model: EW9H869E9
I see that the dryer is in the "supported list" but the washer is not. I have enabled debug logging, and have been looking for the log, but must admit I cannot find it, yet.
Log:
Logger: homeassistant.config_entries
Source: config_entries.py:586
First occurred: 11:25:20 PM (1 occurrences)
Last logged: 11:25:20 PM
Error setting up entry torbjorn.nesheim@mac.com for electrolux_status
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/config_entries.py", line 586, in async_setup
result = await component.async_setup_entry(hass, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/electrolux_status/init.py", line 46, in async_setup_entry
if not await coordinator.async_login():
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/electrolux_status/coordinator.py", line 39, in async_login
raise ex
File "/config/custom_components/electrolux_status/coordinator.py", line 32, in async_login
token = await self.api.get_user_token()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pyelectroluxocp/oneAppApi.py", line 128, in get_user_token
token = await self._api_client.exchange_login_user(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pyelectroluxocp/oneAppApiClient.py", line 142, in exchange_login_user
response.raise_for_status()
File "/usr/local/lib/python3.12/site-packages/aiohttp/client_reqrep.py", line 1070, in raise_for_status
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 429, message='', url=URL('https://api.eu.ocp.electrolux.one/one-account-authorization/api/v1/token')
This error originated from a custom integration.
Logger: custom_components.electrolux_status
Source: custom_components/electrolux_status/coordinator.py:38
integration: Electrolux Care Integration V2 (documentation, issues)
First occurred: 11:25:20 PM (1 occurrences)
Last logged: 11:25:20 PM
Could not log in to ElectroluxStatus, 429, message='', url=URL('https://api.eu.ocp.electrolux.one/one-account-authorization/api/v1/token')
| gharchive/issue | 2024-07-12T20:24:47 | 2025-04-01T06:37:46.893905 | {
"authors": [
"fjosepose"
],
"repo": "albaintor/homeassistant_electrolux_status",
"url": "https://github.com/albaintor/homeassistant_electrolux_status/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
177634516 | Having trouble building the application once I clone the Git repository.
Hello I get the following output in the console when I right click the build.xml and "Run As" Ant Build:
Buildfile: C:\Users\Saad\git\java-repl\build.xml
build:
update:
clean:
[java] Downloading http://jarjar.googlecode.com/files/jarjar-1.1.jar
[java] Failed to download http://jarjar.googlecode.com/files/jarjar-1.1.jar (java.io.FileNotFoundException: http://jarjar.googlecode.com/files/jarjar-1.1.jar)
BUILD FAILED
C:\Users\Saad\git\java-repl\build.xml:192: The following error occurred while executing this line:
C:\Users\Saad\git\java-repl\build.xml:68: The following error occurred while executing this line:
C:\Users\Saad\git\java-repl\build\shavenmaven.xml:23: Java returned: 1
Total time: 612 milliseconds
Can anyone help point me in the right direction? I am using Windows 10 with a new install of Eclipse Neon. I know the instructions say to run "$ ant" but I'm not sure how to run that in Eclipse.
I think I resolved this issue by replacing the old "jarjar-1.1.jar" download location, after looking at a post in the Pull Requests where these lines were changed:
-http://jarjar.googlecode.com/files/jarjar-1.1.jar
+https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/jarjar/jarjar-1.1.jar
That's great but now I am getting this issue where I think the manual.zip file under /lib is not able to be accessed. I downloaded the Archive utility for Eclipse but it still only opens in Explorer.exe on my machine and throws the error and the build fails...
Buildfile: C:\Users\Saad\git\java-repl\build.xml
build:
update:
clean:
compile:
BUILD FAILED
C:\Users\Saad\git\java-repl\build.xml:195: The following error occurred while executing this line:
C:\Users\Saad\git\java-repl\build.xml:98: Execute failed: java.io.IOException: Cannot run program "unzip": CreateProcess error=2, The system cannot find the file specified
Total time: 656 milliseconds
Windows doesn't have unzip.exe by default. I grabbed one off the google and placed it somewhere already denoted in the system PATH variable, and off it went. At least, until it refused to accept my additional packages I tried to import by default, because I have no real idea how to do what I want and suck at this, but I did get it to move past the unzip command.
Thanks for the workaround @XEROenvy.
Here it is in a single command:
index dcbf914..dca8bec 100644
--- a/build/build.dependencies
+++ b/build/build.dependencies
@@ -2,7 +2,7 @@ mvn:org.hamcrest:hamcrest-core:jar:1.3
mvn:org.hamcrest:hamcrest-library:jar:1.3
mvn:junit:junit-dep:jar:4.8.2
-http://jarjar.googlecode.com/files/jarjar-1.1.jar
+https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/jarjar/jarjar-1.1.jar
' | git apply
JarJar issue is now resolved.
| gharchive/issue | 2016-09-18T07:00:41 | 2025-04-01T06:37:46.906652 | {
"authors": [
"XEROenvy",
"albertlatacz",
"attheveryend",
"cmantas"
],
"repo": "albertlatacz/java-repl",
"url": "https://github.com/albertlatacz/java-repl/issues/103",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
456868686 | Fix setValueForPosition if value is not object or array
Fixes https://github.com/aldeed/simple-schema-js/issues/227.
Thanks @dovydaskukalis. Merged and released in 0.1.4
| gharchive/pull-request | 2019-06-17T10:50:37 | 2025-04-01T06:37:46.930434 | {
"authors": [
"aldeed",
"dovydaskukalis"
],
"repo": "aldeed/node-mongo-object",
"url": "https://github.com/aldeed/node-mongo-object/pull/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
946403194 | 🛑 Metasearch is down
In 09b5c97, Metasearch (https://ss.alefvanoon.xyz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Metasearch is back up in 3e8be74.
| gharchive/issue | 2021-07-16T15:40:02 | 2025-04-01T06:37:46.958563 | {
"authors": [
"alefvanoon"
],
"repo": "alefvanoon/Status",
"url": "https://github.com/alefvanoon/Status/issues/123",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1188648512 | 🛑 Teddit is down
In cf01046, Teddit (https://teddit.alefvanoon.xyz) was down:
HTTP code: 525
Response time: 1514 ms
Resolved: Teddit is back up in 1fa04e4.
| gharchive/issue | 2022-03-31T20:00:39 | 2025-04-01T06:37:46.960946 | {
"authors": [
"alefvanoon"
],
"repo": "alefvanoon/Status",
"url": "https://github.com/alefvanoon/Status/issues/2412",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1221925961 | 🛑 Bandwidth Hero is down
In 72c0e40, Bandwidth Hero (https://bh.alefvanoon.xyz) was down:
HTTP code: 525
Response time: 112 ms
Resolved: Bandwidth Hero is back up in 9f5bd53.
| gharchive/issue | 2022-04-30T21:34:21 | 2025-04-01T06:37:46.963305 | {
"authors": [
"alefvanoon"
],
"repo": "alefvanoon/Status",
"url": "https://github.com/alefvanoon/Status/issues/2943",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |