id (string, length 4 to 10) | text (string, length 4 to 2.14M) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
2760290394 | 🛑 PLDT is down
In fe3cc19, PLDT (https://pldthome.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: PLDT is back up in 258edac after 1 hour, 43 minutes.
| gharchive/issue | 2024-12-27T01:55:34 | 2025-04-01T06:37:07.630563 | {
"authors": [
"KMHARS"
],
"repo": "KMHARS/uptime",
"url": "https://github.com/KMHARS/uptime/issues/621",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
170516603 | Plugin cannot import time records from Firefox (password created, password last used/modified)
Would it be possible for the plugin to take both dates available in Firefox and transfer them to KeePass?
I know KeePass has two columns:
Creation Time
Last Modification Time
After importing all passwords they are both set at the time of the import.
However, this information is stored in Firefox under:
First time used
Last Change
Would it be possible to import all those dates together with the passwords?
added @ v1.0.2
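For reference, Firefox keeps these timestamps in its logins.json as epoch milliseconds. A minimal sketch of reading them (the field names follow Firefox's logins.json format; the helper function is illustrative, not part of the plugin):

```python
import json
from datetime import datetime, timezone

def firefox_login_times(logins_json: str):
    """Yield (hostname, created, last_changed) tuples from Firefox's logins.json.

    Firefox stores timeCreated / timePasswordChanged as epoch milliseconds.
    """
    data = json.loads(logins_json)
    for login in data.get("logins", []):
        created = datetime.fromtimestamp(login["timeCreated"] / 1000, tz=timezone.utc)
        changed = datetime.fromtimestamp(login["timePasswordChanged"] / 1000, tz=timezone.utc)
        yield login["hostname"], created, changed

# A one-entry sample with made-up timestamps:
sample = ('{"logins": [{"hostname": "https://example.com", '
          '"timeCreated": 1470862524000, "timePasswordChanged": 1470862524000}]}')
for host, created, changed in firefox_login_times(sample):
    print(host, created.isoformat())
```

An importer could map `created` onto KeePass's Creation Time and `changed` onto Last Modification Time instead of stamping both with the import time.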
| gharchive/issue | 2016-08-10T20:55:24 | 2025-04-01T06:37:07.632618 | {
"authors": [
"KN4CK3R",
"L0ginErr0r"
],
"repo": "KN4CK3R/KeePassBrowserImporter",
"url": "https://github.com/KN4CK3R/KeePassBrowserImporter/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2128978353 | 🛑 Cube Ent. is down
In e67c7d1, Cube Ent. (http://www.cubeent.co.kr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Cube Ent. is back up in d5455e9 after 1 day, 11 hours, 52 minutes.
| gharchive/issue | 2024-02-11T12:58:13 | 2025-04-01T06:37:07.646777 | {
"authors": [
"SOLPLPARTY"
],
"repo": "KPOPCORD/status",
"url": "https://github.com/KPOPCORD/status/issues/196",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
429579740 | Hitting 'home' / 'Pos1' key on modlist throws error.
Background
CKAN Version:
v1.26.0
KSP Version:
1.6.1.2401
Operating System:
Windows 7
Have you made any manual changes to your GameData folder (i.e., not via CKAN)?
No
Affected mods and mod versions:
N/A
Problem
I get an exception whenever I hit the 'Home' key in Windows 7. Expected behavior would be to jump to the top of the list of installed mods. My workaround is to hit Page Up repeatedly (but I usually forget and keep hitting the Home key).
What steps did you take in CKAN?
Hit 'Home' key
What did you expect to happen?
Jump to top of list of mods
What happened instead?
Unhandled exception occurs.
Screenshots:
CKAN error codes (if applicable):
System.InvalidOperationException: Die aktuelle Zelle kann nicht auf eine unsichtbare Zelle festgelegt werden. (Rough translation: the current cell can't be set to an invisible cell)
bei System.Windows.Forms.DataGridView.set_CurrentCell(DataGridViewCell value)
bei CKAN.Main.ModList_KeyDown(Object sender, KeyEventArgs e)
bei System.Windows.Forms.Control.OnKeyDown(KeyEventArgs e)
bei System.Windows.Forms.DataGridView.OnKeyDown(KeyEventArgs e)
bei System.Windows.Forms.Control.ProcessKeyEventArgs(Message& m)
bei System.Windows.Forms.DataGridView.ProcessKeyEventArgs(Message& m)
bei System.Windows.Forms.Control.WmKeyChar(Message& m)
bei System.Windows.Forms.Control.WndProc(Message& m)
bei System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
Thanks for the report.
Can confirm the issue, I added the error message to your post.
I suspect it's a similar issue to https://github.com/KSP-CKAN/CKAN/issues/2703, CKAN tries to select a cell in a hidden column.
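The suspected root cause above (selecting a cell in a hidden column) suggests the shape of a fix: target the first visible column rather than blindly using column 0. A hedged sketch of that idea in Python (CKAN itself is C#/WinForms; this helper is illustrative only):

```python
def first_visible_column(visibility: list[bool]) -> int:
    """Return the index of the first visible column, i.e. the cell the
    Home key handler should select instead of column 0, which may be hidden.

    Sketch of the fix idea only; the real code would query
    DataGridViewColumn.Visible before setting CurrentCell.
    """
    for index, visible in enumerate(visibility):
        if visible:
            return index
    raise ValueError("no visible columns")
```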
| gharchive/issue | 2019-04-05T04:37:54 | 2025-04-01T06:37:07.666164 | {
"authors": [
"DasSkelett",
"Lucidus360"
],
"repo": "KSP-CKAN/NetKAN",
"url": "https://github.com/KSP-CKAN/NetKAN/issues/7108",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1831038700 | Kopernicus.dll is registered to Kopernicus but has not been removed
Is there an existing issue for this?
[X] I have searched the existing issues
Operating System
Win 10
CKAN Version
1.33.2
Game Version
1.12.5.3190
Did you make any manual changes to your game folder (i.e., not via CKAN)?
No response
Describe the bug
Can't install an update to Kopernicus 2, CKAN gives an error about inconsistencies found
No idea if this is a NetKAN or CKAN bug; sorry if this is in the wrong place, but a metadata problem sounds more likely?
Steps to reproduce
cleared Kopernicus archive from CKAN cache
attempted to re-apply the update through CKAN as if it were a download error; it still happens
Relevant log output
* Upgrade: Kopernicus Planetary System Modifier 2:release-1.12.1-176 to 2:release-1.12.1-177 (cached)
The following inconsistencies were found:
* D:/Games/SteamLibrary/steamapps/common/Kerbal Space Program/GameData/Kopernicus/Plugins/Kopernicus.Parser.dll is registered to Kopernicus but has not been removed!
* D:/Games/SteamLibrary/steamapps/common/Kerbal Space Program/GameData/Kopernicus/Plugins/Kopernicus.dll is registered to Kopernicus but has not been removed!
Error during installation!
If the above message indicates a download error, please try again. Otherwise, please open an issue for us to investigate.
If you suspect a metadata problem: https://github.com/KSP-CKAN/NetKAN/issues/new/choose
If you suspect a bug in the client: https://github.com/KSP-CKAN/CKAN/issues/new/choose
That means your OS wouldn't let CKAN delete that file, which usually means either the permissions got messed up (possibly because CKAN was run as administrator previously) or the game is still running. Make sure the game is closed (reboot if necessary), check the permissions, and try again.
| gharchive/issue | 2023-08-01T11:13:15 | 2025-04-01T06:37:07.670004 | {
"authors": [
"HebaruSan",
"ink0r"
],
"repo": "KSP-CKAN/NetKAN",
"url": "https://github.com/KSP-CKAN/NetKAN/issues/9747",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
105061140 | Add Alternis Kerbol Rekerjiggered from Kerbal Stuff
This pull request was automatically generated by Kerbal Stuff on behalf of GregroxMun, to add Alternis Kerbol Rekerjiggered to CKAN.
Please direct questions about this pull request to GregroxMun.
This mod wants to overwrite configs for PlanetShine and DistantObject, depends on Kopernicus, and recommends KopernicusExpansion and PlanetShine.
PlanetShine and DistantObject already have config splits, so it's just a matter of creating AlternisKerbolRekerjiggered, DistantObject-AlternisKerbolRekerjiggered, and AlternisKerbolRekerjiggered-PlanetShine
Closing in favour of https://github.com/KSP-CKAN/NetKAN/pull/2443
| gharchive/pull-request | 2015-09-06T01:36:43 | 2025-04-01T06:37:07.673395 | {
"authors": [
"KerbalStuffBot",
"plague006"
],
"repo": "KSP-CKAN/NetKAN",
"url": "https://github.com/KSP-CKAN/NetKAN/pull/2255",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
563579718 | Add Infernal RO-Robotics from SpaceDock
This pull request was automatically generated by SpaceDock on behalf of whale_2, to add Infernal RO-Robotics to CKAN.
Please direct questions about this pull request to whale_2.
Mod details:
name = /mod/2329/Infernal%20RO-Robotics
author = whale_2
abstract = Infernal Robotics fork for Realism Overhaul
license = GPLv3
Homepage = https://github.com/whale2/InfernalRobotics/tree/master
description =
This is a special fork of Infernal Robotics targeted at Realism Overhaul, though it should work just fine in regular KSP too.
Features autostrut-like mechanism for moving parts if needed.
Hey @whale2, this is marked as compatible with KSP 1.8 but RealismOverhaul is still on KSP 1.7. Is that OK?
Also what about this is particular to RO? It kind of sounds like this is an adoption/continuation of InfernalRobotics just using RO as branding...?
Hi HebaruSan,
It's quite a complicated story with many parties involved. As you probably know, Infernal Robotics was picked up by Rudolf Meier and completely rewritten along with some other mods like KJR, on which IR Next kind of depends. At that point there was already KJR Continued by pap1723, which was part of the RO set of mods. Currently, the RO folks refuse to provide any support if KJR Next is used (don't ask me why), but this rendered IR Next a less viable option for RO. I was eager to add IR to my RO gameplay, so I picked up the old IR and added some features to it as well as RO rebalancing. This move was discussed with sirkut and ZodiusInfuser (I also messaged Ziw but got no answer), and I said I won't advertise it much on the forums, so as not to add to the confusion with different IR forks.
So technically, my IR fork works as well in a non-RO game as in RO, but from a "product placement" POV it is intended for RO, where IR Next is harder to use. Hope that explains the matter.
As for 1.8: RO is gradually moving to this version now that Kopernicus has been released for 1.8.1; you can see it in the so-called Golden Spreadsheet (https://docs.google.com/spreadsheets/d/1Ldf_nCZw0MiCP-Y5YFuvQioEKihY1SNgD-5yS983itE/edit?usp=sharing).
Also, the preferred method of installing RO is CKAN, so I wanted to add this RO-branded IR to it.
Hmm, would it make sense to add IR Next as a conflicting mod? I believe it uses the same directory under GameData.
Yes, I think several new relationships are probably necessary here.
Can this mod be used with KJRNext?
Can this mod be used with KJRNext?
Frankly - I didn't test it.
OK, if any users complain about problems, let us know and we can add a conflict.
| gharchive/pull-request | 2020-02-11T23:07:34 | 2025-04-01T06:37:07.684727 | {
"authors": [
"HebaruSan",
"Space-Duck",
"whale2"
],
"repo": "KSP-CKAN/NetKAN",
"url": "https://github.com/KSP-CKAN/NetKAN/pull/7689",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2008725577 | Add sidewalk segmentation feature to svea_vision package
This PR introduces a sidewalk segmentation feature to the svea_vision package. The implementation is primarily done in the sidewalk_segmentation.py node.
The new feature includes the following key functionalities:
Image segmentation: The segment_image method is used to segment the sidewalk from the input image using the FastSAM model.
Prompting: The segmentation can be prompted with a bounding box, a set of points, or text to guide it. The type of prompt can be specified using the prompt_type parameter.
Point cloud extraction: The extract_pointcloud method is used to extract the point cloud data corresponding to the segmented sidewalk.
Customization: A diverse set of ROS parameters is used in this feature, allowing flexibility in configuring the sidewalk segmentation, such as the prompt type, the value of the prompt, topic names, etc.
Logging: The time taken for each step (inference, prompt, postprocess, extract point cloud, publish) can be logged for performance analysis using the verbose parameter.
Static image publisher: Alongside the main sidewalk_segmentation.py node, a utility node static_image_publisher.py is included in this PR. This script is used to publish a static image or a set of images to a ROS topic, which can be useful for testing and debugging.
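The prompt handling described above can be pictured as a small dispatch on prompt_type. This is a hedged sketch only: the parameter name and the three prompt kinds come from the PR description, while the FastSAM calls themselves are stubbed out as labels so the control flow is visible:

```python
def apply_prompt(prompt_type: str, prompt_value):
    """Route a segmentation prompt by type, as the PR's parameters describe.

    Illustrative stub: the real node would pass each prompt to FastSAM's
    prompt-processing step instead of returning a labeled tuple.
    """
    if prompt_type == "bbox":
        x1, y1, x2, y2 = prompt_value          # bounding box corners
        return ("bbox", (x1, y1, x2, y2))
    if prompt_type == "points":
        return ("points", [tuple(p) for p in prompt_value])
    if prompt_type == "text":
        return ("text", str(prompt_value))
    raise ValueError(f"unknown prompt_type: {prompt_type!r}")
```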
Really nice. We can discuss in more detail what/how to bring this into svea.
| gharchive/pull-request | 2023-11-23T19:41:08 | 2025-04-01T06:37:07.690558 | {
"authors": [
"kaarmu",
"sulthansf"
],
"repo": "KTH-SML/sidewalk_mobility_demo",
"url": "https://github.com/KTH-SML/sidewalk_mobility_demo/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
454670045 | Generate model input files from one common format
At present, each of the three (four if OSeMOSYS_PuLP is included) language versions of OSeMOSYS use a different input file format.
Write a script which generates a correctly formatted datafile for the:
[ ] GAMS version
[ ] PuLP version
[ ] Pyomo version
from a GNU MathProg dat file.
Alternatively, generate a common file format (possibly CSV) which can be read by all language versions of OSeMOSYS.
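As an illustration of the CSV route, here is a hedged sketch of turning a simple MathProg `param` block into rows. The one-dimensional parameter layout is a toy example for illustration; real OSeMOSYS parameters are multi-dimensional and would need a fuller parser:

```python
import csv
import io

def param_block_to_rows(dat_text: str):
    """Parse one-dimensional MathProg 'param NAME := key value ... ;' blocks
    into (name, key, value) rows suitable for a common CSV file."""
    rows = []
    for block in dat_text.split(";"):
        block = block.strip()
        if not block.startswith("param"):
            continue
        header, _, body = block.partition(":=")
        name = header.split()[1]
        tokens = body.split()
        # Pair up alternating key/value tokens.
        for key, value in zip(tokens[0::2], tokens[1::2]):
            rows.append((name, key, value))
    return rows

dat = "param DiscountRate := REGION1 0.05 REGION2 0.10 ;"
buf = io.StringIO()
csv.writer(buf).writerows(param_block_to_rows(dat))
```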
Split into #70 and #71
| gharchive/issue | 2019-06-11T12:48:17 | 2025-04-01T06:37:07.692788 | {
"authors": [
"willu47"
],
"repo": "KTH-dESA/OSeMOSYS",
"url": "https://github.com/KTH-dESA/OSeMOSYS/issues/23",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1099608202 | 🛑 nHentai API is down
In 6654a1c, nHentai API (https://nh.usui.moe) was down:
HTTP code: 0
Response time: 0 ms
Resolved: nHentai API is back up in 28f9e21.
| gharchive/issue | 2022-01-11T20:56:04 | 2025-04-01T06:37:07.719694 | {
"authors": [
"Kadantte"
],
"repo": "Kadantte/candy-up",
"url": "https://github.com/Kadantte/candy-up/issues/255",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1481809985 | Updating / Deleting a user (using Terraform) results in Error: User does not exist
Related to #33
Deleting a user by removing the resource from Terraform results in Error: User does not exist. The exact same thing happens when we try to update, e.g., the username of the user.
module.documentdb_init.mongodb_db_user.personalized_user["sherif.k.ayad@gmail.com"]: Destroying... [id=YWRtaW4uc2hlcmlmLmF5YWRAcGFydG5lci5pb25pdHkuZXU=]
╷
│ Error: User does not exist
The following is my config for the user resource:
locals {
  personalized_users = {
    "sherif.k.ayad@gmail.com" : {
      username = "sherif.k.ayad@gmail.com"
      password = "some_secure_pass"
      // ... some other stuff
    }
  }
}
resource "mongodb_db_user" "personalized_user" {
  for_each = local.personalized_users

  auth_database = "admin"
  name          = each.value.username
  password      = each.value.password

  role {
    db   = "my_db_1"
    role = "readWrite"
  }
  role {
    db   = "my_db_2"
    role = "readWrite"
  }
}
Found the reason! I was using usernames with dot (.) and @ characters. Despite the fact that DocumentDB doesn't seem to complain, these were causing issues with the provider. I switched to using underscores (_) instead, and the deletion/update of users is working like a charm.
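The workaround the author settled on, swapping the problematic . and @ characters for underscores before handing usernames to the provider, can be sketched in a few lines (the function name is made up for illustration):

```python
def sanitize_mongodb_username(username: str) -> str:
    """Replace the '.' and '@' characters that tripped up the provider
    with underscores, per the workaround described above."""
    return username.replace(".", "_").replace("@", "_")

print(sanitize_mongodb_username("sherif.k.ayad@gmail.com"))
# sherif_k_ayad_gmail_com
```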
| gharchive/issue | 2022-12-07T12:31:15 | 2025-04-01T06:37:07.730580 | {
"authors": [
"sherifkayad"
],
"repo": "Kaginari/terraform-provider-mongodb",
"url": "https://github.com/Kaginari/terraform-provider-mongodb/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2068837027 | Backup doesn't work when Bedrock server is running on a docker swarm
When the Minecraft Bedrock Server is running on a Docker swarm, the container name is appended with random numbers/letters each time the stack is started; because of this, the backup cannot connect to the container using the defined name.
This could be fixed by taking the name defined in config.yml (in this instance "bds_minecraft_bedrock") and running the command below to get the container id, which can then be used successfully as the name.
$(docker ps -q -f name=bds_minecraft_bedrock)
Cheers
Si
Kubernetes throws a wrench in things too. There's something happening recently that may or may not impact you here?
I've been working with itzg on adding SSH console support to the java/bedrock containers, and I've just merged the support in bedrockifier a few minutes ago. So instead of connecting via the container's name, it connects to the hostname for the container.
However, I think the catch might be that the hostname in this case is still the container name, or is the individual node's container hidden behind a more usable hostname? If the swarm hostnames are random, this might not help much. Unfortunately, I'm not super up to speed on swarm, as everything lives on a single VM for me.
Thanks for the reply, I'll take a look at the SSH option and see if its viable.
I'm completely new to docker, but as usual jumped in at the deep end with the whole swarm thing.
Currently I'm running a script on the host which literally runs the command below to switch out the name in the config file with the current container id, which has it working at least. "bds_minecraft_bedrock" in this case is the stack service name.
sudo sed -i -r "s/^( - name: ).*/\1 $(docker ps -q -f name=bds_minecraft_bedrock)/" /backup/config.yml
Cheers
Si
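The substitution that sed one-liner performs can also be sketched in Python. Illustrative only: the container id would still come from `docker ps -q -f name=…`, and the function name is invented:

```python
import re

def patch_backup_config(config_text: str, container_id: str) -> str:
    """Replace the value of the '- name:' entry in the backup config with
    the resolved container id, mirroring the sed one-liner above."""
    return re.sub(
        r"^(\s*- name: ).*$",
        lambda m: m.group(1) + container_id,
        config_text,
        flags=re.MULTILINE,
    )
```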
It's in the test tag now, and itzg's containers should have support in the latest tag. Examples using docker-compose are here until I can write up more complete documentation: https://github.com/Kaiede/Bedrockifier/tree/main/Examples
I would be interested to see how it helps, as one reason for this work was to make Kubernetes easier to support.
Just tested… it works!!! 😊 but I did have to set the hostname in the docker compose via the hostname environment variable.
Below is a sanitised copy of my docker compose:
version: '3.4'
services:
  minecraft_bedrock:
    image: itzg/minecraft-bedrock-server
    deploy:
      placement:
        constraints:
          - node.hostname == docker-node-2
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    hostname: bedrock_private
    environment:
      EULA: "TRUE"
      VERSION: LATEST
      UID: 0
      GID: 0
      TZ: europe/london
      PACKAGE_BACKUP_KEEP: 2
      ENABLE_SSH: "true"
      SSH_ENABLE: "TRUE"
      RCON_PASSWORD: fauxverysecurepassword
      SERVER_NAME: Bedrock at Home
      SERVER_PORT: 19132
      SERVER_PORT_V6: 19133
      GAMEMODE: survival
      DIFFICULTY: hard
      LEVEL_TYPE:
      ALLOW_CHEATS: "false"
      MAX_PLAYERS: 10
      ONLINE_MODE: "true"
      ALLOW_LIST:
      VIEW_DISTANCE: 32
      TICK_DISTANCE: 8
      PLAYER_IDLE_TIMEOUT: 0
      MAX_THREADS: 0
      LEVEL_NAME: Bedrock at Home
      LEVEL_SEED: myseed
      DEFAULT_PLAYER_PERMISSION_LEVEL: member
      TEXTUREPACK_REQUIRED: "false"
      SERVER_AUTHORITATIVE_MOVEMENT: server-auth
      PLAYER_MOVEMENT_SCORE_THRESHOLD: 20
      PLAYER_MOVEMENT_DISTANCE_THRESHOLD: 0.3
      PLAYER_MOVEMENT_DURATION_THRESHOLD_IN_MS: 500
      CORRECT_PLAYER_MOVEMENT: "false"
      ALLOW_LIST_USERS:
      OPS:
      MEMBERS:
      VISITORS:
      ENABLE_LAN_VISIBILITY: "true"
    expose:
      - 2222
    ports:
      - "19132:19132/udp"
    volumes:
      - bedrock_data:/data
    stdin_open: true
    tty: true
  backup:
    image: kaiede/minecraft-bedrock-backup:test
    deploy:
      placement:
        constraints:
          - node.hostname == docker-node-2
    restart: on-failure
    depends_on:
      - "minecraft_bedrock"
    environment:
      BACKUP_INTERVAL: "3h"
      TZ: "Europe/London"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - bedrock_backup:/backups
      - bedrock_data:/server
volumes:
  bedrock_data:
    driver: glusterfs
    name: "gv0/volumes/minecraft_data"
  bedrock_backup:
    driver: glusterfs
    name: "gv0/volumes/minecraft_backup"
Can you confirm which of these are correct?
ENABLE_SSH: "true"
SSH_ENABLE: "TRUE"
I think your documentation has it one way and itzg the other.
Thanks for the help!
I did have to set the hostname in the docker compose
Hmm, so when using docker-compose, I’ve seen that it does create hostnames for the services that are reachable from within the private network. It’s not quite the container name in my experience. For example, I have ~/minecraft/docker-compose.yml
services:
  # My Java Server
  yosemite:
    … etc …
  # My Bedrock Server
  cascades:
    … etc …
  backup:
    … etc …
The containers themselves wind up being minecraft_yosemite, minecraft_cascades and minecraft_backup. However, my config.yml for the backup uses yosemite:2222 and cascades:2222 for the ssh address, since that’s the hostname generated for my by docker-compose. Does this work for docker swarm or not, I wonder?
This is one reason I need time to update the documentation, to try to capture some of the subtleties of the new system.
Can you confirm which of these are correct?
The docker-compose.yml on my personal server is using ENABLE_SSH, so that’s the correct one. I’ll double check the examples and fix it there.
Tested and yes using the service name also works with swarm, setting a hostname isn't required if you use the service name set in the compose for the SSH connection.
So my bad, I just assumed the hostname would be needed; clearly Docker is doing some name resolution in the background that covers service names as well.
Tested and yes using the service name also works with swarm, setting a hostname isn't required if you use the service name set in the compose for the SSH connection.
Good to know. I’ll keep that in mind when writing up the docs.
Since it sounds like SSH is working well for this scenario, and the work is now fully released with updates to the Wiki, I'll go ahead and close this.
| gharchive/issue | 2024-01-06T21:13:15 | 2025-04-01T06:37:07.757277 | {
"authors": [
"Kaiede",
"s-dukes"
],
"repo": "Kaiede/Bedrockifier",
"url": "https://github.com/Kaiede/Bedrockifier/issues/76",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1704746948 | Cookie notification
npm install react-cookie-consent
Merging to dev branch to test on server.
| gharchive/pull-request | 2023-05-10T23:24:54 | 2025-04-01T06:37:07.828704 | {
"authors": [
"23kcarlson",
"escarlson"
],
"repo": "Kaleidoscope-Systems/epitaph",
"url": "https://github.com/Kaleidoscope-Systems/epitaph/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
479011407 | Removed broken links and fixed typing error and Added Legacy Page
Broken links - Swapnil Sharma and Pinank Solanki
@Varunvaruns9 @Swapnilr1
Pinank Solanki's Facebook ID seems to be https://www.facebook.com/psolanki10 so you can just update I think. Also need to remove his GitHub.
Will do it. And may I remove the GitHub link, because I can't find one?
@Swapnilr1 fixed it
@Varunvaruns9 please tell me how to solve the issue.
Add some media queries here:
https://github.com/KamandPrompt/kamandprompt.github.io/blob/eab42c64b0e2b7e610fd84b8d77b890aa3b5fdb7/css/styles.css#L620
@JaiLuthra1 If you're not working on this, you should close this.
| gharchive/pull-request | 2019-08-09T14:18:33 | 2025-04-01T06:37:07.841615 | {
"authors": [
"JaiLuthra1",
"Swapnilr1",
"Varunvaruns9",
"vsvipul"
],
"repo": "KamandPrompt/kamandprompt.github.io",
"url": "https://github.com/KamandPrompt/kamandprompt.github.io/pull/91",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2329211364 | Backport (paid?) to 3.3.5a
Would be interested in that. Thanks.
I don't think you have enough money for that 😅
It might be a lot of work/many hours.
| gharchive/issue | 2024-06-01T15:54:32 | 2025-04-01T06:37:07.859860 | {
"authors": [
"Karl-HeinzSchneider",
"RadeghostWM"
],
"repo": "Karl-HeinzSchneider/WoW-DragonflightUI",
"url": "https://github.com/Karl-HeinzSchneider/WoW-DragonflightUI/issues/86",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Could you look into supporting this style?
Thanks for this magnet-link extraction script; I just discovered that the batch script I had been using has stopped working.
I've been using a style that makes the page text larger and easier to read, and I'd like this script to be compatible with it.
The style requires installing the Chrome extension xStyle or Stylish first. Thanks.
https://userstyles.org/styles/133936/acg-bt
How did this style come about? Did you install some plugin? I can't reproduce it.
OK, I see it now. I'll update it today when I have time.
Updated; thanks for the feedback.
Thanks. It's easy to read clearly now.
| gharchive/issue | 2022-08-06T16:26:53 | 2025-04-01T06:37:07.891348 | {
"authors": [
"KazeLiu",
"YheonYeung"
],
"repo": "KazeLiu/GetDmhyDownloadUrl",
"url": "https://github.com/KazeLiu/GetDmhyDownloadUrl/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
115170270 | Expose named routes via UrlDispatcher.named_routes()
See issue #504.
Perfect!
| gharchive/pull-request | 2015-11-04T23:07:24 | 2025-04-01T06:37:07.903649 | {
"authors": [
"asvetlov",
"jashandeep-sohi"
],
"repo": "KeepSafe/aiohttp",
"url": "https://github.com/KeepSafe/aiohttp/pull/622",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1164887888 | Start to fix visualization class
Fix name of tests.
Start to fix visualization class
Pull Request Test Coverage Report for Build 1961885273
2 of 8 (25.0%) changed or added relevant lines in 2 files are covered.
9 unchanged lines in 1 file lost coverage.
Overall coverage decreased (-0.02%) to 37.697%
Changes Missing Coverage:
src/Kendrick/Visualization.class.st | Covered Lines: 0 | Changed/Added Lines: 6 | 0.0%
Files with Coverage Reduction:
src/Kendrick/Visualization.class.st | New Missed Lines: 9 | 0%
Totals
Change from base Build 1961884189: -0.02%
Covered Lines: 5488
Relevant Lines: 14558
💛 - Coveralls
| gharchive/pull-request | 2022-03-10T08:04:08 | 2025-04-01T06:37:07.929090 | {
"authors": [
"SergeStinckwich",
"coveralls"
],
"repo": "KendrickOrg/kendrick",
"url": "https://github.com/KendrickOrg/kendrick/pull/435",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
166409720 | Gallium fails to precompile
Using master branches of julia and Gallium and its dependencies. I'm pretty sure that it worked at b480ce3 of julia.
AWS-Sachs-Ubuntu$ usr/bin/julia
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.5.0-dev+5510 (2016-07-19 18:28 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 0524a52 (0 days old master)
|__/                   |  x86_64-linux-gnu
julia> using Gallium
INFO: Precompiling module Gallium...
ERROR: LoadError: LoadError: syntax: invalid operator ".!"
in include_from_node1(::String) at ./loading.jl:426 (repeats 2 times)
in macro expansion; at ./none:2 [inlined]
in anonymous at ./<missing>:?
in eval(::Module, ::Any) at ./boot.jl:234
in process_options(::Base.JLOptions) at ./client.jl:239
in _start() at ./client.jl:318
while loading /home/sachs/.julia/v0.5/JuliaParser/src/lexer.jl, in expression starting on line 46
while loading /home/sachs/.julia/v0.5/JuliaParser/src/JuliaParser.jl, in expression starting on line 9
ERROR: LoadError: Failed to precompile JuliaParser to /home/sachs/.julia/lib/v0.5/JuliaParser.ji
in compilecache(::String) at ./loading.jl:505
in require(::Symbol) at ./loading.jl:337
in include_from_node1(::String) at ./loading.jl:426
in macro expansion; at ./none:2 [inlined]
in anonymous at ./<missing>:?
in eval(::Module, ::Any) at ./boot.jl:234
in process_options(::Base.JLOptions) at ./client.jl:239
in _start() at ./client.jl:318
while loading /home/sachs/.julia/v0.5/ASTInterpreter/src/ASTInterpreter.jl, in expression starting on line 8
ERROR: LoadError: Failed to precompile ASTInterpreter to /home/sachs/.julia/lib/v0.5/ASTInterpreter.ji
in compilecache(::String) at ./loading.jl:505
in require(::Symbol) at ./loading.jl:337
in include_from_node1(::String) at ./loading.jl:426
in macro expansion; at ./none:2 [inlined]
in anonymous at ./<missing>:?
in eval(::Module, ::Any) at ./boot.jl:234
in process_options(::Base.JLOptions) at ./client.jl:239
in _start() at ./client.jl:318
while loading /home/sachs/.julia/v0.5/Gallium/src/Gallium.jl, in expression starting on line 3
ERROR: Failed to precompile Gallium to /home/sachs/.julia/lib/v0.5/Gallium.ji
in compilecache(::String) at ./loading.jl:505
in require(::Symbol) at ./loading.jl:364
in eval(::Module, ::Any) at ./boot.jl:234
in macro expansion at ./REPL.jl:92 [inlined]
in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at ./event.jl:46
814c974026b8c5dc1a19a19f403e8a49395455ea will fix this soon
I'm confused. 814c974 was prior to 0524a52, where the problem was reported.
sorry, what I meant was 814c974, but I will fix this soon
| gharchive/issue | 2016-07-19T19:06:50 | 2025-04-01T06:37:07.931492 | {
"authors": [
"Keno",
"josefsachsconning"
],
"repo": "Keno/Gallium.jl",
"url": "https://github.com/Keno/Gallium.jl/issues/136",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Common Data not able to be adapted into the model's Language Metadata.
Reusable fields applied to content items are not importable through ContentItemLanguageData.ContentItemData, since the column names for the actual content type do not contain those fields; they are contained in ContentItemCommonData.
Using the example sample, say you have a reusable field schema with a property named MetadataTitle, shown below defined in the ContentItemData dictionary. This will fail the check in the GenericAdapter with "Info doesn't contain column with name 'MetadataTitle' - _CustomProperties has invalid key, key SHALL have same name as column in target Info object", since this field exists in the ContentItemCommonData table and is not contained in the ColumnNames for the ContentItemModel loaded when adapting the
LanguageData.
ContentItemSimplifiedModel simplified = new()
{
ContentItemGUID = SampleArticleContentItemGuid,
Name = "SimplifiedModelSample",
IsSecured = false,
ContentTypeName = DataClassSamples.ArticleClassSample.ClassName,
IsReusable = true,
// channel name is required only for web site content items
ChannelName = ChannelSamples.SampleChannelForWebSiteChannel.ChannelName,
// required when content item type is website content item
PageData = new() {
ParentGuid = null,
TreePath = "/simplified-sample",
PageUrls = [
new()
{
UrlPath = "en-us/simplified-sample",
PathIsDraft = true,
LanguageName = ContentLanguageSamples.SampleContentLanguageEnUs.ContentLanguageName!
},
new()
{
UrlPath = "en-gb/simplified-sample",
PathIsDraft = true,
LanguageName = ContentLanguageSamples.SampleContentLanguageEnGb.ContentLanguageName!
}
]
},
LanguageData =
[
new()
{
LanguageName = ContentLanguageSamples.SampleContentLanguageEnUs.ContentLanguageName!,
DisplayName = "Simplified model sample - en-us",
VersionStatus = VersionStatus.InitialDraft,
UserGuid = UserSamples.SampleAdminGuid,
ContentItemData = new Dictionary<string, object?>
{
["ArticleTitle"] = "en-US UMT simplified model creation",
["ArticleText"] = "This article is only example of creation UMT simplified model for en-US language",
["RelatedArticles"] = null,
["RelatedFaq"] = null,
["MetadataTitle"] = "Some Value"
}
},
new()
{
LanguageName = ContentLanguageSamples.SampleContentLanguageEnGb.ContentLanguageName!,
DisplayName = "Simplified model sample - en-gb",
VersionStatus = VersionStatus.Published,
UserGuid = UserSamples.SampleAdminGuid,
ContentItemData = new Dictionary<string, object?>
{
["ArticleTitle"] = "en-GB UMT simplified model creation",
["ArticleText"] = "This article is only example of creation UMT simplified model for en-GB language",
["RelatedArticles"] = null,
["RelatedFaq"] = null,
["MetadataTitleCommon"] = "Some Value"
}
}
],
ContentItemSimplifiedModel.cs
public class ContentItemLanguageData
{
[Required]
public required string LanguageName { get; set; }
[Required]
public required string DisplayName { get; set; }
public VersionStatus VersionStatus { get; set; } = VersionStatus.InitialDraft;
[Required]
public required Guid? UserGuid { get; set; }
public Dictionary<string, object?>? ContentItemData { get; set; }
public Dictionary<string, object?>? ContentItemCommonData { get; set; } // Added property to contain common data fields
}
ContentItemSimplifiedAdapter.cs line: 118
var contentItemCommonDataModel = new ContentItemCommonDataModel
{
ContentItemCommonDataGUID = contentItemCommonDataInfoGuid ?? Guid.NewGuid(),
ContentItemCommonDataContentItemGuid = contentItemInfo.ContentItemGUID,
ContentItemCommonDataContentLanguageGuid = contentLanguageInfo.ContentLanguageGUID,
ContentItemCommonDataVersionStatus = languageData.VersionStatus,
ContentItemCommonDataIsLatest = true,
ContentItemCommonDataPageBuilderWidgets = null,
ContentItemCommonDataPageTemplateConfiguration = null,
CustomProperties = languageData?.ContentItemCommonData ?? [] // Allow custom properties to be adapted into the common data for things like metadata schema
};
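The validation and the proposed routing can be sketched abstractly. This is a hypothetical Python analogy of the behavior described above (illustrative names, not the actual toolkit code): custom keys must match a column on some target model, so reusable-schema fields need to be routed to the common-data model rather than the content-type model.

```python
# Hypothetical sketch of the adapter check described above: route each
# custom property either to the content-type model or to the common-data
# model, and fail only when neither knows the column.
def adapt(model_columns, common_columns, item_data):
    item, common = {}, {}
    for key, value in item_data.items():
        if key in model_columns:
            item[key] = value
        elif key in common_columns:
            common[key] = value
        else:
            raise KeyError(f"Info doesn't contain column with name '{key}'")
    return item, common

item, common = adapt(
    model_columns={"ArticleTitle", "ArticleText"},
    common_columns={"MetadataTitle"},
    item_data={"ArticleTitle": "t", "MetadataTitle": "Some Value"},
)
print(item, common)
```

With the current behavior, only `model_columns` is consulted, which is why "MetadataTitle" raises; the sketch shows the two-bucket routing the issue asks for.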
Thank you for bringing this to our attention. We'll examine it internally and keep you updated on our progress.
| gharchive/issue | 2024-04-03T12:49:57 | 2025-04-01T06:37:07.944326 | {
"authors": [
"jkerbaugh",
"liparova"
],
"repo": "Kentico/xperience-by-kentico-universal-migration-toolkit",
"url": "https://github.com/Kentico/xperience-by-kentico-universal-migration-toolkit/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1669829016 | Error. While downloading
./animepahe-dl.sh: line 294: 6270 Killed xargs -I {} -P "$(get_thread_number "$1")" bash -c 'url="{}"; file="${url##*/}.encrypted"; download_file "$url" "${op}/${file}"' < <(grep "^https" "$1")
[Process completed (signal 9) - press Enter]
Which anime and episode?
Hey @Anas-Mughal, following the question from @lord8266, are you able to download this episode using normal download mode without -t?
Hey @Anas-Mughal, following the question from @lord8266, are you able to download this episode using normal download mode without -t?
Yes
@Anas-Mughal which anime and episode? I can quickly check it.
@Anas-Mughal which anime and episode? I can quickly check it.
Then I was downloading Parasyte. But now you can check by downloading Black Clover or Mob Psycho 100; the error is the same when downloading any anime.
Hey @Anas-Mughal, I tried the mentioned anime and I don't see the error. It seems the error [Process completed (signal 9) - press Enter] is related to termux on Android. If you are using termux to run the script, please try to use a smaller number with -t, or search online for this error and you may find a solution to this termux error.
Hey @Anas-Mughal, I tried the mentioned anime and I don't see the error. It seems the error [Process completed (signal 9) - press Enter] is related to termux on Android. If you are using termux to run the script, please try to use a smaller number with -t, or search online for this error and you may find a solution to this termux error.
Do you know how I can fix this? Because earlier I used to download from Termux and this problem did not occur. I also reduced the value of the -t argument and the same problem occurs.
@Anas-Mughal, I don't know how since I don't have this problem... Here is a googled result that looks like a solution: https://github.com/agnostic-apollo/Android-Docs/blob/master/en/docs/apps/processes/phantom-cached-and-empty-processes.md#how-to-disable-the-phantom-processes-killing. You can try to search [Process completed (signal 9) - press Enter] and find more results. Good luck!
| gharchive/issue | 2023-04-16T10:02:22 | 2025-04-01T06:37:07.968604 | {
"authors": [
"Anas-Mughal",
"KevCui",
"lord8266"
],
"repo": "KevCui/animepahe-dl",
"url": "https://github.com/KevCui/animepahe-dl/issues/88",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
2253424953 | Version in Cargo.toml not matching released version
crates.io lists version 0.5.1, but the Cargo.toml in the repository says 0.5.0; where does that mismatch come from?
Pushed cargo.toml file change
| gharchive/issue | 2024-04-19T16:33:33 | 2025-04-01T06:37:08.068410 | {
"authors": [
"KevinVoell",
"matthiasbeyer"
],
"repo": "KevinVoell/network_manager",
"url": "https://github.com/KevinVoell/network_manager/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2248515171 | Autocomplete is not applied if the UITextDocumentProxy.currentWord is nil
Hi everyone,
First of all, congrats on the amazing library! Kudos to the team 😄
I am facing a small issue after updating my project from the 7th version to the 8th version of KeyboardKit.
In the application that I'm currently working on, we add AutoComplete.Suggestion default objects when the textField is empty. This behaviour is similar to what Apple does natively with their keyboard. For example, for the en-US locale and in an empty text field, the keyboard suggests "I", "The" and "I'm".
Using KeyboardKit 8+ I can add these options on the keyboard toolbar, however I cannot apply these suggestions when there's no text or spaces in the textField yet. That's happening due to some guards checks in the code.
I investigated the code a bit and I believe it could be slightly changed in KeyboardInputViewController.performAutocomplete to handle cases where there's no text present yet. I have made the changes and tested them myself; it worked fine, but it would be nice to run some other tests to guarantee the functionality is not broken.
I appreciate your time reading my request and I apologize if there's a misunderstanding on my part. I'd be more than happy to help in case it's needed.
Cheers,
Rubens Pessoa
Hi @rubenspessoa
Thank you for reaching out with this!
In KK 8+, the controller resets the autocomplete context if the text is empty. While I'm sure the intention with this was good, it's still incorrect, since it's the responsibility of the autocomplete provider to decide this.
I will release this fix as a 8.5.1. 👍
| gharchive/issue | 2024-04-17T15:00:24 | 2025-04-01T06:37:08.073646 | {
"authors": [
"danielsaidi",
"rubenspessoa"
],
"repo": "KeyboardKit/KeyboardKit",
"url": "https://github.com/KeyboardKit/KeyboardKit/issues/710",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2709751406 | Move websocket headers to opt function 'WithWebsocketHeaders'
Follow-up on the discussion in https://github.com/Khan/genqlient/pull/360#pullrequestreview-2471038655.
Move websocket headers to an opt function 'WithWebsocketHeaders'.
I have:
[x] Written a clear PR title and description (above)
[x] Signed the Khan Academy CLA
[x] Added tests covering my changes, if applicable
[x] Included a link to the issue fixed, if applicable
[x] Included documentation, for new features
[x] Added an entry to the changelog
@benjaminjkraft Here is a follow-up PR regarding our discussion here: https://github.com/Khan/genqlient/pull/360#pullrequestreview-2471038655
This is without tests now, which is not great. I'm struggling to find an easy way to test this.
Normally I would set up a middleware and set the authentication ctx key for the header. But it seems that the httptest server does not have middleware support out-of-the-box. Maybe I'm wrong?
Could be wise to hold off merging this until tests have been added 🤗
Thanks!
I think you should be able to do middleware? gqlgenServer is just an http.Handler, so if you have a middleware that's func(http.Handler) http.Handler you can just do NewServer(middleware(gqlgenServer)).
@benjaminjkraft Thanks! Tests added now!
| gharchive/pull-request | 2024-12-01T19:41:15 | 2025-04-01T06:37:08.087905 | {
"authors": [
"HaraldNordgren",
"benjaminjkraft"
],
"repo": "Khan/genqlient",
"url": "https://github.com/Khan/genqlient/pull/365",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
807614694 | Remove official support for direct MSL shader loading from documentation.
Shader code should be submitted as SPIR-V. Although some simple direct MSL shaders may work,
direct loading of MSL source code or compiled MSL code is not officially supported at this time.
Future versions of MoltenVK may support direct MSL submission again.
Addresses issue #1253
It's really my fault we have to do this. I wanted to fix this, but I'm so busy with other stuff at the moment that I don't have time. I guess this will do for now.
Meh. Don't be too hard on yourself. Direct MSL has fallen behind along several dimensions. There have been previous issues raised around how to map Metal resource indexes in a more sophisticated manner. And future Metal argument buffer use will complicate it even further. I expect it will need a fair bit of work to recover effectively and continue to maintain.
I'm also interested in following up on the relatively new pipeline caching options available through Metal. If we can mesh that with Vulkan's pipeline caching, that might provide a more Vulkan-friendly and maintainable approach to improving shader conversion and compiling performance, and mitigate some of the need to directly support MSL.
| gharchive/pull-request | 2021-02-12T23:36:18 | 2025-04-01T06:37:08.125177 | {
"authors": [
"billhollings"
],
"repo": "KhronosGroup/MoltenVK",
"url": "https://github.com/KhronosGroup/MoltenVK/pull/1266",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1739608888 | MSL: vertex shader's return type is void
Hit a bug, when metal-vertex shader is generated:
Full shader: https://shader-playground.timjones.io/c11d83135a1eac44c5753e147eba5fe6
Relevant part:
vertex void main0(constant uint* spvBufferSizeConstants [[buffer(25)]], device Input& _10 [[buffer(0)]], uint gl_VertexIndex [[vertex_id]])
{
main0_out out = {};
constant uint& _10BufferSize = spvBufferSizeConstants[0];
_10.ssbo[int(gl_VertexIndex)] = uint(int((_10BufferSize - 0) / 4));
out.gl_Position = float4(0.0, 0.0, 0.0, 1.0);
// no return here, so gl_Position was discarded
}
My bad: I didn't know that the MSL version has to be set to 2.1 or newer for vertex-shader side effects to work.
| gharchive/issue | 2023-06-03T15:06:28 | 2025-04-01T06:37:08.127066 | {
"authors": [
"Try"
],
"repo": "KhronosGroup/SPIRV-Cross",
"url": "https://github.com/KhronosGroup/SPIRV-Cross/issues/2160",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
987154934 | fix parsing of bad binary exponents in hex floats
The binary exponent must have some decimal digits
A + or - after the binary exponent digits should not be interpreted as
part of the binary exponent.
Fixes: #4500
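As an aside, the same two lexing rules exist in Python's hex-float parser, which makes for a quick illustration (an analogy only, not the SPIRV-Tools implementation):

```python
# Python's float.fromhex applies the same rules this fix enforces:
# a binary exponent marked by 'p' must be followed by decimal digits,
# and a bare trailing '+' or '-' is not part of the exponent.
assert float.fromhex("0x1.8p3") == 12.0    # 1.5 * 2**3
assert float.fromhex("0x1.8p+3") == 12.0   # sign right after 'p' is fine

for bad in ("0x1.8p", "0x1.8p+"):          # exponent with no digits: invalid
    try:
        float.fromhex(bad)
        raise AssertionError("should have been rejected: " + bad)
    except ValueError:
        pass

print("hex-float exponent rules hold")
```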
Amazingly, the CI-macos-clang-release build started only 30 minutes ago (roughly).
Amazingly, the CI-macos-clang-release build started only 30 minutes ago (roughly).
It had failed because of a machine issue, so I restarted it.
| gharchive/pull-request | 2021-09-02T20:53:30 | 2025-04-01T06:37:08.132573 | {
"authors": [
"dneto0",
"s-perron"
],
"repo": "KhronosGroup/SPIRV-Tools",
"url": "https://github.com/KhronosGroup/SPIRV-Tools/pull/4501",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1859392065 | Vulkan Instance creation is broken when using VK_EXT_VALIDATION_FEATURES_EXTENSION_NAME
I am trying to run the vulkan ray_tracing_extended sample. This used to work a few weeks back (when it was still named raytracing_extended), but I pulled the new version today and now instance creation does not work anymore. This probably is not a problem of the sample but of the entire framework though.
The error I get is:
[error] [framework\platform\platform.cpp:169] Error Message: Could not create Vulkan instance : ERROR_EXTENSION_NOT_PRESENT
The problematic extension seems to be VK_EXT_VALIDATION_FEATURES_EXTENSION_NAME:
If I comment out enabled_extensions.push_back(VK_EXT_VALIDATION_FEATURES_EXTENSION_NAME); in Vulkan-Samples/framework/core/instance.cpp (on my current main branch, this is line 226), the instance creation throws no error.
I thought this was a driver issue at first, since the querying of the extension looks correct to me, but I have tried on the following hardwarde (both on Windows 11, Visual Studio 2022):
NVidia RTX A1000 Laptop GPU, Driver version 536.96 and
NVidia RTX 2080 Ti, Driver version 536.40
The behaviour is the same on both machines: it only works, if I comment out the push_back line.
Thanks for raising this issue. We are already aware of it and will fix this soon with #774.
#774 has been merged, and this should be fixed. If not, please feel free to open a new issue.
| gharchive/issue | 2023-08-21T13:36:38 | 2025-04-01T06:37:08.145961 | {
"authors": [
"SaschaWillems",
"lobneroO"
],
"repo": "KhronosGroup/Vulkan-Samples",
"url": "https://github.com/KhronosGroup/Vulkan-Samples/issues/783",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1426412981 | Vulkaninfo: Escape json strings
Fix #696
Also:
Parse & print `VkPhysicalDeviceProperties:driverVersion` according to driver-specific formats. Falls back on the Vulkan Major.Minor.Patch if on an unknown platform.
Switch the `VkPhysicalDeviceProperties:apiVersion` printing to put the major.minor.patch outside of the parentheses. `apiVersion = 1.3.245 (232343235)` is how it is now printed, instead of `apiVersion = 232343235 (1.3.245)`
Cleanup the logic around printing UUID arrays. Took some effort but uses the operator<< rather than requiring explicit conversion to strings.
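The split between the Vulkan-standard encoding and a vendor-specific one can be sketched as follows. The NVIDIA 10/8/8/6-bit packing shown here is the commonly documented convention and is an assumption for illustration, not taken from this PR:

```python
# Decode a packed version two ways: the Vulkan Major.Minor.Patch fallback,
# and the (assumed) NVIDIA driver packing of 10/8/8/6 bits.
def decode_vulkan(v: int) -> tuple:
    return (v >> 22, (v >> 12) & 0x3FF, v & 0xFFF)

def decode_nvidia(v: int) -> tuple:
    return (v >> 22, (v >> 14) & 0xFF, (v >> 6) & 0xFF, v & 0x3F)

api = (1 << 22) | (3 << 12) | 245
print("apiVersion = %d.%d.%d (%d)" % (*decode_vulkan(api), api))
```

A fallback decoder like `decode_vulkan` is what an unknown platform would get; known vendors get their own bit layout.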
CI Vulkan-Tools build queued with queue ID 22165.
CI Vulkan-Tools build # 819 running.
CI Vulkan-Tools build # 819 passed.
Added bob as a reviewer since he would like to know about any changes to the format vulkaninfo uses.
CI Vulkan-Tools build queued with queue ID 22858.
CI Vulkan-Tools build # 820 running.
CI Vulkan-Tools build # 820 passed.
| gharchive/pull-request | 2022-10-28T00:28:37 | 2025-04-01T06:37:08.150148 | {
"authors": [
"charles-lunarg",
"ci-tester-lunarg"
],
"repo": "KhronosGroup/Vulkan-Tools",
"url": "https://github.com/KhronosGroup/Vulkan-Tools/pull/699",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
378530458 | Sharp Edges Ignored
Exporter ignores manually-assigned hard/sharp edges in Blender 2.8, resulting in an 'all-soft' mesh.
Repro in Blender 2.80 Experimental with glTF-Blender-IO:
Create a cube
Display>Shading>Smooth Edges
In Edit mode, select all edges. Edge>Edge Data>Mark Sharp
Select the Object Data tab, Turn on Auto Smooth and set the angle to 180.
Edges of cube should appear hard/sharp in the viewport.
With cube selected File>Export glTF 2.0 (glb). Make sure Export normals and Export Tangents are checked.
Write file to disk.
Load file in sandbox.babylonjs.com/
Hard edges will not be preserved.
Contrast the same workflow with Blender 2.79 using glTF-Blender-Exporter: Hard edges are preserved in .glb written to disk.
Note: In 2.80, setting Display>Shading>Flat faces does export a .glb with hard edges, but all edges are hard. What we want is the ability to define custom hard/soft edges on any given mesh (like we can do in Blender 2.79 with the glTF-Blender-Exporter addon).
Example files attached.
cube_hard_soft.zip
Note that an issue has been opened in developer.blender.org regarding sharp edges management:
https://developer.blender.org/T58638
Apologies for adding this comment on multiple issues, but I've seen several issues discussing 'Apply Modifiers' as a solution to different problems. Unfortunately, it appears to be a less-than-ideal fix for certain situations like this:
It appears to be a known issue that one cannot export a model that has manually-defined smooth/hard edges unless 'Apply Modifiers' is turned on. (Doesn't make much sense considering defining smooth/hard edges can be done without the use of any modifiers, but 🤷♂️)
The problem I'm having now is that I'd like to be able to export my Shape Keys AND have proper smooth/hard edges. However, Shape Keys don't export when 'Apply Modifiers' is turned on, aka the opposite problem. (Also a bit of a misnomer since Shape Keys appear to have zero to do with modifiers.)
Does anyone know how to get around this?
I'm the author of the thread on the Blender tracker; here is a link to my model. I get some errors and I can't export: https://drive.google.com/open?id=1RnLZ1OHdxNRgTU9E_jKdSc-1zp5SogkH
Anyway, I noticed that when you export with the script and the Apply Modifiers option, I get a very clean result, with smooth and sharp edges correct; but if I do the conversion of the model manually and then export the model, I get wrong shading, losing all sharp edges.
| gharchive/issue | 2018-11-08T00:41:43 | 2025-04-01T06:37:08.172388 | {
"authors": [
"SurlyBird",
"j-conrad",
"julienduroure",
"pafurijaz"
],
"repo": "KhronosGroup/glTF-Blender-IO",
"url": "https://github.com/KhronosGroup/glTF-Blender-IO/issues/73",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
384989419 | Virtual module - Scripted curator management
Q
A
Bug fix?
no
New feature?
yes
Needs wipe?
yes
Fixed issues
#517
Description:
This module adds BIS Curator module as default mean of AI commanding for commander and sub-commander slots.
It offers three curator modes, every slot can be configured to use different mode via CBA settings.
Content:
[x] Removed curator modules from mission file
[x] Created virtual module in liberation framework
[x] Full curator mode
[x] Limited mode with free camera movement
[x] Limited mode with locked camera
[x] No curator mode
[x] Changed build module pos load/save to use posWorld
[x] Added common_getFobAlphabetName
Successfully tested on:
[x] Local MP Vanilla
[ ] Dedicated MP Vanilla
Compatibility checked with:
NOTHING
Conflicts and tests in dedicated environment? 🙂
Conflicts resolved.
I will test it later on dedicated, if something will not work on it the code won't change much anyway so this can be reviewed anyway IMO.
| gharchive/pull-request | 2018-11-27T21:16:44 | 2025-04-01T06:37:08.221320 | {
"authors": [
"Wyqer",
"veteran29"
],
"repo": "KillahPotatoes/KP-Liberation",
"url": "https://github.com/KillahPotatoes/KP-Liberation/pull/538",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
998528737 | Some errors
Hi, I am using this software with a 100/20 MPPT and I have encountered problems with decimals. Let me explain better: sometimes, for example, the battery voltage is 26 V but the sensor sends 0.26 V. This also happens with the total energy. I have already put in a filter that averages, but it is not enough.
Could you try to provide the raw data of your solar charger?
@lucasimons
may i ask how are you wiring ?
I have connected the TX to D7 and GND to GND... I would not want my NodeMCU (ESP8266) to struggle, because it actually sends a lot of data. Is it not possible to decrease the refresh rate? As soon as I get one I will use an ESP32 to see if that solves it.
@lucasimons
power supply how did you deal with? external source? or stepdown from Battery
Yes, I have the 24 V battery and a Victron Orion 24 V to 12 V converter, and then connected a Quick Charge 3.0 module that powers a Raspberry and the NodeMCU. However, I think it's like putting a step-down from 24 to 5.
Ok, please disconnect GND from the VE.Direct port and try (VE.Direct connected only via TX; power supply VCC and GND from the step-down).
| gharchive/issue | 2021-09-16T18:54:48 | 2025-04-01T06:37:08.240291 | {
"authors": [
"KinDR007",
"lucasimons",
"syssi"
],
"repo": "KinDR007/VictronMPPT-ESPHOME",
"url": "https://github.com/KinDR007/VictronMPPT-ESPHOME/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2421332952 | 🛑 FSDH INT is down
In 330df53, FSDH INT (https://fsdh-portal-app-int.azurewebsites.net/register) was down:
HTTP code: 0
Response time: 0 ms
Resolved: FSDH INT is back up in 83c233d after 46 minutes.
| gharchive/issue | 2024-07-21T10:19:57 | 2025-04-01T06:37:08.242913 | {
"authors": [
"KingBain"
],
"repo": "KingBain/proto-datahub-uptime",
"url": "https://github.com/KingBain/proto-datahub-uptime/issues/357",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1371551744 | Solved: How to install on Windows 11
Thanks for this pixel hack, really appreciated. I already tried this on Android X86 9.0 on VirtualBox and it works fine.
Could someone help with installing this on Windows Subsystem for Android?
I installed Android for Windows 11 using the instructions on this site:
https://www.androidsage.com/2022/06/28/download-wsa-android-12-1-with-magisk-root-google-play-store/
Then I installed Magisk using these instructions:
https://www.getdroidtips.com/root-windows-subsystem-for-android-via-magisk/
So, now I have a fully working WSA with Google services, and Magisk works, but how do I flash Pixelify? The 'normal' version stops with an error about volume keys, so I think I need to download Pixelify-v2.1-no-VK.zip.
How about installation? I already copied the config.prop to the internal folder, but then what? This is clearly not enough. Then I flashed the zip with Magisk, but I still don't have unlimited storage in Google Photos (I did erase the Photos app's data after flashing).
EDIT:
I got this. I forgot to enable 'zygisk'. Now it works.
I have zygisk enabled but everytime i flash the module and reboot magisk it shows nothing in installed modules. do you have any idea?
After reading logs I can see, It is showing 2 errors
Installing Google Photos from backups
Failure [INSTALL_FAILED_VERSION_DOWNGRADE: Downgrade detected: Update version code 48627807 is older than current 48849424]
Please Disable Google Photos Auto Update on Playstore
chmod: /data/data/com.google.android.dialer/files/phenotype: No such file or directory
Google is installed.
Comment : This shows even though auto update is turned off on play store
Try using release WSA_2311.40000.5.0_x64_Release-Nightly-with-Magisk-26.4-stable-MindTheGapps-13.0.7z
from https://github.com/MustardChef/WSABuilds/releases
every time I flash the module and reboot, Magisk shows nothing in installed modules.
Update :
If I dont reboot it shows pixelify in installed modules and works fine but if i close wsa once or reboot magisk it doesnt show in installed modules.
| gharchive/issue | 2022-09-13T14:28:57 | 2025-04-01T06:37:08.255847 | {
"authors": [
"bjungbogati",
"liero0",
"morhadi"
],
"repo": "Kingsman44/Pixelify",
"url": "https://github.com/Kingsman44/Pixelify/issues/121",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1420971834 | Unnecessary setting
You can safely remove .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking); from your contacts tests inside the ContactServiceTests.cs file, because you are detaching the entity from the context on line 150
Okaaaay. 10x.
| gharchive/issue | 2022-10-24T14:56:49 | 2025-04-01T06:37:08.267149 | {
"authors": [
"Kiril95",
"Vancho99"
],
"repo": "Kiril95/EntertainmentHub",
"url": "https://github.com/Kiril95/EntertainmentHub/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1619756340 | WIP ZIO JSON
This is work in progress, missing (and failing) tests. Feel free to contribute @Kirill5k I've invited you to edit my fork.
I needed to add snapshot dependency to ZIO JSON as some useful methods are not yet available in latest stable release.
Fixes #26
@Kirill5k can you take a look? there is some error when running testOnly JsonMapperSpec, but it's late and my mind is no longer working xD perhaps a fresh view will reveal the bug
For some reason
Some("{"_id": {"$oid": "640c698699f7394fe1fa68b9"}, "string": "string", "null": null, "boolean": true, "long": 1.678535046665E12, "int": 1.0, "bigDecimal": 100.0, "array": ["a", "b"], "dateInstant": {"$date": "2023-03-11T11:44:06.665Z"}, "dateEpoch": {"$date": "2023-03-11T11:44:06.665Z"}, "dateLocalDate": {"$date": "2022-01-01T00:00:00Z"}, "document": {"field1": "1", "field2": 2.0}}")
was not equal to
Some("{"_id": {"$oid": "640c698699f7394fe1fa68b9"}, "string": "string", "null": null, "boolean": true, "long": 1678535046665, "int": 1, "bigDecimal": {"$numberDecimal": "100.0"}, "array": ["a", "b"], "dateInstant": {"$date": "2023-03-11T11:44:06.665Z"}, "dateEpoch": {"$date": "2023-03-11T11:44:06.665Z"}, "dateLocalDate": {"$date": "2022-01-01T00:00:00Z"}, "document": {"field1": "1", "field2": 2}}"
Int is represented as 1.0, long appears once in normal, once in scientific notation, and bigdecimal is weird
I'm not sure what's going on. I suppose it's related to the internal representation of Json.Num, which is BigDecimal.
Now it's just BigDecimal that's an issue; fixed int and long
Some("{"_id": {"$oid": "640c80dda25fe1341f6b231c"}, "string": "string", "null": null, "boolean": true, "long": 1678541021538, "int": 1, "bigDecimal": 100, "array": ["a", "b"], "dateInstant": {"$date": "2023-03-11T13:23:41.538Z"}, "dateEpoch": {"$date": "2023-03-11T13:23:41.538Z"}, "dateLocalDate": {"$date": "2022-01-01T00:00:00Z"}, "document": {"field1": "1", "field2": 2}}")
was not equal to
Some("{"_id": {"$oid": "640c80dda25fe1341f6b231c"}, "string": "string", "null": null, "boolean": true, "long": 1678541021538, "int": 1, "bigDecimal": {"$numberDecimal": "100.0"}, "array": ["a", "b"], "dateInstant": {"$date": "2023-03-11T13:23:41.538Z"}, "dateEpoch": {"$date": "2023-03-11T13:23:41.538Z"}, "dateLocalDate": {"$date": "2022-01-01T00:00:00Z"}, "document": {"field1": "1", "field2": 2}}")
Hmm OK, it seems the issue is that JsonNumSyntax#toBsonValue will interpret BigDecimal(100.0) as isValidInt and encode that as BsonValue.int (that's why we see just 100); on the other hand we have a JsonEncoder that directly encodes BigDecimal(100.0) (not cast to int), hence the difference. I wonder how it worked with circe.
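The two code paths can be sketched outside Scala. This is a minimal Python analogy with illustrative names, not the mongo4cats API:

```python
from decimal import Decimal

# Hypothetical sketch of the divergence: one encoder collapses
# integral-valued decimals to ints ("isValidInt"-style), the other always
# keeps them as decimals, so the same input renders two different strings.
def to_bson_collapsing(value: Decimal) -> str:
    as_int = int(value)
    if value == as_int:           # integral-valued: collapse to int
        return str(as_int)        # analogous to BsonValue.int
    return '{"$numberDecimal": "%s"}' % value

def to_bson_strict(value: Decimal) -> str:
    return '{"$numberDecimal": "%s"}' % value  # always decimal

v = Decimal("100.0")
print(to_bson_collapsing(v))  # 100
print(to_bson_strict(v))      # {"$numberDecimal": "100.0"}
```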
Fixed; however, in circe the toBigDecimal returned an Option, and if it failed they defaulted to BsonValue.double.
In our case Json.Num internally is already a BigDecimal, so it will always map to BsonValue.bigDecimal; there is no case for BsonValue.double.
The test case succeeds, but I guess only because we don't check double values. I'm not sure if that's OK.
MongoCollectionSpec and MongoJsonCodecsSpec are failing.
This looks very good. I will give it a better look after several hours when I am free.
It looks like you forgot to commit Dependencies.scala changes
This is puzzling me now. The only difference I see is the missing quotation marks on the left-hand side (or the extra ones on the right-hand side, depending how you look at it).
I have no idea why it is the way it is.
{"$oid":"640ce1fcc850af29e11314c9"}
was not equal to
"{"$oid":"640ce1fcc850af29e11314c9"}"
It was a CharSequence; calling .toString solved the problem of the extra quotation marks.
@Kirill5k all tests are passing :tada: Please review.
Hmm.. StackOverflow on the CI. I wonder why. There is no issue locally (using JDK17).
@Kirill5k can we merge this and cut a release?
Sure. all looks good to me.
Will do a release later today.
| gharchive/pull-request | 2023-03-10T23:34:13 | 2025-04-01T06:37:08.276148 | {
"authors": [
"Kirill5k",
"ioleo"
],
"repo": "Kirill5k/mongo4cats",
"url": "https://github.com/Kirill5k/mongo4cats/pull/27",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
265540222 | Suggestion: Accept donations
I wanted to give you $5 for this awesome tool, but I couldn't find any way to do so. A donate link would be nice.
P.S. I've used many other factorio calculators in the past, but this was by far the best one, and the only one that really gave me what I wanted. Thank you!
I have added a link to my Patreon page in e81df4f1a1e0c5f113f393df233ac50d31b674d7.
| gharchive/issue | 2017-10-15T02:32:30 | 2025-04-01T06:37:08.280113 | {
"authors": [
"KirkMcDonald",
"MakerBurst"
],
"repo": "KirkMcDonald/kirkmcdonald.github.io",
"url": "https://github.com/KirkMcDonald/kirkmcdonald.github.io/issues/55",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
342135094 | Out of memory error while trying to build seablock recipes
I loaded in seablock data to this calculator, but encounter out of memory errors in Chrome while in the simplex method.
The OOM seems to be related to BigInteger (either the object size or the garbage created). I encounter it pretty frequently while doing matrix manipulation in the simplex method.
This bug report lacks details, so it is difficult to say with certainty what the problem is. However, I can make an educated guess.
The current algorithm for detecting which portions of the recipe graph require representation with a linear program is a hack that works for the vanilla graph, but which I fully expect to fail catastrophically with other, more complex recipe graphs. Replacing this algorithm is one of the major barriers to supporting Bob's Mods (see also #35), among other alternate recipe graphs.
These sorts of failures could potentially take many forms. What you describe sounds like some sort of infinite recursion, which continually multiplies some numbers together until it OOMs.
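Whether or not it is the exact mechanism here, exact rational arithmetic has a well-known growth mode that matches the symptom. A small Python sketch, with Fraction standing in for BigInteger-backed rationals (this is an illustration, not the calculator's code):

```python
from fractions import Fraction

# Hedged sketch: summing fractions with pairwise-coprime denominators
# forces the reduced denominator to be their full product, so every
# further operation handles ever-larger big integers; this kind of
# growth can exhaust memory in long-running exact computations.

def primes(n):
    """First n primes by trial division (fine for small n)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

total = Fraction(0)
for p in primes(50):
    total += Fraction(1, p)  # coprime denominators: lcm is the product

print(total.denominator.bit_length())  # roughly 300 bits after 50 terms
```

Fifty additions are enough to reach hundreds of bits per number; a simplex solve over a large mod's recipe graph performs vastly more operations than that.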
I am closing this issue. Seablock is not supported at this time, and loading its recipe graph into the calculator is not expected to work. This may change in the future, but the relevant efforts are already covered by #35.
| gharchive/issue | 2018-07-18T00:34:25 | 2025-04-01T06:37:08.282438 | {
"authors": [
"KirkMcDonald",
"terite"
],
"repo": "KirkMcDonald/kirkmcdonald.github.io",
"url": "https://github.com/KirkMcDonald/kirkmcdonald.github.io/issues/91",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1494217921 | [retirements] Wrap retirement details
Lengthy retirement details break page layout spacing.
Wrapping the text should fix this, but I'm curious how the page would look 🤔
Questions:
What string length validation do we have on beneficiaryName
Retirement to validate changes against:
https://www.klimadao.finance/retirements/carboncar.klima/1
Might no longer be an issue with https://github.com/KlimaDAO/klimadao/pull/834
| gharchive/issue | 2022-12-13T13:14:21 | 2025-04-01T06:37:08.338274 | {
"authors": [
"0xAeterno"
],
"repo": "KlimaDAO/klimadao",
"url": "https://github.com/KlimaDAO/klimadao/issues/820",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2521300415 | example using VideoTexture from live feed camera
I am facing the following problem when passing the camera to VideoTexture from Flutter:
E/flutter ( 983): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: NoSuchMethodError: Class 'VideoTexture' has no instance method '[]'.
Hi @RansomBroker,
There is currently a bug with VideoTexture. I have resolved it but have not pushed to pub.dev because of other things that are being worked on as well.
Please use CanvasTexture as a workaround for the moment. Here is an example of how to do that.
Hope this helps.
When I use that code example, the image shows up like that.
Hi @RansomBroker,
I am sorry for the issue. This is the only way currently to use the video feed.
What platform are you using?
I just tested it on Mac and it works fine, but I have not tested it on Android or web.
There is currently an issue with windows and camera feeds. I will work on it soon.
Sorry for the issue. I hope to resolve it soon.
Yeah, I'm testing it on Android and desktop.
Will VideoTexture be released in the near future? Or a webcam update for those issues on Android and web?
Hi @RansomBroker,
I have updated the examples to fix the issue you are having with the android and web camera versions.
Hope this helps.
Hi @RansomBroker,
I think we should be able to close this issue.
| gharchive/issue | 2024-09-12T04:16:29 | 2025-04-01T06:37:08.364199 | {
"authors": [
"Knightro63",
"RansomBroker"
],
"repo": "Knightro63/three_js",
"url": "https://github.com/Knightro63/three_js/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1300878675 | v0.23 support
Hi!
Do you plan to support current newest version?
Thanks!
Hi, yes, I'm currently working on it. There are a couple of non-trivial changes that still requires work, but I'm getting there
Released in v0.23.0
| gharchive/issue | 2022-07-11T15:29:53 | 2025-04-01T06:37:08.365641 | {
"authors": [
"GerkinDev",
"mokone91"
],
"repo": "KnodesCommunity/typedoc-plugins",
"url": "https://github.com/KnodesCommunity/typedoc-plugins/issues/119",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
592814061 | Speed up STRING ingest
Luca said he might want to try speeding up STRING ingest, possibly by parallelization.
Here is the ingest in question.
Possibly could parallelize here while reading lines, or maybe as @deepakunni3 said, this utility fxn could be sped up
Some questions on the utility: is there any reason why it returns None?
Some questions on the utility: is there any reason why it returns None?
No reason, other than to return a type of some kind for type-checking. Could refactor to return a boolean I guess?
Though it is possible to parse the lines using multiprocessing, it is not possible to write the lines in the same way. How much memory do you think it would take to keep them in memory and write them all after the first read has finished?
Though it is possible to parse the lines using multiprocessing, it is not possible to write the lines in the same way. How much memory do you think it would take to keep them in memory and write them all after the first read has finished?
Not sure, but you could get some idea by running:
python run.py transform -s StringTransform
then looking at the size of the files generated:
ls -lh data/transformed/STRING/
At least a few gigs I bet
The file size is at 122M. So ingesting that into an in-memory data structure shouldn't be a problem at the moment. But note that we are dealing with just a subset of the original data by restricting to human PPIs.
The actual master file is 68GB (compressed). I doubt we will ever need to parse all of the file. Just highlighting the upper bounds of memory requirements.
Hi @LucaCappelletti94 - for clarity, can you confirm you are working on this? @deepakunni3 said he could handle this if you are not. Just trying to avoid duplicating effort
Yes, it should be done by the end of the morning.
I see that the same "seen" list is used to check both the already-seen proteins and genes: could this cause a name clash?
I'm working on this issue here.
I have refactored the code and replaced the lists with sets, so now that aspect should run a bit faster. I am not sure if parallelizing the I/O can add a significant speedup, as the remaining data processing is minimal. What do you think?
@LucaCappelletti94 There shouldn't be a name clash for seen. The identifiers are mutually exclusive for gene and protein
I have refactored the code and replaced the lists with sets, so now that aspect should run a bit faster. I am not sure if parallelizing the I/O can add a significant speedup, as the remaining data processing is minimal. What do you think?
Thanks Luca! A very dramatic speed-up, STRING ingest is now down to 2m!
(venv) ~/PycharmProjects/kg-emerging-viruses *speed_up_string $ time python run.py transform -s StringTransform
WARNING:ToolkitGenerator:class "pairwise interaction association" slot "interacting molecules category" does not reference an existing slot. New slot was created.
WARNING:ToolkitGenerator:Unrecognized prefix: SEMMEDDB
[snip]
INFO:root:Parsing StringTransform
[transform.py][ transform] INFO: Parsing StringTransform
real 2m19.998s
user 2m14.680s
sys 0m2.388s
It almost makes me suspect I've done something wrong in the code: does the output file look okay? I think that other than the pythonification of the code, the only real source of speed-up was replacing the list with a set.
If this helps:
edges.tsv produced from your branch and from master is exactly the same (see md5 hashes below; the bak/ dir is produced from master).
Your nodes.tsv has fewer entries than master, but the few I checked out are just extra ENSEMBL ids not mentioned in edges.tsv
(venv) ~/PycharmProjects/kg-emerging-viruses $ ls -l data/transformed/STRING/*.tsv data/transformed/STRING/bak/*tsv
-rw-r--r-- 1 jtr4v staff 1255956605 Apr 3 11:57 data/transformed/STRING/bak/edges.tsv
-rw-r--r-- 1 jtr4v staff 2456430 Apr 3 11:57 data/transformed/STRING/bak/nodes.tsv
-rw-r--r-- 1 jtr4v staff 1255956605 Apr 5 09:12 data/transformed/STRING/edges.tsv
-rw-r--r-- 1 jtr4v staff 2391930 Apr 5 09:12 data/transformed/STRING/nodes.tsv
(venv) ~/PycharmProjects/kg-emerging-viruses $ md5 data/transformed/STRING/*.tsv data/transformed/STRING/bak/*tsv
MD5 (data/transformed/STRING/edges.tsv) = c960835ae79f6235d8b4a7be6ce4372f
MD5 (data/transformed/STRING/nodes.tsv) = 21ea74367b01086301f4a9dfba6e1bb9
MD5 (data/transformed/STRING/bak/edges.tsv) = c960835ae79f6235d8b4a7be6ce4372f
MD5 (data/transformed/STRING/bak/nodes.tsv) = 408ed8393fbce7abcf20864cee5bbd77
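For posterity, the same hash check can be scripted; a minimal Python sketch (the file names below are throwaway stand-ins, not the repo's actual paths):

```python
import hashlib
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large TSVs never need to fit in memory."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with two small files standing in for edges.tsv and bak/edges.tsv.
a, b = Path("edges_a.tsv"), Path("edges_b.tsv")
a.write_bytes(b"subject\tobject\nENSP1\tENSP2\n")
b.write_bytes(b"subject\tobject\nENSP1\tENSP2\n")
print(md5sum(a), md5sum(b), md5sum(a) == md5sum(b))
```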
Ok! That seems promising. I was worried that the speedup was caused by some coding mistake, such as an if-condition that was never met. Out of curiosity, how much time was required on the same machine before?
Out of curiosity, how much time was it required on the same machine before?
14 hours on my laptop - a very dramatic speed-up
Wow, @LucaCappelletti94 This is amazing! A simple change made all the difference!
For posterity: In Python, lists are slow for membership lookup, whereas sets are dramatically faster because of the way a set stores its values: in a hash table.
For posterity: In Python, lists are slow for membership lookup, whereas sets are dramatically faster because of the way a set stores its values: in a hash table.
Good to know - I didn't realize either how much quicker sets are
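The difference is easy to see with a tiny, self-contained benchmark (a generic Python sketch, not code from this repo; the ENSP-style IDs are just stand-ins):

```python
import timeit

# Build a large collection of identifiers, like the "seen" IDs in the STRING ingest.
ids = [f"ENSP{i:011d}" for i in range(200_000)]
seen_list = list(ids)
seen_set = set(ids)        # same contents, but stored in a hash table

probe = "ENSP99999999999"  # an ID that was never seen: worst case for the list scan

# `probe in seen_list` scans elements one by one (O(n));
# `probe in seen_set` is a single hash lookup (average O(1)).
list_time = timeit.timeit(lambda: probe in seen_list, number=200)
set_time = timeit.timeit(lambda: probe in seen_set, number=200)

print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```

On a typical machine the set lookups win by several orders of magnitude, which is consistent with the 14-hour to 2-minute difference reported above.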
Should I proceed with the pull request?
@LucaCappelletti94 Yes, the output looks proper. Feel free to make a PR
| gharchive/issue | 2020-04-02T17:43:58 | 2025-04-01T06:37:08.379652 | {
"authors": [
"LucaCappelletti94",
"deepakunni3",
"justaddcoffee"
],
"repo": "Knowledge-Graph-Hub/kg-covid-19",
"url": "https://github.com/Knowledge-Graph-Hub/kg-covid-19/issues/63",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2485275426 | feat(PieChart): mark proper composable scope for labels and holeContent
To be able to use the proper modifiers for alignment etc, mark the pie composable lambdas for label and holeContent with BoxScope
I think this is a good idea for the hole content, since there's a clear "box" that fits inside the hole.
For the labels, they are positioned so the left/right edge is next to the label connector depending on if they are on the right/left side of the pie. What's the use case for having them in a Box, and what would be the bounds of the box for each label?
I guess it is not that useful for labels, I just saw they were wrapped in a Box and wanted to make sure the proper scope is set. But I also don't see a clear use case, so I will update this PR to only target holeContent.
Or if https://github.com/KoalaPlot/koalaplot-core/pull/85 is a candidate for merging, we can just close this PR.
Yes let's use #85 since that also adds the content padding.
| gharchive/pull-request | 2024-08-25T13:57:31 | 2025-04-01T06:37:08.384412 | {
"authors": [
"gsteckman",
"mediavrog"
],
"repo": "KoalaPlot/koalaplot-core",
"url": "https://github.com/KoalaPlot/koalaplot-core/pull/83",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1010414066 | how can i got superpoint.pt files?
Hi, professor:
Thanks for your code! The examples require a superpoint.pt file; where can I find it?
PLEASE!
Hello,
If you are talking about the main.cc file, then it's assumed that you've exported superpoint.pt from the Python program as a PyTorch jit script. You can see how it can be done in the inferencewrapper.py.
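For reference, a jit-scripted .pt file of this kind can be produced from any torch.nn.Module roughly as follows; this is a minimal sketch with a made-up stand-in network, not the actual SuperPoint model or the repo's inferencewrapper.py:

```python
import torch

class TinyNet(torch.nn.Module):  # hypothetical stand-in for the SuperPoint model
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 8, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.conv(x))

model = TinyNet().eval()
scripted = torch.jit.script(model)   # or torch.jit.trace(model, example_input)
scripted.save("tinynet.pt")          # loadable from C++ via torch::jit::load

# Sanity check: reload the exported module and run it.
reloaded = torch.jit.load("tinynet.pt")
out = reloaded(torch.zeros(1, 1, 8, 8))
print(tuple(out.shape))
```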
| gharchive/issue | 2021-09-29T02:37:15 | 2025-04-01T06:37:08.492961 | {
"authors": [
"Kolkir",
"jcyhcs"
],
"repo": "Kolkir/superpoint",
"url": "https://github.com/Kolkir/superpoint/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1845361510 | Fixes Validate Client.Authenticate when given
Fixes validation in Client.Authenticate when both the consumer and the credential are nil
Fixes #153
Closing this one in favor of https://github.com/Kong/go-pdk/pull/153.
@barockok Thank you for your PR! We really appreciate it.
| gharchive/pull-request | 2023-08-10T14:54:45 | 2025-04-01T06:37:08.557723 | {
"authors": [
"barockok",
"gszr"
],
"repo": "Kong/go-pdk",
"url": "https://github.com/Kong/go-pdk/pull/157",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
60957510 | [objc/native] Improvements
I want to implement a couple more things:
About literals (I like literals because they generate less cluttered code snippets and allow more flexibility to the people pasting them):
[ ] Add a literals or verbose flag (verbose would be more consistent with other targets): If true, will generate literals from parameters/headers like currently, if false, will just use the objects computed by httpsnippet. It will be a nice options if people don't want too much verbose at the cost of flexibility for "pasters".
[x] Make body parameters literals (not sure about this one: very verbose)
[x] Make querystring parameters literals (not sure about this one either: very verbose)
About code style (JS):
[x] Comment what's going on in the generation (native.js)
[x] Improve the code climate rating (:cry:)
About improvements that could be made but are waiting on general guidelines:
[ ] Parse the response body from NSData. Need to wait on response definition for now.
[ ] Add an option for explanatory comments generation? Depends if other languages will do that too in the future. It could be a new guideline.
About making literal querystrings, a huge downside would be the extra verbose:
NSURLComponents *components = [[NSURLComponents alloc] init];
NSURLQueryItem *q1 = [NSURLQueryItem queryItemWithName:@"foo" value:@"bar"];
NSURLQueryItem *q2 = [NSURLQueryItem queryItemWithName:@"hello" value:@"world"];
components.queryItems = @[ q1, q2 ];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:components.URL
cachePolicy:NSURLRequestUseProtocolCachePolicy
timeoutInterval:10.0];
// execute request through NSURLSession
Not sure about this. The same way, multipart requests are already very verbose in order to build the body.
@thibaultcha is this resolved?
@darrenjennings No way I could remember, sorry. Better close it and move on.
Sounds like a plan. Thanks!
| gharchive/issue | 2015-03-13T01:48:14 | 2025-04-01T06:37:08.562960 | {
"authors": [
"darrenjennings",
"thibaultCha",
"thibaultcha"
],
"repo": "Kong/httpsnippet",
"url": "https://github.com/Kong/httpsnippet/issues/28",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
806168918 | Fix/balancer eventual consistency improve
Summary
SUMMARY_GOES_HERE
Full changelog
[Implement ...]
[Add related tests]
...
Issues resolved
Fix #false stoper nic-7016
I guess this was opened in accident. Thus closing it.
| gharchive/pull-request | 2021-02-11T08:11:43 | 2025-04-01T06:37:08.565378 | {
"authors": [
"bungle",
"darshandeep"
],
"repo": "Kong/kong",
"url": "https://github.com/Kong/kong/pull/6830",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1570115891 | tests(gke): ensure cluster cleanup is always called
Fixes cluster not being cleaned up mentioned in #533.
Codecov Report
Base: 54.08% // Head: 53.78% // Decreases project coverage by -0.31% :warning:
Coverage data is based on head (158730e) compared to base (82ecf03).
Patch has no changes to coverable lines.
Additional details and impacted files
@@ Coverage Diff @@
## main #537 +/- ##
==========================================
- Coverage 54.08% 53.78% -0.31%
==========================================
Files 50 50
Lines 3901 3901
==========================================
- Hits 2110 2098 -12
- Misses 1534 1543 +9
- Partials 257 260 +3
| Flag | Coverage Δ |
| --- | --- |
| integration-test | 58.75% <ø> (-0.36%) :arrow_down: |
| unit-test | 3.28% <ø> (ø) |
Flags with carried forward coverage won't be shown. Click here to find out more.
| Impacted Files | Coverage Δ |
| --- | --- |
| pkg/clusters/addons/knative/knative.go | 60.44% <0.00%> (-5.98%) :arrow_down: |
| pkg/clusters/utils.go | 50.20% <0.00%> (-1.66%) :arrow_down: |
:umbrella: View full report at Codecov.
| gharchive/pull-request | 2023-02-03T16:36:51 | 2025-04-01T06:37:08.575201 | {
"authors": [
"codecov-commenter",
"czeslavo"
],
"repo": "Kong/kubernetes-testing-framework",
"url": "https://github.com/Kong/kubernetes-testing-framework/pull/537",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2493894773 | Substitution wrong
fun test (A : U) (x : A) (f : A -> U) : f x where
| A x f = _
Results in a stack overflow.
This is due to the wrong implementation of the ShiTT.Eval.subst function.
Use refresh to reimplement it.
fixed
| gharchive/issue | 2024-08-29T09:16:22 | 2025-04-01T06:37:08.577890 | {
"authors": [
"KonjacSource"
],
"repo": "KonjacSource/ShiTT",
"url": "https://github.com/KonjacSource/ShiTT/issues/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1582558429 | type:text is not supported
While investigating https://github.com/KorAP/Krill/issues/86, I found that corpusTitle eq gingko is serialized as
{
"@type": "koral:doc",
"key": "corpusTitle",
"match": "match:eq",
"value": "gingko",
"type": "type:text"
}
whilst type:text is not a type supported according to the KoralQuery doc and it is practically also not supported in Krill.
The type is not added by any query rewrite as it is not added when sending a direct API request:
https://korap.ids-mannheim.de/instance/test/api/v1.0/search?q=ich&cq=availability+%3D+%2FCC-BY.*%2F+%26+docTitle+%3D+"gingko"&ql=poliqarp&cutoff=1&state=&pipe=
Could it be that Kalamar add the type?
It's interesting that this shows up in the KQ-Viewer. The type:text is an index type and is introduced to help the VC Builder to show allowed operators. With this issue: Do you mean this shouldn't show up in the serialization or is there a bigger issue?
Yes, it shouldn't show up in the serialization and it shouldn't be used in general. There should be no problem with that in the backend since Kalamar only sends the corpus query, not KoralQuery.
Could you check what request Kalamar actually sends to Kustvakt since in the example request, there are no matches while in https://github.com/KorAP/Krill/issues/86, there are some results?
Could you please check what request Kalamar actually sends to Kustvakt? I don't get any results sending the example direct API request using OAuth2 token and VPN, while Kalamar shows some results as reported in https://github.com/KorAP/Krill/issues/86.
Well - it is used by the corpus builder and it is used for indexing - so what do you mean by "it shouldn't be used in general"? Yes it is not helpful in a corpus request, but that is not happening.
I am not sure to which query you are refering to.
Well - it is used by the corpus builder and it is used for indexing - so what do you mean by "it shouldn't be used in general"? Yes it is not helpful in a corpus request, but that is not happening.
I suppose it shouldn't be used since it is not part of the KoralQuery doc and not supported in backend. Why is it used by corpus builder and indexing?
I am not sure to which query you are refering to.
sorry for not being clear. I mean the query in https://github.com/KorAP/Krill/issues/86 or
the one I wrote above:
https://korap.ids-mannheim.de/instance/test/api/v1.0/search?q=ich&cq=availability+%3D+%2FCC-BY.*%2F+%26+docTitle+%3D+"gingko"&ql=poliqarp&cutoff=1&state=&pipe=
but using Kalamar instead of a direct API request.
The KoralQuery doc currently only covers the request and error reporting stuff - neither the indexing nor the response data format. Krill supports it for indexing (see index/FieldDocument) and for responses (see response/MetaFieldsObj). type:text means the field is indexed tokenized, so single words can be searched in it (as for title) and a whole-string match works as well. This obviously means that the operators in the visual corpus builder should differ.
That query doesn't show results to me. The request is:
https://korap.ids-mannheim.de/instance/test/api/v1.0/search?context=40-t%2C40-t&count=25&cq=availability+%3D+%2FCC-BY.*%2F+%26+docTitle+%3D+%22gingko%22&cutoff=true&offset=0&q=ich&ql=poliqarp
Thanks for your explanation.
The query should show results with OAuth2 token and VPN since the Gingko corpus is restricted.
But the VC is limited to CC-BY.*
Sorry you are right. The request shouldn't be restricted to CC-BY.*
Besides, I made a mistake due to the URL encoding for diacritics etc.
For the following query
https://korap.ids-mannheim.de/instance/test?q=Z%C3%BCndkerze&cq=corpusTitle+%3D+%22gingko%22&ql=poliqarp&cutoff=1&state=&pipe=
Kalamar would send the query below to Kustvakt, right?
curl -v -H "Authorization: Bearer token" 'https://korap.ids-mannheim.de/instance/test/api/v1.0/search?q=Z%C3%BCndkerze&cq=corpusTitle+%3D+%22gingko%22&ql=poliqarp&cutoff=1&state=&pipe='
This doesn't seem to be a problem from Kalamar and isn't related to type:text so I suppose we should discuss in https://github.com/KorAP/Krill/issues/86 instead
Yes, this is unrelated. Regarding this topic: I think the corpus assistant shouldn't alter the query serialized by the KoralQuery helper - but I think that's the only problem there is and it's a minor one, not affecting any functionality of the platform.
| gharchive/issue | 2023-02-13T15:23:04 | 2025-04-01T06:37:08.594510 | {
"authors": [
"Akron",
"margaretha"
],
"repo": "KorAP/Kalamar",
"url": "https://github.com/KorAP/Kalamar/issues/196",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
75692065 | Add example of embedded fragment in activity
I searched a bit and I couldn't seem to figure out how to embed a fragment in my activity.
For example I want to do something like this.
frameLayout {
linearLayout {
baselineAligned = false
orientation = LinearLayout.HORIZONTAL
fragment {
name = "FragmentClass"
}.layoutParams(width = matchParent, height = matchParent)
}.layoutParams(width = matchParent, height = matchParent)
}
This feature is not supported yet, though it is planned to be added in the next version. Thank you!
Is there a workaround for this? Would a custom inline function that accomplishes this be difficult to make?
This would be a nice little feature, since as far as I know, the only solution is a "long" piece of code (at least compared to how much anko tries to save us), see: http://stackoverflow.com/questions/18296868/how-to-add-a-fragment-to-a-programmatically-generated-layout
I think the problem is that this does not conform to just creating a view like the rest of the dsl...
But for those that are lazy like me, we are forced to still keep some xml layouts around just because of this feature not being implemented...
Nice job to the anko team anyways for all the rest of the features they give us!!
Unfortunately, it's impossible to create a fragment in Android without an explicit class declaration.
@yanex I don't think anyone is saying to create a fragment without an explicit class declaration. In Android xml layouts, you can insert a <fragment> tag directly, instead of inserting a container element and then using FragmentManager to inject the fragment into the placeholder.
Both methods require explicit Fragment classes. I believe this issue was created because we couldn't figure out how to translate XML layouts with <fragment> tags into Anko language.
I included in my example of what it might look like while referencing the class.
I've created the new issue for this: #362.
| gharchive/issue | 2015-05-12T18:22:28 | 2025-04-01T06:37:08.617352 | {
"authors": [
"bj0",
"dave08",
"gregpardo",
"yanex"
],
"repo": "Kotlin/anko",
"url": "https://github.com/Kotlin/anko/issues/39",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1206429513 | Price display cut off
For long tooltips that rest on the bottom of the screen, the price display is cut off like this:
Hm, I'm not sure how this could be solved.
I was experimenting with placing the price on the item description itself. But there's so many different formats of item descriptions it would take quite a lot of effort to set that up.
I guess for a quickfix I could move the item description window up if it cuts past the bottom of the screen.
@Kouzukii could you please circle back to this? AllaganTools doesn't cause this issue because it adds its item information in the description area.
See screenshot:
Should be fixed now
| gharchive/issue | 2022-04-17T16:48:22 | 2025-04-01T06:37:08.662107 | {
"authors": [
"DenL",
"Kouzukii",
"filliph"
],
"repo": "Kouzukii/ffxiv-priceinsight",
"url": "https://github.com/Kouzukii/ffxiv-priceinsight/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
51353279 | Accept UIAppearance
Added capacity to customize layout using UIAppearance
Now you have one alternative to customize TSMessageView, you can use UIAppearance.
These are the properties
[[TSMessageView appearance] setTitleFont:[UIFont boldSystemFontOfSize:6]];
[[TSMessageView appearance] setTitleTextColor:[UIColor redColor]];
[[TSMessageView appearance] setContentFont:[UIFont boldSystemFontOfSize:10]];
[[TSMessageView appearance] setContentTextColor:[UIColor greenColor]];
[[TSMessageView appearance] setErrorIcon:[UIImage imageNamed:@"NotificationButtonBackground"]];
[[TSMessageView appearance] setSuccessIcon:[UIImage imageNamed:@"NotificationButtonBackground"]];
[[TSMessageView appearance] setMessageIcon:[UIImage imageNamed:@"NotificationButtonBackground"]];
[[TSMessageView appearance] setWarningIcon:[UIImage imageNamed:@"NotificationButtonBackground"]];
@KrauseFx what are your thoughts on this PR? I think it's a really good approach to allowing custom design...
+1 @KrauseFx
Any updates on this?
Looks great, thanks for the pull request :+1:
Could you just fix the merge conflicts:
We can’t automatically merge this pull request.
Thanks!
Thanks!
@KrauseFx Merged with 'master'.
Thanks @diogomaximo for working on this! :+1:
And sorry it took so long :disappointed:
| gharchive/pull-request | 2014-12-08T21:16:14 | 2025-04-01T06:37:08.679474 | {
"authors": [
"KrauseFx",
"diogomaximo",
"rodrigocotton",
"shams-ahmed"
],
"repo": "KrauseFx/TSMessages",
"url": "https://github.com/KrauseFx/TSMessages/pull/196",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2446361061 | Third-party OpenAI SDK support question
Hi there.
The official API is really hard to deal with. The signup page no longer gives free credit, and it now requires a credit card. But third-party keys are still reasonably priced.
Could you add support for third-party APIs?
Maybe add an API server setting.
Or could you tell me where in the source code I can change it?
You can change it in the config file; there is a field with something like "base url" in its name.
| gharchive/issue | 2024-08-03T13:49:38 | 2025-04-01T06:37:08.694401 | {
"authors": [
"KroMiose",
"lixz123007"
],
"repo": "KroMiose/nonebot_plugin_naturel_gpt",
"url": "https://github.com/KroMiose/nonebot_plugin_naturel_gpt/issues/200",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
459977027 | merge package from arlac77/npm-package-template-esm-only
package.json
chore(package): c8@^5.0.2
chore(scripts): cover@#overwrite c8 --temp-directory build/tmp ava && c8 report -r lcov -o build/coverage --temp-directory build/tmp
chore(scripts): posttest@markdown-doctest
Coverage increased (+2.6%) to 76.225% when pulling e08caa427485371143fa8f11a0b04137a1bc1371 on npm-template-sync-1 into e00dc2d82fc7f15442bf6496a2ac0446b5434eea on master.
:tada: This PR is included in version 3.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2019-06-24T16:08:50 | 2025-04-01T06:37:08.710611 | {
"authors": [
"arlac77",
"coveralls"
],
"repo": "Kronos-Integration/kronos-endpoint",
"url": "https://github.com/Kronos-Integration/kronos-endpoint/pull/538",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
522793160 | merge from arlac77/npm-package-template-esm-only
.gitignore
chore(git): update .gitignore from template
Coverage remained the same at 86.377% when pulling d883d70f47ea4a69b603338ff2d49cce3a47c5f7 on npm-template-sync/1 into 2e2d5bfb6aac23bb2cef3b2538c62802bac1399b on master.
Codecov Report
Merging #599 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #599 +/- ##
=======================================
Coverage 84.97% 84.97%
=======================================
Files 3 3
Lines 892 892
Branches 62 62
=======================================
Hits 758 758
Misses 134 134
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2e2d5bf...d883d70. Read the comment docs.
:tada: This PR is included in version 3.1.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2019-11-14T11:18:44 | 2025-04-01T06:37:08.719037 | {
"authors": [
"arlac77",
"codecov-io",
"coveralls"
],
"repo": "Kronos-Integration/kronos-endpoint",
"url": "https://github.com/Kronos-Integration/kronos-endpoint/pull/599",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
544002661 | merge from arlac77/npm-package-template-esm-only
README.md
docs(README): update from template
:tada: This PR is included in version 2.0.13 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2019-12-30T20:29:53 | 2025-04-01T06:37:08.722059 | {
"authors": [
"arlac77"
],
"repo": "Kronos-Integration/service-logger-gelf",
"url": "https://github.com/Kronos-Integration/service-logger-gelf/pull/672",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
} |
700308259 | merge from arlac77/template-github-action,arlac77/template-kronos-component
README.md
docs(README): update from template
Codecov Report
Merging #820 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #820 +/- ##
=======================================
Coverage 28.94% 28.94%
=======================================
Files 1 1
Lines 76 76
Branches 1 1
=======================================
Hits 22 22
Misses 54 54
Continue to review full report at Codecov.
Powered by Codecov. Last update 549f308...3bd81d1. Read the comment docs.
| gharchive/pull-request | 2020-09-12T17:14:33 | 2025-04-01T06:37:08.727829 | {
"authors": [
"arlac77",
"codecov-commenter"
],
"repo": "Kronos-Integration/service-logger-gelf",
"url": "https://github.com/Kronos-Integration/service-logger-gelf/pull/820",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
} |
397909282 | merge package from arlac77/npm-package-template
package.json
chore(package): rollup-plugin-executable@^1.4.2
chore(scripts): cover@#overwrite c8 --temp-directory build/coverage ava && c8 report -r lcov --temp-directory build/coverage
chore(package): add nyc from template
chore(package): set $.ava.require='esm' as in template
chore(package): set $.ava.files='tests/-test.js,tests/-test.mjs' as in template
chore(package): set $.ava.extensions='js,mjs' as in template
:tada: This PR is included in version 2.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2019-01-10T16:24:58 | 2025-04-01T06:37:08.734588 | {
"authors": [
"arlac77"
],
"repo": "Kronos-Integration/service-uti",
"url": "https://github.com/Kronos-Integration/service-uti/pull/423",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
} |
949665122 | Support for Envoy's Ext Authz Dynamic Metadata
Right now, the only way to retrieve information from the auth pipeline is through the Festival Wristbands and their support for custom claims. This will probably not play well for the Envoy feature for emiting "dynamic metadata" from the external authorization to be consumed by other filters (e.g. rate limit). See External Authorization Dynamic Metadata for more info.
Ideas for the implementation
To introduce a new phase to the auth pipeline, to be defined by the user in the CR. This phase will build a JSON (map[string]string) with entries declared in the CR and dynamically resolved in the auth pipeline, similarly to how it's done for wristband custom claims. See https://github.com/Kuadrant/authorino/blob/7612c4965ba08d1edee4ac410bbb35aad9392953/pkg/config/wristband.go#L67-L70 and https://github.com/Kuadrant/authorino/blob/7612c4965ba08d1edee4ac410bbb35aad9392953/pkg/config/wristband.go#L125-L132
Another possible implementation could be by continuing relying on the wristband to be the vessel but setting a different location for it to be passed back (#113), perhaps enhanced with additional support for non-signed and non-encoded wristbands issued. I'm afraid that this might be an overuse of the wristband feature though.
A more concrete implementation idea for this:
To define a new (final) phase for the auth pipeline called response, i.e. another array of evaluators (or "response configs"), just like we have for the other phases (identity, metadata and authorization), to be cached within the APIConfig object as ResponseConfigs []common.AuthConfigEvaluator.
After authorization phase is finished (and successful), the AuthPipeline would call the evaluators of the response phase (concurrently between them, as usual), handling the evaluated objects like it does for the other 3 phases, i.e. storing them in a map Response map[*config.ResponseConfig]interface{}.
The wristband issuer would become a type of evaluator of the response phase. Another type of response evaluator would be the DynamicMetadata evaluator:
type ResponseConfig struct {
Name string `yaml:"name"`
Wristband *response.WristbandIssuer `yaml:"wristband,omitempty"`
DynamicMetadata *response.DynamicMetadata `yaml:"dynamicMetadata,omitempty"`
}
with
type DynamicMetadata struct {
Name string
Value struct {
Static string
FromJSON string
}
}
evaluateAllAuthConfigs strategy would be used at the response phase. Once finished with all evaluators of the phase and returned control to AuthPipeline.Evaluate(), the pipeline would then build the AuthResult object, now:
type AuthResult struct {
Code rpc.Code
Message string
Headers []map[string]string
DynamicMetadata interface{} // or perhaps `map[string]interface{}`
}
Out of the box, this implementation would enable having multiple wristbands issued at the end of an auth pipeline instead of just one (if that even makes sense for any use case), as well as multiple “dynamic metadata” objects (probably to be merged into a single one before responding back to Envoy).
Something that would make this even more interesting would be modifying the AuthPipeline so the evaluated objects returned in the authorization phase also end up in the authorization JSON, i.e. an extension of what goes in https://github.com/Kuadrant/authorino/blob/7612c4965ba08d1edee4ac410bbb35aad9392953/pkg/service/auth_pipeline.go#L338
So the response evaluators (wristband issuer and dynamic metadata) could select values from the authorization phase as well (aside from the already possible ones identity and metadata). This could open up for solving #109.
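To make the proposed flow concrete, here is a minimal Python sketch of the response phase (illustrative only; the function and config shapes are assumptions, not Authorino's actual Go API):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_response_phase(response_configs, authorization_json):
    """Run all response evaluators concurrently (the evaluateAllAuthConfigs
    strategy) and merge their outputs into a single dynamic-metadata map.

    Each config is a (name, evaluate) pair; evaluate receives the
    authorization JSON built during the identity/metadata/authorization
    phases.
    """
    dynamic_metadata = {}
    with ThreadPoolExecutor() as pool:
        results = pool.map(
            lambda cfg: (cfg[0], cfg[1](authorization_json)), response_configs)
        for name, value in results:
            # Merge into a single object before responding back to Envoy.
            dynamic_metadata[name] = value
    return dynamic_metadata
```

In this sketch, a DynamicMetadata-style evaluator would resolve either a static value or a path into the authorization JSON, while a wristband-style evaluator would return the issued token instead.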
| gharchive/issue | 2021-07-21T12:39:49 | 2025-04-01T06:37:08.751491 | {
"authors": [
"guicassolato"
],
"repo": "Kuadrant/authorino",
"url": "https://github.com/Kuadrant/authorino/issues/138",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2398193687 | authpolicy_controller_test.go test reads resources that it does not watch.
I updated a fork of the operator to log whenever a reconcile gets a resource it does not watch, and these are the resources that authpolicy_controller_test.go gets without a watch:
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-79n5x, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-knkzp, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-knkzp, name=test-placed-gateway-istio
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-hrjhn, name=test-placed-gateway-istio
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-79n5x, name=test-placed-gateway-istio
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-79n5x, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-79n5x, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-knkzp, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-79n5x, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-hrjhn, name=test-placed-gateway-istio
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-knkzp, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-hrjhn, name=test-placed-gateway-istio
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-79n5x, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-knkzp, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-79n5x, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-knkzp, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-fq6kh, name=test-placed-gateway-istio
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-fq6kh, name=test-placed-gateway-istio
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-fq6kh, name=test-placed-gateway-istio
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-hrjhn, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fgfm7, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-fgfm7, name=test-placed-gateway-istio
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fgfm7, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fgfm7, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-8l6fd, name=test-placed-gateway-istio
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-8l6fd, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-8l6fd, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=/v1, Kind=Service, namespace=test-namespace-8l6fd, name=test-placed-gateway-istio
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fgfm7, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fgfm7, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fgfm7, name=on-test-placed-gateway-using-toystore-route
Get without watch: gvk=security.istio.io/v1beta1, Kind=AuthorizationPolicy, namespace=test-namespace-fq6kh, name=on-test-placed-gateway-using-toystore-route
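The instrumentation described at the top of this issue can be sketched as a thin wrapper around the client (Python pseudocode of the idea; the class and method names are invented, not the actual controller-runtime API):

```python
class WatchTrackingClient:
    """Record Gets for GVKs that the controller has no watch on."""

    def __init__(self, watched_gvks, delegate=None):
        self.watched = set(watched_gvks)   # GVKs registered with a watch
        self.delegate = delegate           # the real client, if any
        self.unwatched_gets = []

    def get(self, gvk, namespace, name):
        if gvk not in self.watched:
            # Produces the "Get without watch" lines listed above.
            self.unwatched_gets.append(
                "Get without watch: gvk=%s, namespace=%s, name=%s"
                % (gvk, namespace, name))
        if self.delegate is not None:
            return self.delegate.get(gvk, namespace, name)
```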
Fixed as part of the SOTW changes (https://github.com/Kuadrant/kuadrant-operator/pull/952).
| gharchive/issue | 2024-07-09T13:14:04 | 2025-04-01T06:37:08.755795 | {
"authors": [
"chirino",
"guicassolato"
],
"repo": "Kuadrant/kuadrant-operator",
"url": "https://github.com/Kuadrant/kuadrant-operator/issues/749",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1796198237 | Improvements/fixes to the E2E CI workflow
Some improvements/fixes to the E2E CI workflow:
Add the contributor "role" to those that don't require approval. I have reviewed the PRs after the e2e workflow was in place and have seen that, even though we are all members of the Kuadrant org, in the pull_request event we are marked as contributors.
Rename the GH environments so they are clearly identified as part of the e2e workflow
Merge all jobs into one using several steps instead
/cc @david-martin
/cc @mikenairn
/lgtm
/approve
/hold
Holding until we're happy the 2 new e2e environments are ready
@david-martin both e2e-external and e2e-internal are created, the external one with the required approval process
/unhold
| gharchive/pull-request | 2023-07-10T07:53:43 | 2025-04-01T06:37:08.759685 | {
"authors": [
"david-martin",
"roivaz"
],
"repo": "Kuadrant/multicluster-gateway-controller",
"url": "https://github.com/Kuadrant/multicluster-gateway-controller/pull/318",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2018500267 | Please help! I can't access "app_data"; only "documents" works.
Describe the bug
Firstly, I made a container ID with "a".
When I call CloudStorage.appendFile(path, content), I get an error like this:
ERR_DIRECTORY_NOT_FOUND app_data
But after CloudStorage.setDefaultScope("documents"), it works.
At first, I thought it couldn't be found because the bundle identifier and the container ID were named differently.
My bundle identifier was com.a-test, and I realized that the cloud container Id was set to iCloud.com.a, so I created a new container Id iCloud.com.a-test.
However, app_data is still not found.
I don't want the saved data to be visible to the user.
Please help.
(I had already checked that iCloud is available using your method.)
Environment:
Device: Mobile
OS: iOS 17.0
Unfortunately, I'm not able to reproduce this on my end. The container ID indeed needs to be iCloud.com.a-test when the bundle identifier is com.a-test, so that might've caused the initial trouble. Beyond that, this seems like a configuration issue I can't diagnose on my end.
If you can provide a repository with a minimal reproducible example, I should be able to help you further.
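As a tiny illustration of the bundle-ID/container-ID naming rule discussed above (the helper function is made up for illustration; it is not part of the library's API):

```python
def expected_container_id(bundle_id):
    """The iCloud container ID needs to be the bundle ID prefixed with
    "iCloud.", e.g. com.a-test -> iCloud.com.a-test as noted above."""
    return "iCloud." + bundle_id
```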
| gharchive/issue | 2023-11-30T11:59:28 | 2025-04-01T06:37:08.764023 | {
"authors": [
"YoonJeongLulu",
"mfkrause"
],
"repo": "Kuatsu/react-native-cloud-storage",
"url": "https://github.com/Kuatsu/react-native-cloud-storage/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
656343223 | Implementing the skill card
For skills, implement a skill card and lay the cards out with grid CSS
ref: #3
done: #4
| gharchive/issue | 2020-07-14T05:44:30 | 2025-04-01T06:37:08.766775 | {
"authors": [
"Kudoas"
],
"repo": "Kudoas/Portfolio",
"url": "https://github.com/Kudoas/Portfolio/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
305425252 | yankpad-map direct bind?
Hi again!
So as you can see, I'm doing my yankpad spring cleaning, so one more question. Hope you don't mind :)
Following your great advice in issue #28, I started using yankpad-map.
For several reasons I would like to use them via hydras, so I started playing around with trying to get a function for each yankpad-map and came up with something like this (spoilers: I can't code :)):
(defun zzzzxxxx ()
  " insert yankpad o "
  (interactive)
  (yankpad-map "o"))
Now this clearly doesn't work, since I don't know how to instruct Emacs to press 'o' after I launch yankpad-map via the function.
any tips on how to do so?
thx!
Z
It seems like #32 may be of interest to you, since that adds hydra-like functionality to yankpad-map. If you want to do what you describe, you could use something like this:
(defun zzzzxxxx ()
  " insert yankpad o "
  (interactive)
  (setq unread-command-events (listify-key-sequence (kbd "o")))
  (yankpad-map))
You can now use yankpad-map-simulate for this.
| gharchive/issue | 2018-03-15T05:59:18 | 2025-04-01T06:37:08.777568 | {
"authors": [
"Kungsgeten",
"zeltak"
],
"repo": "Kungsgeten/yankpad",
"url": "https://github.com/Kungsgeten/yankpad/issues/29",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
174910952 | Add dev branch badges
Added specific branch badges in order to provide a quick summary to people
I'm fixing a conflict; I accidentally merged a commit from a gitter badge bot which broke everything. The reversion checks have to pass, so give it a few minutes.
| gharchive/pull-request | 2016-09-03T18:33:48 | 2025-04-01T06:37:08.782475 | {
"authors": [
"Flarp",
"Kurimizumi"
],
"repo": "Kurimizumi/Honeybee-Hive",
"url": "https://github.com/Kurimizumi/Honeybee-Hive/pull/15",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
511384187 | Teensy 3.6 vs Teensy 4.0 DMA Speed and frameCount()
Hi.
I've recently been working on migrating a project from a Teensy 3.6 to a Teensy 4. I'm noticing the Teensy 3.6 is much faster updating the screen using DMA.
It looks like the SPI clock speed setting for the Teensy 4 is not ignored, but capped at a fairly slow speed. I've tried allowing the library to allocate the frame buffer as well as allocating it myself using DMAMEM.
I'm wondering if anyone else is seeing similar speed differences? Any workarounds?
Thanks in advance for the help.
Arduino 1.8.10
Teensyduino 1.48
Teensy 3.6
240 Mhz
#define ILI9341_SPICLOCK 30000000
frameCount() = 24 fps
#define ILI9341_SPICLOCK 60000000
frameCount() = 45 fps
Teensy 4.0
600MHz
#define ILI9341_SPICLOCK 144000000u
frameCount() = 28 fps
#define ILI9341_SPICLOCK 72000000u
frameCount() = 28 fps
#define ILI9341_SPICLOCK 36000000u
frameCount() = 19 fps
#define ILI9341_SPICLOCK 18000000u
frameCount() = 12 fps
#define ILI9341_SPICLOCK 999999999u
frameCount() = 28 fps
Sorry, I don't have too much time to look into this.
If you need higher SPI speeds, then maybe you need to change which clock is used to control SPI.
In particular what is the setting for CCM_CBCMR register.
I think by default we choose the second clock:
CCM_CBCMR = (CCM_CBCMR & ~(CCM_CBCMR_LPSPI_PODF_MASK | CCM_CBCMR_LPSPI_CLK_SEL_MASK)) |
            CCM_CBCMR_LPSPI_PODF(6) | CCM_CBCMR_LPSPI_CLK_SEL(2); // pg 714
Actually it is now page 1112...
Which, from our beginTransaction code, looks like it starts off with a 528MHz clock going into the SPI subsystem. If you try changing that (2) to (1), or change it to (1) after begin is called, then I believe it will feed a 720MHz clock into SPI....
Also need to look at the PODF fields of that as well...
You might ask these types of questions on the Forum, and maybe I will have some time to look again...
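To make the clock arithmetic concrete, here is a small Python sketch using only the two source frequencies mentioned above (CLK_SEL 2 = 528 MHz, CLK_SEL 1 = 720 MHz) and the usual divide-by-(PODF+1) semantics of i.MX clock dividers; both are assumptions to verify against the reference manual:

```python
# Assumed LPSPI clock sources per the discussion above; only CLK_SEL values
# 1 and 2 are stated in this thread, so only those are listed here.
LPSPI_CLK_SOURCES_MHZ = {1: 720.0, 2: 528.0}

def lpspi_root_clock_mhz(clk_sel, podf):
    """Clock fed into the SPI subsystem for the given CCM_CBCMR field values,
    assuming the PODF field divides the source by (podf + 1)."""
    return LPSPI_CLK_SOURCES_MHZ[clk_sel] / (podf + 1)
```

Under those assumptions, the default PODF(6) with CLK_SEL(2) works out to 528 / 7, roughly 75 MHz into the SPI subsystem, and the same PODF with CLK_SEL(1) would give about 103 MHz.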
Hi Kurt,
I see there was some action in the PJRC forums regarding the SPI speed. Lucky timing for me.
I'm using your modified SPI library and setting #define ILI9341_SPICLOCK 80000000. frameCount() is around 61-62 fps. Very fast.
Thanks for the help and thank you so much for your libraries. I'll ask future questions in the PJRC forums.
| gharchive/issue | 2019-10-23T15:00:07 | 2025-04-01T06:37:08.791450 | {
"authors": [
"KurtE",
"mr-stivo"
],
"repo": "KurtE/ILI9341_t3n",
"url": "https://github.com/KurtE/ILI9341_t3n/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2567009725 | Title: Addition of a few basic programs like: contact book, note-making app, to-do list
Is your proposal related to a problem? Please describe.
Although the repository is very vast, it can be updated with a few basic programs in Python. I would like to contribute to this issue.
Would you please add the gssoc-ext and hacktoberfest-accepted tags so that I can immediately work on it.
Add any other context or screenshots about the proposal request here.
N/A
I am the mentor of this project.
Hello @Chin-may02
Can you please describe a little bit how the output will look after the update, and give screenshots?
Hello, Here you go
Contact Book
Note making- yet to code
To do list
@SakiraAli1115 There are more projects that I will be adding along with the ones I have mentioned above with a readme for the beginners to better understand python.
Do assign me this task along with the tags gssoc-ext, hacktoberfest and hacktoberfest-accepted along with the level that seems fit.
Hello @Chin-may02
It is a program, not an application. Please read our project and understand it. Thank you for your contribution; please open another issue proposing new concepts that do not already exist in the project!
@Kushal997-das Kindly close this issue since it already exists in our project.
| gharchive/issue | 2024-10-04T18:16:41 | 2025-04-01T06:37:08.796276 | {
"authors": [
"Chin-may02",
"SakiraAli1115"
],
"repo": "Kushal997-das/Project-Guidance",
"url": "https://github.com/Kushal997-das/Project-Guidance/issues/1366",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1641964121 | feat: impl schema query API.
1. Does this PR affect any open issues?(Y/N) and add issue references (e.g. "fix #123", "re #123".):
[ ] N
[x] Y
feat: #418
2. What is the scope of this PR (e.g. component or file name):
3. Provide a description of the PR(e.g. more details, effects, motivations or doc link):
[ ] Affects user behaviors
[ ] Contains syntax changes
[ ] Contains variable changes
[ ] Contains experimental features
[ ] Performance regression: Consumes more CPU
[ ] Performance regression: Consumes more Memory
[x] Other
4. Are there any breaking changes?(Y/N) and describe the breaking changes(e.g. more details, motivations or doc link):
[x] N
[ ] Y
5. Are there test cases for these changes?(Y/N) select and add more details, references or doc links:
[x] Unit test
[ ] Integration test
[ ] Benchmark (add benchmark stats below)
[ ] Manual test (add detailed scripts or steps below)
[ ] Other
kclvm/capi/src/service/service.rs
6. Release note
Please refer to Release Notes Language Style Guide to write a quality release note.
None
Pull Request Test Coverage Report for Build 4532034868
0 of 0 changed or added relevant lines in 0 files are covered.
1 unchanged line in 1 file lost coverage.
Overall coverage remained the same at 89.313%
Files with Coverage Reduction | New Missed Lines | %
compiler_base/parallel/src/executor/timeout.rs | 1 | 92.86%
Totals (change from base Build 4529330673): 0.0%
Covered Lines: 2106
Relevant Lines: 2358
💛 - Coveralls
| gharchive/pull-request | 2023-03-27T11:45:55 | 2025-04-01T06:37:08.808318 | {
"authors": [
"Peefy",
"coveralls"
],
"repo": "KusionStack/KCLVM",
"url": "https://github.com/KusionStack/KCLVM/pull/471",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1249390632 | test CLA
What problem does this PR solve?
Issue Number: close #issue-id
Problem Summary:
What is changed and how it works?
Check List
Tests
[ ] Unit test
[ ] Integration test
[ ] Manual test (add detailed scripts or steps below)
[ ] No code
Side effects
[ ] Performance regression: Consumes more CPU
[ ] Performance regression: Consumes more Memory
[ ] Breaking backward compatibility
Documentation
[ ] Affects user behaviors
[ ] Contains syntax changes
[ ] Contains variable changes
[ ] Contains experimental features
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.
None
Pull Request Test Coverage Report for Build 2390115733
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 46.461%
Totals (change from base Build 2390046878): 0.0%
Covered Lines: 2422
Relevant Lines: 5213
💛 - Coveralls
| gharchive/pull-request | 2022-05-26T10:29:16 | 2025-04-01T06:37:08.817501 | {
"authors": [
"chai2010",
"coveralls"
],
"repo": "KusionStack/kclvm-go",
"url": "https://github.com/KusionStack/kclvm-go/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
838537746 | java.lang.NullPointerException: firstElement must not be null
First of all, I like your library. It's amazing, and depending on Android's BottomNavigationView is great and useful.
I tried to use it, but I got this error:
java.lang.NullPointerException: firstElement must not be null
at com.github.kwasow.bottomnavigationcircles.BottomNavigationCircles.selectFirstItem(BottomNavigationCircles.kt:107)
at com.github.kwasow.bottomnavigationcircles.BottomNavigationCircles.init(BottomNavigationCircles.kt:57)
at com.github.kwasow.bottomnavigationcircles.BottomNavigationCircles.(BottomNavigationCircles.kt:44)
------------
my xml file
<com.github.kwasow.bottomnavigationcircles.BottomNavigationCircles
android:id="@+id/bottomNav"
android:layout_width="match_parent"
android:layout_height="wrap_content"
app:backgroundTint="@color/appDarkRed"
app:itemBackground="@color/appDarkGreen"
app:itemIconTint="@color/white"
app:layout_constraintBottom_toBottomOf="parent"
app:menu="@menu/bottom_nav_menu" />
Could you attach the menu xml file as well?
<item
android:id="@+id/navigationHome"
android:contentDescription="@string/home_home"
android:icon="@drawable/ic_home"
android:title="@string/home_home" />
<item
android:id="@+id/navigationMeals"
android:contentDescription="@string/home_meals"
android:icon="@drawable/ic_meals"
android:title="@string/home_meals" />
<item
android:id="@+id/navigationQr"
android:contentDescription="@string/home_qr"
android:icon="@drawable/ic_qr_code"
android:title="@string/home_qr" />
<item
android:id="@+id/navigationPackages"
android:contentDescription="@string/home_packages"
android:icon="@drawable/ic_packages"
android:title="@string/home_packages" />
<item
android:id="@+id/navigationMore"
android:contentDescription="@string/home_more"
android:icon="@drawable/ic_more"
android:title="@string/home_more" />
Try depending on the new commit and let me know if it's fixed.
dependencies {
implementation 'com.github.Kwasow:BottomNavigationCircles-Android:Tag'
}
I'll do a new release if it is fixed.
the same ..
java.lang.NullPointerException: firstElement must not be null
at com.github.kwasow.bottomnavigationcircles.BottomNavigationCircles.selectFirstItem(BottomNavigationCircles.kt:116)
at com.github.kwasow.bottomnavigationcircles.BottomNavigationCircles.onAttachedToWindow(BottomNavigationCircles.kt:60)
at android.view.View.dispatchAttachedToWindow(View.java:19553)
at android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3430)
at android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3437)
at android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3437)
at android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3437)
at android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3437)
at android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3437)
at android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3437)
at android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3437)
at android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3437)
at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:2028)
at android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:1721)
at android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:7598)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:966)
at android.view.Choreographer.doCallbacks(Choreographer.java:790)
at android.view.Choreographer.doFrame(Choreographer.java:725)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:951)
at android.os.Handler.handleCallback(Handler.java:883)
at android.os.Handler.dispatchMessage(Handler.java:100)
at android.os.Looper.loop(Looper.java:214)
at android.app.ActivityThread.main(ActivityThread.java:7356)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:492)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:930)
What are the operations you perform on the view in your code?
'com.github.Kwasow:BottomNavigationCircles-Android:c7b40feef9'
I mean do you do something to the view in your activity in java/kotlin?
Nothing at all. My activity is empty, and here's my XML:
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/container"
android:layout_width="match_parent"
android:layout_height="match_parent">
<fragment
android:id="@+id/nav_host_fragment"
android:name="androidx.navigation.fragment.NavHostFragment"
android:layout_width="match_parent"
android:layout_height="0dp"
app:defaultNavHost="true"
app:layout_constraintBottom_toTopOf="@id/bottomNav"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:navGraph="@navigation/mobile_navigation"
tools:ignore="FragmentTagUsage" />
<com.github.kwasow.bottomnavigationcircles.BottomNavigationCircles
android:id="@+id/bottomNav"
android:layout_width="match_parent"
android:layout_height="wrap_content"
app:backgroundTint="@color/appDarkRed"
app:itemBackground="@color/appDarkGreen"
app:itemIconTint="@color/white"
app:layout_constraintBottom_toBottomOf="parent"
app:menu="@menu/bottom_nav_menu" />
</androidx.constraintlayout.widget.ConstraintLayout>
The issue seems to be that the navigation view can't find the icon view on the first element. I've moved the layout listener to its parent, so maybe it won't cause any issues now. Please try com.github.Kwasow:BottomNavigationCircles-Android:7d1a269ca6
I got it. You are using com.google.android.material:material:1.3.0, which has com.google.android.material.bottomnavigation.BottomNavigationItemView as a public class,
but we are using the latest version, com.google.android.material:material:1.4.0-alpha01, which has BottomNavigationItemView as private.
Yep, seems that updating to 1.4.0-alpha01 does also cause issues on my builds. Thanks for the details, I'll see what I can do anyways
I think you should wait for a stable release of Material, because an alpha release's API can change in a coming release.
You can just advise users not to update the Material library for now.
I'll keep it on a separate branch and merge when it goes stable.
I've coded an update for the alpha update on the material-1.4.0-alpha1 branch.
If you need it you can try it out with:
implementation 'com.github.Kwasow:BottomNavigationCircles-Android:material-1.4.0-alpha1-SNAPSHOT' (which will always download the latest commit)
or
implementation 'com.github.Kwasow:BottomNavigationCircles-Android:material-1.4.0-alpha1-fb1570a749-1' (current latest commit)
| gharchive/issue | 2021-03-23T09:45:29 | 2025-04-01T06:37:08.833861 | {
"authors": [
"Kwasow",
"MoaazElneshawy",
"elmokadim"
],
"repo": "Kwasow/BottomNavigationCircles-Android",
"url": "https://github.com/Kwasow/BottomNavigationCircles-Android/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1714760415 | OVERLAPBLOCK (DESIGN 6)
Overlapblock (design 6)
Design an overlap block that looks exactly like the above image...
Create a file "OVERLAP_DESIGN_6.html" in the modules/overlapblock folder.
Use simple HTML and CSS and JAVASCRIPT to create the overlapblock. You can use the interface development https://kwickerhub.com (This project) or you can write the lines of code yourself.
Step by Step Guide.
Fork this project(Use the 'fork' button in the top right corner) and Clone your Fork.
git clone https://github.com/YOUR_USERNAME/frontend
Open your code Editor and Create a file "OVERLAP_DESIGN_6.html" in the "modules/overlapblock" folder of this project you just cloned. No need to create head, title and body tags, Just Add a div tag with some embedded style(i.e use the style tag) and the script tag where necessary.
If you want to add an image resource, please add it in the folder
"modules/overlapblock/images_and_icons"
We recommend you use an svg for your image/icon.
Push your Code: You need to push your recent changes back to the cloud. Use the command below in the main directory of this Repository
git push origin dev
or use a GUI tool to avoid mistakes or complexity. LOL.
Make your Pull Request...
Good-luck.
Please assign me this issue under SSOC 23.
Hello @MaverickDe, Please assign this issue to me. I've started working on it.
Go on
| gharchive/issue | 2023-05-17T23:03:56 | 2025-04-01T06:37:08.842268 | {
"authors": [
"Karansankhe",
"MaverickDe",
"Rishitha-VasiReddy"
],
"repo": "KwickerHub/frontend",
"url": "https://github.com/KwickerHub/frontend/issues/239",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
244436245 | .streamrole takes an unusually long time to execute.
.streamrole seems to take an unusually long time to execute (regularly 15+ seconds). I'm not sure if this is expected for the command's functionality or if it is a bug. Someone else was having this issue yesterday, and they claimed it would take minutes to execute.
It is not a bug. A recent patch to streamrole forces the bot to check right away whether everyone in the first role is streaming, and if they are, to give them the role right away (assuming they fulfill the preconditions).
I sped it up 5x now, but it may error out if there are too many users. I'm not sure about role add ratelimiting.
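The behavior described above, and why it scales with the size of the role, can be sketched like this (Python pseudocode; the function names are invented, not the bot's actual code):

```python
def apply_stream_role(members, is_streaming, fulfills_preconditions, add_role):
    """Check every member of the watched role right away and give the role
    to those who are streaming and meet the preconditions."""
    assigned = []
    for member in members:
        if is_streaming(member) and fulfills_preconditions(member):
            add_role(member)         # one role-add call per member, so a
            assigned.append(member)  # large role takes many seconds
    return assigned
```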
| gharchive/issue | 2017-07-20T17:17:22 | 2025-04-01T06:37:08.844454 | {
"authors": [
"Kwoth",
"QuantumToasted"
],
"repo": "Kwoth/NadekoBot",
"url": "https://github.com/Kwoth/NadekoBot/issues/1426",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
145209994 | Fixed pokemon
I have fixed the stats between the Pokemon types and added the Fairy type to the types!
great, thanks a lot
| gharchive/pull-request | 2016-04-01T15:27:09 | 2025-04-01T06:37:08.845663 | {
"authors": [
"Kwoth",
"LawlyPopz"
],
"repo": "Kwoth/NadekoBot",
"url": "https://github.com/Kwoth/NadekoBot/pull/171",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
978162921 | Running pyq in terminal resulting in exit .pyq.run line of code executing from python.q file
Questions
[ ] Which operating system are you using (if Linux, please provide flavour of it, i.e RedHat, CentOS or Ubuntu), is it 32-bit, or 64-bit?
Raspbian 32 bit
[ ] Which version of PyQ are you running? Please provide output of pyq --versions, if PyQ isn't operational, please provide Python interpreter version and PyQ version python -V; python3 -V; pip list | grep pyq:
Python 3.7.3
Python 3.7.3
pyq 5.0.0
[ ] Which version of kdb+ are you using, is it 32-bit or 64-bit?
32 bit
[ ] If on 64-bit, is your QLIC set? Please provide output env | grep QLIC on linux/macOS, or set|grep QLIC on Windows.
[ ] Did you use virtual environment to install PyQ? If not, why?
yes
[ ] Where is your QHOME? Please provide output env | grep QHOME on linux/macOS, or set|grep QHOME on Windows.
QHOME=/home/pi/virtualenv/q
[ ] Do you use Conda? If so, what version?
no
Steps to reproduce the issue
Expected result
Expected q/python interactive session in terminal
Actual result
(virtualenv) pi@raspberrypi:~/virtualenv $ pyq
[2] /home/pi/virtualenv/q/python.q:9: if[`python.q~last` vs hsym .z.f;exit .pyq.run .pyq.args]
^
Arrow ^ points to exit .pyq.run. Any ideas what to try from that point? What may have gone wrong on install process? Any suggestions very welcome - or if any Rasp Pi install guides available please share
Q works in terminal on my virtual environment.
Python version 3.7.3
pyq 5.0.0 installed using pip
Workaround
If you know workaround, please provide it here.
I don't think PyQ was ever tested on Raspberry Pi. Could you please provide output of pip install pyq in clean venv?
Thanks for above suggestion and resolution sashkab. I am getting access to existing work dev server so won't be pursuing further at this moment on my RPi 3.
| gharchive/issue | 2021-08-24T14:28:33 | 2025-04-01T06:37:08.852357 | {
"authors": [
"sashkab",
"stephenc-ie"
],
"repo": "KxSystems/pyq",
"url": "https://github.com/KxSystems/pyq/issues/144",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1641463567 | build clickhouse with address sanitizer failed
CMake Error at utils/clickhouse-dep/CMakeLists.txt:8 (add_custom_command):
Error evaluating generator expression:
$<TARGET_FILE:ch_contrib::jemalloc>
No target "ch_contrib::jemalloc"
Maybe we could add an if check for those cp commands.
A bunch of target files are cp'd here. The problem is that the build switches for these targets are not necessarily enabled, so the files may not exist. Take jemalloc as an example: if the build uses the address sanitizer, the jemalloc switch must be off, and in that case its target file does not exist.
There are two ways to fix this: one is to add build-switch checks; the other is to prefix each copy with an existence check, i.e. [[ -f ]] && cp xx xx.
The problem exists in the latest main branch of clickhouse_backend.
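The second option can be sketched as a copy-only-if-built guard; this is an illustrative Python version (the real fix would live in the CMake/shell copy step of utils/clickhouse-dep, and the file names used below are invented):

```python
import os
import shutil

def copy_if_built(src, dest):
    """Copy src to dest only when src actually exists, so optional
    targets (e.g. jemalloc disabled under the address sanitizer) do not
    break the packaging step. Returns whether a copy happened."""
    if os.path.isfile(src):
        shutil.copy(src, dest)
        return True
    return False
```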
| gharchive/issue | 2023-03-27T06:26:06 | 2025-04-01T06:37:08.858036 | {
"authors": [
"taiyang-li"
],
"repo": "Kyligence/ClickHouse",
"url": "https://github.com/Kyligence/ClickHouse/issues/381",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1938589518 | 1.20.3
This is a branch to gather work on supporting features and representation changes from Minecraft 1.20.3.
Relevant changelogs:
23w40a
So, my current thought for handling version differences is, we should have a 'feature flag' system, where things like 'emit legacy hover event' or 'emit uuid as ints' vs 'emit uuid as string' are toggleable options. We'd need to produce presets for compatibility with different game versions, plus probably a 'latest' and 'most compatible' levels.
Should this be game versions? Then how do we handle snapshots?
Alternately, we could mark these revisions by:
datapack versions
protocol versions
data versions
Thoughts on what makes most sense?
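For illustration, the feature-flag idea above could look like independent booleans bundled into named presets; this is a hypothetical Python sketch (adventure itself is Java, and the flag and preset names here are invented, not the library's real API):

```python
# Invented flag and preset names, for illustration only.
PRESETS = {
    "latest": {
        "emit_legacy_hover_event": False,
        "emit_uuid_as_int_array": True,
    },
    "most_compatible": {
        "emit_legacy_hover_event": True,
        "emit_uuid_as_int_array": False,
    },
}

def serializer_options(preset, **overrides):
    """Start from a named preset, then let callers toggle single flags."""
    return {**PRESETS[preset], **overrides}
```

Presets keyed by game version, data version, or protocol version could be layered on top of the same flag set, which is one way to sidestep the snapshot-naming question.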
We still need to implement:
[ ] handling for the type field
[ ] NBT component serialization (or DFU integration)
[ ] matching the strictness of Vanilla serialization
but I think it's worth merging this as-is just to have published snapshots that others can depend on.
| gharchive/pull-request | 2023-10-11T19:38:06 | 2025-04-01T06:37:08.862465 | {
"authors": [
"zml2008"
],
"repo": "KyoriPowered/adventure",
"url": "https://github.com/KyoriPowered/adventure/pull/986",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
293801882 | Doesn't work with google drive
After giving permission it returns with "Authentication failed!"
Can you specifiy where are you receiving that error?
I followed the steps here https://github.com/Kyrodan/KeeAnywhere/wiki/Getting-Started#usage and I was able to add my Google account and save my DB in GDrive.
"In this dialog click on Add... and chose your Cloud Provider from the drop-down list"... choosing Google Drive and entering the right password, KeePass (and not GDrive) responds with "auth failed"...
Hi,
Look in the Plugins directory if you have a file called KeeAnywhere.dll. If you have it, close keepass, delete this file, and try again.
Regards
Maybe a proxy issue? If you use a proxy, please check out https://github.com/Kyrodan/KeeAnywhere/wiki/Advanced-Topics#using-a-proxy
Please also try the recently released v1.5.1 which contains a fix for better handling slow connections/long running up-/downloads.
Hmm, interesting. I haven't changed anything for Proxy-Settings in 1.5.x. If it works in 1.4.1 it should work in 1.5.0/.1, too.
Will double-check, whether Google-Api has changed (and/or KeePass itself).
Hello, I realized the same problem after updating from KeePass version 2.39.1 and KeeAnywhere-1.4.1.plgx to any newer version. Today I installed again both mentioned versions and the opening/saving from GoogleDrive works fine again. I use the same proxy.
With KeePass-2.42 and KeeAnywhere-1.5.1 I get the following message:
Authentication failed!
An error occurred while sending the request.
Google.Apis.Core
at Google.Apis.Http.ConfigurableMessageHandler.d__59.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Google.Apis.Auth.OAuth2.Requests.TokenRequestExtenstions.d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Google.Apis.Auth.OAuth2.Flows.AuthorizationCodeFlow.d__35.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Google.Apis.Auth.OAuth2.Flows.AuthorizationCodeFlow.d__30.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at KeeAnywhere.StorageProviders.GoogleDrive.GoogleDriveStorageConfigurator.d__c.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at KeeAnywhere.OAuth2.OAuth2Form.d__3.MoveNext()
Void MoveNext()
Please review.
Thanks René
Hello,
did you find time for double-checking?
We still have the same problem.
With the new versions it shows the Authentication failed thing.
Maybe duplicate of #123 .
| gharchive/issue | 2018-02-02T07:54:02 | 2025-04-01T06:37:08.872997 | {
"authors": [
"Kyrodan",
"MrSnooze",
"dariocdj",
"marcorichetta",
"ucerotk"
],
"repo": "Kyrodan/KeeAnywhere",
"url": "https://github.com/Kyrodan/KeeAnywhere/issues/121",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
598350594 | Restrictive search behavior for defense values
🐞 bug report
Hi! I'm a new user of PoE-Overlay and really liking it so far. Thanks for developing this tool!
Here's a report for an adverse behavior I noticed, which I wasn't able to work around. I'm assuming it's a bug (apologies if I'm mistaken), but if not, then this could be a feature request instead.
📝 Description
When filtering searches based on total defense values (AR/EV/ES), specifying the range doesn't work the same way as in other modifiers. This is unintuitive, and forces search based on flat/percent/hybrid defense modifiers, which is not very useful for e.g. rare items.
The minimum value is capped at the raw defense value(s) for the given item base (even if the item base is not filtered for). The maximum value can't be increased beyond the current modified defense value(s) of the item.
To Reproduce
Bring up a search for any armor piece, and scroll up/down on the value range for defenses. In the example in the screenshot, the range can't be expanded further than 246 (Slink Boots base EV) ~ 490 (the current EV on the item) in either direction.
Expected behavior
I would expect range specification for defenses to work in exactly the same way as modifiers. The user should be able to search in the range (#, current value) ~ (current value, #), with the corresponding handles on the configuration interface to set default search behavior.
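The expected behavior could be sketched as deriving the default search range from the item's current value plus two user-configurable percentage offsets (a hypothetical illustration in Python; not the app's actual code):

```python
def search_range(value, min_pct, max_pct):
    """Return (min, max) search bounds around the item's current value,
    e.g. min_pct=-10, max_pct=0 searches from 10% below up to the value."""
    return (value * (1 + min_pct / 100), value * (1 + max_pct / 100))
```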
Screenshots
🌍 My Environment
OS: Windows 10 Pro x64
Version: 0.6.20
PoE: Steam 3.10.1c English
Thank you so much for posting this. I am in a similar boat. I love this program and want to use it. However, not being able to auto select a defensive value like energy shield and setting the minimum value to like -10% and max value to uncapped is the biggest hurdle that is keeping me from being able to use this program exclusively over poe trade macro.
0.6.21 (2020-04-12)
add own min max range settings for properties
add preselect attack and defense as setting
remove quality min/ max restriction (#611)
Thank you so much for this update. This is an amazing tool and these changes are huge! I feel like this is the best tool out there now and there's no reason to use anything else. The ability to set min-max ranges separately for defensive values and stat values is also a great feature. Thank you!
You're welcome! Thanks for your positive feedback. Closed.
| gharchive/issue | 2020-04-11T21:02:50 | 2025-04-01T06:37:08.881186 | {
"authors": [
"Kyusung4698",
"tragicnate",
"ymyt"
],
"repo": "Kyusung4698/PoE-Overlay",
"url": "https://github.com/Kyusung4698/PoE-Overlay/issues/611",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2336978681 | Input doesn't seem to work
Since this is a very small project, I'm not actually sure how to go ahead and report a bug, What I do know is that BeesNES doesn't seem to take any inputs from my keyboard, and when I do put in new input configs, they stay temporarily (for as long as the program is running) but disappear on restart. Any idea what's up?
Someone else was having such an issue, and I suspect the reason is because
there is something on your system that registers as a game controller.
When a controller is plugged in, input uses that instead of the keyboard.
This is all temporary code; your key settings disappear because they are
not saved to a settings file. In the final version it will check your
controller settings and keyboard settings so this won't happen, but I still
want to know what is causing it to think there is a controller attached
when there seemingly isn't. The other person said he had no controllers
attached.
If you are building the source, can you have it print the names of the
controllers it is detecting?
In the final version, you will be able to specify at least 4 devices for
input, and each device will be polled in order until a key-press is found,
so this won't happen.
So I'm building the source, but I'm unfortunately really not adept in C++ at all- I'm mostly really interested in this project and wanted to have a look, but to be honest, I don't really know where to begin, most of what I do is C#. Would a simple printf work? Should I use one for each function in Input that enumerates controllers? Apologies for my ineptitude.
Find this function (…\BeesNES\Src\Input\LSNDirectInput8.cpp):

BOOL PASCAL CDirectInput8::DIEnumDevicesCallback_GatherDevices( LPCDIDEVICEINSTANCEW _lpdDi, LPVOID _pvRef ) {

Change it to:

BOOL PASCAL CDirectInput8::DIEnumDevicesCallback_GatherDevices( LPCDIDEVICEINSTANCEW _lpdDi, LPVOID _pvRef ) {
    std::vector<DIDEVICEINSTANCEW> * pvVector = static_cast<std::vector<DIDEVICEINSTANCEW> *>( _pvRef );
    pvVector->push_back( (*_lpdDi) );
    ::OutputDebugStringW( (*_lpdDi).tszProductName );
    ::OutputDebugStringW( L"\r\n" );
    return DIENUM_CONTINUE;
}
You can also breakpoint that line to see what it prints if you run in the
debugger (via hitting F5).
So, oddly, that line doesn't seem to be called at all when I run it in the debugger. When I breakpoint it, the breakpoint never trips, and nothing prints to the debug window that looks like a product name.
Try the latest commit. It polls the keyboard even if a controller is
detected.
Still nothing, so far. Buttons are useless, but the wrap_oal problem disappeared, so that was probably on my end. Interestingly, when I do hook up a controller, it picks up on it just fine, but then I can't assign inputs with it at all.
I wanted to redo the CPU and see if that was an issue.
This is now CPU #3. It’s like the original CPU but refined, sleek, and with no hidden bugs outside of what I already know needs to be fixed.
CPU #2 had all kinds of issues and passed very few tests. It might have been causing input-polling issues.
So try things now.
Tried now. Unfortunately input is still broken. No errors, though. Sorry for such a late response.
regsvr32 oleaut32.dll
Try this to address your error. Run with admin permissions.
Does it respond to your keyboard when you configure a controller?
Try running in compatibility mode.
Registering oleaut32.dll did not seem to help. More concerningly, I can't configure a controller at ALL, and none show up in the input devices list. Running in compatibility mode didn't help either.
Anything I can do on this end to make the code give more information as to what's wrong? This dumbfounds me.
Nothing shows in the list yet and those boxes don’t respond to controllers yet.
For now I am only trying to see if your keyboard can at least be recognized.
Input is polled here:
https://github.com/L-Spiro/BeesNES/blob/main/Src/Windows/MainWindow/LSNMainWindow.cpp#L1038
You could add print-outs to see if ::GetAsyncKeyState() returns anything for a key you know you are pressing.
https://github.com/L-Spiro/BeesNES/blob/main/Src/Windows/MainWindow/LSNMainWindow.cpp#L1587
This is where controllers are searched. You can try to follow it in any of the debug builds and check where it fails to gather your controller.
ChatGPT can help you adjust the code so that it finds your controller and then you can tell me what you did if it works.
Found it!
So the issue is that the controller dialog appears to not have any effect on what keys are actually recognized. The hardcoded keys starting at line 1158 of LSNMainWindow.cpp work fine (though a little futzing was needed to replace VK_OEM_1 with K; I assume that key's nice and handy on your keyboard). Aside from that it's a bit crunchy-sounding (significant slowdown on an R9 7940HS), but that's probably my issue to debug and not yours. Though it IS absolutely tapping out one of my cores.
My laptop seems much lower-spec than that and it can run at up to 90 FPS.
A lot of work goes into the visuals and audio. How well does Options -> Video Filter -> None work?
It might be a difference in AVX capabilities and I may have chosen bad defaults for my L. Spiro Filter for your AVX support since my own support may vary.
Audio may also be playing a part:
https://github.com/L-Spiro/BeesNES/blob/main/Src/Apu/LSNApu2A0X.h#L187
Try changing that * 3 to * 6 or something.
When you load a ROM, you will see this in Visual Studio: Kernel size: 95.
Change * 3 to something else to lower that number. Increasing it should lower that value but I may be misremembering which part of the equation that is influencing, so it might need to be lowered instead. Check the debug print to confirm.
My laptop has AVX, AVX2, and AVS-512, but not AVX10. Disabling the filter significantly increases performance, but still doesn't hit stable FPS.
Bumping up that value to 6 did not meaningfully improve performance, but it did push the kernel size to 191. However, I can confirm that the L. Spiro filter runs far worse than with no filter or the Blargg filter. Using it to artificially slow the system down creates a sound best described as the following, where | stands for a good audio sample and - stands for silence:
|---|---|---|---
The slices are extremely small, but there's definitely an audible distance between them. Not sure if that'll help much, but I wanted to report it.
Bumping up that value to 6 did not meaningfully improve performance, but it did push the kernel size to 191.
That number is supposed to go down. Higher means worse performance.
Change it to * 1.
The audio buffers are 16 samples long. This will be configurable later but for now I want it to be as close to real-time as possible.
All-in-all this is supposed to run on medium-end hardware even with some settings cranked up, but I am going to just have to get some more hardware for testing on my end.
https://github.com/L-Spiro/BeesNES/blob/main/Src/Audio/LSNAudioBase.h#L142
You can also decrease the output rate by changing both of those 44100’s to something else.
Even with the kernel shrunk to 31, performance appears similar to what it initially was. I think something other than audio is eating cycles.
What are the specs of your laptop? If it's Intel, that may explain how optimizations that work excellently on it seem to crash and burn on an AMD system. I can try to get access to another Intel system and test it there to see if the performance is linked to manufacturer.
I was thinking it is the AMD part too, because your specs are way better than mine:
11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz.
I will need to get an AMD machine unless you are able to find bottlenecks.
Performance might improve once I add GPU support, but if the “None” filter is still slow then it seems unlikely to be the only issue.
Eating an entire core is by design, but there may be something about that design that just doesn’t play well with AMD.
I ran the performance profiler- it's a bit clumsy, but here are my results:
Uploading Report20241003-1059.zip… (GitHub didn't like the report, so I had to put it in a zip file.)
I always hate to doublepost, but I think I found the bottleneck:
https://github.com/L-Spiro/BeesNES/blob/main/Src/LSNLSpiroNes.cpp#L61
This particular line is eating nearly 50% of total CPU through the external call. I can't analyze it any further due to some confusion about missing symbol files.
I don’t know how that can be such a problem; it’s just peeking for a new message.
Can you work with Ms. ChatGPT to find some way to make it better on AMD?
I have embarrassed myself. Turns out I just had to swap it to Release configuration and now it runs smoothly. Swapping PeekMessageW for GetMessageW still uses a very large amount of CPU, if a little less, but I'm not sure how to optimize that - maybe it's just normal and the profiler is just seeing the work that function does. I think the input issue is solved per earlier.
BeesNES now runs beautifully on my system. I don't know how to strip the FPS limit out, so I can't push it to its limit and see just how well it runs, but it handles full speed like a champ. The filters also run buttery smooth, with no FPS drop on your custom ones. I think the AVX optimization is working well on AMD as well.
Thanks again for making such an amazing program! I genuinely feel privileged for the opportunity to use a sub-cycle NES emulator in real time. Hope I didn't waste too much of your time, and thanks again for all your help.
I’m just glad to hear it is working as-expected and input works!!!
| gharchive/issue | 2024-06-05T23:18:51 | 2025-04-01T06:37:08.916465 | {
"authors": [
"FIM43-Redeye",
"L-Spiro"
],
"repo": "L-Spiro/BeesNES",
"url": "https://github.com/L-Spiro/BeesNES/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1227125868 | Improve the SonarCloud reports
Report coverage correctly
Remove migrations from duplications
Fix current warnings/errors
Seems like dotnet-coverage swallows the exit status of dotnet test and always returns 0 🤦🏻♂️
Will rip out and replace with coverlet 😩
| gharchive/pull-request | 2022-05-05T20:07:07 | 2025-04-01T06:37:08.949351 | {
"authors": [
"pixeltrix"
],
"repo": "LBHackney-IT/bonuscalc-api",
"url": "https://github.com/LBHackney-IT/bonuscalc-api/pull/83",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
450027075 | Fix malformed URI in PropertyComponent test
Thanks, @ndushay!
Coverage remained the same at 81.825% when pulling af1ccf4f09571bf9ad2653f477ec0fe95e3bd5b2 on mjgiarlo-patch-1 into 5fd3365354fde2fff4dfb4f7a44f45b070972457 on master.
| gharchive/pull-request | 2019-05-29T21:23:34 | 2025-04-01T06:37:08.957768 | {
"authors": [
"coveralls",
"mjgiarlo"
],
"repo": "LD4P/sinopia_editor",
"url": "https://github.com/LD4P/sinopia_editor/pull/580",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1827477864 | Remove need for supports
On the multi-part model the bottom part has countersunk holes that will require support. Totally workable, but since you are such a big name in the Voron community, and their models are an example of how to design without supports, it would be cool to bring this model up to the same standard.
Thanks for noticing this issue. I will merge a fix for this soon
Super cool. Thanks...
| gharchive/issue | 2023-07-29T13:24:01 | 2025-04-01T06:37:08.958969 | {
"authors": [
"cneshi",
"nebulorum"
],
"repo": "LDOMotors/steppy",
"url": "https://github.com/LDOMotors/steppy/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1244738990 | When removing all layers must delete all widget data
If the widget data is not deleted, errors can occur when the widget is reopened, as connections are still connected, etc.
At the moment the dockwidget is removed, but this doesn't delete the widget.
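A minimal, framework-free sketch of the cleanup idea (the real widget is a Qt/napari dock widget; the "event bus" list below stands in for Qt signal connections):

```python
class EventBusWidget:
    """Toy widget that registers a callback on a shared 'event bus' list."""

    def __init__(self, event_bus):
        self.event_bus = event_bus
        self.received = []
        self.event_bus.append(self.on_event)  # connect on creation

    def on_event(self, event):
        self.received.append(event)

    def close(self):
        # The crucial cleanup step: disconnect everything this widget
        # registered, so a widget opened later does not coexist with
        # stale callbacks from the previous instance.
        self.event_bus.remove(self.on_event)
```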
This problem only occurred in debug and seemed to be caused by the stepping buttons from the debugger.
| gharchive/issue | 2022-05-23T08:01:59 | 2025-04-01T06:37:08.960313 | {
"authors": [
"stevenBrownie"
],
"repo": "LEB-EPFL/eda-napari",
"url": "https://github.com/LEB-EPFL/eda-napari/issues/14",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1633135585 | keyError 6
Hi, I'm very inspired by your work, but when I create the SPMotif class, it raises a KeyError: 6 in torch.save(collate(data_list, id)):
in _collate
key, [v[key] for v in values], data_list, stores, increment)
Remove the `pos` attribute in spmotif_dataset.py line 103:
data = Data(x=x,
y=y,
z=z,
edge_index=edge_index,
edge_attr=edge_attr,
# pos=p,
edge_gt_att=torch.LongTensor(ground_truth),
name=f'SPMotif-{self.mode}-{idx}',
idx=idx)
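A stand-alone analogy of why the extra attribute breaks batching, using plain dicts rather than the actual PyTorch Geometric collate: collation assumes every sample carries the same keys, so an attribute present on only some samples raises a KeyError.

```python
def collate(samples):
    """Stack each attribute across samples, keyed by the first sample."""
    return {key: [sample[key] for sample in samples] for key in samples[0]}
```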
I'm closing the issue and feel free to reopen it if you guys have any further questions.
| gharchive/issue | 2023-03-21T02:41:13 | 2025-04-01T06:37:08.969376 | {
"authors": [
"Arthur-99",
"LFhase",
"ZYF150322661776"
],
"repo": "LFhase/CIGA",
"url": "https://github.com/LFhase/CIGA/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2706582026 | Resource loading issue
问题描述 (Description of the problem)
项目需要加载的资源,比如图片、字体之类的,能不能考虑直接存放在项目里,不用再去请求其他站点,这样更有利于本地部署访问,避免因为网络问题造成体验不好
如何复现该问题 (How to reproduce the problem)
无
操作系统和浏览器信息 (Operating System and Browser Information)
docker运行1.5.4版本
@itelite 图片我后面可以放到项目里,但是字体我不会放,不太好处理(假如有更新的话),字体你可以自己下载放到项目中,都是 google 的
| gharchive/issue | 2024-11-30T03:22:39 | 2025-04-01T06:37:08.975946 | {
"authors": [
"LHRUN",
"itelite"
],
"repo": "LHRUN/paint-board",
"url": "https://github.com/LHRUN/paint-board/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1025252172 | Prevent API undefined property: stdClass:$server error when firewall has attached tags
Hello, this is my first PR in this project:
Before this PR, the following error occurred when you queried a firewall with an attached label:
Undefined property: stdClass::$server thrown in hetzner-cloud-php-sdk/src/Models/Firewalls/Firewall.php:125
I reproduced this error by extending the firewall.json and firewalls.json fixtures with tag objects.
This error happens because the firewall model assumes that only server resources can be attached to a firewall.
This PR checks every service the firewall is applied to and only adds it to the appliedTo array when it is a server object. This is only a workaround for the problem. In the future these tags (and their corresponding servers) should also be applied to the appliedTo array, but this needs some bigger refactoring in the firewall model code. So this is only a quick fix.
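The workaround can be sketched language-agnostically (the SDK itself is PHP; the field names below mirror Hetzner's applied_to API shape but are simplified): keep only entries whose type is "server" instead of assuming every entry exposes a server property.

```python
def applied_servers(applied_to):
    """Collect only the server resources a firewall is applied to,
    skipping label_selector (tag) entries that have no server property."""
    return [entry["server"] for entry in applied_to
            if entry.get("type") == "server"]
```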
Thank you for your hard work!
Good catch! Thank you!
| gharchive/pull-request | 2021-10-13T13:27:07 | 2025-04-01T06:37:09.002462 | {
"authors": [
"LKaemmerling",
"geisi"
],
"repo": "LKDevelopment/hetzner-cloud-php-sdk",
"url": "https://github.com/LKDevelopment/hetzner-cloud-php-sdk/pull/83",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
212940645 | Do not accept an input for some drop-down menus
Resolves
GH-827
Proposed Changes
Removes the ability to accept an input from some drop-down menus
looks_changeeffectby
looks_seteffectto
sound_changeeffectby
sound_seteffectto
operator_mathop
Reorder sound effects blocks to match graphic effects sort order
Minor adjustment to sound effects "clear" block language
The changes look good!
One requested addition:
The "set rotation style" block also has a drop-down with a fixed set of options that should probably not accept an input, so I suggest making the same change there.
Also, a note:
The < key [space] pressed > boolean reporter does not accept an input in 2.0, but currently in 3.0 it does. I got excited for a minute about this because it could be a feature: you could for example loop through a list of letters, to check if each is pressed, rather than using separate if statements. This already works for letters, but numbers seem to be treated as ascii codes, so they do not work as expected (but as a result checking < key 10 pressed > is true when you press tab!). I'll open a separate issue about these questions.
Looks good to me.
@ericrosenbaum Good catch! All set. Give it one more check and then I'd be happy to land this if it looks ok to you. Thanks for filing the new issue about the key [x] pressed block. 😄
Looks good!
| gharchive/pull-request | 2017-03-09T05:42:37 | 2025-04-01T06:37:09.007363 | {
"authors": [
"ericrosenbaum",
"rachel-fenichel",
"thisandagain"
],
"repo": "LLK/scratch-blocks",
"url": "https://github.com/LLK/scratch-blocks/pull/829",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
411207028 | fix(wallet): ensure currency unit is saved on change
Description:
Ensure selected currency unit is saved and restored properly.
Motivation and Context:
This fixes a bug in which the selected currency unit is lost when you logout of a wallet or restart the app.
How Has This Been Tested?
Manually - change currency unit, log out, log back in, or restart the app and ensure that currency unit preference is restored.
Types of changes:
Bug fix
Checklist:
[x] My code follows the code style of this project.
[x] I have reviewed and updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
[ ] I have added tests to cover my changes where needed.
[ ] All new and existing tests passed.
[x] My commits have been squashed into a concise set of changes.
Coverage decreased (-0.03%) to 19.674% when pulling 4692d3403b8dfa4f800ad150248b9f3bd4412075 on mrfelton:fix/save-selected-currency-unit into 22e670a25d5351fff6444230144bffced389d678 on LN-Zap:master.
| gharchive/pull-request | 2019-02-17T16:41:06 | 2025-04-01T06:37:09.087516 | {
"authors": [
"coveralls",
"mrfelton"
],
"repo": "LN-Zap/zap-desktop",
"url": "https://github.com/LN-Zap/zap-desktop/pull/1594",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1341840615 | No Novel Cover on NovelHall
Steps to reproduce
Go to Browse and Click on NovelHall
Expected behavior
It should show the novel names with their covers.
Actual behavior
It is not showing the novel covers, but when we click on a novel it shows its cover.
LNReader version
1.1.12
Android version
11
Device
Realme 3 pro
Other details
Acknowledgements
[X] I have searched the existing issues and this is a new ticket, NOT a duplicate or related to another open or closed issue.
[X] I have written a short but informative title.
[X] If this is an issue with an source, I should be opening an issue in the sources repository.
[X] I have updated the app to version 1.1.12.
[X] I will fill out all of the requested information in this form.
| gharchive/issue | 2022-08-17T14:19:37 | 2025-04-01T06:37:09.097905 | {
"authors": [
"Pheonix-Flames",
"rajarsheechatterjee"
],
"repo": "LNReader/lnreader",
"url": "https://github.com/LNReader/lnreader/issues/395",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1883108345 | fix most of the docstring errors
I made some minor changes to hopefully fix some docstrings in readthedocs
@ajshajib I expected that the pre-commit is formatting the lines to fulfill the PEP8 standards with black, or I am missing something?
@sibirrer, that's what it's expected to do, yes. But it doesn't always work perfectly, unfortunately, particularly docformatter for docstring formatting. The black formatter itself works very well in my experience, but it only works on the code lines, not the docstrings. So we still have to keep an eye out for blemishes. One reason the docformatter could break is when it sees an unexpected way the docstring is laid out. For example, the expected Sphinx option is :return: and not :returns:.
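For reference, a minimal docstring in the Sphinx field-list style that black, docformatter, and Sphinx all handle cleanly (illustrative example, not from the repository):

```python
def schwarzschild_radius_km(mass_solar: float) -> float:
    """Return the Schwarzschild radius for a given mass.

    :param mass_solar: Mass in units of solar masses.
    :return: Schwarzschild radius in kilometers.
    """
    # 2GM/c^2 is roughly 2.95 km per solar mass.
    return 2.95 * mass_solar
```

Note the field is `:return:`, the form the tooling expects, rather than `:returns:`.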
| gharchive/pull-request | 2023-09-06T03:23:26 | 2025-04-01T06:37:09.152127 | {
"authors": [
"ajshajib",
"sibirrer"
],
"repo": "LSST-strong-lensing/sim-pipeline",
"url": "https://github.com/LSST-strong-lensing/sim-pipeline/pull/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
366975313 | C Install and Python Install Don't Get Along
In recent bug-testing I've been trying to have both the C install (generated with cmake combined with make + sudo make install) and a Conda Python install (generated from a python setup.py install --user) up and running at the same time. During the Python installation, though, the following error pops up.
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [libccl.dylib] Error 1
make[2]: *** [CMakeFiles/ccl.dir/all] Error 2
make[1]: *** [pyccl/CMakeFiles/_ccllib.dir/rule] Error 2
make: *** [_ccllib] Error 2
Traceback (most recent call last):
File "setup.py", line 65, in <module>
'Topic :: Scientific/Engineering :: Physics'
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/__init__.py", line 140, in setup
return distutils.core.setup(**attrs)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/command/install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/command/bdist_egg.py", line 172, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/command/bdist_egg.py", line 158, in call_command
self.run_command(cmdname)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/command/install_lib.py", line 105, in build
self.run_command('build_py')
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "setup.py", line 17, in run
raise Exception("Could not build CCL")
Exception: Could not build CCL
This error does not occur in the case that only pyccl is installed or only the C library is compiled. Not sure what causes this, but I would guess it is some sort of permissions error. Adding a sudo to the python install fixes this, but as this seems to be new behavior, I'm bringing it up just to make certain it is intended behavior.
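A quick way to spot this kind of permissions clash before building is to test whether the current user can actually write to the shared library's output directory. A hypothetical diagnostic (the path to check is an assumption; CCL's real install prefix depends on the cmake configuration):

```python
import os

def can_write(path: str) -> bool:
    """Return True if the current user may create or modify files in `path`.

    A False here reproduces the situation above: `sudo make install` left a
    root-owned directory, so an unprivileged `python setup.py install` cannot
    rewrite libccl in place.
    """
    return os.path.isdir(path) and os.access(path, os.W_OK)
```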
It's a little hard to say. I did a little further playing around to make sure things still worked fine. I think the result was due to an unnecessary sudo in the make install for my MacOS X set-up. It looks like the Python installer attempts to build C shared library libccl.dylib in the same location, but if the python installer isn't run at the same permissions as the C installer, there can be a conflict.
I wonder if this exists in the pip install version; if this is just in the development installation I am happy to throw a warning up and call it a day. If this conflicts with the pip install, that is a bit more problematic.
Wait is sudo hard coded? That is bad form and we should fix immediately.
I don't think sudo is hard coded. Typical procedure for building the C library is make followed by make install. Previous builds have required me to use sudo make install due to permissions. This would then interfere with the new python installer (even if I try to build with --user) unless I include a sudo.
Now I seem to need to drop the sudo from make install and it all goes fine.
What is the known issue then? Sounds like this is an old thing the cmake build has solved?
Specifically that if you need to use a sudo make install to set up the C side, you need to use a sudo python setup.py install for the Python side.
ahhhh. I don't this qualifies as a CCL install issue, but we can add it to the docs!
unix/linux permissions are an issue no matter what you are doing. :P
Indeed. I agree. I'm going to add this to the wiki and close this.
pyccl shall probably build the c lib in a standard setuptools directory rather than the recommended cmake directory.
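The suggestion above amounts to giving the Python build its own output tree instead of sharing cmake's. A sketch of computing a setuptools-style per-platform build directory (hypothetical helper, not pyccl's actual code):

```python
import os
import sys
import sysconfig

def python_build_dir(base: str = "build") -> str:
    """Compute a setuptools-style build/lib.<platform>-<pyver> directory.

    Giving the Python extension its own output tree like this keeps it from
    sharing (and fighting over) the cmake build location.
    """
    tag = "lib.{}-{}.{}".format(
        sysconfig.get_platform(), sys.version_info[0], sys.version_info[1]
    )
    return os.path.join(base, tag)
```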
Ugh. This is hard. Do we really want to force people to install the C lib when doing python?
I thought the final python so file links to an archive library. That archive has to be built. Currently it shares the same location as the building location of the C builder. It is not hard to split them. I will file a PR.
| gharchive/issue | 2018-10-04T21:20:38 | 2025-04-01T06:37:09.164239 | {
"authors": [
"beckermr",
"rainwoodman",
"villarrealas"
],
"repo": "LSSTDESC/CCL",
"url": "https://github.com/LSSTDESC/CCL/issues/485",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
184853476 | Can we have a the module specify which version its on ?
Right now doing
import wagtailtrans
wagtailtrans.VERSION
does not tell which version the package is ? Is this something we can add, so that on the implementation side (project) side we can handle release changes smoothly.
Would this work for you @mevinbabuc ?
import pkg_resources
__version__ = pkg_resources.get_distribution("wagtailtrans").version
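On Python 3.8+ the same lookup is available from the standard library via importlib.metadata (pkg_resources is deprecated); a sketch of an equivalent, with a fallback for source checkouts:

```python
from importlib.metadata import PackageNotFoundError, version

try:
    __version__ = version("wagtailtrans")
except PackageNotFoundError:
    # Running from a source checkout where the package isn't installed.
    __version__ = "unknown"
```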
made a PR #30
PR #30 is merged to master, closing this ticket.
| gharchive/issue | 2016-10-24T14:17:00 | 2025-04-01T06:37:09.173469 | {
"authors": [
"Henk-JanVanHasselaar",
"mevinbabuc",
"mikedingjan",
"robmoorman"
],
"repo": "LUKKIEN/wagtailtrans",
"url": "https://github.com/LUKKIEN/wagtailtrans/issues/26",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2317093291 | Not working
OS: Windows 11
Python: 3.12.2
Emulator: Memu
Resolution: 1366*768 (os)
Runs fine, but the bot does not detect any object to interact with.
You might need to retake the screenshots: just go through the game, take screenshots of the elements, and replace them. Make sure your game is in English and that you don't have something like HDR on! If the problem persists, change the confidence value inside the program.
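The "confidence value" here is an image-match threshold: the bot only acts on template matches whose score clears it. A minimal, library-agnostic sketch of that gating logic (illustrative; DBFarmer's actual matching code may differ):

```python
def best_match(scores, confidence=0.8):
    """Pick the best template match that clears the confidence threshold.

    `scores` maps element names to match scores in [0, 1]. Returns the
    (name, score) pair of the strongest match at or above `confidence`,
    or None: the "no object detected" case. Lowering `confidence` makes
    detection more permissive, at the cost of false positives.
    """
    candidates = {name: s for name, s in scores.items() if s >= confidence}
    if not candidates:
        return None
    name = max(candidates, key=candidates.get)
    return name, candidates[name]
```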
| gharchive/issue | 2024-05-25T15:42:42 | 2025-04-01T06:37:09.178148 | {
"authors": [
"LUXTACO",
"Legnatbird"
],
"repo": "LUXTACO/DBFarmer",
"url": "https://github.com/LUXTACO/DBFarmer/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1111049339 | 🛑 Knjigocrvic is down
In e85321a, Knjigocrvic (https://knjigocrvic.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Knjigocrvic is back up in ca52d8c.
| gharchive/issue | 2022-01-21T23:20:59 | 2025-04-01T06:37:09.195023 | {
"authors": [
"LaMpiR"
],
"repo": "LaMpiR/uptime",
"url": "https://github.com/LaMpiR/uptime/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2119819908 | 🛑 SICGESP is down
In ab2256b, SICGESP (https://sicgesp.com.br/login/) was down:
HTTP code: 500
Response time: 734 ms
Resolved: SICGESP is back up in 055029b after 12 minutes.
| gharchive/issue | 2024-02-06T02:38:21 | 2025-04-01T06:37:09.209768 | {
"authors": [
"pazkero"
],
"repo": "LabGover/monitor",
"url": "https://github.com/LabGover/monitor/issues/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
762864560 | Issue 41940: Fix up TabLoader junit tests
Rationale
Issue 41940: Fix up TabLoader junit tests
Related Pull Requests
https://github.com/LabKey/platform/pull/1733
https://github.com/LabKey/sampleManagement/pull/411
Changes
Close all TabLoaders, CloseableIterators, etc.
Correct reversed expected and actual parameters in assertEquals()
Combine nearly identical tests for TSV and CSV
Assert that file deletes were successful
Add test to verify that infer fields works for single data row with no \n
Note: New testSmallFile() junit test will fail until release20.11 branch is merged to develop
| gharchive/pull-request | 2020-12-11T20:28:53 | 2025-04-01T06:37:09.215761 | {
"authors": [
"labkey-adam"
],
"repo": "LabKey/platform",
"url": "https://github.com/LabKey/platform/pull/1781",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
268426522 | Remove tab page
We show the tab control pages, even though there is just one.
Hide the pages, and reclaim some of the UI space.
Closing, since this is currently the standard for DCAF UIs. Whether this should be the standard is an entirely different question.
| gharchive/issue | 2017-10-25T14:46:21 | 2025-04-01T06:37:09.217331 | {
"authors": [
"pollockm"
],
"repo": "LabVIEW-DCAF/Scaling",
"url": "https://github.com/LabVIEW-DCAF/Scaling/issues/15",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2696361772 | Nothing shows up when clicking view HowLongToBeat data (again)
Describe the bug
This issue was reported by Travers50 a few weeks ago. Now it's happening all over again.
I'm using his last post as a reference, since it's the same issue:
When pressing view HowLongToBeat datas to add the overlay onto my dashboard, no games show up.
To Reproduce
Steps to reproduce the behavior:
Click any game, press 3 bars, hover over howlongtobeat, press view howlongtobeat datas, search, see nothing
Expected behavior
I expect to see a list of any games that could be related to the title of the entry you're searching for.
Screenshots
Extensions log
Attach Playnite's Extensions.log file. It is located in Playnite's installation directory in the portable version or in %AppData%\Playnite (Can be pasted in Explorer) in the installed version
It's either a coincidence and they're simply making changes to their website, or they are actively trying to break the plugin. Let's hope it's the former and that it can be fixed again.
yeah can confirm getting the same issue again
Can confirm I have the same issue.
Same issue here.
Same problem for any games.
same anyone have the fix like the last time or it will be fixed?
Same here
Issue persisting after v3.6 update. The latest data being fetched is empty.
Could someone please upload one of their already set-up games; the files are stored in ExtensionsData\e08cd51f-9c9a-4ee3-a094-fde03b55492f\HowLongToBeat. I want to write a script that would allow you to manually add the HLTB data while this issue is getting fixed by the devs. (I don't have any game data since I reinstalled the addon.) Thanks.
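For reference, the manual step such a script would automate is just dropping a per-game JSON, named by the Playnite game's GUID, into the plugin's data folder. A sketch (paths and naming assumed from the comments in this thread; the JSON's internal schema is copied as-is):

```python
import shutil
from pathlib import Path

def install_hltb_data(downloaded_json: Path, hltb_store: Path, game_guid: str) -> Path:
    """Copy a downloaded HLTB JSON into the plugin's per-game store.

    hltb_store is Playnite's ExtensionsData/<plugin-guid>/HowLongToBeat
    directory, and game_guid is the Playnite game id that names the file
    (e.g. 02887464-34d7-4485-a2f1-38987d9601ac.json).
    """
    hltb_store.mkdir(parents=True, exist_ok=True)
    dest = hltb_store / "{}.json".format(game_guid)
    shutil.copyfile(downloaded_json, dest)
    return dest
```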
@Verssgn here's one.
02887464-34d7-4485-a2f1-38987d9601ac.json
@Verssgn here's one. 02887464-34d7-4485-a2f1-38987d9601ac.json
Thanks! If I get it to work I will post it here.
Here is a script that will allow you to download files for the addon from the HLTB website (this is a bandage solution while this issue is happening). Please note that the script sucks, so for the button to appear refresh the website; also give it a couple of seconds to load - SCRIPT + TUTORIAL: https://greasyfork.org/en/scripts/519319-how-long-to-beat-to-playnite
Please do not post issues with the script here! Use the feedback section on greasyfork
Works great, thank you :)
For information:
https://howlongtobeat.com/forum/thread/681/6#post118870
Works like a charm. Thanks
Hi, I found the issue and submitted a fix via Pull Request.
While waiting for the official new release to be available, you can use this temporary release to test the fix on your side:
https://github.com/johnnywnb/playnite-howlongtobeat-plugin/releases/tag/v3.6-hotfix%23243
Just download and double-click on the .pext file while Playnite is open, or drag&drop the file into Playnite.
Hope it helps.
This fixed it for me. Thanks a ton. People like you are what make Playnite so great!
Ur truly a life saver mate, works like a charm, thanks a lot , hope Lacro could put you on his team of dev support :D
ACK I really want to thank you for this, but now my stupid antivirus tags it as a virus and auto-blocks it everytime I try to download it. How can I stop it from doing that?
Great, I managed to download it from Firefox browser, but now my antivirus auto-tags the add-on itself as a virus and I can't load it anymore.
How can I fix that?
Hi @WerewolfNandah , I don't know why your antivirus flagged this file as malicious. You could upload it to VirusTotal to scan it and check that it is totally safe.
Perhaps adding it as an exception in your antivirus could sort your issue.
If it does not work, you may decide to reinstall it from Playnite interface and wait for the next official release. That way, no manual installation will be required, so less risk of triggering any AV overzealous scans.
I finally managed to install the fix and make the app work again.
Only two days later, though, the bug returns.
I'm just going to wait for an official fix up to this point, but I'm starting to lose my confidence to this app entirely. This is just ridiculously annoying.
This is broken again
Broken for me as well. The HLTB extension has always had a history of only working half the time though lol. I don't think it's the dev's fault - I think HLTB just updates their website a lot. Verssgn's workaround still works for me though.
Yeah, I mean, I can tell this is clearly not the devs' intentions, and I bet they must be as pissed as we users. But it's just annoying to see this extension can't work on its own.
At least we've got that workaround, you're right about that.
Same thing happened here. I was updating my library with two games, and while the plugin was downloading the information for both games simultaneously, one got the data, but the other failed to download anything. I retried for the other game with multiple different search words, but to no avail
HLTB has changed the default value of the parameter they recently added.
I updated the Pull Request branch to reflect the change.
Here is the compiled extension with the updated fix:
https://github.com/johnnywnb/playnite-howlongtobeat-plugin/releases/tag/v3.6-hotfix%23243-2
It works again!
Thank you so much!
Once again, you are an absolute goat. Thank you man, mad respect. 🫡
Thank you very much! This worked for me.
Yeah, same here :p
Working great now. Thanks a ton.
Does this still work? I had to (sadly) switch from arch kde to windows because kde HDR does not work for me. So I went back to playnite and tried this fix. Data still does not show up. Did something change again?
Yes, it has...
It's not working once again. Damnit.
HLTB has decided to change their Search API parameters again.
I updated the Pull Request branch to reflect the latest change.
Here is the compiled extension with the updated fix:
https://github.com/johnnywnb/playnite-howlongtobeat-plugin/releases/tag/v3.6-hotfix%23243-3
Well, I'm afraid they've probably changed something again, because the issue still persists after the hotfix.
You're our hero, bud, taking your time and effort to keep fixing something that stubbornly insists on breaking itself again.
Yes, the bug is back again. I'm so sorry. You guys fixed it, and again, they changed something... 😢
The new update has fixed the issue for me, thanks!
@johnnywnb Thanks a lot. I downloaded it and it works really well. Thank you @Lacro59 and @johnnywnb for all the hard work.
Oh yes! It works like a charm now. Thanks a lot again!!
I hate to be that guy again, but the issue has returned...
Yeah, it's messed up again.
HLTB has changed their API endpoint again. I've submitted a Pull Request for @Lacro59 to have a look at.
In the meantime, here is a hotfix release I compiled while waiting for the official release. Feel free to use it to test the fix on your side:
https://github.com/johnnywnb/playnite-howlongtobeat-plugin/releases/tag/v3.6.1-hotfix-search
Tested and works thank you. However the latest official version is 3.7.0 - so playnite wants to update to a broken one again. Maybe change the version number to 3.7.1?
@robbely thanks!
I've updated the release to use version 3.7.0 indeed:
https://github.com/johnnywnb/playnite-howlongtobeat-plugin/releases/tag/v3.7.0-hotfix-search
Lacro will decide on the next version name (3.7.1 or 3.8). In the meantime, it's best to keep this hotfix release version similar to the current one so that when the new official release gets available, Playnite picks it up naturally.
| gharchive/issue | 2024-11-26T22:54:41 | 2025-04-01T06:37:09.264438 | {
"authors": [
"Arrkayd",
"Aryanblood17",
"FeyrisTan",
"JDOGGOKUSSJ2",
"K1LL3RPUNCH",
"Lacro59",
"Maple-Elter",
"PaulTheCarman",
"Sergiokool",
"Storbfall",
"Thiagojustino1",
"Verssgn",
"WerewolfNandah",
"bivasbh",
"bryjo3",
"johnnywnb",
"pedrosacca",
"robbely",
"samarthc",
"wensleyoliv"
],
"repo": "Lacro59/playnite-howlongtobeat-plugin",
"url": "https://github.com/Lacro59/playnite-howlongtobeat-plugin/issues/243",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2743124690 | WhatsApp Web does not display the QR code for login
Summary
When I load WhatsApp Web in Ladybird, the QR code for scanning does not appear.
Operating system
macOS
Steps to reproduce
Open Ladybird
Change UA to Chrome
Navigate to https://web.whatsapp.com/
Observe
Expected behavior
QR code shows up
Actual behavior
QR code doesn't show up
URL for a reduced test case
https://web.whatsapp.com/
HTML/SVG/etc. source for a reduced test case
N/A
Log output and (if possible) backtrace
It's spammed with A LOT of
342811.755 WebContent(25408): FIXME: InlineFormattingContext::dimension_box_on_line got unexpected box in inline context:
342811.755 WebContent(25408): Label <label.x17fgdl5.x1f6kntn.xt0psk2> at (885.15625,482.84375) content-size 0x0 [0+0+0 0 0+0+0] [0+0+0 0 0+0+0] children: inline
TextNode <#text>
Screenshots or screen recordings
Build flags or config settings
No response
Contribute a patch?
[ ] I’ll contribute a patch for this myself.
Our Text/input/wpt-import/html/syntax/parsing/html5lib_tests11.html test produces the same output; it is possibly easier to reduce that one.
This seems to be caused by missing CacheStorage: https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage
Deleting caches in Chrome DevTools as it's loading causes it to spin infinitely as well.
| gharchive/issue | 2024-12-16T18:44:05 | 2025-04-01T06:37:09.277164 | {
"authors": [
"Lubrsi",
"shlyakpavel"
],
"repo": "LadybirdBrowser/ladybird",
"url": "https://github.com/LadybirdBrowser/ladybird/issues/2941",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |