Dataset schema:
id — string (length 4 to 10)
text — string (length 4 to 2.14M)
source — string (2 classes)
created — timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added — string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata — dict
59174171 | Fixed issues and added W15_lab06.md
I addressed issues #8, #9 and #13 and added W15_lab06.md as required by lab06
For #8, I didn't add the pictures for the flag and mine, I just made the colors nicer and removed the numbers.
For #9, I added dialog boxes that pop up when you win or lose.
Lab06 procedure changed; make a new pull request to the amazingcaleb branch
| gharchive/pull-request | 2015-02-27T01:14:45 | 2025-04-01T06:37:39.035509 | {
"authors": [
"amazingcaleb",
"mliou"
],
"repo": "UCSB-CS56-Projects/cs56-games-minesweeper",
"url": "https://github.com/UCSB-CS56-Projects/cs56-games-minesweeper/pull/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2142685388 | feat: osnap execution warning message
motivation
We want to show oSnap users a warning if their safe is misconfigured and their transaction will not auto execute.
We call an endpoint at https://osnap.uma.xyz/api/sapce-config to inspect the on-chain settings for a given safe.
In Proposals list (already proposed)
In transaction Builder (before proposal)
seeing an issue on space: https://snapshot-77xkgjvgj-uma.vercel.app/#/umadev.eth/create
not sure if this is us or snapshot
prd here: https://github.com/snapshot-labs/snapshot/pull/4567
| gharchive/pull-request | 2024-02-19T15:53:05 | 2025-04-01T06:37:39.076953 | {
"authors": [
"daywiss",
"gsteenkamp89"
],
"repo": "UMAprotocol/snapshot",
"url": "https://github.com/UMAprotocol/snapshot/pull/127",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2159803339 | automated recompilation of the website every week (for updating pages like Publications that crawl external services)
A new paper has been posted at: https://arxiv.org/a/niffenegger_r_1.html
is not showing up on https://quantumdraft.umass-amherst.org/publications/.
The papers update only when a new version of the website is compiled. I will set up a recurrent job to do that once a week when it is closer to done. When I merge #5, the paper should be visible.
it is now complete (see the .buildkite and .github/deploy configs)
| gharchive/issue | 2024-02-28T20:21:36 | 2025-04-01T06:37:39.084869 | {
"authors": [
"Krastanov",
"rniffenegger"
],
"repo": "UMassQIS/UMassQISWebsite",
"url": "https://github.com/UMassQIS/UMassQISWebsite/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1050986321 | Retest Only: CAMPD Right Menu - EPA 508 issue
Finding
Page/ Screen
Finding
WCAG 2.0 Standard(s)
Right Menu
The right menu contains 3 subtopics each of which has both a button and a link. Both the buttons and links have identical names but different functions. The buttons expand and collapse the corresponding submenus while the links lead to an external information page. These names should be updated to clarify the function of each.
2.4.4
Context
The tech team is working on the right menu component. The new menu component will be implemented with ticket Refactor CAMPD UI to use easey-design-system components #1572. This ticket should be retested with ticket #1572
Retest complete, there are no more expandable buttons, only links. Issue is fixed
| gharchive/issue | 2021-11-11T13:34:43 | 2025-04-01T06:37:39.144104 | {
"authors": [
"JanellC",
"aprematta"
],
"repo": "US-EPA-CAMD/easey-ui",
"url": "https://github.com/US-EPA-CAMD/easey-ui/issues/2283",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1183931158 | Add Hg Injection data elements to Import Endpoint
Add Hg Injection data elements to Import Endpoint
Test Case: TC1186
Test Passed
| gharchive/issue | 2022-03-28T20:09:10 | 2025-04-01T06:37:39.145710 | {
"authors": [
"mosesdeeCVP",
"vishnunavuluri"
],
"repo": "US-EPA-CAMD/easey-ui",
"url": "https://github.com/US-EPA-CAMD/easey-ui/issues/2945",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1412490531 | Load Test History Report Definition into database
Need to define & load the report definition...
Example Reports
https://teams.microsoft.com/l/file/CC8EC58F-2014-49DD-9E85-BEC9D39E270C?tenantId=88b378b3-6748-4867-acf9-76aacbeca6a7&fileType=pdf&objectUrl=https%3A%2F%2Fusepa.sharepoint.com%2Fsites%2FCAMDCVPTeam%2FShared Documents%2FTheEmissioners%2FECMPS Reports%2FMonitor Plan Audit.pdf&baseUrl=https%3A%2F%2Fusepa.sharepoint.com%2Fsites%2FCAMDCVPTeam&serviceName=teams&threadId=19:56d336e3060849efa0557d379b78d9ca@thread.skype&allowXTenantAccess=false&groupId=21a5e1cd-d3f1-48a6-801f-0d34e2d63d23
Example of test history report:
https://usepa.sharepoint.com/sites/CAMDCVPTeam/Shared Documents/Forms/AllItems.aspx?FolderCTID=0x012000B4ABB0EF9635994FA680705355892410&id=%2Fsites%2FCAMDCVPTeam%2FShared Documents%2FECMPS 2.0%2FECMPS Reports%2FTest History.pdf&parent=%2Fsites%2FCAMDCVPTeam%2FShared Documents%2FECMPS 2.0%2FECMPS Reports
Need clarification from Chris W on the priority of this report
| gharchive/issue | 2022-10-18T02:52:24 | 2025-04-01T06:37:39.148309 | {
"authors": [
"JanellC",
"jwhitehead77"
],
"repo": "US-EPA-CAMD/easey-ui",
"url": "https://github.com/US-EPA-CAMD/easey-ui/issues/4392",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1924302438 | Bug: Evaluate Critical Error Adjusted Value
Issue: When a user logs in, imports a historical file, and then evaluates it, they receive critical errors regarding the adjusted value
Steps to Recreate:
Log in
Import from historical Q4 2022 Limestone
Evaluate the file
User gets critical errors regarding adjusted value
Link to evaluation report:
https://ecmps-tst.app.cloud.gov/reports?reportCode=EM_EVAL&facilityId=298&monitorPlanId=MDC-D4D7F122FD8F488F8B09A87DF926020E&year=2022&quarter=4
#5731, #5732 and #5759 are all the same problem where adjustedHourlyValue was not being read in on import. These have all been addressed with changes from #5759
Private Zenhub Image
Evaluated without Critical Error
| gharchive/issue | 2023-10-03T14:40:02 | 2025-04-01T06:37:39.151507 | {
"authors": [
"acollad1",
"jwhitehead77",
"mosesdeeCVP"
],
"repo": "US-EPA-CAMD/easey-ui",
"url": "https://github.com/US-EPA-CAMD/easey-ui/issues/5731",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1121335960 | TwoRegion_Summary_DomesticUse rds missing for 2012 and 2013
The two rds files are missing
https://edap-ord-data-commons.s3.amazonaws.com/index.html?prefix=stateio/
The json files are present, but not the rds files
Uploaded
| gharchive/issue | 2022-02-02T00:17:07 | 2025-04-01T06:37:39.171560 | {
"authors": [
"WesIngwersen",
"catherinebirney"
],
"repo": "USEPA/stateior",
"url": "https://github.com/USEPA/stateior/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1155763182 | Make a target for the SOHL land cover data
The SOHL land cover data is not being automatically recognized by the function that scans ScienceBase. We'll need to add this data to the pipeline manually. That could be done in a couple of ways:
After the table of ScienceBase links is fetched, create a target that adds a row for the SOHL data. I think this works best for use in our current workflow.
Create a separate target that fetches the SOHL data.
From @ajsekell: this happened because item_list_children() has a default max limit of 20 items. Increasing that limit resulted in retrieving the SOHL land cover item in the table of ScienceBase links.
I'm going to link this issue with the ScienceBase PR #54
Addressed in #75
| gharchive/issue | 2022-03-01T20:06:22 | 2025-04-01T06:37:39.194744 | {
"authors": [
"jds485"
],
"repo": "USGS-R/regional-hydrologic-forcings-ml",
"url": "https://github.com/USGS-R/regional-hydrologic-forcings-ml/issues/64",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1943027242 | Train To Pakistan
Book Title?
Train To Pakistan
Author?
Kushwant Singh
Genre of Book?
Fiction/Historical Fiction
#6
| gharchive/issue | 2023-10-14T06:40:47 | 2025-04-01T06:37:39.196301 | {
"authors": [
"USKhokhar"
],
"repo": "USKhokhar/fehrist",
"url": "https://github.com/USKhokhar/fehrist/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2551574146 | Move account link verification email template(s) into the account adapter instead of hardcoding them
Allow the account adapter to specify the email template that should be used to send a verification email, instead of hardcoding it into the view.
This requires adding an "account_verification_email_template" setting to the AccountAdapter, and updating the AccountLink view to use the template from the adapter instead of the hard-coded template. The default template in the adapter should be set to the template currently being used.
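The adapter-driven template lookup described above can be sketched in plain Python. This is a hypothetical illustration: the class and attribute names mirror the issue text, but the surrounding API and the template paths are assumptions, not the project's actual code.

```python
# Hypothetical sketch of the pattern described in the issue: the view asks the
# account adapter for the verification email template instead of hard-coding it.
# Class names, attribute names, and template paths are illustrative assumptions.

class AccountAdapter:
    # Default set to the template the view currently hard-codes.
    account_verification_email_template = "anvil_consortium_manager/account_verification_email.html"


class CustomAdapter(AccountAdapter):
    # Downstream projects override the adapter setting instead of patching the view.
    account_verification_email_template = "my_app/custom_verification_email.html"


def get_verification_template(adapter):
    """What the AccountLink view would do: read the template from the adapter."""
    return adapter.account_verification_email_template
```

With this shape, swapping the email template is a one-attribute override on the adapter, and the view never needs to change.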
Closed by #526
| gharchive/issue | 2024-09-26T21:39:39 | 2025-04-01T06:37:39.216940 | {
"authors": [
"amstilp"
],
"repo": "UW-GAC/django-anvil-consortium-manager",
"url": "https://github.com/UW-GAC/django-anvil-consortium-manager/issues/519",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
128419724 | Fix/issue_209
count the number of output vars in each file rather than requiring the user specify this integer directly.
fixes #209
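The idea of deriving the count from the file rather than asking the user for an integer can be sketched as below. The file format here (one `OUTVAR` line per variable, `#` comments) is an assumption for illustration, not VIC's actual global-parameter syntax.

```python
# Minimal sketch: count output variables by scanning the spec file instead of
# requiring the user to supply the integer directly. The OUTVAR/# conventions
# are assumed for illustration only.
def count_output_vars(lines):
    count = 0
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blank lines and comments
        if stripped.startswith("OUTVAR"):
            count += 1
    return count
```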
This could also be done with forcing vars... just sayin'
@tbohn - do you have any comments on the implementation here. I'm not super familiar with text file parsing in C so comments would be appreciated.
I'll put together another PR for the remaining issues in #366.
The text-parsing looks OK. The docs still mention nvars though...
Thanks @tbohn. I've updated the docs, will squash, then merge once the Travis tests pass.
| gharchive/pull-request | 2016-01-24T19:41:05 | 2025-04-01T06:37:39.219020 | {
"authors": [
"jhamman",
"tbohn"
],
"repo": "UW-Hydro/VIC",
"url": "https://github.com/UW-Hydro/VIC/pull/364",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
53668753 | MUMUP-1467 : change home to favorite on app place
-home
+favorites
:+1:
| gharchive/pull-request | 2015-01-07T19:07:49 | 2025-04-01T06:37:39.220142 | {
"authors": [
"jhanstra",
"timlevett"
],
"repo": "UW-Madison-DoIT/angularjs-portal",
"url": "https://github.com/UW-Madison-DoIT/angularjs-portal/pull/78",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
212936197 | PixelObjectList implementation - Duplicates by visual identification
This first PR only looks at the visual portion of the analysis.
Note there are some preliminary GPS functions and colour functions for now. They are a result of planning. They are included in this PR, but are not in a functional state. Please ignore these mostly empty functions.
This PR should have been completed a while ago....here it is.
Issues fixed with respect to your comments.
Two things that are not in this PR and will be worked on in a future one:
Settings Singleton (Should this be for the whole warg-cv suite? or just for my module? I'm thinking the whole thing)
Option for pre-computed data (for contours), I'm still not quite sure how I will do this exactly. Once again, this is a separate PR.
GPS code works in unit tests, I just need a proper camera calibration (alpha values) and to test it in the field or with videos.
@benjaminwinger If you get the chance, can you review this PR?
Sorry, I've been a little busy (and sick).
Working on it.
Sorry no worries. I'm going to try and get a live version of this running this weekend though...
Also, regarding the duplicate comments from yesterday, the first time I accidentally closed the page and they disappeared, then my graphics driver crashed and I had to reboot my computer.
Apparently my comments weren't lost after all, though they weren't showing up before I submitted the rewritten ones.
| gharchive/pull-request | 2017-03-09T05:03:27 | 2025-04-01T06:37:39.229074 | {
"authors": [
"benjaminwinger",
"chrishajduk84"
],
"repo": "UWARG/computer-vision",
"url": "https://github.com/UWARG/computer-vision/pull/79",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
127069007 | Setup Slack channel for developers and maintainers
Slack seems to be the collaboration app to use right now. It'll be great if someone can set up a channel as a common place for flow collaboration.
I just set up a channel! Give me an email and I'll send you an invite.
| gharchive/issue | 2016-01-17T02:01:10 | 2025-04-01T06:37:39.230413 | {
"authors": [
"ccqi",
"divad12"
],
"repo": "UWFlow/rmc",
"url": "https://github.com/UWFlow/rmc/issues/267",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
185000714 | Catching up with some stuff
Not finished with all of it.
Off to a good start. Mailroom is very nice.
| gharchive/pull-request | 2016-10-25T02:07:48 | 2025-04-01T06:37:39.238323 | {
"authors": [
"codedragon",
"komashu"
],
"repo": "UWPCE-PythonCert/IntroPython2016",
"url": "https://github.com/UWPCE-PythonCert/IntroPython2016/pull/77",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
2656072879 | S3 key backup
In your diagram you sketch that keys will get backed up to S3, but I don't see any configuration or credentials needed for an S3 account to save backups there.
What am I missing?
S3 backup is for Terraform state and the SSH keys created for Hetzner. There is Terraform code for creating buckets, DynamoDB entries, and a bucket for SSH key backup. It assumes that you have the AWS CLI installed and are logged in with valid credentials
SSH backup will create a bucket and back up the keys that are in the coolify_hetzner_infra/.ssh directory during Terraform apply
| gharchive/issue | 2024-11-13T16:34:03 | 2025-04-01T06:37:39.266993 | {
"authors": [
"Ujstor",
"regenrek"
],
"repo": "Ujstor/self-hosting-infrastructure-cluster",
"url": "https://github.com/Ujstor/self-hosting-infrastructure-cluster/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1900360870 | 🛑 Encalcat Boutique is down
In 463d5e1, Encalcat Boutique (https://boutique.encalcat.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Encalcat Boutique is back up in 866ac23 after 47 minutes.
| gharchive/issue | 2023-09-18T07:48:00 | 2025-04-01T06:37:39.347091 | {
"authors": [
"UnSeulT"
],
"repo": "UnSeulT/upptime",
"url": "https://github.com/UnSeulT/upptime/issues/368",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2507462574 | 🛑 Bio 34 is down
In 0fc8d2b, Bio 34 (https://bio34.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Bio 34 is back up in eaad15b after 16 minutes.
| gharchive/issue | 2024-09-05T11:00:26 | 2025-04-01T06:37:39.349447 | {
"authors": [
"UnSeulT"
],
"repo": "UnSeulT/upptime",
"url": "https://github.com/UnSeulT/upptime/issues/583",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1356938243 | Duplicate Entries
I checked the miner on my computer and noticed that two identical entries were created in the web panel.
I don't think this is a critical issue. But I wonder what is the reason?
Either they are two miners you have running with different build IDs, or it wasn't able to read the already existing entry in your database so it created a new one on the connection.
Understood. Thanks for the quick response.
| gharchive/issue | 2022-08-31T07:44:28 | 2025-04-01T06:37:39.351097 | {
"authors": [
"UnamSanctam",
"masterjek"
],
"repo": "UnamSanctam/UnamWebPanel",
"url": "https://github.com/UnamSanctam/UnamWebPanel/issues/186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
431752821 | http://0.0.0.0:5000/neptune
I'm getting a link in SocialFish where I need to go to http://0.0.0.0:5000/neptune and fill in a username and a password, but if I just fill in my Kali Linux username and password, a pop-up appears saying ''Bad''.
What do I have to do??
You don't have to fill in your kali linux username and password.
You have to fill in the username and passwords you used while executing SocialFish.py
For instance, as shown in the image I attached, the credentials would be:
Username: admin
Password: password
I still get the pop-up “bad”
root@kali:~# cd Bureaublad/SocialFish
root@kali:~/Bureaublad/SocialFish# python3 SocialFish.py ./SocialFish root kali
'
' ' UNDEADSEC | t.me/UndeadSec
' ' youtube.com/c/UndeadSec - BRAZIL
. ' . ' '
' ' ' ' '
███████ ████████ ███████ ██ ███████ ██ ███████ ██ ███████ ██ ██
██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██
███████ ██ ██ ██ ██ ███████ ██ █████ ██ ███████ ███████
██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██
███████ ████████ ███████ ██ ██ ██ ███████ ██ ██ ███████ ██ ██
. ' '....' ..'. ' .
' . . ' ' ' v3.0Nepture
' . . . . . '. .' ' .
' ' '. ' Twitter: https://twitter.com/A1S0N_
' ' ' Site: https://www.undeadsec.com
' . '
Go to http://0.0.0.0:5000/neptune to start
Serving Flask app "SocialFish" (lazy loading)
Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
Debug mode: off
Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [11/Apr/2019 02:25:58] "GET /neptune HTTP/1.1" 200 -
127.0.0.1 - - [11/Apr/2019 02:26:06] "POST /neptune HTTP/1.1" 200 -
this is what I got; I didn't know how to take a screenshot on Mac, so I copied everything
Go to http://0.0.0.0:5000/neptune
That’s the problem, if I’m going to http://0.0.0.0/neptune and login I will get a pop up with “bad” on it. So what are the login username and password?
Try 127.0.0.1:5000/neptune.
Same problem , after login : Bad ????
Is Linux necessary for SocialFish?
same problem
Having the same problem
How do I create a link to send the URL to the victim outside internal network?
Having the same problem
That’s the problem, if I’m going to http://0.0.0.0/neptune and login I will get a pop up with “bad” on it. So what are the login username and password?
I have the same problem as you; have you solved it?
I'm getting a link in SocialFish where I need to go to http://0.0.0.0:5000/neptune and fill in a username and a password, but if I just fill in my Kali Linux username and password, a pop-up appears saying ''Bad''.
What do I have to do??
Reply-
You do not have to fill in your Kali Linux ID and password.
You have to fill in the username as "username" and the password as "password"
I got to https://0.0.0.0:5000/neptune but I can't log in with the SocialFish link. What should I do, please?
Try http://127.0.0.1:5000/neptune.
it works
Friends, is there anyone who can provide a server and token for the SocialFish app?
| gharchive/issue | 2019-04-10T22:48:27 | 2025-04-01T06:37:39.371115 | {
"authors": [
"426344",
"51200",
"A1S0N",
"Ansh2411",
"Chakroot",
"Dexvar",
"Foxengton",
"Mardiasa",
"Nobs23",
"Uyuzbarmen",
"darkoit",
"krvikrantsingh51",
"morrocash",
"saugatsn",
"wassim180605"
],
"repo": "UndeadSec/SocialFish",
"url": "https://github.com/UndeadSec/SocialFish/issues/139",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2290781086 | 🛑 Orland Park is down
In 8ffdad1, Orland Park (https://evergreenslc.com/orlandpark) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Orland Park is back up in df67775 after 13 minutes.
| gharchive/issue | 2024-05-11T08:31:12 | 2025-04-01T06:37:39.383960 | {
"authors": [
"jc731"
],
"repo": "UniSynTechnologies/Upptime",
"url": "https://github.com/UniSynTechnologies/Upptime/issues/191",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
954788719 | Docker compose for demo
Hi,
I have been reading about UniTime and trying to wrap my head around it, to see if we could use it to help high schools create and manage their timetables.
I was able to get set up locally using a docker-compose demo I found at https://github.com/vlatka-sinisa/docker-unitime, and I thought it would be worth mentioning here: having that docker-compose file in this repository could be a great way to help newcomers set up UniTime locally. It is also a great way to document installation, since Dockerfiles tend to contain all the instructions needed to set up the system.
If you think this is a good idea, I could create a pull request for it, maybe you have some guidance on how it would best be done.
Thanks
It is not my intention to have this docker-compose file serve as a way to deploy unitime to production but to set up the demo environment, which once coded as a set of docker containers, should end up being as simple as docker-compose up. I have learned that the easier we make it for beginners to set up a demo, the greater the chances of them contributing to the project.
If you had a chance to read through the files in that repository you will see they describe two containers, one for the DB and the other for the server, and in under 20 lines install all dependencies to get ready to run it locally.
Perhaps placing them inside the documentation folder is an option.
In any case, thanks for the project; it is impressive! Let me know if you are OK with adding these files to the documentation folder; I can do it as a PR.
Yes, I have looked at the files in the vlatka-sinisa/docker-unitime repo. I am a bit concerned about the absolute paths (like /usr/local/tomcat) -- would it work with Docker on macOS or Windows? Most of the questions we get from people struggling to install UniTime are actually from Windows users.
I can see having it under something like /Documentation/Docker-Example.
Oh, that is the magic of Docker: that path is internal to the container. For example, https://github.com/vlatka-sinisa/docker-unitime/blob/master/docker/tomcat8/Dockerfile#L12 is mkdir /usr/local/tomcat/data, but that happens inside a container. It is as if you got a fresh, clean server and can do that sort of thing inside it without affecting the external system.
With that docker setup, all a person needs to have on their machine is docker and docker-compose.
This is also of great help for development, since you can reproduce an environment in seconds, try something out, and if you need, recreate it also within seconds.
I tried running unitime to see if I could help with frontend, or to see if there was a REST api I could use, for example.
A simple docker installation is now available under Documentation/Docker and available starting with UniTime 4.8.126.
| gharchive/issue | 2021-07-28T12:09:06 | 2025-04-01T06:37:39.390127 | {
"authors": [
"avilaton",
"tomas-muller"
],
"repo": "UniTime/unitime",
"url": "https://github.com/UniTime/unitime/issues/92",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1709516977 | [Audit medium severity] Potential Attestation Loss
The Unirep protocol allows attesters to assert that changes should be made to the owners of particular epoch keys. Over the course of the epoch, these changes are recorded in the attester's epochTree, which is a re-usable incremental Merkle tree. When epochs expire, users are expected to apply the changes recorded in the epochTree by performing a state transition. To do so, however, the epoch tree root must be saved along with the state tree root in the history tree. In the Unirep contract, this is done when transitioning to a new epoch, but only if an update has been made to the stateTree. Therefore, if no state tree update has been made, the attestations are discarded.
Location where the epochTree is reset
function updateEpochIfNeeded(
    uint160 attesterId
) public returns (uint48 epoch) {
    ...
    if (attester.stateTree.numberOfLeaves > 0) {
        uint256 historyTreeLeaf = PoseidonT3.hash(
            [attester.stateTree.root, attester.epochTree.root]
        );
        uint256 root = IncrementalBinaryTree.insert(
            attester.historyTree,
            historyTreeLeaf
        );
        attester.historyTreeRoots[root] = true;
        ReusableMerkleTree.reset(attester.stateTree);
        attester.epochTreeRoots[fromEpoch] = attester.epochTree.root;
        emit HistoryTreeLeaf(attesterId, historyTreeLeaf);
    }
    ReusableMerkleTree.reset(attester.epochTree);
    emit EpochEnded(epoch - 1, attesterId);
    attester.currentEpoch = epoch;
}
Impact
As the state tree is only changed when a user is added by the attester or when a user transitions state, it is possible for attestations to be lost. In particular, assuming no users are updated, it is possible for current users of the protocol to collude to discard undesirable updates to the state (particularly if there are a small number of users). Since this epoch is essentially discarded and ignored, these users may then transition from the previous epoch to the next epoch (i.e. users may skip e and transition from e-1 to e+1).
Developer Response
It is expected that the attester will validate an epoch key before performing an attestation. As long as this is done properly, then the epoch tree will be empty if the state tree is empty.
I think we can add a revert in attest:
if there are no leaves in the current state tree, the contract reverts the attestation.
Yes we could, it would add a cold SLOAD (a few thousand gas). I think a warning is sufficient for this, attesters should be aware of the epoch keys they're attesting to (e.g. validate that the epoch key exists in the state tree).
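The finding and the proposed guard can be modeled with a small pure-Python sketch. Trees are reduced to leaf counts here, so this is a toy model of the logic under simplified assumptions, not the Unirep implementation.

```python
# Toy model of the epoch transition described above. Trees are reduced to leaf
# counts; this illustrates the logic only, not the Unirep contract itself.
class Attester:
    def __init__(self):
        self.state_leaves = 0   # stands in for stateTree.numberOfLeaves
        self.epoch_leaves = 0   # attestations recorded in the current epoch
        self.history = []       # (state, epoch) pairs saved to the history tree


def attest(att, require_state=False):
    # Proposed mitigation: revert when the state tree is empty.
    if require_state and att.state_leaves == 0:
        raise ValueError("no leaves in the current state tree")
    att.epoch_leaves += 1


def update_epoch(att):
    # History is only written when the state tree is non-empty...
    if att.state_leaves > 0:
        att.history.append((att.state_leaves, att.epoch_leaves))
    # ...but the epoch tree is reset unconditionally, discarding any
    # attestations that were never saved to history.
    att.epoch_leaves = 0
```

With an empty state tree, an attestation made during the epoch leaves no trace in history after the transition, which is exactly the loss described; with require_state=True the attestation reverts instead of being silently discarded.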
| gharchive/issue | 2023-05-15T07:48:16 | 2025-04-01T06:37:39.437922 | {
"authors": [
"vimwitch",
"vivianjeng"
],
"repo": "Unirep/Unirep",
"url": "https://github.com/Unirep/Unirep/issues/456",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1173900633 | Invalid test generated with declared 'void' variable
Project: libbpf
File: libbpf/src/strset.c
Function: strset__add_str
Generated test:
TEST(error, strset__add_str_test_5)
{
struct strset set = {NULL, 0UL, 10UL, 0UL, NULL};
char s[] = "bcacbccccb";
void utbotInnerVar1 = 0;
set.strs_data = &utbotInnerVar1;
set.strs_hash = &utbotInnerVar1;
strset__add_str(&set, s);
struct strset expected_set = {NULL, 0UL, 0UL, 0UL, NULL};
EXPECT_EQ(expected_set.strs_data_len, set.strs_data_len);
EXPECT_EQ(expected_set.strs_data_cap, set.strs_data_cap);
EXPECT_EQ(expected_set.strs_data_max_len, set.strs_data_max_len);
}
Error:
/home/utbot/projects/libbpf/tests/src/strset_test.cpp:67:10: error: variable has incomplete type 'void'
void utbotInnerVar1 = 0;
The problem still exists on the latest master
@sava-cska, makes sense to follow the example of #200 and not instantiate incomplete types at all
| gharchive/issue | 2022-03-18T18:43:58 | 2025-04-01T06:37:39.448106 | {
"authors": [
"operasfantom"
],
"repo": "UnitTestBot/UTBotCpp",
"url": "https://github.com/UnitTestBot/UTBotCpp/issues/122",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1589050125 | Go. Assertions for structures should be accurate
Description
Currently, tests generated for Go structures use assertNotEquals on the whole structure.
The actual and expected structures differ in only one field.
If some other field also differs between the actual and expected structures, the test will still pass.
Context
for example:
utbot-go/go-samples/simple/supported_types_go_ut_test.go see TestStructWithNanByUtGoFuzzer
Actual behavior
assert.NotEqual(t, Structure{int: -1, int8: 1, int16: 32767, int32: -1, int64: -1, uint: 18446744073709551615, uint8: 0, uint16: 1, uint32: 0, uint64: 18446744073709551615, uintptr: 18446744073709551615, float32: 0.02308184, float64: math.NaN(), complex64: complex(float32(0.02308184), float32(0.02308184)), complex128: complex(0.9412491794821144, 0.9412491794821144), byte: 0, rune: -1, string: "", bool: false}, actualVal)
Expected behavior
assert.NotEqual(t, math.NaN(), actualVal.float64)
Environment
IntelliJ IDEA 2022.1 - 2022.2 Ultimate/Community
GoLand 2022.2
Can we identify which fields have changed from the initial ones?
Then we can generate assertNotEquals for those fields only.
Other possible solutions are:
also add assertEquals for other fields of the structure? Like the following:
assert.NotEqual(t, math.NaN(), actualVal.float64)
assert.Equal(t, -1, actualVal.int)
assert.Equal(t, 1, actualVal.int8)
...
Or would it be better to use one assertEquals for the whole structure? Like:
assert.Equal(t, Structure{int: -1, int8: 1, int16: 32767, int32: -1, int64: -1, uint: 18446744073709551615, uint8: 0, uint16: 1, uint32: 0, uint64: 18446744073709551615, uintptr: 18446744073709551615, float32: 0.7815346, float64: 0.3332183994766498, complex64: complex(float32(0.7815346), float32(0.7815346)), complex128: complex(0.3332183994766498, 0.3332183994766498), byte: 0, rune: -1, string: "", bool: false}, actualVal)
During discussion of the issue @Markoutte suggested an assertion approach, that can be useful for all languages:
#1881
| gharchive/issue | 2023-02-17T09:43:38 | 2025-04-01T06:37:39.453291 | {
"authors": [
"alisevych"
],
"repo": "UnitTestBot/UTBotJava",
"url": "https://github.com/UnitTestBot/UTBotJava/issues/1809",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
874014048 | Bot Raid Protection
Need to remake https://github.com/Sirush/UDHBot/pull/112 in this repo
I'll implement it using the same logic as the setup in Sirush#112, but I'll make it into a proper service with a command that can be used to enable it manually, with some sort of cooldown before it automatically turns back off.
Do we still need this? Does wick do this? If we do still need something, was the original solution to just kick any new joins after X number of people join at the same time acceptable?
Think I got most of the way with this, but stopped. I'll see if this was complete and try testing it sometime in the next couple days.
Wick has this feature but only on premium.
So that might still be good to have, especially since you already did some work and it would be a shame to waste it!
| gharchive/issue | 2021-05-02T18:44:05 | 2025-04-01T06:37:39.495002 | {
"authors": [
"Pierre-Demessence",
"SimplyJpk"
],
"repo": "Unity-Developer-Community/UDC-Bot",
"url": "https://github.com/Unity-Developer-Community/UDC-Bot/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
958550802 | chore!: replace MLAPI namespace with Unity.Netcode
As part of our upcoming MLAPI rebranding, we need to change our namespaces around the codebase.
I'm wondering if using "Unity.NGO" instead of "Unity.Netcode" would be better, to differentiate it from DOTS Netcode.
Changes LGTM assuming there's alignment on the namespace. I don't have a problem with it, although tools is using Unity.Multiplayer and we may want to change that too to follow suit (I'll follow up on this).
This will need some really good messaging so internal devs all know what to expect, and any outstanding PRs know to merge backwards even if there's no conflicts.
Yeah, @becksebenius-unity I think you're right, I think we'd have to all move under Unity.Netcode
Yeah, @becksebenius-unity I think you're right, I think we'd have to all move under Unity.Netcode
For Boss Room, I'd lean toward keeping with our "BossRoom" namespace, since it's all user side code. It'd make sense to not mix namespaces for this.
| gharchive/pull-request | 2021-08-02T22:17:25 | 2025-04-01T06:37:39.537450 | {
"authors": [
"MFatihMAR",
"SamuelBellomo",
"becksebenius-unity",
"mattwalsh-unity"
],
"repo": "Unity-Technologies/com.unity.multiplayer.mlapi",
"url": "https://github.com/Unity-Technologies/com.unity.multiplayer.mlapi/pull/1007",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1887368921 | Error implicating imported features on core features
While testing constraints, I encountered the following issue, but it only occurs in this particular configuration. See screenshots.
Important: if other features are used on both the import1 side and the other side, the taut error will not occur.
Maybe I'm stupid, but can one of you check this?
It's the same for me. Looks really weird.
Maybe solved by #94
Fixed by #94
| gharchive/issue | 2023-09-08T10:12:54 | 2025-04-01T06:37:39.570258 | {
"authors": [
"MartinMUU",
"SundermannC",
"ThiBruUU",
"felixrieg"
],
"repo": "Universal-Variability-Language/uvl-lsp",
"url": "https://github.com/Universal-Variability-Language/uvl-lsp/issues/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1307468778 | Problem running example_control_loop.py
I've attached a photo of the error. It occurs from the rtde_config.ConfigFile(config_filename) on line 43 of the program.
Any suggestions or pointers in the right direction are appreciated
It's likely because you're running example_control_loop.py from the main folder instead of the examples folder.
Version 2.7.2 also fixes a minor issue in the control loop example.
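A common fix pattern behind this kind of error is to resolve the recipe file relative to the script itself instead of the current working directory (a generic sketch; the filename is illustrative, not necessarily the repository's):

```python
from pathlib import Path

def resolve_config(name: str) -> Path:
    """Locate a recipe file next to this script, independent of where it is run from."""
    script = globals().get("__file__", "example_control_loop.py")
    return Path(script).resolve().parent / name

# Build an absolute path before passing it to the config reader,
# so the example works from any working directory.
config_path = resolve_config("control_loop_configuration.xml")
assert config_path.is_absolute()
```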
| gharchive/issue | 2022-07-18T06:32:05 | 2025-04-01T06:37:39.595275 | {
"authors": [
"michal-milkowski",
"t-little"
],
"repo": "UniversalRobots/RTDE_Python_Client_Library",
"url": "https://github.com/UniversalRobots/RTDE_Python_Client_Library/issues/5",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1244066962 | [student page - hops] a graduated student has only 27% in hops
014807781
lots of similar cases, e.g. 011547400
It has now magically changed to 100%: https://oodikone.helsinki.fi/students/014807781 ...
Both seem fine
| gharchive/issue | 2022-05-21T19:42:34 | 2025-04-01T06:37:39.601741 | {
"authors": [
"LeoVaris",
"mluukkai",
"vaahtokarkki"
],
"repo": "UniversityOfHelsinkiCS/oodikone",
"url": "https://github.com/UniversityOfHelsinkiCS/oodikone/issues/3700",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1413420900 | Term program interaction update
Critical Changes
Term and Program selection updates: after a term is selected, only programs with related Intended Programs Terms with the selected term appear as options
Interaction updates: Interactions created from Application Registration now have additional information (in particular, Term and Program information); interactions are now created for any additional applications started after application registration
actually good to go now!
actually good to go now!
| gharchive/pull-request | 2022-10-18T15:22:15 | 2025-04-01T06:37:39.603264 | {
"authors": [
"nicole-dmass"
],
"repo": "UniversityOfSaintThomas/UST-EASY",
"url": "https://github.com/UniversityOfSaintThomas/UST-EASY/pull/86",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2002696992 | feat: init context
About the changes
Ability to pass initial context fields when initializing the client
Design considerations:
even though having a strongly typed context would be a nicer solution overall, it's a bigger change that I'd like to handle together with the updateContext refactoring to have a symmetrical API. We can try this breaking change in version 2.0 of this client. For now I'd like to avoid increasing the API surface area and adding 2 types of init/updateContext
since updateContext accepts [String: String] I am thinking about exposing the same Stringly typed context into the init method
I extracted a calculateContext method that will apply to both updateContext and init to split the flat context [String, String] into standard context fields and user-defined properties. But from the usage perspective it's just one Stringly typed map
appName and environments are always taken from the explicit fields (for backwards compatibility) and everything else can be overwritten with context
Important files
Discussion points
This seems reasonable to me, and solves the same problem as #71; the only request I have would be to make the Context fields public to allow the following use-case.
Today, if I want to "clear" certain fields (e.g. userId/sessionId on log out), but keep other existing values (e.g. appVersion), I would need to keep a separate copy of properties outside to keep track of values, since Context.properties is not public today. That's why I made this change: https://github.com/Unleash/unleash-proxy-client-swift/pull/71/files#diff-4196a132b84d6fcbe509066c27b6f6ccd247f880ae7e81826c98355c50256889R2
var newContext = client.context
keys.forEach({ newContext.properties.removeValue(forKey: $0.rawValue) })
return client.updateContext(newContext)
| gharchive/pull-request | 2023-11-20T17:40:29 | 2025-04-01T06:37:39.617915 | {
"authors": [
"jlubawy",
"kwasniew"
],
"repo": "Unleash/unleash-proxy-client-swift",
"url": "https://github.com/Unleash/unleash-proxy-client-swift/pull/73",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1949765393 | feat: default session id
About the changes
What problem are we solving?
enabled info and variant info in the same feature have to be consistent when stickiness is set to default
also we want to have consistency between parent and child features
Solution:
if the user didn't provide sessionId, generate one on the fly
Important files
Discussion points
Is Math.random() good enough?
Is Math.random() good enough?
I think so. In the end we only need it to decide rollout here. Should be good enough for 0.1% splits after our hashing
Yeah agreed, I don't think it matters that much in this context
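The on-the-fly session id plus hash-based rollout described above can be sketched like this (Python for illustration, though the proxy itself is TypeScript; the real Unleash SDKs use a murmur3-based hash, and the function names here are made up):

```python
import hashlib
import random

def ensure_session_id(context: dict) -> dict:
    """Return a copy of the context with a sessionId, generating one on the fly if absent."""
    if context.get("sessionId"):
        return context
    # Math.random()-style id (53 random bits): it only feeds the rollout
    # hash below, so it does not need to be cryptographically strong.
    return {**context, "sessionId": str(random.getrandbits(53))}

def rollout_bucket(group_id: str, stickiness_value: str, buckets: int = 1000) -> int:
    """Hash the sticky value into one of `buckets` slots (0.1% granularity).

    md5 stands in for the SDKs' murmur3 here; the point is only that the
    same sessionId always lands in the same bucket, so enabled/variant
    checks (and parent/child features) stay consistent.
    """
    digest = hashlib.md5(f"{group_id}:{stickiness_value}".encode()).hexdigest()
    return int(digest, 16) % buckets

ctx = ensure_session_id({"userId": "alice"})
b1 = rollout_bucket("my-toggle", ctx["sessionId"])
b2 = rollout_bucket("my-toggle", ctx["sessionId"])
assert b1 == b2 and 0 <= b1 < 1000  # same session id -> same bucket
```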
| gharchive/pull-request | 2023-10-18T13:30:27 | 2025-04-01T06:37:39.621155 | {
"authors": [
"kwasniew",
"sighphyre"
],
"repo": "Unleash/unleash-proxy",
"url": "https://github.com/Unleash/unleash-proxy/pull/155",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2288493395 | Properly preserve gym mega evolutions
Finished mega evolutions do not get cleared so we need to preserve when they finish as well.
Screenshot of ReactMap for some additional context.
| gharchive/pull-request | 2024-05-09T21:28:20 | 2025-04-01T06:37:39.626812 | {
"authors": [
"Mygod"
],
"repo": "UnownHash/Golbat",
"url": "https://github.com/UnownHash/Golbat/pull/230",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1242691769 | Add Mysterious Sword Master Youmu (from LostWord)
New sale https://twitter.com/gift_news/status/1527532221623836672
Thanks, merged with a slight name change
| gharchive/pull-request | 2022-05-20T06:36:24 | 2025-04-01T06:37:39.639346 | {
"authors": [
"Araraura",
"Ununoctium117"
],
"repo": "Ununoctium117/fumosite",
"url": "https://github.com/Ununoctium117/fumosite/pull/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2479475082 | Assignment1
What changes are you trying to make? (e.g. Adding or removing code, refactoring existing code, adding reports)
What did you learn from the changes you have made?
Was there another approach you were thinking about making? If so, what approach(es) were you thinking of?
Were there any challenges? If so, what issue(s) did you face? How did you overcome it?
How were these changes tested?
A reference to a related issue in your repository (if applicable)
Checklist
[ ] I can confirm that my changes are working as intended
Will this be reviewed and then merged?
| gharchive/pull-request | 2024-08-22T00:38:59 | 2025-04-01T06:37:39.664498 | {
"authors": [
"NamreenSyed"
],
"repo": "UofT-DSI/shell",
"url": "https://github.com/UofT-DSI/shell/pull/104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1179578577 | Home Page (Functions)
Summary
This is a sub-issue under Home Page to improve the functionality of the website's Home Page.
For Sub-Issue
Change Proposal
Change project types into Card and Button
Propose and add changes to the website to improve usability and interaction
Deadline of the task: TBD
Additional Info
Closing issue to combine it with UI
| gharchive/issue | 2022-03-24T14:13:50 | 2025-04-01T06:37:39.666858 | {
"authors": [
"CatherineZM",
"Felix-Deng"
],
"repo": "UofT-VEEP/VEEP-Website",
"url": "https://github.com/UofT-VEEP/VEEP-Website/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
135052625 | Replace highlight.js with prism.js?
Would there be any interest in replacing highlight.js with prism.js1?
Refs.
https://github.com/PrismJS/prism
PS: seems like I haven't followed this project for some time now, as it was still written in Golang when I used it 😂
The reason we use highlight is the language auto detection. If we can
replace that portion I would look into replacing with one of many different
highlight engines.
On Feb 20, 2016 4:26 AM, "k0nsl" notifications@github.com wrote:
Would there be any interest in replacing highlight.js with prism.js1?
https://github.com/PrismJS/prism
PS: seems like I haven't followed this project for sometime now, as it
was still written in Golang when I used it 😂
PS: seems like I haven't followed this project for sometime now, as it was still written in Golang when I used it 😂
It actually still is written in Golang, however we've added a second Node server for the time being. Both are currently maintained and we don't have any plans for deprecation at the moment.
The advantage of the Go server is that it is dependency free beyond Go itself
@andre-d:
Yes, that's the issue I stumbled upon when I tried to replace it myself. I couldn't be bothered with it and reverted back to the latest release of highlight.js :)
@k3d3:
Thanks for the clarification.
| gharchive/issue | 2016-02-20T09:26:33 | 2025-04-01T06:37:39.674236 | {
"authors": [
"andre-d",
"k0nsl",
"k3d3"
],
"repo": "Upload/Up1",
"url": "https://github.com/Upload/Up1/issues/51",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
130673992 | Release Angular1Meteor through Npm
Expose Angular1Meteor through Npm as default.
Also use it in an Atmosphere package to be compatible with Meteor versions before 1.3
The only things that could be exposed in npm are: angular-meteor-data and angular-meteor-auth.
Compilers stay in Atmosphere.
Names in npm:
angular-meteor-data as angular-meteor
angular-meteor-auth stays the same
What you all think about using webpack?
I prepared an example how it would be look like.
https://github.com/kamilkisiela/angular-meteor/tree/v1.4.x-npm/packages/angular-meteor-data
About name of the main package.
At the moment angular-meteor has two dependencies:
angular-meteor-data
angular-templates
But angular-templates is a compiler so it cannot be published as an npm package.
That's the reason of my proposal of keeping angular-meteor-data as angular-meteor in npm.
It allows to include possible angular-templates in main angular-meteor package in the future.
@kamilkisiela sounds great.
It's also aligned with Angular2-Meteor where the process will be:
npm install angular2
npm install angular2-meteor
meteor add angular2-compilers
so in Angular1-Meteor it might look like:
npm install angular
npm install angular-meteor
meteor add angular-compilers // which will consist of `angular-html-templates` (now called angular-templates) and `pbastowski:angular-babel` (until we will get everything inside the official `ecmascript`)
we also are using Webpack there so Webpack sounds great.
@Urigo May I do something in that direction so we can already work on it in v1.4?
The v1.4.x branch is rapidly updated, so should I still wait? Or may I prepare a PR so everybody could switch to angular1-meteor with a build process (webpack)?
@Urigo And I think we should use some code standards to keep the code clean and also to prevent silly mistakes like using undefined variables, etc. My proposal is to use eslint with airbnb's rules with a few changes.
We're using eslint in angular2-now and it works great :)
@kamilkisiela I think that this weekend we will close a beta for 1.3.6 and then use your change in 1.3.7.
So let's wait for Sunday for the Npm branch.
About the code cleanup I think that's a great suggestion.
Can you open it as a separate issue to track?
also, do you think we could connect it to Bithound?
Ok
Bithound supports ESLint, so yeah, it is possible
I'm impatient so I pulled down the 1.3.6 branch and added a build process (webpack) with a linting utility (eslint).
You can see it here:
https://github.com/kamilkisiela/angular-meteor/tree/v1.3.7/packages/angular-meteor-data
Testing
To run tests in watch mode so velocity can receive changes and webpack can rebuild the output file:
npm run watch
npm run test:local
To run tests in CI mode:
npm test
Building
Outputs a non-minified bundle:
npm run build:dist
Outputs a minified bundle:
npm run build:prod
Both at once:
npm run build
I came up with idea.
Names of modules and services are used as strings, so to avoid silly mistakes like typos etc., we can use something like this:
import { Mixer, module as mixerModule } from './modules/mixer';

export const module = 'angular-meteor.reactive';
export const Reactive = '$$Reactive';

angular.module(module, [
  mixerModule
]);

service(Reactive, [
  Mixer,
  function($$Mixer) { /* ... */ }
]);
done.
Thank you @kamilkisiela for the huge help and great work!
| gharchive/issue | 2016-02-02T12:50:47 | 2025-04-01T06:37:39.701287 | {
"authors": [
"Urigo",
"kamilkisiela"
],
"repo": "Urigo/angular-meteor",
"url": "https://github.com/Urigo/angular-meteor/issues/1178",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
102261991 | Initial tweaks to website
What's Changed
Bring some things above the fold
Add link to submit
Move around front page elements
Add Open Source Meta Tags (cc @RichardLitt)
What's Next
Going to explore some new designs for the visual thingies
Clean up some of the text on certain sections of the documentation. Paragraph width, etc.
:+1:
| gharchive/pull-request | 2015-08-20T23:49:09 | 2025-04-01T06:37:39.703733 | {
"authors": [
"RichardLitt",
"simonv3"
],
"repo": "Urigo/angular-meteor",
"url": "https://github.com/Urigo/angular-meteor/pull/604",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1821895650 | [BUG] [IOS] Cannot build, error: 'guard' body must not fall through, consider using a 'return' or 'throw' to exit the scope
Describe the bug
When trying to build a project in Xcode, the error occurs, and the build fails.
The error is:
'guard' body must not fall through, consider using a 'return' or 'throw' to exit the scope
To Reproduce
Steps to reproduce the behavior:
Build
See error
Expected behavior
Build without failing.
Additional context
macOS 12.6
Xcode 14.0.1
"@usercentrics/react-native-sdk": "^2.8.2"
"react-native": "0.68.6"
Hey @bitfabrikken ,
do you think you could provide us with the stack trace/build log so we can see where the error is coming from?
Cheers,
Rui
@userCTest It's happening in UsercentricsAnalyticsEventType+Int.swift, line 8, col 9.
@userCTest podfile included in case it helps
require_relative '../node_modules/react-native/scripts/react_native_pods'
require_relative '../node_modules/@react-native-community/cli-platform-ios/native_modules'
platform :ios, '12.0'
install! 'cocoapods', :deterministic_uuids => false
target 'asdf' do
config = use_native_modules!
use_frameworks! :linkage => :static
$RNFirebaseAsStaticFramework = true
$RNFirebaseAnalyticsWithoutAdIdSupport=true #added to not use ad-ids in analytics
flags = get_default_flags()
use_react_native!(
:path => config[:reactNativePath],
:hermes_enabled => flags[:hermes_enabled],
:fabric_enabled => flags[:fabric_enabled],
:app_path => "#{Pod::Config.instance.installation_root}/.."
)
post_install do |installer|
react_native_post_install(installer)
__apply_Xcode_12_5_M1_post_install_workaround(installer)
end
end
I had the same problem without using the new Track API and I managed to fix it
You can use patch-package to make the patch
diff --git a/node_modules/@usercentrics/react-native-sdk/ios/Extensions/UsercentricsAnalyticsEventType+Int.swift b/node_modules/@usercentrics/react-native-sdk/ios/Extensions/UsercentricsAnalyticsEventType+Int.swift
index 95a0570..1bffb9f 100644
--- a/node_modules/@usercentrics/react-native-sdk/ios/Extensions/UsercentricsAnalyticsEventType+Int.swift
+++ b/node_modules/@usercentrics/react-native-sdk/ios/Extensions/UsercentricsAnalyticsEventType+Int.swift
@@ -2,9 +2,10 @@ import Usercentrics
 
 public extension UsercentricsAnalyticsEventType {
-    static func initialize(from value: Int) -> UsercentricsAnalyticsEventType {
+    static func initialize(from value: Int) -> UsercentricsAnalyticsEventType? {
         guard let eventType = UsercentricsAnalyticsEventType.values().get(index: Int32(value)) else {
             assert(false)
+            return nil
         }
         return eventType
     }
 }
diff --git a/node_modules/@usercentrics/react-native-sdk/ios/RNUsercentricsModule.swift b/node_modules/@usercentrics/react-native-sdk/ios/RNUsercentricsModule.swift
index 383e19c..4d701f9 100644
--- a/node_modules/@usercentrics/react-native-sdk/ios/RNUsercentricsModule.swift
+++ b/node_modules/@usercentrics/react-native-sdk/ios/RNUsercentricsModule.swift
@@ -188,7 +188,8 @@ class RNUsercentricsModule: NSObject, RCTBridgeModule {
     }
 
     @objc func track(_ event: Int) -> Void {
-        usercentricsManager.track(event: UsercentricsAnalyticsEventType.initialize(from: event))
+        guard let usercentricsAnalyticsEventType = UsercentricsAnalyticsEventType.initialize(from: event) else { return }
+        usercentricsManager.track(event: usercentricsAnalyticsEventType)
     }
 
     @objc func reset() -> Void {
Everything seems to work, but maybe they'll come up with a better solution for this
@sebastian-godja thx for the info!
@bitfabrikken thanks for creating the support ticket. If that's ok with you, will carry the discussion on the ticket?
@userCTest I am using "react-native": "0.72.3"
import Usercentrics

public extension UsercentricsAnalyticsEventType {
    static func initialize(from value: Int) -> UsercentricsAnalyticsEventType? {
        guard let eventType = UsercentricsAnalyticsEventType.values().get(index: Int32(value)) else {
            assert(false)
            return nil
        }
        return eventType
    }
}
Thanks for this, it allows me to build further.
But then it stops on some errors:
Undefined symbol: _SKAdNetworkCoarseConversionValueHigh
Undefined symbol: _SKAdNetworkCoarseConversionValueLow
Undefined symbol: _SKAdNetworkCoarseConversionValueMedium
Undefined symbol: _SKStoreProductParameterAdNetworkSourceIdentifier
glad to know @bitfabrikken!! And thanks @sebastian-godja for the workaround! :)
Anyway, as I said before, we will still be looking into this, so hopefully, some updates will follow soon.
Cheers,
Rui
fixes in https://github.com/Usercentrics/react-native-sdk/pull/88
| gharchive/issue | 2023-07-26T08:52:37 | 2025-04-01T06:37:39.735280 | {
"authors": [
"bitfabrikken",
"sebastian-godja",
"uc-leo",
"userCTest"
],
"repo": "Usercentrics/react-native-sdk",
"url": "https://github.com/Usercentrics/react-native-sdk/issues/86",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2048496667 | Discuss how to handle exception/None for default resource
I am currently throwing an error when the default resource is not set. But it might be more handy to return `None`, since that might be easier to test for in the data uploads and downloads. For the uploads and downloads we would have to check whether the default resource exists. The session just sets it, but currently you can also set the irods_default_resource to 'bogus' and the property would be set without throwing an error ...
Feel free to change it, and merge if you are happy.
Originally posted by @chStaiger in https://github.com/UtrechtUniversity/iBridges/issues/14#issuecomment-1859994562
I don't think this is currently a problem anymore, reopen if wrong.
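The None-returning design discussed above could look roughly like this (a toy sketch; the class and attribute names are illustrative, not the actual iBridges API):

```python
class Session:
    """Toy stand-in for a session with an optional default iRODS resource."""

    def __init__(self, settings: dict, known_resources: set):
        self._settings = settings
        self._known = known_resources

    @property
    def default_resource(self):
        """Return the configured resource name, or None if it is unset or unknown.

        Returning None instead of raising lets upload/download code use a
        plain truthiness check, and validating against the known resources
        catches a bogus 'irods_default_resource' value at read time.
        """
        name = self._settings.get("irods_default_resource")
        return name if name in self._known else None

assert Session({"irods_default_resource": "demoResc"}, {"demoResc"}).default_resource == "demoResc"
assert Session({"irods_default_resource": "bogus"}, {"demoResc"}).default_resource is None
```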
| gharchive/issue | 2023-12-19T11:46:31 | 2025-04-01T06:37:39.744225 | {
"authors": [
"qubixes"
],
"repo": "UtrechtUniversity/iBridges",
"url": "https://github.com/UtrechtUniversity/iBridges/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1291984278 | Updated Date in the Footer
[x] Add Automatically Updated Date script
JavaScript will automatically update the footer year automatically each year
| gharchive/issue | 2022-07-02T06:41:45 | 2025-04-01T06:37:39.748287 | {
"authors": [
"V-FOR-VEND3TTA"
],
"repo": "V-FOR-VEND3TTA/Big-O-Media",
"url": "https://github.com/V-FOR-VEND3TTA/Big-O-Media/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
958253759 | Pause and resume actions or not?
Context
The definition of the startPause action [6.8.1] states: "... Actions can continue...", whereas the definition of the stopPause action states: "... Movement and all other actions will be resumed (if any)..."
The "finished" column in the table in [6.8.2] suggests that actions should be paused (startPause: "... All actions will be paused..." and stopPause: "... All paused actions will be resumed...").
Questions
Is it safe to assume that actions should be paused and that the first quote is a mistake?
If used in a preemptive fashion as described in Issue #8 (which makes perfect sense to me - if I want to pause something, I must preempt it), do we maybe need some additional flag indicating whether or not to resume any preempted actions?
if you have atomic actions you may delay the action result until this action is finished, and then mark the pause action as finished. or you reject the pause action with an error.
so I would prefer the first way.
@AntonDueck : I agree; if an already running action cannot be paused, the state of 'startPause' is RUNNING until the running action is finished. Rejecting 'startPause' would be unfortunate, because it depends on timing whether 'startPause' is accepted or not.
you have to inspect the action results (of all actions) after your instant action is finished to see what has been paused and which actions have finished.
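The semantics agreed above (pause what can be paused, keep the startPause action RUNNING until atomic actions finish, rather than rejecting it) can be sketched as a toy model; this is illustrative, not VDA5050 reference code:

```python
from enum import Enum

class State(Enum):
    RUNNING = "RUNNING"
    PAUSED = "PAUSED"
    FINISHED = "FINISHED"

def start_pause(actions):
    """Pause every pausable running action; the startPause result itself
    stays RUNNING while any atomic (non-pausable) action still executes,
    and only reports FINISHED once nothing blocks the pause."""
    for action in actions:
        if action["pausable"] and action["state"] == State.RUNNING:
            action["state"] = State.PAUSED
    atomic_still_running = any(
        not a["pausable"] and a["state"] == State.RUNNING for a in actions
    )
    # Delay the result instead of rejecting startPause: whether a
    # rejection happens would otherwise depend purely on timing.
    return State.RUNNING if atomic_still_running else State.FINISHED

actions = [
    {"id": "move", "pausable": True, "state": State.RUNNING},
    {"id": "weld", "pausable": False, "state": State.RUNNING},
]
assert start_pause(actions) == State.RUNNING   # atomic 'weld' still running
assert actions[0]["state"] == State.PAUSED     # 'move' was paused
```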
| gharchive/issue | 2021-08-02T15:20:49 | 2025-04-01T06:37:39.789032 | {
"authors": [
"AntonDueck",
"Pythocrates",
"collani-bosch"
],
"repo": "VDA5050/VDA5050",
"url": "https://github.com/VDA5050/VDA5050/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
931819115 | handling date for scatter plot etc
This PR involves several works concerning scatter plot and its viz (at web-eda)
Detailed description and some screenshots can be found at the corresponding viz
Note: these involve significant changes from previous scatter plot (XYPlot) component, thus merging with plot tidy-up work by Bob may need to be done carefully
Updated with tidy-up works in conjunction with the corresponding viz part. Details can be found at the corresponding PR
I presume that Bob agreed with this PR as he approved the corresponding viz PR :)
| gharchive/pull-request | 2021-06-28T17:53:31 | 2025-04-01T06:37:39.792960 | {
"authors": [
"moontrip"
],
"repo": "VEuPathDB/web-components",
"url": "https://github.com/VEuPathDB/web-components/pull/161",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
917585628 | plot thumbnails to reflect config set when expanded
When I've minimized a visualization (I was looking at scatter) and gone back to where I can add another, it's reasonable a user might expect the configurations they set when expanded/full screen to still apply.
So if a user chooses log scale y axis in histogram, the next freshly made histogram should have log scale on by default?
Some settings (like log y scale) could be sticky, but others, like bin width, should not.
Currently, a user can copy/clone a visualisation if they wanted to preserve some settings. I tried this with histogram. You can set a custom log-scale and bin width, then clone the viz, then change the variable in the new viz, the log scale persists, but the bin width goes back to default, which is IMO the desired behaviour.
I mean when I configure a viz and then minimize it, the thumbnail version should probably keep the configuration. I didn't mean to apply the configuration to a new viz. Sorry I wasn't clear.
That's OK.
The behaviour you describe should be what happens! There may be a bug.
Could you give more details please?
On Fri, Jun 11, 2021 at 11:23 AM Danielle Callan @.***>
wrote:
I mean when I configure a viz and then minimize it, the thumbnail version
should probably keep the configuration. I didn't mean to apply the
configuration to a new viz. Sorry I wasn't clear.
Did a quick test. The scatter plot config (the plot type radio buttons) is
honoured in the thumbnail versions, so I don't see the problem you see -
yet!
On Fri, Jun 11, 2021 at 11:35 AM Bob MacCallum @.***> wrote:
That's OK.
The behaviour you describe should be what happens! There may be a bug.
Could you give more details please?
On Fri, Jun 11, 2021 at 11:23 AM Danielle Callan @.***>
wrote:
I mean when I configure a viz and then minimize it, the thumbnail version
should probably keep the configuration. I didn't mean to apply the
configuration to a new viz. Sorry I wasn't clear.
Right. It may be that what I'm asking for isn't very easy, but the client-side controls don't seem to persist, like if I click something off in the legend, for example. I'm not sure a user has any reason to know why some things persist and others don't. It'll probably be particularly noticeable for something like switching axes, because for some types of plots client-side switching makes sense, and for others a new request should be made to the data service.
Or maybe it's even specific to the plotly legend; I haven't played around with it too much yet, to be honest.
We are not persisting the result of legend interactions. I'm sure this is something we can address.
Oh I see. Won't it be nasty persisting internal plotly things? That's why we've disabled virtually all of the interactivity features.
implementation hint: plotly has a callback to capture entire state of a plot (ask @dmfalke for more)
post phase 1 - rolling our own legend will make this work redundant (will need to redo it)
Tested in GEMS1 -> Bar plot of "Study timepoint" with "age group" as overlay. When I switch off an age group in legend and minimize the plot, when I open it again, the state persists (that age group is still off). Passes QA.
@nkittur-uga thank you for your tests! 👍
| gharchive/issue | 2021-06-10T17:12:25 | 2025-04-01T06:37:39.804647 | {
"authors": [
"bobular",
"d-callan",
"dmfalke",
"moontrip",
"nkittur-uga"
],
"repo": "VEuPathDB/web-eda",
"url": "https://github.com/VEuPathDB/web-eda/issues/172",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1506076147 | Allow specifying vhdl_ls version to use
When selecting embedded for the language server, the plugin will always use the latest vhdl_ls release.
This can break stuff without actually changing anything. In my case, I'm running the VHDL LS plugin on Ubuntu 18.04. The latest vhdl_ls release 0.22 now requires glibc version > 2.27. Ubuntu 18.04 only comes with glibc version 2.27 so I cannot use the VHDL LS plugin anymore.
It would be nice to be able to specify an explicit vhdl_ls version with the embedded option.
It would probably be better if we built vhdl_ls using a somewhat older Docker image so that it does not require such a new glibc. Currently it is built using ubuntu-latest
I did some investigation and GitHub is about to deprecate the ubuntu-18.04 runner, so it does not seem attractive to use it. You could always build vhdl_ls yourself. I believe the vscode plugin supports pointing out binaries you build yourself.
Using an old version of vhdl_ls is not good since a lot of new functionality is being added right now.
Yes you can use your own compiled language server, just point it out with the vhdlls.languageServerUserPath option.
Another option is to build the binaries without any libc dependency using the musl target. That would reach the most possible systems. If someone makes a PR of that it probably would be approved.
Yes cargo install just adds the binary. Cargo has no concept of a data folder. When building locally you should just check out the code and run cargo build --release and point to the binary in the target/release folder and it will find the libraries.
Anyway we should just build a musl version that does not depend on glib and avoid this complexity for users such as you.
Note that it is not recommended to mix the vhdl_libraries folder with a vhdl_ls binary when they are not from the same commit. The standard.vhd package is tightly coupled with the binary and has recently changed as the vhdl_ls analysis became smarter.
@Bochlin I am adding x86_64-unknown-linux-musl to the github release targets with a plan to phase out x86_64-unknown-linux-gnu. Is it possible to make a new rust_hdl_vscode release that uses the musl binary?
@Bochlin https://github.com/VHDL-LS/rust_hdl/releases/tag/v0.24.0 now includes musl builds which do not have any glibc dependency on Linux
So rust_hdl_vscode should switch from the linux-gnu to the linux-musl zip folder on Linux.
Published 0.4.0 which uses the musl build instead.
@Bochlin excellent
@wrightsg could you try if it solved your original problem?
I can confirm that the extension works again on Ubuntu 18.04 with version v0.4.0.
Thank you!
| gharchive/issue | 2022-12-21T10:30:45 | 2025-04-01T06:37:39.813045 | {
"authors": [
"Bochlin",
"kraigher",
"wrightsg"
],
"repo": "VHDL-LS/rust_hdl_vscode",
"url": "https://github.com/VHDL-LS/rust_hdl_vscode/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
811296578 | Documentation
adds the data provenance section
updates notebook links
adds a knn and token signature subsection
fixes minor formatting issues
It might be necessary to update the refdata dependency in docs/requirements.txt to >=0.2.0(?)
good catch!
| gharchive/pull-request | 2021-02-18T17:12:50 | 2025-04-01T06:37:39.815132 | {
"authors": [
"maqzi"
],
"repo": "VIDA-NYU/openclean-core",
"url": "https://github.com/VIDA-NYU/openclean-core/pull/110",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2229997681 | Add command line to not display splash screen.
Title says it all. :-)
When you're debugging a startup issue, the splash screen gets old real quick!
Tom
already added with --debug 2 (or greater)
| gharchive/pull-request | 2024-04-07T23:01:54 | 2025-04-01T06:37:39.817731 | {
"authors": [
"VK2BEA",
"tomverbeure"
],
"repo": "VK2BEA/HP8753-Companion",
"url": "https://github.com/VK2BEA/HP8753-Companion/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1531781699 | Allow symfony/yaml:^6
Hello!
Can you allow the Symfony Yaml package version ^5.4 || ^6, please?
Problem 1
- Root composer.json requires vkcom/modulite-phpstan * -> satisfiable by vkcom/modulite-phpstan[v1.0.0].
- vkcom/modulite-phpstan v1.0.0 requires symfony/yaml ^5.4 -> found symfony/yaml[v5.4.0, ..., v5.4.17] but the package is fixed to v6.2.2 (lock file version) by a partial update and that version does not match. Make sure you list it as an argument for the update command.
There is a pull request:
https://github.com/VKCOM/modulite-phpstan/pull/12
@KorDum, hi!
We fixed the symfony/yaml version problem in #42. Shall we close your issue?
| gharchive/issue | 2023-01-13T06:33:05 | 2025-04-01T06:37:39.836861 | {
"authors": [
"Danil42Russia",
"KorDum"
],
"repo": "VKCOM/modulite-phpstan",
"url": "https://github.com/VKCOM/modulite-phpstan/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
430213095 | Missing groups.getCallbackServers method.
Reference
groups.getCallbackServers
Спасибо, исправили!
| gharchive/issue | 2019-04-08T00:44:54 | 2025-04-01T06:37:39.840199 | {
"authors": [
"Mobyman",
"leonov-kirill"
],
"repo": "VKCOM/vk-php-sdk",
"url": "https://github.com/VKCOM/vk-php-sdk/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2166123519 | "This contract object doesn't have address set yet, please set an address first."
VWBLのsetKey関数を実行すると、以下のエラーが出てきてどうすればいいのかよくわかりませんので、お手隙の時に回答していただけたら幸いです
以下のことを試しました
オンチェーンでNFTの情報を保存するために、npmでimportしたvwbl sdkを使用しておらず、vwbl sdkのコードを改造しています
正しいSignMessageが取れているので、コントラクトアドレスは正しいかと思います
NFTをミントしたときに使用したdocumentId、walletAddressを使用しました
walletAddressはMetamaskが生成したアドレスを使用しています
鍵の生成方法は、vwbl sdkが採用している方法を使っています
原因はmainnetのエンドポイントにしていたことでした。testnetのエンドポイントに切り替えたらエラー上手く行きました。
https://docs.vwbl-protocol.org/endpoint.html#test-env
問題が解決したのでイシューを閉じます
| gharchive/issue | 2024-03-04T07:27:48 | 2025-04-01T06:37:39.926376 | {
"authors": [
"jomspk"
],
"repo": "VWBL/VWBL-SDK",
"url": "https://github.com/VWBL/VWBL-SDK/issues/131",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1702249193 | Problems with the Vivaldi browser
Operating system
Windows 10
Running as a service
I run the program the normal way
Describe the bug
When the program is enabled, I get problems with the Vivaldi browser. It randomly opens or fails to open sites from search results, and sites that do open are very slow. For example, it consistently fails to open Habr, returning an SSL certificate error. There are no problems with other browsers.
Additional information
No response
Same problem here. Disabling QUIC in the advanced settings (vivaldi:flags) didn't help. How do I debug this?
+1 to the problem. It also seems to have caused graphics card problems: the machine freezes hard and shuts down everything that was loading it (the game, the browser itself, and Discord). There was no such problem before installing it. I would like instructions for removing GoodbyeDPI.
I solved the problem less radically. The bugs come precisely from changing the experimental flags:
Experimental QUIC protocol/TLS 1.3 hybridized Kyber support
Without them, GoodbyeDPI still works; you just occasionally have to reload a video page.
The problem was solved after installing release 0.2.3rc1. The topic can be closed.
In the end, disabling the experimental flags and so on did not solve the problem. The browser still reloads exactly every 30 minutes and then shuts down everything else. I'm on the latest version, uploaded 7 hours ago. Until a solution is found, I'll watch through Discord activities.
@BlackAures1 the solution is simple: use a blacklist.
If you mean this, that's exactly what I was running :)
| gharchive/issue | 2023-05-09T15:13:47 | 2025-04-01T06:37:39.939214 | {
"authors": [
"BlackAures1",
"Shionsan",
"ValdikSS",
"kkaazzoo"
],
"repo": "ValdikSS/GoodbyeDPI",
"url": "https://github.com/ValdikSS/GoodbyeDPI/issues/306",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2279222401 | 🛑 b2b MarketingServices is down
In ba005dd, b2b MarketingServices (https://b2b-marketingservices.de/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: b2b MarketingServices is back up in bac9420 after 8 minutes.
| gharchive/issue | 2024-05-04T22:27:32 | 2025-04-01T06:37:39.944824 | {
"authors": [
"Valecha24"
],
"repo": "Valecha24/WebMonitoring",
"url": "https://github.com/Valecha24/WebMonitoring/issues/1229",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2324400211 | 🛑 b2b MarketingServices is down
In 0a05880, b2b MarketingServices (https://b2b-marketingservices.de/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: b2b MarketingServices is back up in 400b4cc after 8 minutes.
| gharchive/issue | 2024-05-29T23:41:01 | 2025-04-01T06:37:39.947429 | {
"authors": [
"Valecha24"
],
"repo": "Valecha24/WebMonitoring",
"url": "https://github.com/Valecha24/WebMonitoring/issues/3933",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2327655419 | 🛑 b2b MarketingServices is down
In b5c45a3, b2b MarketingServices (https://b2b-marketingservices.de/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: b2b MarketingServices is back up in 67fe35e after 8 minutes.
| gharchive/issue | 2024-05-31T11:44:56 | 2025-04-01T06:37:39.949870 | {
"authors": [
"Valecha24"
],
"repo": "Valecha24/WebMonitoring",
"url": "https://github.com/Valecha24/WebMonitoring/issues/4102",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2266897251 | 🛑 b2b MarketingServices is down
In b8eb032, b2b MarketingServices (https://b2b-marketingservices.de/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: b2b MarketingServices is back up in a2415dd after 8 minutes.
| gharchive/issue | 2024-04-27T07:50:36 | 2025-04-01T06:37:39.952286 | {
"authors": [
"Valecha24"
],
"repo": "Valecha24/WebMonitoring",
"url": "https://github.com/Valecha24/WebMonitoring/issues/472",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1174484392 | [CI] Add Quality Gate Action
Add a "quality-gate" GitHub Action that runs build, lint then unit test.
The action can run on push, PR, or manually.
I added a badge to the read-me but I can't see if it will properly work until it runs on the main repo.
Additional changes: I had to add a condition to the release workflow so that it wouldn't fail on my fork.
Next steps:
Remove the duplicated workflows from CircleCI
Create a Github workflow for Cypress
Add better integration for test results, linting, coverage reporting, etc. (maybe something like https://danger.systems/js/ )
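As a sketch, such a quality-gate workflow might look like the following (the yarn script names and action versions are assumptions for illustration, not the repo's actual configuration):

```yaml
name: quality-gate

on: [push, pull_request, workflow_dispatch]

jobs:
  quality-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: yarn install --frozen-lockfile
      - run: yarn build   # assumed script names
      - run: yarn lint
      - run: yarn test
```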
Thanks for this!
| gharchive/pull-request | 2022-03-20T09:21:34 | 2025-04-01T06:37:39.960679 | {
"authors": [
"ValentinH",
"raed667"
],
"repo": "ValentinH/react-easy-crop",
"url": "https://github.com/ValentinH/react-easy-crop/pull/366",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2296008217 | fix(CHB-2989): update engine version on instance
Resolves Jira ticket
Context
Missed a spot where engine version is specified
Summary of Changes
A brief description of the changes included in the PR. (i.e. rationale behind solution for implementation of a new feature, etc.) Screenshots if applicable.
Additional Considerations
Any additional consequences, side effects, uncertainties stemming from the changes in this PR.
Instructions for the Reviewers
What do you expect from the reviewers? E.g. What code should be run to reproduce the results? Is there anything that needs attention?
Housekeeping Checklist
[ ] Linked the PR to a Jira ticket?
[ ] Linked the Jira ticket to the PR?
[ ] Checked Draft/vs Ready to Review?
[ ] Tagged the reviewers if Ready for Review?
On CH backend we use
lifecycle {
ignore_changes = [
engine_version,
]
}
to allow aurora to auto update without breaking our deployments.
thanks!!
| gharchive/pull-request | 2024-05-14T17:29:41 | 2025-04-01T06:37:39.969581 | {
"authors": [
"pchrapka"
],
"repo": "ValidereInc/grafana-fargate",
"url": "https://github.com/ValidereInc/grafana-fargate/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
677463636 | Artifacts in the synthetic low light clean image
Hi, I got artifacts when generating the synthetic low-light clean images. According to your paper, the fake low-light clean images = long-exposure images / ratio, but in practice this operation (dividing large integers by the ratio and then converting the floats back to integers) squeezes the range of values, loses accuracy, and generates "non-continuous steps" in the image, which looks like an HDR image displayed on an 8-bit screen. The result is as follows:
original long exposure image
synthesize low light clean image after auto-brightness for imshow
original low light noisy image
synthesize noise image based on the "non-continious" low light clean image
How do you fix the artifacts?
while actually such operation(large integers are divided by ratio then make float to integers)
You shouldn't convert the float to int in this step
Hi, but anyway the photon-electrons map converted from the low-light-clean raw are integers, right??
My way:
long-exposed-clean raw(integer) -> synthetic-low-light-clean(float)-> photon-electrons map(integer)-> poisson noisy photon-electrons map(integer) ->poisson noisy raw(integer)
It's not practical to keep float number in the step of generating the Poisson noisy photon-electrons map
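To make the point concrete, here is a minimal sketch (assuming numpy; the gain K and the ratio are made-up illustration values, not the paper's calibrated ones) of keeping the signal in float until the Poisson sampling step, which is where integers naturally appear:

```python
import numpy as np

K = 0.25        # assumed system gain (DN per photo-electron), illustration only
ratio = 100.0   # assumed exposure ratio

rng = np.random.default_rng(0)
# clean long-exposure raw, integer DN values
long_raw = rng.integers(512, 16383, size=(4, 4)).astype(np.float64)

# keep everything float: do NOT round the synthetic low-light image
low_light = long_raw / ratio      # float DN
electrons = low_light / K         # float expected photo-electron count

# np.random.poisson accepts a float rate, so no early rounding is needed;
# the only integers are the sampled photon counts themselves
noisy_electrons = rng.poisson(electrons)
noisy_raw = noisy_electrons * K   # back to DN

# rounding before sampling is what quantizes the rate and causes banding
banded_rate = np.floor(low_light) / K
```

The early `floor` collapses many distinct rates onto the same value, which is the "non-continuous step" artifact visible in the screenshots.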
| gharchive/issue | 2020-08-12T07:28:33 | 2025-04-01T06:37:40.129990 | {
"authors": [
"ProNoobLi",
"Vandermode"
],
"repo": "Vandermode/ELD",
"url": "https://github.com/Vandermode/ELD/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2035184802 | Multiple BN window corrupts Binary Ninja menu
Version and Platform (required):
Binary Ninja Version: 3.6.4712-dev Personal, 599e2ad7
OS: macos
OS Version: 14.1
CPU Architecture: arm64
Bug Description:
When we have multiple Binary Ninja instances open, the Binary Ninja menu has some duplicated entries.
Steps To Reproduce:
Open multiple BN instances
Click Binary Ninja menu
See there are multiple entries
Screenshots:
This is a QT bug and a duplicate issue of https://github.com/Vector35/binaryninja-api/issues/2630
| gharchive/issue | 2023-12-11T09:11:26 | 2025-04-01T06:37:40.161611 | {
"authors": [
"op2786",
"psifertex"
],
"repo": "Vector35/binaryninja-api",
"url": "https://github.com/Vector35/binaryninja-api/issues/4818",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
281953463 | Case labels missing from all views except HLIL
Unlike assembly view the IL views do not show the case labels for resolved jump tables.
The case labels in asm view were broken in 2.4.2900 with commit https://github.com/Vector35/binaryninja/commit/039aee205de3625a3b4c5bc99891fff2c5a932fc
| gharchive/issue | 2017-12-14T01:34:02 | 2025-04-01T06:37:40.162900 | {
"authors": [
"bpotchik",
"plafosse"
],
"repo": "Vector35/binaryninja-api",
"url": "https://github.com/Vector35/binaryninja-api/issues/889",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
861584716 | Fix wrong CorePluginABIVersion, plugin_abi_version, and plugin_abi_minimum_version
The current Rust lib incorrectly defines the functions needed to register ABI versions.
As I'm not familiar with rustgen I cannot provide the right fix to generate uitypes.h and retrieve the variables BN_MINIMUM_UI_ABI_VERSION and BN_CURRENT_UI_ABI_VERSION.
With this fix the included plugins build and run with the latest BinaryNinja dev release.
A yeah, you are right. If you want, you can close this PR and integrate the changes yourself; I'm not at my dev station right now and cannot make the changes till later this week.
| gharchive/pull-request | 2021-04-19T17:36:32 | 2025-04-01T06:37:40.164595 | {
"authors": [
"marpie"
],
"repo": "Vector35/binaryninja-api",
"url": "https://github.com/Vector35/binaryninja-api/pull/2380",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
48089102 | Fix README aws.yml alias
Fixes the aws.yml in the readme to have the appropriate alias. Fixing the error:
'block in visit_Psych_Nodes_Alias':
Unknown alias: development (Psych::BadAlias)
Thank you!
| gharchive/pull-request | 2014-11-07T14:48:31 | 2025-04-01T06:37:40.182942 | {
"authors": [
"elkelk",
"loganb"
],
"repo": "Veraticus/Dynamoid",
"url": "https://github.com/Veraticus/Dynamoid/pull/181",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1935499310 | refactored V_63469
ok
Ok
| gharchive/pull-request | 2023-10-10T14:29:15 | 2025-04-01T06:37:40.195413 | {
"authors": [
"Ildar1",
"agilebotanist"
],
"repo": "VeriDevOps/RQCODE",
"url": "https://github.com/VeriDevOps/RQCODE/pull/107",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2569396481 | Add Unit Tests for Text Extraction Functions
Testing Framework: Use a testing framework like unittest or pytest to create unit tests for the text extraction functions.
• Coverage: Write tests for various scenarios, including:
• Valid PDF and DOCX files.
• Empty documents.
• Documents with non-text content (images, charts).
• Files with special characters and different encodings.
• Continuous Integration: Integrate the tests into the CI pipeline to ensure they run automatically on code changes.
Relevant Code Sections:
• extract_text_from_pdf(file)
• extract_text_from_word(file)
Acceptance Criteria:
Unit tests should be created using unittest or pytest.
Tests should cover all outlined scenarios (valid files, empty files, non-text content, special characters, etc.).
Tests should be integrated with the CI pipeline and run automatically on every code push.
Code coverage should increase with the addition of these tests.
Technical Considerations:
• Use unittest or pytest for testing.
• Mock file handling where necessary to simulate different scenarios.
• Ensure that test dependencies are properly configured in the CI pipeline.
will work on this, assign this to me
@Shizu-ka I am going to give first right to refusal to my troops then I will assign to you if no one takes it.
I can work on this if it's still available
@jonulak go for it.
Unit tests are looking good, but I'll need access to GitHub Actions to implement automated tests.
| gharchive/issue | 2024-10-07T05:52:52 | 2025-04-01T06:37:40.267666 | {
"authors": [
"Shizu-ka",
"jeromehardaway",
"jonulak"
],
"repo": "Vets-Who-Code/VetsAI",
"url": "https://github.com/Vets-Who-Code/VetsAI/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1606472256 | why not create client for VMAlertmanagerConfig and VMAUTH
https://github.com/VictoriaMetrics/operator/issues/481 did the work of using code-generator to generate clients for the VM CRDs, but types like VMAlertmanagerConfig and VMAuth don't have the "//+genclient" tag needed to create clients.
I am wondering why; can they be added now?
Should be fixed in version v0.31.0
| gharchive/issue | 2023-03-02T09:45:57 | 2025-04-01T06:37:40.339778 | {
"authors": [
"Haleygo",
"f41gh7"
],
"repo": "VictoriaMetrics/operator",
"url": "https://github.com/VictoriaMetrics/operator/issues/599",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
287084810 | Tab bar should be always visible
It is really annoying that the tab bar appears/disappears if you open/close an article.
The tab bar should always be visible to avoid reorganization of the layout. Each time you open an article the tab bar appears and the other controls are moved down, and the opposite happens when you close the last article tab.
That's true, it is quite annoying
This will be solved when I replace the tab bar.
Solved, great implementation
| gharchive/issue | 2018-01-09T13:15:36 | 2025-04-01T06:37:40.360195 | {
"authors": [
"MacDisein",
"TAKeanice",
"josh64x2"
],
"repo": "ViennaRSS/vienna-rss",
"url": "https://github.com/ViennaRSS/vienna-rss/issues/1060",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2555146045 | issue #4
Description
This update proposes a significant enhancement to the user interface (UI), focusing on multiple aspects of visual and functional improvements. The main areas of change include the overall layout, background gradients, typography (fonts, text sizing, and placement), and box designs. The aim is to modernize the interface, improve clarity, and create a more engaging and cohesive user experience that aligns with the CampX brand.
Current Behavior
Layout: The current layout, while functional, lacks structure and modern aesthetics. Spacing, alignment, and organization of elements could be improved for better readability and flow.
Background: The existing background is static and lacks depth. It does not fully capture the essence of CampX's adventurous and nature-focused branding.
Fonts & Text Sizing: The current typography is basic and inconsistent. It lacks a clear visual hierarchy, making it harder for users to navigate the page intuitively.
Text Placement: Text placement is not optimized for readability, leading to an inconsistent and cluttered appearance.
Box Designs: Input fields and form elements are functional but lack the modern styling that would enhance the overall user experience.
Proposed Behavior
Layout: The new layout introduces a cleaner, more structured design that enhances user navigation and overall visual appeal. Elements are more balanced with improved spacing and alignment to guide the user's attention smoothly.
Background: Gradients have been added to the background, bringing a sense of depth and dynamism. The use of soft gradients adds a modern and vibrant feel that better reflects CampX's adventurous and outdoor-centric identity.
Fonts & Text Sizing: The updated typography uses more contemporary and readable fonts, with clearly defined sizes that establish a stronger visual hierarchy. This helps in guiding users through the page efficiently.
Text Placement: Text placement has been optimized to ensure clarity and readability. Proper spacing around the text allows for a more professional and polished look.
Box Designs: Form fields and buttons have been redesigned to appear more modern, with cleaner lines, better spacing, and intuitive styling. The new design makes interactive elements like input fields, buttons, and validation messages more user-friendly and visually engaging.
Screenshots
Additional Context
This update is informed by modern UI/UX principles and user feedback, highlighting the need for a more intuitive, engaging, and visually appealing interface. The introduction of gradients, updated fonts, and restructured layout ensures the design remains contemporary while aligning with the core themes of the CampX brand. These changes are not just aesthetic but aim to create a seamless and enjoyable user experience.
Impact
Usability: The clearer layout, modern typography, and improved text placement enhance readability and ease of navigation, making the login and registration process smoother.
Visual Appeal: The addition of gradients, refined text sizing, and polished form elements elevates the overall aesthetic of the UI, making a strong first impression on users.
Accessibility: By ensuring better readability, contrast, and responsiveness, the redesign caters to a wider range of users, including those with visual impairments.
Performance: A more streamlined design with modern elements can improve loading times, especially on mobile devices, leading to a more efficient and responsive experience.
Related Issues
This proposal complements ongoing discussions about UI consistency across the CampX platform, ensuring that design standards and visual appeal are maintained throughout. Additionally, it addresses user feedback regarding the need for better accessibility and visual coherence across devices.
@BinaryBhakti , Thank you for the new issue! Everything looks great, but I have a few suggestions. It would be ideal to have a background that is not completely dark or white—perhaps a lighter background with some subtle dark shades. I’d also like to see some padding around the campground picture for better spacing. Lastly, please ensure there's a carousel feature for displaying multiple campground images. Looking forward to seeing these changes!
Do you have any logo from where I can extract colors, and keep the color theme accordingly
I don't have a logo at the moment, and honestly, I'm not great with UI. So, I'd prefer avoiding any green. Black and white shades would be ideal for the color theme. Thanks!
If this doesn't match your requirements then I will be happy to change the design according to your needs.
@BinaryBhakti , I'm happy with this. Make sure you sync your fork before making a pull request. Thank you!
Ok
You haven't assigned me this issue. Could you assign me?
@BinaryBhakti ,I've assigned it to you!
| gharchive/issue | 2024-09-29T20:40:37 | 2025-04-01T06:37:40.372110 | {
"authors": [
"BinaryBhakti",
"Vignesh025"
],
"repo": "VigneshDevHub/CampX",
"url": "https://github.com/VigneshDevHub/CampX/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1142025103 | 🛑 collectedememoire is down
In c9078d2, collectedememoire ($COLLECTEDEMEMOIRE) was down:
HTTP code: 503
Response time: 450 ms
Resolved: collectedememoire is back up in e72550f.
| gharchive/issue | 2022-02-17T22:59:14 | 2025-04-01T06:37:40.376799 | {
"authors": [
"Vikingfr"
],
"repo": "Vikingfr/upptime",
"url": "https://github.com/Vikingfr/upptime/issues/292",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
377119858 | What is nb_code_styles for?
See here. nb_code_styles is assigned but never used. What is it for?
@chere005 did you add it? I don't remember writing it.
Aha. I did add it but its reference has been removed.
It's meant for "notebook cell style", which may be displayed in an .nb file like this:
(* ::Input:: *)
1 + 2
Then the cell below will be displayed as an input cell.
This pattern seems to have little significance. Maybe it's time to remove it.
| gharchive/issue | 2018-11-04T04:26:52 | 2025-04-01T06:37:40.379489 | {
"authors": [
"Shigma",
"batracos"
],
"repo": "ViktorQvarfordt/Sublime-WolframLanguage",
"url": "https://github.com/ViktorQvarfordt/Sublime-WolframLanguage/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
853934071 | Replaced ifend per endif
As Delphi complains a bit about that:
Legacy '$IFEND' directive found. Consider changing to '$ENDIF' or enable $LEGACYIFEND at line 39 (39:3)
Then ZDesigner compiles perfectly.
Thanks! I'll change to $endif everywhere. I had forgotten about this
| gharchive/pull-request | 2021-04-08T21:24:31 | 2025-04-01T06:37:40.380728 | {
"authors": [
"Txori",
"VilleKrumlinde"
],
"repo": "VilleKrumlinde/zgameeditor",
"url": "https://github.com/VilleKrumlinde/zgameeditor/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
49076293 | Lags on Firefox
Hello!
In Chrome it works well, but of course it lags in Firefox: very slow and stuttery.
Hi @VincentGarreau I'm willing to fix this issue if you can mentor me through this bug. Any idea why its lagging on Firefox or what sections of the code I should look into that might be causing these performance issues?
@VincentGarreau you could start ;)
@VincentGarreau if 'distanceParticles' is commented out, then the lag goes away
This might not be solvable because Firefox has some serious issues with lineTo. It's 21x slower than chrome: http://jsperf.com/draw-lines
I tried making a minimal proof of concept drawing 300 lines (no animation, no alpha transparency) and the performance degrades quickly with canvas size, it's completely unusable on rMBP. Mozilla also seem to be aware lineTo is slow: https://bugzilla.mozilla.org/show_bug.cgi?id=1001954
My small demo page drawing 300 lines: http://jsfiddle.net/4ry8pdpb/1/, in my actual project it's closer to 600 lines that's rendered effortlessly by chrome.
Thanks for your detailed explanation @Celc! I noticed that Firefox has serious issues with lineTo, and your explanation enlightened me.
| gharchive/issue | 2014-11-17T11:16:59 | 2025-04-01T06:37:40.387650 | {
"authors": [
"Celc",
"VincentGarreau",
"errogaht",
"ignaty",
"lorenmh",
"watadarkstar"
],
"repo": "VincentGarreau/particles.js",
"url": "https://github.com/VincentGarreau/particles.js/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2715925179 | [bug fix] Allow committing of files larger than 100mb if handled by LFS
The plugin refuses to add files bigger than 100mb if the remote is GitHub, even if a file is handled by LFS.
I added some logic that checks if a file is (or will be) handled by LFS, before blocking the commit.
Detailed Problem Description
Commit Where Check for Too Big Files Was Added
I have not yet implemented IsomorphicGit, but I am happy to do that if I get feedback on whether my change is being considered.
The changes were only tested on my Windows 11 machine (not on Linux or macOS).
This is my first ever PR to an open source project, so any feedback is welcome. I would be very happy if this actually were merged. :D
Thanks for the fixes. I just noticed that you are currently passing the vault_path to the git lfs check, but it needs the repo-relative path, which is currently not stored in those objects. So it needs a bit more restructuring, which I will add to this PR in the next few days.
| gharchive/pull-request | 2024-12-03T20:33:50 | 2025-04-01T06:37:40.391623 | {
"authors": [
"TimonGisler",
"Vinzent03"
],
"repo": "Vinzent03/obsidian-git",
"url": "https://github.com/Vinzent03/obsidian-git/pull/822",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1999568670 | Windows Defender detected it as a virus and deleted it
Is this a false alarm?
Yes. I think it's because I don't sign the executable files so Windows defender just marks it as a virus since its from an unknown publisher. It is safe to run and is only a false alarm.
| gharchive/issue | 2023-11-17T17:00:19 | 2025-04-01T06:37:40.394205 | {
"authors": [
"AsangaColney",
"Viren070"
],
"repo": "Viren070/Emulator-Manager",
"url": "https://github.com/Viren070/Emulator-Manager/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1062862197 | GVR only showing Viro360Image on left eye
Requirements:
Please go through this checklist before opening a new issue
[x] Review the documentation
[x] Search for existing issues in: viromedia/viro & ViroCommunity/viro
[x] Use the latest ViroReact release
Environment
Please provide the following information about your environment:
Development OS: Mac
Device OS & Version: Android 10
Version: "@viro-community/react-viro": "^2.21.1", "react-native": "0.66.2",
Device(s): Pixel XL
Description
I'm only seeing the image on one side of the screen. There is a small blue sparkle which I couldn't make out for the right eye. Zooming in with a screenshot didn't help either.
Reproducible Demo
export default function Screen() {
return (
<ViroVRSceneNavigator
initialScene={{
scene: MyStartScene,
}}
/>
);
}
export const MyStartScene = () => {
return (
<ViroScene>
<Viro360Image source={require('../../../../assets/images/grid.jpeg')} />
</ViroScene>
);
};
I did see something about stereoMode having issues with items other than 'None' - is it possible that the stereoMode is not being set down through react --> android?
https://forum.unity.com/threads/google-vr-unity-2019-2-1-lwrp-only-showing-left-eye-right-eye-blank.735389/
Reproduced here
| gharchive/issue | 2021-11-24T20:20:25 | 2025-04-01T06:37:40.416568 | {
"authors": [
"NS-BOBBY-C"
],
"repo": "ViroCommunity/viro",
"url": "https://github.com/ViroCommunity/viro/issues/62",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
181256190 | Add Lability Module Caching cmdlets
Now that Lability is downloading and caching PowerShell and DSC resource modules, we could do with some additional cmdlets to manage these (currently I have to manipulate the cache directly in the filesystem).
Get-LabModule(Cache) - Returns cached modules
Remove-LabModule(Cache) - Deletes a cached module
Clear-LabModuleCache - Empties all cached modules
@csandfeld If we think that Lability is the correct place, we could add the following too?:
Install-LabModule - Registers all required cached modules defined in a Lability configuration
Used to enable compiling MOFs on the Lability host
Should default to CurrentUser scope
Uninstall-LabModule - Unregisters all modules defined in a Lability configuration
Should default to CurrentUser scope
Might be better just deleting all user-scoped modules?!
Not sure it gets you closer to a decission, but see my comment in #147 :-)
| gharchive/issue | 2016-10-05T20:23:33 | 2025-04-01T06:37:40.428101 | {
"authors": [
"csandfeld",
"iainbrighton"
],
"repo": "VirtualEngine/Lability",
"url": "https://github.com/VirtualEngine/Lability/issues/153",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1190861193 | "DatetimeIndex" has no attribute "strftime"
Minimal working example
# ––– file strftime_bug.py
import pandas as pd
days: pd.DatetimeIndex = pd.date_range('2020-1-1', periods=3)
print([day for day in days.strftime("%d")])
Behaviour
mypy emits error:
$ mypy strftime_bug.py
strftime_bug.py:5: error: "DatetimeIndex" has no attribute "strftime"
Found 1 error in 1 file (checked 1 source file)
execution works:
$ python strftime_bug.py
['01', '02', '03']
Versions
$ python --version
Python 3.10.4
$ mypy --version
mypy 0.942
$ pip freeze | grep stub
pandas-stubs==1.2.0.56
pandas-stubs has moved to a new repository and will now be managed alongside pandas itself: https://github.com/pandas-dev/pandas-stubs
You might try using the newest version, pip install pandas-stubs==1.4.2.220626, which comes from that repository. If it doesn't work, please consider opening an issue in the new repository.
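If upgrading isn't immediately possible, here are two hedged workaround sketches for the old stubs (whether the Series.dt route type-checks depends on the stub version; at runtime both produce the same labels):

```python
import pandas as pd

days: pd.DatetimeIndex = pd.date_range("2020-1-1", periods=3)

# Option 1: suppress the stub error on this one line
labels = days.strftime("%d")  # type: ignore[attr-defined]

# Option 2: go through Series.dt, which older stubs may type correctly
labels2 = pd.Series(days).dt.strftime("%d").tolist()

print(list(labels), labels2)
```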
| gharchive/issue | 2022-04-03T08:36:01 | 2025-04-01T06:37:40.434099 | {
"authors": [
"claudio-ebel",
"zkrolikowski-vl"
],
"repo": "VirtusLab/pandas-stubs",
"url": "https://github.com/VirtusLab/pandas-stubs/issues/162",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
626599877 | [ADDING] New company which use your solution
Hello,
Javier Ramirez from VirusTotal asked us to open an issue about this .. Well, that's not a issue at all.
Thanks for all the work you did and do, it's an amazing jobs.
Our company use your solution since 3 years - we would be glad to be within the company list : www.touchweb.fr / TouchWeb - as you proposed in the README.
Thanks for all,
Kind regards,
Vincent
Added in 9a3b4e3d7d246df9cadd43420d24195a7a7ea758. Thank you very much for your feedback!
| gharchive/issue | 2020-05-28T15:23:29 | 2025-04-01T06:37:40.436294 | {
"authors": [
"plusvic",
"vincent-guesnard"
],
"repo": "VirusTotal/yara",
"url": "https://github.com/VirusTotal/yara/issues/1291",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2183036053 | [Feature] Support multi-column sorting in tables
What problem does this feature solve?
Current situation:
We currently use the SORT_CLICK event and return false to run a custom sort method, aiming to achieve multi-column sorting.
However, the updateSortState method's parameter appears to support an array form, yet passing an array makes the code throw an error.
Sample code:
Error location:
What does the proposed API look like?
Expectation 1:
Support a switch for multi-column sorting, with the SORT_CLICK event returning an array of all data currently being sorted.
Expectation 2:
The updateSortState method should support passing data in array form to update the sort icon state.
Yep, the base table doesn't support multiple sort rules internally yet.
Yep, the base table doesn't support multiple sort rules internally yet.
Is there a plan to support this?
Yep, the base table doesn't support multiple sort rules internally yet.
I can help add multiple sorting to the repository if you don't have plans or the ability to add this feature anytime soon. This feature is sorely lacking
| gharchive/issue | 2024-03-13T03:18:33 | 2025-04-01T06:37:40.440365 | {
"authors": [
"AntonPolyakin",
"ChenBin12138",
"fangsmile",
"feEden"
],
"repo": "VisActor/VTable",
"url": "https://github.com/VisActor/VTable/issues/1265",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
826839300 | bug:- the search bar is not working.
search bar;-
while searching something like "project" etc., nothing happens.
Can you assign this to me under GSSoC'21?
@kumarishalini6 Yeah Sure !! You can implement that feature
Any update ?
Reply within 2 days, or else I will have to close this issue.
| gharchive/issue | 2021-03-10T00:14:48 | 2025-04-01T06:37:40.445150 | {
"authors": [
"Vishal-raj-1",
"harshgupta20",
"kumarishalini6"
],
"repo": "Vishal-raj-1/Awesome-JavaScript-Projects",
"url": "https://github.com/Vishal-raj-1/Awesome-JavaScript-Projects/issues/275",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1288875207 | About memory overflow error during training
Hi! Thanks for the code. When I train the model up to batch 42, I run into the following error using 4x GTX 2080 Ti GPUs:
“CUDA: out of memory, tried to allocate...”
I set the batch size to 10, and it still occurs. Is it purely a hardware problem? What device did you use to train the model?
I noticed that you said your code didn't support multi-GPU training a year ago; is it still unsupported?
Thank you!
Hi, it is because the code does not support multi-GPU training well. Sorry for the inconvenience!
| gharchive/issue | 2022-06-29T15:10:26 | 2025-04-01T06:37:40.454186 | {
"authors": [
"Wangyf1998",
"junchen14"
],
"repo": "Vision-CAIR/VisualGPT",
"url": "https://github.com/Vision-CAIR/VisualGPT/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
876489276 | Existing PSP/PS1 titles not shown
I'm currently running HexFlow on my Vita (1000) using my existing Sony memory card. So, I've got titles that were installed from PSN before jailbreak on the card and in bubbles shown on the home screen.
My issue is that the existing bubbles for PSN-downloaded titles are not showing in HexFlow.
The titles show as available in the BubbleManager app, but don't show as created, since they already existed prior to the install.
Is there a way to get HexFlow to check for existing bubbles for PSP/PS1 titles?
Additional Info: Reinstalling a title from PSN store does not change the issue. Still doesn't show. Tested with FF Origins reinstall.
| gharchive/issue | 2021-05-05T14:06:11 | 2025-04-01T06:37:40.456602 | {
"authors": [
"TouringBubble"
],
"repo": "VitaHEX-Games/HexFlow-Launcher",
"url": "https://github.com/VitaHEX-Games/HexFlow-Launcher/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
937664432 | Slicing appears off by 1px
Normally it's not really visible; I think this is only exaggerated by the diagonal lines we have on the image itself.
I have tried multiple options, but every set of images appears to result in the same thing happening. Any ideas?
This is in use with OpenSeadragon, and the effect is only visible when zoomed in at least 5 times.
Further to this, even the "duomo" image from OSD is suffering from this same effect, and it appears to be down to the 1px overlap rule. I believe MagickSlicer doesn't yet support this. There's a solution, albeit using a different library; see my ticket on OSD https://github.com/openseadragon/openseadragon/issues/2004
| gharchive/issue | 2021-07-06T08:47:42 | 2025-04-01T06:37:40.493497 | {
"authors": [
"iantearle"
],
"repo": "VoidVolker/MagickSlicer",
"url": "https://github.com/VoidVolker/MagickSlicer/issues/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2204053733 | finecmdline not loading when syntax errors in config (how do yall solve not having commandline accessible when you have an error in your config?)
I am a complete noob at Neovim scripting, and am not sure if this is because I am using it wrong, but it is maddening when I go to save a file and get Not an editor command: FineCmdline! I am pretty sure this is just a limitation of the idea of a nicer command-line plugin. Do I have any other option besides opening a new terminal and using nano? How do you solve this problem?
I recommend mapping the enter key to execute the command FineCmdline, and leave : as is. That way you can always use the default Neovim command line.
Another thing you can try is calling the lua function directly.
vim.keymap.set('n', '<Enter>', '<cmd>lua require("fine-cmdline").open()<cr>')
If the lua function doesn't work either, then the issue is in the way you are loading the plugin.
Also worth mention, you can open Neovim without a config using the command nvim --clean in your terminal.
| gharchive/issue | 2024-03-23T21:25:00 | 2025-04-01T06:37:40.505216 | {
"authors": [
"VonHeikemen",
"jamesonBradfield"
],
"repo": "VonHeikemen/fine-cmdline.nvim",
"url": "https://github.com/VonHeikemen/fine-cmdline.nvim/issues/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1502587318 | [avatar]: proposal to add an additional appearance
In addition to outline + filled we should consider adding duaton for delicate and more subtle design.
@yinonov , @AyalaBu WDYS?
why? was there a request?
There will be soon.
Tamir spoke to me about adding this variant
is this aligned with the team designing the chat?
anyway, there's no formal request, let's discuss when there is
| gharchive/issue | 2022-12-19T09:18:59 | 2025-04-01T06:37:40.507753 | {
"authors": [
"AyalaBu",
"rachelbt",
"yinonov"
],
"repo": "Vonage/vivid-3",
"url": "https://github.com/Vonage/vivid-3/issues/920",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
867034207 | Vectorised print commands not disabling implicit output
When using the v command to vectorise print commands, the print commands do not disable implicit output.
Example: Try it Online!
well dang.
Try it Online!
It looks like this is actually a different issue. If you use O to disable the implicit output, nothing gets printed. Try it Online! If you then add an extra , after the vectorised print, it prints the same unexpected output, so it looks like the vectorised printing is actually modifying the list somehow. Try it Online!
This is a bigger issue than just vectorised printing.
| gharchive/issue | 2021-04-25T15:06:43 | 2025-04-01T06:37:40.532584 | {
"authors": [
"AMiller42",
"Lyxal"
],
"repo": "Vyxal/Vyxal",
"url": "https://github.com/Vyxal/Vyxal/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
977344848 | 🛑 Albert Boutique is down
In 7885521, Albert Boutique (https://www.albert.boutique) was down:
HTTP code: 503
Response time: 481 ms
Resolved: Albert Boutique is back up in 6f3aa5a.
| gharchive/issue | 2021-08-23T19:17:11 | 2025-04-01T06:37:40.546883 | {
"authors": [
"webvlaanderen"
],
"repo": "WEB-Vlaanderen/upptime",
"url": "https://github.com/WEB-Vlaanderen/upptime/issues/141",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2126341221 | chore: prepare to publish to pub.dev
Status
IN DEVELOPMENT
Description
Prepare project to be published to pub.dev
Type of Change
[ ] ✨ New feature (non-breaking change which adds functionality)
[ ] 🛠️ Bug fix (non-breaking change which fixes an issue)
[ ] ❌ Breaking change (fix or feature that would cause existing functionality to change)
[ ] 🧹 Code refactor
[x] ✅ Build configuration change
[x] 📝 Documentation
[x] 🗑️ Chore
Still a few things left but the pana package that does the checklist needs this part in order to update the score
:tada: This PR is included in version 3.6.1 :tada:
The release is available on:
GitHub release
v3.6.1
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2024-02-09T01:45:15 | 2025-04-01T06:37:40.555680 | {
"authors": [
"SlayerOrnstein"
],
"repo": "WFCD/warframestat_client",
"url": "https://github.com/WFCD/warframestat_client/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
158561107 | Add a Gitter chat badge to README.md
Adds Gitter Badge from #2, but correctly placed.
Resolves #2.
This manually fixes Problem in #2 with @gitter-badger as addressed in gitterHQ/readme-badger#4 and gitterHQ/readme-badger#22.
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. 1 out of 2 committers have signed the CLA. :white_check_mark: jpbernius :x: gitter-badger
| gharchive/pull-request | 2016-06-05T16:02:00 | 2025-04-01T06:37:40.623398 | {
"authors": [
"CLAassistant",
"jpbernius"
],
"repo": "WINFOspace/Blog",
"url": "https://github.com/WINFOspace/Blog/pull/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
932594763 | Cannot read property 'content' of undefined for components with no style tags.
When components or dependencies have no style tags in them, I get the error:
[vite] Internal server error: Cannot read property 'content' of undefined
@edanweis It was fixed in 0.1.9.
That was fast. @WJCHumble but now I am getting:
Internal server error: TypeError: Cannot read property 'spaces' of undefined
in two of my components.
@edanweis Can you provide a reproduction? I am not sure what the problem is.
I discovered it was a missing dependency, but when using vite-plugin-vue2-css-vars with pnpm, the missing dependency error was not handled or shown in console. I am working through other problems and will create more issues as I go. Thanks Wu
| gharchive/issue | 2021-06-29T12:28:23 | 2025-04-01T06:37:40.636317 | {
"authors": [
"WJCHumble",
"edanweis"
],
"repo": "WJCHumble/vite-plugin-vue2-css-vars",
"url": "https://github.com/WJCHumble/vite-plugin-vue2-css-vars/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
694408810 | Encourage bios to be written in 3rd person.
Is your feature request related to a problem? Please describe.
I notice a lot of the bios are written in the 1st person (I ...). Let's encourage users to submit them in the third person (she/they).
Describe the solution you'd like
[ ] Change the hint on the speaker bio box to a persistent-hint.
[ ] Review the bios in Airtable and rewrite them in 3rd person.
Describe alternatives you've considered
[ ] Could also write a vuelidate custom validator to check for use of "I" or わたし.
Additional context
Please solve each of these as separate PRs / tasks.
In hindsight, our site is not a conference CFP form, it's just a database of profiles, so I don't see why bios in first person are a big problem (people write their LinkedIn bios in first person and it seems acceptable?).
If we put too many small restrictions, it just makes the form harder to fill and doesn't bring much value (I don't think bios in first person are a huge decrease in the quality of entries...). We're just adding the risk of more people dropping the nomination process halfway through because it's too "mendokusai".
Honestly, if everyone is okay and no feelings are harmed, I'd vote to just drop this feature and keep the form simple.
I would even go further and open a ticket to remove the "mandatory" condition from the "Japanese name" as well, since some foreigners may prefer to not have their names translated (or just don't know how to translate it / how to input Japanese characters).
What do you think?
I agree with this; since it generates an error, it might discourage people from completing the form.
However, the persistent hint is a good idea, though it is only a guideline rather than validation. It's a minor change, but I will make another PR for it.
Yeah, perhaps we are making another challenging form to complete. 😩
Thanks @RossellaFer for your hard work on this ticket nonetheless.
Thank you both for your work on this! ❤️
| gharchive/issue | 2020-09-06T14:50:29 | 2025-04-01T06:37:40.741349 | {
"authors": [
"RossellaFer",
"ann-kilzer",
"tuttiq"
],
"repo": "WWCodeTokyo/speak-her-db",
"url": "https://github.com/WWCodeTokyo/speak-her-db/issues/119",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
449248331 | [BUG] Scaling of Krypton Forms with multiple monitors
Scaling of Krypton forms does not work when dragging them between monitors with different scaling.
When you use multiple monitors and they have different scaling (for example, a full HD laptop at 100% and a 4K screen at 150%), Krypton forms are not scaled according to the monitor settings when dragging them from one monitor to the other. Regular Windows Forms are scaled depending on the settings for each monitor.
I use the 4.6 version.
Any feedback would be greatly appreciated.
Just Checking: @PontusLindberg Have you set the following in your app.config
<System.Windows.Forms.ApplicationConfigurationSection>
<add key="DpiAwareness" value="PerMonitorV2" />
</System.Windows.Forms.ApplicationConfigurationSection>
And in the App.Manifest:
<!-- Windows 10 -->
<supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}" />
etc....
<!-- Indicates that the application is DPI-aware and will not be automatically scaled by Windows at higher
DPIs. Windows Presentation Foundation (WPF) applications are automatically DPI-aware and do not need
to opt in. Windows Forms applications targeting .NET Framework 4.6 that opt into this setting, should
also set the 'EnableWindowsFormsHighDpiAutoResizing' setting to 'true' in their app.config. -->
<application xmlns="urn:schemas-microsoft-com:asm.v3">
<windowsSettings>
<dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true</dpiAware>
</windowsSettings>
</application>
May be related to #73
Thanks @Smurf-IV, I will check and get back to you.
@Smurf-IV , the Krypton forms we use are in dlls which are 'add-ons' to an Autodesk application. As long as we stick to regular Windows forms, those are scaled correctly when dragged between monitors. When we change to Krypton, the scaling when dragging between monitors stop working. When we change back to Windows forms, it works fine again.
Have a look at this: https://docs.microsoft.com/en-us/dotnet/framework/winforms/high-dpi-support-in-windows-forms I thought that the app.config configuration was only supported in 4.7 and higher?
EDIT: Yes high DPI awareness support only exists in .NET 4.7 or higher: _Starting with the .NET Framework 4.7, Windows Forms includes enhancements for common high DPI and dynamic DPI scenarios. _
Aha, so we need newer Krypton dlls?
Yes, 5.470 has this configuration built into it, but you'll also may need to re-target your projects to .NET 4.7 or newer.
Removed bug label, as .NET Framework 4.7 or higher is required for high DPI scaling.
| gharchive/issue | 2019-05-28T12:45:04 | 2025-04-01T06:37:40.754529 | {
"authors": [
"PontusLindberg",
"Smurf-IV",
"Wagnerp"
],
"repo": "Wagnerp/Krypton-NET-5.470",
"url": "https://github.com/Wagnerp/Krypton-NET-5.470/issues/190",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1620386873 | @json-rpc-tools/utils is deprecated
As you can see here, @json-rpc-tools/utils package is deprecated.
Is there an alternative way to construct my response in the wallet, for example, in
web-examples/wallets/react-wallet-v2/src/utils/EIP155RequestHandlerUtil.ts
all the responses are built with the formatJsonRpcResult method from this package.
Hi @Neti-Sade,
I believe the package was deprecated in order to migrate the functionality into a different package: @walletconnect/jsonrpc-utils
You should be able to use the same functions as before when importing from that package instead.
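For reference, formatJsonRpcResult only builds a JSON-RPC 2.0 success envelope, so if migrating is not an option, the shape can be reproduced by hand. A minimal JavaScript sketch (based on the JSON-RPC 2.0 spec, not the library's actual source; the id and result values are made up for illustration):

```javascript
// Hypothetical stand-in for formatJsonRpcResult: wraps a result value
// in a JSON-RPC 2.0 success response envelope.
function formatJsonRpcResult(id, result) {
  return { id, jsonrpc: "2.0", result };
}

// Example: answering a wallet request (the id and result are assumptions).
const response = formatJsonRpcResult(1, "0xdeadbeef");
console.log(JSON.stringify(response));
```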
| gharchive/issue | 2023-03-12T14:32:13 | 2025-04-01T06:37:40.800039 | {
"authors": [
"Neti-Sade",
"bkrem"
],
"repo": "WalletConnect/web-examples",
"url": "https://github.com/WalletConnect/web-examples/issues/132",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1088440486 | Highlight cells
The task consists of the following steps:
Implement a function that creates cell entities of a provided color.
The next samples can be helpful:
How to draw board from sprites.
After closing the issue, consider closing related tasks:
#46.
#47.
Useful links:
Bevy entities and components.
Bevy resources.
Hey, I'll take this task
How do I add a new sprite from a function?
commands.spawn_bundle( SpriteBundle { ... });
Or give me details, please.
fn f1( x : i32 )
{
/* we want to add new sprite from here */
}
You need to have a Commands instance.
Here is an example:
use bevy_ecs::world::World;
struct Position {
x: f32,
y: f32,
}
let mut world = World::new();
let entity = world.spawn()
.insert(Position { x: 0.0, y: 0.0 })
.id();
let position = world.entity(entity).get::<Position>().unwrap();
assert_eq!(position.x, 0.0);
I'll take it.
Team 0xADDC0DE
Assigned.
| gharchive/issue | 2021-12-24T16:07:37 | 2025-04-01T06:37:40.809446 | {
"authors": [
"Epsylon42",
"LastSymbol0",
"Wandalen",
"dmvict"
],
"repo": "Wandalen/game_chess",
"url": "https://github.com/Wandalen/game_chess/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1088545050 | Board margins
Tweak the camera to get the expected results.
The task consists of the following steps:
Investigate how the camera projection works.
Add offsets in the camera projection.
Implement a system that draws margins around the board.
Add the system to the Bevy main app.
Feature:
The board should not go outside the window nor touch the edges of the window.
To avoid that, introduce a drawing parameter: the number of cells for the gap between the board and the window edge.
The next samples can be helpful:
How to draw board from sprites.
How to draw sprite from image.
Taken
Taken
Please add the name of your team.
Babrochky
Assigned.
| gharchive/issue | 2021-12-25T04:37:02 | 2025-04-01T06:37:40.813803 | {
"authors": [
"Wandalen",
"dmvict",
"ihor-tarasov"
],
"repo": "Wandalen/game_chess",
"url": "https://github.com/Wandalen/game_chess/issues/76",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1462750333 | Add an option to add trailing comma during split
In Go, a trailing comma is required when each item is placed on its own line, for example:
func a(b int, c int, d int)
func a(
b int,
c int,
d int, // <-comma here is required
)
The same applies to arrays and dictionaries.
I can't find an option for this; can you point me to it if I missed it?
Yes, you can configure it. Option 'last_separator' - https://github.com/Wansmer/treesj#nodes-configuration
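For readers unfamiliar with the Go rule being discussed, the behaviour can be verified with a small self-contained program (a generic illustration of Go's syntax, unrelated to treesj's own code):

```go
package main

import "fmt"

// values uses the multi-line form, where Go's grammar requires a
// trailing comma after the last element.
var values = []int{
	1,
	2,
	3, // trailing comma required when the closing brace is on its own line
}

// sum adds its arguments; a multi-line call needs the same trailing
// comma after the last argument.
func sum(nums ...int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

func main() {
	fmt.Println(sum(
		values[0],
		values[1],
		values[2], // trailing comma required here as well
	))
}
```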
| gharchive/issue | 2022-11-24T04:44:09 | 2025-04-01T06:37:40.817859 | {
"authors": [
"Wansmer",
"alphatroya"
],
"repo": "Wansmer/treesj",
"url": "https://github.com/Wansmer/treesj/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1889535342 | chore(rust-sys): update build script
In this PR update the version of WasmEdge to 0.13.4 in build script.
Hello, I am a code review bot on flows.network.
It could take a few minutes for me to analyze this PR. Relax, grab a cup of coffee and check back later. Thanks!
@L-jasmine Could you please help review this PR? Thanks a lot!
| gharchive/pull-request | 2023-09-11T02:36:43 | 2025-04-01T06:37:40.868072 | {
"authors": [
"apepkuss",
"juntao"
],
"repo": "WasmEdge/wasmedge-rust-sdk",
"url": "https://github.com/WasmEdge/wasmedge-rust-sdk/pull/64",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
62140232 | Contents Buffer
Thanks for writing this plugin! I have a question about contents. I noticed that Metalsmith outputs the "contents" as a Buffer, so if I use this plugin, "contents" is not available to the JSON output. What if I want to use contents in the JSON file? Is there a way to do this without modifying metalsmith-writemetadata to output a string instead of a buffer?
No it is not possible without modifying it. I can have a look, so maybe there is an option.
If you have problems using the buffer, you can have a look at my site: http://christian.sterzl.info or https://github.com/Waxolunist/christian.sterzl.info
Is this solution what you expected?
Yes! That works just like I would expect.
Cool. I had a small error. You should use 0.4.4.
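As a side note on the original Buffer question (a generic Node.js sketch, not this plugin's internals): converting a file's contents to a string before serialization is a one-liner:

```javascript
// A Metalsmith-style file object stores its contents as a Buffer.
const file = { contents: Buffer.from("# Hello\n", "utf8") };

// Decode the Buffer to UTF-8 text so JSON.stringify emits a readable
// string instead of the Buffer's serialized byte representation.
const serializable = { ...file, contents: file.contents.toString("utf8") };

console.log(JSON.stringify(serializable));
```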
| gharchive/issue | 2015-03-16T16:45:42 | 2025-04-01T06:37:40.872076 | {
"authors": [
"Waxolunist",
"dahmian"
],
"repo": "Waxolunist/metalsmith-writemetadata",
"url": "https://github.com/Waxolunist/metalsmith-writemetadata/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |