Columns: id (string, 4–10 chars); text (string, up to 2.14M chars); source (2 classes); created (timestamp, 2001-05-16 21:05:09 to 2025-01-01 03:38:30); added (timestamp, 2025-04-01 04:05:38 to 2025-04-01 07:14:06); metadata (dict)
| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
2085386682
|
[Bug]: Hazelcast Client is not working
Module
Core
Testcontainers version
1.19.3
Using the latest Testcontainers version?
Yes
Host OS
Linux
Host Arch
x86
Docker version
Client: Docker Engine - Community
Cloud integration: v1.0.31
Version: 24.0.7
API version: 1.43
Go version: go1.20.10
Git commit: afdd53b
Built: Thu Oct 26 09:07:41 2023
OS/Arch: linux/amd64
Context: default
Server: Docker Engine - Community
Engine:
Version: 24.0.7
API version: 1.43 (minimum version 1.12)
Go version: go1.20.10
Git commit: 311b9ff
Built: Thu Oct 26 09:07:41 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.26
GitCommit: 3dd1e886e55dd695541fdcd67420c2888645a495
runc:
Version: 1.1.10
GitCommit: v1.1.10-0-g18a0cb0
docker-init:
Version: 0.19.0
GitCommit: de40ad0
What happened?
According to the Hazelcast client example shown in the Testcontainers Hazelcast example on GitHub, this application should start and run the test case against the Hazelcast container started by Testcontainers. But the application is not starting, because it expects Hazelcast to be running on a fixed localhost port.
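For illustration only (this is not the reporter's Java project): a minimal sketch of the intended wiring using testcontainers-python and hazelcast-python-client, showing the key point that the client must be configured with the container's mapped host and port rather than a fixed localhost port. The image tag and the unisocket (smart_routing=False) setting are assumptions.

```python
import hazelcast
from testcontainers.core.container import DockerContainer

# Start a throwaway Hazelcast member and expose its client port.
with DockerContainer("hazelcast/hazelcast:5.3").with_exposed_ports(5701) as hz:
    host = hz.get_container_host_ip()
    port = hz.get_exposed_port(5701)  # random mapped port, not 5701 on localhost

    # Point the client at the mapped address; unisocket avoids the client
    # later trying to reach the member's container-internal address.
    client = hazelcast.HazelcastClient(
        cluster_members=[f"{host}:{port}"],
        smart_routing=False,
    )
    client.get_map("demo").blocking().put("key", "value")
    client.shutdown()
```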
Relevant log output
No response
Additional Information
No response
Here is the sample code for my project
https://github.com/testcontainers/testcontainers-java/discussions/8051
Please, do not create an issue if the discussion has been created.
|
gharchive/issue
| 2024-01-17T04:47:33 |
2025-04-01T06:40:36.201922
|
{
"authors": [
"eddumelendez",
"haider665"
],
"repo": "testcontainers/testcontainers-java",
"url": "https://github.com/testcontainers/testcontainers-java/issues/8128",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1648121872
|
Fix javadoc for stop method
Not necessarily a fix for https://github.com/testcontainers/testcontainers-java/issues/1000, but at least make it easier for people to discover how to gracefully stop a container and/or send it different signals.
If/when this is merged, I'll leave it up to the committers to choose if https://github.com/testcontainers/testcontainers-java/issues/1000 should be closed, or if there is still intent to change the API. If the latter, I'm happy to raise a PR if someone gives brief guidance as to what api would be acceptable...
Thanks for your contribution, @big-andy-coates! I've proceeded with the description "Kill and remove the container." for now. I think the suggestions about graceful shutdown should be described in the docs rather than the javadoc itself, along with the use case description.
|
gharchive/pull-request
| 2023-03-30T18:34:56 |
2025-04-01T06:40:36.204591
|
{
"authors": [
"big-andy-coates",
"eddumelendez"
],
"repo": "testcontainers/testcontainers-java",
"url": "https://github.com/testcontainers/testcontainers-java/pull/6834",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
485794623
|
Allow set arbitrary parameters multiple times
@tillahoffmann Fixes based on comments in https://github.com/testcontainers/testcontainers-python/pull/34
Merge kwargs instead of overriding them.
All settings are stored in kwargs to prevent named-parameter conflicts with kwargs.
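A minimal sketch of the merge-instead-of-override idea; the class and method names below are hypothetical and not taken from the actual PR:

```python
class ContainerConfig:
    def __init__(self, image, **kwargs):
        self.image = image
        self._kwargs = dict(kwargs)  # every setting lives in one dict

    def with_kwargs(self, **kwargs):
        # Merge new settings into the existing ones instead of replacing the
        # dict, so arbitrary parameters can be set across multiple calls.
        self._kwargs.update(kwargs)
        return self


cfg = ContainerConfig("postgres:16")
cfg.with_kwargs(environment={"POSTGRES_DB": "test"})
cfg.with_kwargs(ports={5432: None})  # the earlier environment setting is kept
print(cfg._kwargs)  # {'environment': {'POSTGRES_DB': 'test'}, 'ports': {5432: None}}
```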
@tillahoffmann, @SergeyPirogov Could you please review this PR when you are available?
This is quite useful. Why isn't it getting merged? Any reason?
|
gharchive/pull-request
| 2019-08-27T12:58:13 |
2025-04-01T06:40:36.206568
|
{
"authors": [
"Can-Sahin",
"weixu365"
],
"repo": "testcontainers/testcontainers-python",
"url": "https://github.com/testcontainers/testcontainers-python/pull/35",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1168314467
|
src/sync: Add Client::publish_success
Enables test instances to signal success both via a publish to the sync service and to Testground daemon via stdout.
Tagged and published.
|
gharchive/pull-request
| 2022-03-14T12:34:12 |
2025-04-01T06:40:36.220798
|
{
"authors": [
"mxinden"
],
"repo": "testground/sdk-rust",
"url": "https://github.com/testground/sdk-rust/pull/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1123604266
|
verify cosign attestations.
Witness should be able to verify and create policy on cosign sigs
I'd love to investigate on this ☝️
/cc @mikhailswift
@developer-guy we really need to map out the problem set for this. If you are still interested it may be best to jump on a quick call.
This involves supporting the sigstore bundle: https://github.com/sigstore/protobuf-specs/blob/main/protos/sigstore_bundle.proto
and integration with OCI.
|
gharchive/issue
| 2022-02-03T22:09:11 |
2025-04-01T06:40:36.229257
|
{
"authors": [
"colek42",
"developer-guy"
],
"repo": "testifysec/witness",
"url": "https://github.com/testifysec/witness/issues/125",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
483971025
|
Curious performance issue when keyword matches potential variable
I built an expression library using Eval with about 50 operators (most of them taken from sample code – thanks a lot for that!) and noticed a curious drop in performance when defining a prefix function like not X and then having an expression like nothing == true, where nothing could potentially be a variable. The performance drop is quite significant in my case: adding that not prefix function slows down the overall evaluation by a factor of 5–6 compared to just having a ! prefix function.
I wonder if the evaluate method of the interpreter could be improved by introducing at the appropriate place a check that a function name shouldn't be evaluated against a substring. Though that might be done on purpose for some use case. In that case an alternative would be to provide a PatternOptions which allows telling the parser that this specific function keyword can't be a substring.
I can try to do those adjustments, though I'm not sure where to do them in the code. Any hints would be appreciated.
Some sample code which reproduces the slowdown (less than in my big library, but still by a factor of 3): https://gist.github.com/nighthawk/c7daa27285da406e5b2a71f8b789bf3f
Thanks a lot for your feedback and detailed bug report! It’s a really interesting issue indeed, I’ll look into it as soon as possible
Great, thanks. The behaviour makes sense when thinking about a negation operator and doing something like !foo, but it would be good to disable this for word-only operators. Or maybe the parser can detect word boundaries so that it knows !foo can really be treated like ! foo.
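To make the word-boundary idea concrete, a small illustration in plain Python (this is not the Eval library's actual matcher) of why substring matching trips over nothing while a boundary-aware match does not:

```python
import re

expression = "nothing == true"

# Naive substring check: "not" matches the start of "nothing", so a parser
# doing this would consider the prefix operator and only later backtrack.
substring_hit = expression.find("not") != -1              # True (false positive)

# Word-boundary check: "not" only matches as a whole word, so "nothing"
# is left alone and can be resolved as a variable immediately.
word_hit = re.search(r"\bnot\b", expression) is not None  # False

# A symbolic operator like "!" has no such ambiguity: "!foo" can always be
# split into "!" and "foo" without any boundary check.
print(substring_hit, word_hit)
```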
|
gharchive/issue
| 2019-08-22T12:41:59 |
2025-04-01T06:40:36.262078
|
{
"authors": [
"nighthawk",
"tevelee"
],
"repo": "tevelee/Eval",
"url": "https://github.com/tevelee/Eval/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
620482206
|
Docs for withUserKey are incomplete
https://textileio.github.io/js-threads/classes/_textile_context.context.html#withapikey
It says it just takes a parameter value or string. But I think it's something like:
{
key: string,
secret: string,
type: 1 == user, 2 == account ? or 0 == account
}
Are you possibly referring to https://textileio.github.io/js-threads/classes/_textile_context.context.html#withuserkey? In any case, you are right, these need docs strings across the board.
|
gharchive/issue
| 2020-05-18T20:23:07 |
2025-04-01T06:40:36.323541
|
{
"authors": [
"andrewxhill",
"carsonfarmer"
],
"repo": "textileio/js-threads",
"url": "https://github.com/textileio/js-threads/issues/194",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
140800159
|
env: node: No such file or directory on compile and display JS
Not sure what's causing this. TM_COFFEE seems to be set correctly.
Found the problem: even with TM_COFFEE and TM_NODE set correctly, if the PATH doesn't include the node and coffee paths, the coffee tmbundle 'Run' command will not work. I'm not sure what is causing this, but the error should be caught and expanded on with a suggestion to fix the path.
Fixed by improving requiredCommands, should now work automatically when node is installed through MacPorts or Homebrew. Thanks for the report.
|
gharchive/issue
| 2016-03-14T21:17:13 |
2025-04-01T06:40:36.325060
|
{
"authors": [
"infininight",
"rebelwarrior"
],
"repo": "textmate/coffee-script.tmbundle",
"url": "https://github.com/textmate/coffee-script.tmbundle/issues/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1754060107
|
fix: program
Program fix
Program fix
Check the sponsors topic; it doesn't let me view the changes in Ruby.
|
gharchive/pull-request
| 2023-06-13T05:20:03 |
2025-04-01T06:40:36.325975
|
{
"authors": [
"lucabaello1998"
],
"repo": "teyet2023unahur/teyet2023-2",
"url": "https://github.com/teyet2023unahur/teyet2023-2/pull/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1261204321
|
Enable Resizable / Rotable node type
Hi,
I can see that you can set the node.nodestyle property to CUSTOM, DEFAULT, INPUT, OUTPUT, but I'm not really sure how you can enable the possibility to resize or rotate nodes.
Is it possible to enable a resizable/rotatable node type?
Great Work!
Any news on this?
Hi @omarmrivas, sorry I totally missed that issue. Let me get into the 'flow' and I will try to implement a resizable/rotatable node type.
Cheers
The issue is solved at least partially with #14, by implementing NodeResizer component bindings
|
gharchive/issue
| 2022-06-06T00:27:04 |
2025-04-01T06:40:36.330035
|
{
"authors": [
"ArtemyB",
"omarmrivas",
"tforkmann"
],
"repo": "tforkmann/Feliz.ReactFlow",
"url": "https://github.com/tforkmann/Feliz.ReactFlow/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
718635709
|
adding tfsec docs for check file
First stab at documentation generator, very basic but #LittleAndOften
Generate the CHECKS.md file from the registered checks
For now, run it as a pre-commit change to update.
Had to resubmit - till Form3 I had never used signed commits so my personal laptop isn't configured for it
|
gharchive/pull-request
| 2020-10-10T14:49:32 |
2025-04-01T06:40:36.336542
|
{
"authors": [
"owenrumney"
],
"repo": "tfsec/tfsec",
"url": "https://github.com/tfsec/tfsec/pull/415",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2470883811
|
Allow passing video constraints
In order to be able to pass videoConstraints such as facingMode, height or deviceId (see examples here https://www.npmjs.com/package/react-webcam) the state variable video_constraints: Var[dict] = {} is added.
This allows e.g. to request either the inner or outer camera of a phone, or introduce a "flip" button that allows switching between the two cameras:
webcam.webcam(
    id=ref,
    video_constraints=rx.cond(
        CameraState.use_outer_camera,
        {"facingMode": "environment"},
        {"facingMode": "user"},
    ),
),
If only the selfie cam should be allowed (e.g. for a KYC process) then you can set the constraints to exact:
webcam.webcam(
    id=ref,
    video_constraints={"facingMode": {"exact": "user"}},
),
Beware that setting video_constraints={"facingMode": {"exact": "environment"}} means you can't take a picture with a laptop webcam, because a laptop typically doesn't have an outer (environment) camera.
Looks good to me
|
gharchive/pull-request
| 2024-08-16T19:40:55 |
2025-04-01T06:40:36.352376
|
{
"authors": [
"dentro-innovation",
"tgberkeley"
],
"repo": "tgberkeley/reflex-webcam",
"url": "https://github.com/tgberkeley/reflex-webcam/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
337292361
|
select oracle sequence nextval not working well
Running knex.select(S_t_role_info.NEXTVAL).from('dual') gives the result shown in the following picture.
Clearly that's not the complete code that is running. Can you share the complete query code and the results in text form instead of a picture?
That works as expected. To write the query that you are trying to do, you need to use .raw:
knex.select(knex.raw('??.NEXTVAL', ['S_t_role_info'])).from('dual').timeout(1000)...
knex.select(knex.raw('S_t_role_info.NEXTVAL')).from(knex.raw('dual')).first().timeout(1000).then(id => {
console.log({ id: id.NEXTVAL });
let insertId = id.NEXTVAL;
})
The debug SQL query is:
select * from (select S_t_role_info.NEXTVAL from dual) where rownum <= ?
In Oracle, the first() call results in an error: Error: ORA-02287: sequence number not allowed here
@KAIXIE so what kind of query would you like knex to generate? Btw. the one you wrote is not one of those mentioned earlier in this thread.
Thank you for your patience; my logic was wrong, so this query cannot use the first() function.
|
gharchive/issue
| 2018-07-01T13:46:18 |
2025-04-01T06:40:36.361797
|
{
"authors": [
"KAIXIE",
"elhigu",
"ricardograca"
],
"repo": "tgriesser/knex",
"url": "https://github.com/tgriesser/knex/issues/2680",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
801236606
|
Finalizing large pull request
One last review? Merging went smoothly without conflicts or failing tests.
This PR is mainly documentation and some changes that I found sensible while writing documentation.
Did you try to merge this already with my other PR?
I hate to merge failing test ;-). I'll have a look at it and add it to the other pull request.
Then comment the failing test out ;-) ... I was hoping that adding multiple fermatas might be something quick, but nope
I commented it out... 🤪
I'll also include the fermata stuff. I cherry-picked your fermata commits and made it work, but right now libmei for some reason fails to serialize. I'll sort that out.
And there's one more thing I'll fix: The default file name and path should be set to the name and path of the sib file, but right now the path is not set and the default filename has a sib extension instead of an mei extension. That really annoys me.
O.K., done. I'd be really happy if you could take it for a test drive, @annplaksin!
I checked it out and ran the tests. 114 passing,...
Looks like it works and we're happy to merge. 🙂
Awesome!
I also added some testing goodies, fixing the accidentals.sib hiccup and triggering mocha from within ManuScript, if you create a *.bat file for that.
I realize I can't do the pull request from this repo because I simply pushed it here instead of forking. Because it's not really useful if it can't interact with the upstream repo, I will delete this repo later. If there's anything that you want (me) to do before deletion, let me know.
So many goodies! It sounds like Christmas... awesome! 😄
Nope, that's the problem with not forking. And creating a new remote afterwards won't help either in Github as far as I know.
I am not quite sure what should have happened when clicking on the extension test in the Plugin menu, because nothing happens. But the Test runner contains the extension test anyway.
If you know what the purpose of the menu entry was, I would be glad to hear it.
Apart from that, deleting is fine with me.
You mean "Sibmei extension test"? The problem with the test extension is this:
AddToPluginsMenu('Sibmei extension test', null);
It does not have a "Run" method itself because Sibmei serves as the hub for exporting. So there's nothing to run.
That might be a flaw we should fix. Maybe define the extension name somewhere else instead of using the usual mechanism with AddToPluginsMenu()? That would prevent it from appearing in the menu. A global variable could be used instead, just like for SibmeiExtensionAPIVersion.
Defining the name in the usual manner is okay, I think. And having it in the menu seems good to me as well, because a user can see if the extension plugin is there and available. But yes, clicking and having nothing happen isn't cool.
How about adding a short Run() method opening a popup message like "This plug-in is an extension of sibmei. Please run MEI export.".
We won't deliver the test extension to end users, so there's no big problem. But the test plugin also serves as reference, so it's a good idea to establish a proper way of doing things.
Another thing that I realize now is that the extension selection dialog will only be shown the first time sibmei is run in each session. That should definitely be fixed.
Or am I mistaken because I messed with my code? Checking now...
O.K., my mistake – works as expected. I'll add a message to the example extension per your suggestion.
Quak! 🦆
I am just testing it. That's great 🙂
Continuing here.
|
gharchive/pull-request
| 2021-02-04T12:24:46 |
2025-04-01T06:40:36.615886
|
{
"authors": [
"annplaksin",
"th-we"
],
"repo": "th-we/sibmei",
"url": "https://github.com/th-we/sibmei/pull/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
438074181
|
decrypt key during get
As we are storing the key in encrypted mode, we need to decrypt it when fetching it.
Thanks for this pull request!
|
gharchive/pull-request
| 2019-04-28T16:26:41 |
2025-04-01T06:40:36.625667
|
{
"authors": [
"kandarp26",
"thaiphan"
],
"repo": "thaiphan/magento2-s3",
"url": "https://github.com/thaiphan/magento2-s3/pull/80",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
145802633
|
Our current way of notifying beacon changes can easily cause resource exhaustion
TL;DR - We need to make some string changes or our perf in dealing with beacon changes on native transports is going to be horrifically bad.
Our current approach and its motivation
In our current design, when a peer updates its beacons it sends out a brand new peerID. The idea is that listeners shouldn't be able to track the peer across beacon changes because it constantly uses new peerIDs. This peerID then gets sent up the stack from wifi or native to ThaliMobile, which will then pass the event on if the peerID is new or if something else like a port changed. This keeps the top layers from getting slammed with repeat notifications (which both Wifi and the native code will send).
The problem
In the best case this approach is inefficient. For example, imagine that we are on wifi with peers A and B. At time 0 A sends out an SSDP announcement with id A1. Peer B hears the announcement but it's a bit busy and ignores it for a second. At time 1 peer A changes its beacons (because it updated its DB) and sends out an SSDP announcement with id A2. Peer B doesn't know that A1 and A2 came from the same device, so it is going to try and make requests to both devices instead of recognizing that A2 is a replacement for A1 and that A1 can be ignored. This is obviously wasteful but not deadly.
In iOS and Android there will be problems because unlike in WiFi, it is not possible in iOS and Android to connect twice to the same peer. Follow this logic carefully because this part can be a bit confusing.
When we send up a native peerAvailabilityChanged event above the thaliMobile layer we include an IP address (in the case of iOS and Android this will be localhost) and a port. The upper level code can connect to that IP address and port and create as many TCP connections as they want. The reason is that all those TCP connections are muxed into a single TCP connection which is sent across the single native duplex stream between the peers.
So when A1 came up it had port X. When A2 comes up as an event it will have port Y. Note that X != Y. So from the upper layers' perspective it looks like A1 and A2 are pointing at different peers. But they aren't. They are the same peer. When the node.js code connects to port X that causes a native connection to be made. When the node.js code then tries to connect to port Y that will cause an attempt to create a second, separate, native connection. It is that second attempt that will fail. Only one native connection per customer.
So what does the node.js code do when it gets the error on the second attempt?
In theory it could just ignore it since the error means that the peers are already connected. But it's not that easy (you didn't think it would be that easy, did you?).
Imagine the following scenario:
Time 0 - Peer B gets announcement A1 with port X
Time 1 - Peer B gets announcement A2 with port Y
Time 2 - Peer B attaches to port X, gets back a beacon set with nothing interesting in it and kills its local TCP connections to the mux listener on port X. Killing the TCP connections to the mux listener does NOT kill the native connection. This is because it's quite common for connections to the mux listener to be replaced. So we will only kill the native connection if there is an inactivity time out or if someone uses the feature I still have to implement to allow for directly killing outgoing native connections.
Time 3 - Peer B attaches to port Y which triggers an error that Peer B is already connecting to the peer behind Port Y.
The problem is, which peer is the error talking about? Remember, Peer B does not know that A1 and A2 belong to the same peer. So now peer B can't get the new beacons from port Y and has forgotten port X! EEEK!
How to fix
A possibly hacky workaround is to keep track of all connections and ping them all whenever we get a conflict. But that is messy and silly. The reason it's silly is that the concept underlying this approach, that we can hide Peer A's identity, is currently wrong.
In the case of WiFi every SSDP announcement might go out with a unique peerID but it always has the exact same IP and port! In the case of iOS, for implementation reasons, we had to create peerIDs that consist of two parts, a constant UUID followed by a generation counter that changes every time the beacons are updated. So while the Node.js code doesn't know that A1 and A2 belong to the same peer, the native code does! And in Android every beacon announcement goes out with the device's Bluetooth address, which is a constant!
So the real solution here is to change things as follows.
We should add a field to the wifi and native peer availability changed events that specifies if the beacons have been updated (this translates to a call to updateAdvertising). This means that a peer availability changed event can communicate a new peer, a peer leaving or an existing peer updating its beacons. This will require (minor) changes in the native layer, the native wrapper, the wifi layer, thali mobile and even notifications (so they trigger on beacon changes and know not to double schedule if they get multiple beacon changes from the same peer).
On WiFi it would probably be cleanest if we changed our peerID to use a design like iOS where we have a UUID we generate when we start up and then a generation counter we use as the beacons are changed. It's tempting to just use the IP and port advertised in the location header but if a device is coming on and off wifi both can change and cause confusion. Although to be fair one could plausibly argue that is a feature.
On iOS we just need to expose what we have. When we communicate up the stack we need to use the UUID we already generate as the peerID and then use the generation counter to drive beacon updates.
On Android we can use the bluetooth address as the peerID and we can use a hash of the BLE address (which changes on every update) as a beacon change flag. Yes, we have counters in the previous cases which give us ordering but we really don't need that level of knowledge. We just need to know that there was a change. So we can keep things simple and leave the update flag as a Boolean.
So was the original idea always wrong?
Actually, no, it wasn't. It was driven by the assumption that we would use BLE on Android not only to signal new beacons but also to exchange them. In that case we really could create anonymity, since every time we update the beacons we change our BLE address. But that design will still work when we install it. It just means we won't ever get a second notification with the same ID with the beacon update flag set to true. And the bugs above don't apply since we wouldn't have connection semantics to the GATT server. We will at worst just get an address-not-found error.
So this isn't terribly hard to fix.
We have done this by changing peerAvailabilityChanged to consist of a relatively persistent peerID and a generation.
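To illustrate that design, here is a language-agnostic sketch (written in Python; the field and handler names are assumptions, not the actual Thali API) of how a listener can use the stable peerID plus generation to tell a new peer, a beacon update, and a duplicate announcement apart:

```python
last_generation = {}

def handle_new_peer(peer_id, event):
    print("new peer", peer_id)

def handle_beacon_update(peer_id, event):
    print("beacons updated for", peer_id)

def on_peer_availability_changed(event):
    peer_id = event["peerIdentifier"]   # stable across beacon updates
    generation = event["generation"]    # bumped on every beacon update

    previous = last_generation.get(peer_id)
    if previous is None:
        handle_new_peer(peer_id, event)
    elif generation != previous:
        handle_beacon_update(peer_id, event)  # same peer, refreshed beacons
    # else: duplicate announcement, ignore it

    last_generation[peer_id] = generation

for e in [
    {"peerIdentifier": "A", "generation": 0},
    {"peerIdentifier": "A", "generation": 0},  # duplicate
    {"peerIdentifier": "A", "generation": 1},  # same peer, new beacons
]:
    on_peer_availability_changed(e)
```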
|
gharchive/issue
| 2016-04-04T20:58:17 |
2025-04-01T06:40:36.635840
|
{
"authors": [
"yaronyg"
],
"repo": "thaliproject/Thali_CordovaPlugin",
"url": "https://github.com/thaliproject/Thali_CordovaPlugin/issues/700",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
357471689
|
Sites using self signed certs don't show
My Home Assistant is using a self-signed certificate; on both Android Things and Android the page is blank. I get the warning about the certificate, I click to accept, and then all I get is a blank background. I have the root certificate installed on my phone, yet I still get the certificate error even though the certificate shows as valid if I use Chrome.
At this time the browser doesn't allow insecure connections. So it is probably throwing some kind of error on self-signed certificates. In order to address this, the browser code within the application needs to be updated.
Ahh ok thanks for the update. I thought adding the root CA cert to the device would work but it didn't. At least not for the alarm panel on an Android device, it did work for chrome and other apps. Thanks again.
I know that the Android browser has issues with self-signed certificates, especially from LetsEncrypt. I'm not sure how to resolve the issue without getting a different certificate.
I will investigate how to allow self signed certificates for the Android browser but no guarantee in the time frame or success.
I'm trying to compile mqtt alarm panel in Android studio but get this error
"Error:Build Config field cannot have a null parameter".
Have you run into this issue? Sorry if this is a noob question, I'm just getting into android development.
The Android application uses default values from the local.properties file for testing. The best way to deal with it is to make them blank in the gradle file. Under the productFlavors block, change the dev flavor to this:
dev {
    dimension "default"
    buildConfigField "String", BASE_ENVIRONMENT, '"DEV_ENVIRONMENT"'
    applicationId "com.thanksmister.iot.mqtt.alarmpanel"
    versionName "${versionMajor}.${versionMinor}.${versionPatch} Build ${versionBuild}-DEV"
    buildConfigField 'Integer', 'ALARM_CODE', '1234'
    buildConfigField 'String', 'DARK_SKY_KEY', '""'
    buildConfigField 'String', 'MAIL_GUN_KEY', '""'
    buildConfigField 'String', 'MAIL_GUN_URL', '""'
    buildConfigField 'String', 'IMGUR_CLIENT_ID', '""'
    buildConfigField 'String', 'LATITUDE', '""'
    buildConfigField 'String', 'LONGITUDE', '""'
    buildConfigField 'String', 'MAIL_FROM', '""'
    buildConfigField 'String', 'MAIL_TO', '""'
    buildConfigField 'String', 'HASS_URL', '""'
    buildConfigField 'String', 'BROKER', '""'
    buildConfigField 'String', 'IMGUR_TAG', '""'
    buildConfigField 'String', 'TELEGRAM_TOKEN', '""'
    buildConfigField 'String', 'TELEGRAM_CHAT_ID', '""'
}
Alternatively you can remove the BuildConfig.DEBUG information from the onCreate method of the MainActivity class:
if (BuildConfig.DEBUG) {
configuration.alarmCode = BuildConfig.ALARM_CODE
darkSkyOptions.darkSkyKey = BuildConfig.DARK_SKY_KEY
darkSkyOptions.latitude = BuildConfig.LATITUDE
darkSkyOptions.longitude = BuildConfig.LONGITUDE
mqttOptions.setBroker(BuildConfig.BROKER)
configuration.webUrl = BuildConfig.HASS_URL
configuration.setMailFrom(BuildConfig.MAIL_FROM)
configuration.setMailGunApiKey(BuildConfig.MAIL_GUN_KEY)
configuration.setMailTo(BuildConfig.MAIL_TO)
configuration.setMailGunUrl(BuildConfig.MAIL_GUN_URL)
configuration.telegramChatId = BuildConfig.TELEGRAM_CHAT_ID
configuration.telegramToken = BuildConfig.TELEGRAM_TOKEN
imageOptions.imageClientId = BuildConfig.IMGUR_CLIENT_ID
imageOptions.imageSource = BuildConfig.IMGUR_TAG // Imgur tags
darkSkyOptions.setIsCelsius(true)
configuration.isFirstTime = false
configuration.setClockScreenSaverModule(true)
configuration.setPhotoScreenSaver(false)
configuration.setHasCameraCapture(true)
configuration.setWebModule(true)
configuration.setShowWeatherModule(true)
configuration.setTssModule(true)
}
Thanks, I used your first suggestion and that took care of the issue. Now I'm off to figure out these java deprecation warnings.
|
gharchive/issue
| 2018-09-06T02:20:06 |
2025-04-01T06:40:36.667907
|
{
"authors": [
"thanksmister",
"thatkide"
],
"repo": "thanksmister/androidthings-mqtt-alarm-panel",
"url": "https://github.com/thanksmister/androidthings-mqtt-alarm-panel/issues/17",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1653454477
|
"unhide" query.instant.default.max_source_resolution Flag
Hello everyone
We are using Thanos Ruler for some recording rules which use offset to compare certain values to historic data. The offset hits a date range where we do not have any raw data, only downsampled data. As a result, the query does not return any results. After some searching we discovered the hidden flag --query.instant.default.max_source_resolution, which was introduced in https://github.com/thanos-io/thanos/pull/1431. After configuring this option, our queries do work. I was wondering whether we could make this hidden flag a regular one by "unhiding" it.
Describe the solution you'd like
Setting --query.instant.default.max_source_resolution to e.g. 1h resolves our issue but this flag is hidden by default and there is also no documentation for this flag.
Additional context
I am not sure why exactly this feature was implemented as a hidden flag. In my opinion, this feature could be interesting for others as well, and it might make sense to add some documentation for it and make it a regular flag. Any opinions on this?
If everyone agrees, I would be happy to add documentation for this flag and "unhide" it.
Thanks,
Reto
Hey @rekup,
Looks like this older comment might give some insight on why: https://github.com/thanos-io/thanos/pull/1431#pullrequestreview-279331642. I'd say if those reasons are still valid it makes sense to keep it hidden. Alternatively, we can document this use case and the fix, and still keep it hidden.
|
gharchive/issue
| 2023-04-04T08:46:54 |
2025-04-01T06:40:36.675103
|
{
"authors": [
"matej-g",
"rekup"
],
"repo": "thanos-io/thanos",
"url": "https://github.com/thanos-io/thanos/issues/6261",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
975713937
|
*: update Cortex (and Prometheus) dependency
Update Cortex dependency (and Prometheus together) to include
https://github.com/cortexproject/cortex/commit/70dddb6b70b87f899ab115e79426a3ba522ae6ad.
storage.Querier finally has support for label.Matchers so now we can
fully pass them down. This is covered by tests. Plus, I've played around a bit
with these changes locally.
Signed-off-by: Giedrius Statkevičius giedrius.statkevicius@vinted.com
[x] I added CHANGELOG entry for this change.
[ ] Change is not relevant to the end user.
Nice! Is this pr ready for review now?
Nice! Is this pr ready for review now?
No, still need to fix the tests hence it is a draft
@yeya24 fixed the tests, PTAL :beers:
@yeya24 thanks for the very quick review! :heart:
|
gharchive/pull-request
| 2021-08-20T15:30:57 |
2025-04-01T06:40:36.678694
|
{
"authors": [
"GiedriusS",
"yeya24"
],
"repo": "thanos-io/thanos",
"url": "https://github.com/thanos-io/thanos/pull/4586",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1095381182
|
cli: print usage before error when argument parsing causes an error
This intends to improve error reporting in k8s. Before this change
thanos would print the error, followed by the usage. The latter part
would fill up a container status message (in say kubectl describe)
with the usage, hiding the actual error.
Signed-off-by: Jan Fajerski jfajersk@redhat.com
[ ] I added CHANGELOG entry for this change.
[x] Change is not relevant to the end user.
Changes
Verification
I'm sure there is a better solution for this, I went with the smallest change to see what the maintainers think.
I think this makes sense and it's a nice usability improvement. :+1:
Documentation check seems broken :(
|
gharchive/pull-request
| 2022-01-06T14:38:26 |
2025-04-01T06:40:36.681828
|
{
"authors": [
"GiedriusS",
"jan--f"
],
"repo": "thanos-io/thanos",
"url": "https://github.com/thanos-io/thanos/pull/5034",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
651404580
|
supports windows 7
This does not run on Windows 7? Can you build against lower library versions?
I'll take a look, thanks for the feedback.
|
gharchive/issue
| 2020-07-06T09:50:54 |
2025-04-01T06:40:36.682652
|
{
"authors": [
"peterpavles",
"thdal"
],
"repo": "thdal/MosaiqueRAT",
"url": "https://github.com/thdal/MosaiqueRAT/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
259398349
|
Relationship shows incorrect Model and "None results"
Laravel Version: 5.5
Voyager Version: 1.0
PHP Version: 7.1
Database Driver & Version:
Description:
When I add a Relationship to a BREAD table it allows me to select whichever table has the relationship.
However, there are two major issues:
When I go to edit the Relationship it now shows the incorrect table. Where it says "Websites" in the image below is incorrect - I chose a different table when creating the relationship
When viewing the BREAD, the column says "None results" when there definitely should be results
Steps To Reproduce:
Edit a BREAD
Add a Relationship as you normally would
Save
Go back to the BREAD edit screen and modify your newly created Relationship
It will (probably) show the table which is last in your table list
Please share your BREAD settings for that table.
I am having the same Issue.
I am having the same Issue, please fix asap.
Will someone who is having this issue please provide more information as @marktopper requested?
Specifically, it would be very helpful to see the BREAD settings (model name especially) for both sides of the relationship, the full relationship editor view (only half is shown above), and (if possible) the actual query being run (found through laravel-debugbar) to produce the "None results"
The first problem was fixed in https://github.com/the-control-group/voyager/pull/2155
The second seems to be like https://github.com/the-control-group/voyager/issues/2342
If any of the problems still remain, please open a new issue
|
gharchive/issue
| 2017-09-21T07:17:59 |
2025-04-01T06:40:36.698499
|
{
"authors": [
"amitn322",
"emptynick",
"fletch3555",
"marktopper",
"quanttsm",
"scottybo"
],
"repo": "the-control-group/voyager",
"url": "https://github.com/the-control-group/voyager/issues/1790",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
395087216
|
Where's HasRelationships.php? (disappears from Traits after composer update)
Version information
Laravel: v5.7
Voyager: v1.1.11
PHP: 7.2
Database: MySQL 5.7
Description
After composer update, the file HasRelationships.php in /vendor/tcg/voyager/src disappears.
Steps To Reproduce
composer update
Expected behavior
I use my own model with the Voyager HasRelationships trait.
Example
<?php
namespace App;
use Illuminate\Database\Eloquent\Model;
use TCG\Voyager\Traits\HasRelationships;
class Catproduit extends Model
{
    use HasRelationships;

    protected $table = 'catproduits';
    protected $fillable = ['nom', 'slug'];

    public function parentId()
    {
        return $this->belongsTo(self::class);
    }
}
After composer update, i got this error:
include(/SRV_APP_PATH/vendor/composer/../tcg/voyager/src/Traits/HasRelationships.php): failed to open stream: No such file or directory
It has been removed in 1.1.11 because it is completely useless and undocumented, and there's no point in using it.
Also, as the issue-template suggests, please ask questions in our Slack group.
|
gharchive/issue
| 2019-01-01T14:23:39 |
2025-04-01T06:40:36.702220
|
{
"authors": [
"cotiga",
"emptynick"
],
"repo": "the-control-group/voyager",
"url": "https://github.com/the-control-group/voyager/issues/3861",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1513409700
|
add a feature request template
This pull request adds a feature request issue template to make life just a little bit easier for folks looking to request new features to our site
.deploy
.deploy
|
gharchive/pull-request
| 2022-12-29T06:04:25 |
2025-04-01T06:40:36.720019
|
{
"authors": [
"GrantBirki"
],
"repo": "the-hideout/tarkov-dev",
"url": "https://github.com/the-hideout/tarkov-dev/pull/298",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
267940813
|
SQL server float/decimal values not saved
Hi,
I'm working with SQL server and the float values are not able to be saved.
When it's an Integer, it's saved, but when it's a Float it's inserted as a NULL value.
I also used CAST(? as float), but that's not working either...
My stdout actually shows the value being a float: "latitude": 12.3423, but this is stored as NULL in the database.
Any idea?
Even if the field is inserted as a float (the JSON stdout shows it as a float), you should use a filter to convert the field:
filter {
mutate {
convert => { "fieldname" => "float" }
}
}
However, integer and string conversions are working well, so you have nothing to do.
Maybe floats should be automatically detected and converted by the plugin...?
As per this line https://github.com/theangryangel/logstash-output-jdbc/blob/079c3a6c7854a30c8f24d37a8366dce6d4036577/lib/logstash/outputs/jdbc.rb#L295 it used to work 😓
I’d guess that the shipped version of jruby doesn’t send them as the float type anymore. I’ll try and find some time to investigate tomorrow, but I’m pretty busy at work and have evening plans tomorrow. Might not get time until the end of the week.
I haven't been able to reproduce this problem so far... I've added support for BigDecimal in d1a733d19531ee36553e3851807c08ec8c131520. This is currently untested, but it's the only case I can currently think of that might be producing this problem. I'll try and get a release cut for this later this week.
If you're able to find a way to reproduce this problem, or can provide a sample event as a JSON file for me to test with, I'll happily see if there's something else going on.
jdbc: "typ"=>"user", "balance"=>#BigDecimal:901fe3,'0.802E1',3(4), "@version"=>"1"
stdout:
{
"id" => 3,
"name" => "3",
"create_time" => "2018-02-26T10:16:27.000Z",
"update_time" => "2018-02-26T15:49:05.000Z",
"typ" => "user",
"balance" => 8.02,
"@version" => "1",
"@timestamp" => "2018-02-27T06:23:50.278Z",
"type" => "user",
"key" => "value"
}
Then,"balance" in mysql is null.
+1
|
gharchive/issue
| 2017-10-24T08:27:30 |
2025-04-01T06:40:36.760230
|
{
"authors": [
"Georgeqhh",
"genthalili",
"karnamonkster",
"theangryangel"
],
"repo": "theangryangel/logstash-output-jdbc",
"url": "https://github.com/theangryangel/logstash-output-jdbc/issues/99",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2336576550
|
Add support for vertical rulers/guides
This is a feature of the Monaco Editor that I think would be a welcome addition to Code App.
Currently, within VS Code, vertical rulers are added/modified through the following settings.json property:
"editor.rulers": [
80, // A vertical line at column 80.
120 // A vertical line at column 120.
]
For Monaco, to my knowledge, rulers are managed through its EditorOptions property.
Actually we should support settings.json.
Reference: https://code.visualstudio.com/docs/getstarted/settings#_workspace-settingsjson-location
|
gharchive/issue
| 2024-06-05T18:33:52 |
2025-04-01T06:40:36.762431
|
{
"authors": [
"bummoblizard",
"cosinami"
],
"repo": "thebaselab/codeapp",
"url": "https://github.com/thebaselab/codeapp/issues/1088",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
987486754
|
Feature Request: hospitalization rate
Thanks for this great custom component!
After all, as of yesterday, incidence is no longer the sole deciding factor for Corona countermeasures.
At least for Bavaria, the hospitalization rate will take its place as the "hospital traffic light".
It would be great if this component were supplemented by the hospitalization rate and the intensive care rate, which are provided here for all federal states: https://www.intensivregister.de/#/aktuelle-lage/laendertabelle
Maybe this repo is helpful in any way for scraping the data: https://github.com/br-data/corona-divi-api
With these two values, I can build my Corona Traffic Light within HA based on the regional principles:
red: >600 intensive care cases
yellow: >1,200 cases in hospitals
green: both values below their limit
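A tiny sketch of that traffic-light logic (thresholds as quoted above; the function and its inputs are hypothetical and not part of this integration):

```python
def corona_traffic_light(icu_cases: int, hospital_cases: int) -> str:
    # red wins over yellow; green only when both values are below their limit
    if icu_cases > 600:
        return "red"
    if hospital_cases > 1200:
        return "yellow"
    return "green"

print(corona_traffic_light(icu_cases=450, hospital_cases=1500))  # yellow
```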
Since these data are not provided by the RKI, a separate integration would be a better fit.
But contributions are always welcome, so feel free to add you local traffic light here. I'll try to support as much as possible.
There is an official git repository providing the hospitalization data now.
I would like this too.
I already added the hospitalization numbers into the parser with this PR.
Great @thebino!
Thanks!
|
gharchive/issue
| 2021-09-03T07:17:25 |
2025-04-01T06:40:36.769612
|
{
"authors": [
"Pe-MaKer",
"renarena",
"thebino"
],
"repo": "thebino/rki_covid",
"url": "https://github.com/thebino/rki_covid/issues/56",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
66772790
|
'number' pattern invalid if decimal
I'm using the built-in 'number' pattern to validate numeric input. Is it correct that something like '.234' is considered non-numeric?
Is this covered by issue #103 and/or pull request #230?
If so, it looks like we do have a fix proposed; it's just not mergeable at this point. If I get a chance, I can try to recreate the pull request from my own repository since there isn't much activity on the other one.
@chiefGui If @neptunian agrees, do you want to close this as a duplicate of #103?
Thanks @platinumazure
|
gharchive/issue
| 2015-04-07T04:34:15 |
2025-04-01T06:40:36.782321
|
{
"authors": [
"neptunian",
"platinumazure"
],
"repo": "thedersen/backbone.validation",
"url": "https://github.com/thedersen/backbone.validation/issues/286",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
738659447
|
Update Vacuum Card to include rooms
Requirement
Dependencies
#84
#92
Related
#81
#85
Accidentally linked this to #81.
New Lovelace layout makes this unnecessary.
|
gharchive/issue
| 2020-11-09T04:11:33 |
2025-04-01T06:40:36.834939
|
{
"authors": [
"theglus"
],
"repo": "theglus/Home-Assistant-Config",
"url": "https://github.com/theglus/Home-Assistant-Config/issues/83",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
713674861
|
cleanup: run npx prettier -w .
✨
Thanks for this Joel!
Have you got this set up on any of your projects via eslint with some eslintrc, or prettierrc file?
It would be good to have this as a pre-commit hook, or something similar, so that this can be run as a part of usual workflow.
I have something like this on another project here:
https://github.com/Greening-Digital/constellate/blob/master/cl8-web/.eslintrc.js
Would you mind adding this prettier as a dev dependency or something similar for this PR?
Done in latest commit but now eslint is not super happy 😅
/Users/j42/Sites/grid-intensity/src/browser.bundle.js
4:1 error 'window' is not defined no-undef
/Users/j42/Sites/grid-intensity/src/browser.js
3:33 error 'fetch' is not defined no-undef
4:40 error 'localStorage' is not defined no-undef
14:19 error 'fetch' is not defined no-undef
/Users/j42/Sites/grid-intensity/src/gridIntensity.js
9:49 error 'localStorage' is defined but never used no-unused-vars
9:63 error 'fetch' is defined but never used no-unused-vars
31:5 error 'parsedIntervals' is not defined no-undef
33:7 error 'parsedIntervals' is not defined no-undef
35:12 error 'parsedIntervals' is not defined no-undef
/Users/j42/Sites/grid-intensity/src/index.test.js
15:42 error 'x' is defined but never used no-unused-vars
26:42 error 'x' is defined but never used no-unused-vars
38:45 error 'x' is defined but never used no-unused-vars
50:5 warning Test has no assertions jest/expect-expect
53:45 error 'x' is defined but never used no-unused-vars
64:45 error 'x' is defined but never used no-unused-vars
77:42 error 'x' is defined but never used no-unused-vars
93:42 error 'x' is defined but never used no-unused-vars
147:5 warning Test has no assertions jest/expect-expect
151:13 error 'result' is assigned a value but never used no-unused-vars
Thanks Joel 👍
I'm more comfortable in python than node, and I'm not sure of the idiomatic way to account for objects that are provided by the browser window. How should I fix them?
I'm totally down to make a separate issue for that, and then accept this PR, as it's already an improvement on what we had before.
I tried something with fetch and localStorage; it takes a ridiculous amount of time to get something working, but it works 😅
|
gharchive/pull-request
| 2020-10-02T14:41:57 |
2025-04-01T06:40:36.845888
|
{
"authors": [
"Jolg42",
"mrchrisadams"
],
"repo": "thegreenwebfoundation/grid-intensity",
"url": "https://github.com/thegreenwebfoundation/grid-intensity/pull/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
856057471
|
RSSI-distance estimation visualisation
[Who] As a developer and app user
[What] I need to assess accuracy of distance estimation based on RSSI measurements
[Value] In order to determine whether the default models and parameters are adequate for the target application, or further calibration is required
Describe the potential solution you'd like
Near real-time visualisation of estimated distance between devices on the user interface.
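For context on what such a visualisation would plot, here is the common log-distance path-loss model often used for RSSI-to-distance estimation; this is a generic textbook model, not Herald's actual estimator, and the reference TX power and path-loss exponent are assumed values that would normally come from calibration:

```python
def estimate_distance(rssi_dbm: float, tx_power_dbm: float = -62.0, n: float = 2.0) -> float:
    """Rough distance in metres implied by a single RSSI sample."""
    # tx_power_dbm: expected RSSI at 1 m; n: environment-dependent exponent.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

for rssi in (-55, -65, -75, -85):
    print(rssi, "dBm ->", round(estimate_distance(rssi), 2), "m")
```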
Closed by latest develop
|
gharchive/issue
| 2021-04-12T14:50:19 |
2025-04-01T06:40:36.847745
|
{
"authors": [
"adamfowleruk",
"c19x"
],
"repo": "theheraldproject/herald-for-android",
"url": "https://github.com/theheraldproject/herald-for-android/issues/167",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
624374562
|
Support tree/table data in Theia
Some data providers only provide tree or table data (no time line required). The respective outputs should be displayed properly.
@bhufmann Can this be closed? Does this refer to views like the Events Table?
@bhufmann Can this be closed? Does this refer to views like the Events Table?
No, it cannot be closed because it refers to data providers of type DATA_TREE, which is used, for example, by the Trace Compass function duration statistics.
@bhufmann Is this a duplicate of #144? Can it be closed?
yes, it's a duplicate and can be closed
|
gharchive/issue
| 2020-05-25T15:47:02 |
2025-04-01T06:40:36.857794
|
{
"authors": [
"bhufmann",
"ebugden",
"tahini"
],
"repo": "theia-ide/theia-trace-extension",
"url": "https://github.com/theia-ide/theia-trace-extension/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
572083261
|
adata.X = None raises “incompatible shape” error
If we only want to support del adata.X, we need to change the pbmc tutorial.
Do you want to allow del adata.X?
If adata.X = None should be legal, del adata.X should be too, right?
But yeah, right now there’s no way to do what the tutorial wants to …
I forget why this came up. I don't think we should allow del adata.X or adata.X = None.
Can we close this?
I stumbled on the same thing while running your 3k PBMC tutorial. Are you planning to allow del adata.X or adata.X = None? Or is this functionality gone?
I would say the tutorial is out of date. I don't think we currently support having no value for X, since it's pretty central to how AnnData objects work.
|
gharchive/issue
| 2020-02-27T13:27:01 |
2025-04-01T06:40:36.880235
|
{
"authors": [
"LustigePerson",
"flying-sheep",
"ivirshup"
],
"repo": "theislab/anndata",
"url": "https://github.com/theislab/anndata/issues/330",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
828510417
|
Project transition matrix in the embedding
[ ] Additional function parameters, change functionality or change defaults?
[ ] New estimator in cr.tl.estimators?
[ ] New kernel in cr.tl.kernels?
[ ] New gene trend model in cr.ul.models?
[ ] New plotting function in cr.pl?
[x] Other?
Similarly as scvelo does it.
closed via #520
|
gharchive/issue
| 2021-03-11T00:15:20 |
2025-04-01T06:40:36.882679
|
{
"authors": [
"michalk8"
],
"repo": "theislab/cellrank",
"url": "https://github.com/theislab/cellrank/issues/519",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
428427833
|
Leiden restrict_to parameter
Added restrict_to parameter to leiden by using louvain code as template.
Tests are not yet provided.
A simple example of execution and checks:
# First split on cluster 4
sc.tl.leiden(adata, restrict_to=('leiden_res0.4', ['4']), resolution=0.6,
             key_added='leiden_res0.4_4_sub')
# Additional split
sc.tl.leiden(adata, restrict_to=('leiden_res0.4_4_sub', ['1', '2', '3', '4,4']),
             resolution=0.6, key_added='leiden_res0.4_4_add_sub')
# All partitions together
sc.pl.tsne(adata, color=['leiden_res0.4', 'leiden_res0.4_4_sub',
                         'leiden_res0.4_4_add_sub'])
# Partition size check
## Original size of clusters
adata.obs['leiden_res0.4'].value_counts()
0 932
1 853
3 676
2 676
4 338
5 57
Name: leiden_res0.4, dtype: int64
# Check if first split is correct (can be iterated for subsequent splits)
## Assignment of samples in original clusters to subsplit clusters
adata.obs.loc[(adata.obs['leiden_res0.4'].isin(['4'])),
'leiden_res0.4_4_sub'].value_counts()
4,0 103
4,1 68
4,2 66
4,3 57
4,4 44
5 0
3 0
2 0
1 0
0 0
Name: leiden_res0.4_4_sub, dtype: int64
## Assignment of samples not in original clusters to subsplit clusters
adata.obs.loc[~(adata.obs['leiden_res0.4'].isin(['4'])),
'leiden_res0.4_4_sub'].value_counts()
0 932
1 853
3 676
2 676
5 57
4,4 0
4,3 0
4,2 0
4,1 0
4,0 0
Name: leiden_res0.4_4_sub, dtype: int64
...
Would you mind adding a test?
Great! Please create helper functions abstracting away identical code blocks in the two functions. E.g. those lines are completely identical to the ones in louvain:
https://github.com/theislab/scanpy/blob/4760cbdf264c88ab48e17efaab5b559ab064be49/scanpy/tools/_leiden.py#L108-L120
And the same might apply to the other code block.
I added helper functions. I am working on the tests.
Apparently there's a test https://github.com/theislab/scanpy/blob/fc24dfc62c049a0d0c9cc491d4647d03b52bfb10/scanpy/tests/test_rank_genes_groups_logreg.py#L22
that fails.
It is because, after rank_genes_groups, categories are naturally sorted. I don't think this is due to my changes, but let me know how I can help.
I added tests for both louvain and leiden with restrict parameter. Please review the test code to be sure it is clear and working.
@flying-sheep @ivirshup As you looked through this in detail already, one of you should merge this when ready, ok?
Only one thing that I would ask for: please don't put things in utils when it's not absolutely necessary.
Here, it would be very natural to add a _utils_clustering.py in scanpy/tools.py. scanpy/utils.py is way too overloaded with all kinds of things... And I will need to clean it up at some point.
Thank you for moving the helper functions there. Or, foreseeing that leiden might surpass louvain in the future, you could just add the helper functions to _leiden and import into _louvain from there.
@falexwolf I added _utils_clustering.py since I think it's the more maintainable way to do it (e.g. if in the future, new clustering methods are added).
My comments are addressed, idk if all of @ivirshup’s are.
I've got one minor comment left (one last redundant print statement), but otherwise I'm good.
Looks good to me! Thanks @fbrundu.
we should update the tutorials and notebooks to use leiden instead of louvain.
Yes, we should, as soon as many people report seamless installation of the leiden package. I'm still using louvain as I never had any problems with it, but I agree that we should migrate when leiden is stable, mature and easily installable.
Has anyone compared the two for speed? I guess Leiden has an extra step and therefore might be a bit slower. If they both scale well to >1 million cells, then I agree... otherwise the issues of louvain shouldn't be as bad for KNN-graphs as for graphs with more heavy-tailed degree distributions.
I looked at it a while ago (for one test dataset, probably), and got the impression that louvain was faster. That said, they're both very fast. I would note that solutions from either can be pretty unstable, frequently depending on size of the community.
@LuckyMD When you say heavy tailed, are you thinking of the unweighted KNN graph case or both?
They'll both be affected by the resolution limit, which might be what you're referring to. This is a well-described problem for Modularity with the configuration null model that it only optimally detects communities within a certain size range relative to the size of the network.
For me heavy-tailed networks are PPIs.. KNNs are a lot more regular than that. I'm not sure what a weighted KNN graph would be... are you talking about the PhenoGraph approach?
Yeah, I was pretty much talking about the resolution limit.
PhenoGraph would be an example of a weighted KNN, but so would the "connectivities" adjacency matrix we get from UMAP. This is what's used if you use sc.tl.leiden(..., use_weights=True), which is the default.
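As a concrete, illustrative comparison of the two options discussed here (assuming scanpy with the leidenalg backend installed, and using a bundled example dataset rather than the data from this thread):

```python
import scanpy as sc

adata = sc.datasets.pbmc3k_processed()
sc.pp.neighbors(adata, n_neighbors=15)

# Weighted "connectivities" graph (scanpy's default for leiden).
sc.tl.leiden(adata, use_weights=True, key_added="leiden_weighted")
# Same graph with the weights ignored (louvain's historical default).
sc.tl.leiden(adata, use_weights=False, key_added="leiden_unweighted")

# Compare how many clusters each variant finds and how large they are.
print(adata.obs["leiden_weighted"].value_counts())
print(adata.obs["leiden_unweighted"].value_counts())
```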
I wasn't aware we use a weighted matrix for clustering as default. Are the weights in the connectivities matrix related to the euclidean distances between cells?
Thanks for the link. This is definitely easier to understand than the paper ;).
If we use the connectivities weights from UMAP by default for clustering... how different is that to clustering on the UMAP embedding directly? Is there a rationale for not just using a binarized KNN-graph for this?
We didn't use the weights in Louvain (https://github.com/theislab/scanpy/blob/297d6246ccfbf398f771cee1bd4b81b57fc27c76/scanpy/tools/_louvain.py#L31)?
Why did you decide to change the default in Leiden
(https://github.com/theislab/scanpy/blob/297d6246ccfbf398f771cee1bd4b81b57fc27c76/scanpy/tools/_leiden.py#L31)? I'm fine with it, but a brief discussion would have been appropriate. :wink:
@LuckyMD
how different is that to clustering on the UMAP embedding directly?
It's very different. The choice of weights will likely not have a dramatic effect, you're always clustering a graph that proxies neighborhoods in high-dimensional space. If you embed this structure in 2 or 3 d, even if you use the fantastic UMAP for it, you'll make errors (https://twitter.com/falexwolf/status/1108284982001315840). Also, the most computationally intense part is the embedding optimization, not the graph construction.
@falexwolf I recall that image in the figure, thanks! I wasn't aware that the embedding was the difficult part.
I was also investigating how leiden got use_weights=True by default, and noticed the lack of discussion. It seems like it just sort of happened when leiden got added in #361.
I think it'd be pretty different from clustering on the embedding, because the embedding has constraints based on things like minimum distance two points can be from each other, and the number of dimensions it's embedded in.
On the binarized KNN-graph, I think we've actually talked about this before (#240). I personally think using a weighted graph makes more sense. For example, say you have a cell type of which occurs 15 times in your dataset, but you've set k to 30. With a binarized graph there will be a less clear signal that this is a distinct cell-type.
From a slightly more empirical/ anecdotal perspective, on a couple datasets I tested, total degree of the generated graph was sub-linear (looked log-ish) w.r.t. k for the weighted umap graph. Here's using one of the bone marrow donors from the hca immune census (y-axis is log scaled so you can still see the total weighted degree increase):
To me, this suggested a stable representation of the dataset was being found. As a connected point, in my experience clustering results seems fairly robust to k for weighted graphs above a low threshold (I think dataset dependent, but 30-60 range). Using an unweighted graph, there is a much stronger dependence on k and some smaller clusters seem less stable (show up in a smaller proportion of clustering solutions from a parameter space).
I'm not sure I agree with your interpretation of your total degree plot. To me, increasing k is meant to have the effect of densifying the network, and thus obtaining a lower resolution view of the manifold. It is somewhat analogous to choosing a lower resolution value for leiden or louvain clustering. What you see is that in the weighted case, the overall degree does not really increase (thus possibly neither does the overall density), so increasing k may have little effect on clustering at all. This is the most I can get from this plot... as density is really about local changes and not the global degree increase. But I would still ask whether it is a good thing that increasing k has little effect? Does increasing k then change the clustering results (in the weighted case)?
I wonder if the observation that you find smaller clusters better in the weighted case is robust. That would suggest that weights can counteract resolution limit issues, which would be very interesting...
Sorry about the delay, I've been working on some writing about this stuff (though from a different perspective).
I'm not sure k is "meant" to have any particular effect, since these methods weren't designed for KNN graphs. I'd also argue if the parameters are analogous, there's an advantage of simplicity to just choosing one of them.
I've got some plots for the effect of resolution and number of neighbors on the size of clusters which are found. This is for the 10x example dataset with 10k pbmcs using the v3 chemistry. What I've done is build the networks at 5 different values of k, four times each (different random seeds). For each of those networks, I ran clustering at 50 different resolutions (np.geomspace(0.05, 20, 50)). Here are the maximum cluster sizes found for each combination of k and resolution for the unweighted and weighted graph (color bar is logscale, from 1 to 6000, but I couldn't get useful ticks to work):
Overall, pretty similar. Now, the minimum cluster sizes (color scales are different, but you'll see why):
This looks to me like using the weighted graph allows identifying small clusters even at low resolutions. The cluster of 24 cells looks like megakaryocytes, and is being detected at pretty much every clustering (996 out of 1000) using the weighted graph.
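For anyone who wants to poke at this, the kind of sweep described above can be sketched with scanpy roughly like this (not the exact code behind the plots; the dataset loader, the specific k values and the key naming here are placeholders):

import numpy as np
import scanpy as sc

adata = sc.datasets.pbmc3k()  # placeholder; the plots above used the 10x 10k PBMC v3 dataset
sc.pp.recipe_zheng17(adata)
sc.pp.pca(adata)

resolutions = np.geomspace(0.05, 20, 50)
cluster_sizes = {}
for k in [5, 10, 15, 30, 60]:  # assumed values; the plots used 5 different k's
    for seed in range(4):  # four graph constructions per k, with different random seeds
        sc.pp.neighbors(adata, n_neighbors=k, random_state=seed)
        for res in resolutions:
            key = f"leiden_k{k}_s{seed}_r{res:.2f}"
            # use_weights=True clusters the weighted UMAP connectivities;
            # set it to False for the unweighted comparison
            sc.tl.leiden(adata, resolution=res, use_weights=True, key_added=key)
            cluster_sizes[key] = adata.obs[key].value_counts()

Maximum and minimum cluster sizes per (k, resolution) then come from aggregating cluster_sizes.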
This looks a lot more convincing... It's a bit hard to read the second-to-last plot though... The black parts are also clusters around size 2-10, no? Or am I misreading the scale? Do you have a version with a few more annotations on the colour bar? How often are megakaryocytes detected as a separate cluster in the unweighted case? It looks like the unweighted case is definitely worse for higher resolutions with >10 neighbours though.
And coming back to the k discussion... From my perspective, if you treat the clustering and KNN graph generation as two separate steps, you may want k to have an effect. If you treat it as the same process, then I follow your argumentation that having a single parameter that affects the scale of the clustering suffices.
I took about 20 minutes on it, but couldn't figure out how to add more annotations. I've got interactive versions with hover over, but log scale is bugged in those libraries... I believe the bins that are the darkest shade in the minimum cluster size for the unweighted graph actually correspond to a minimum cluster size of 1 cell.
Megakaryocytes were detected as a distinct cluster every time that k was 10 in the unweighted case, but no other times.
I think that when we make a call on "this is a kind of cell" from unsupervised clustering, those results should be robust. That is, if there's strong signal in the data and your clustering algorithm can pick up that signal, good clusters shouldn't change much if you vary the parameters a little. If you can pick any parameters from a wide range and get results that are pretty consistent, that seems like good data and a good method to me.
I follow your argumentation on "good clusters". However, I also like the concept that putting k=35 means you make it harder to detect clusters of size < 35, as you 'over-connect' those clusters in a way. The weighted case is less interpretable in that way. However, here it clearly outperforms the unweighted case. I am still a little on the fence (due to interpretability), but I'd be okay with weighting I think.
|
gharchive/pull-request
| 2019-04-02T20:27:59 |
2025-04-01T06:40:36.912236
|
{
"authors": [
"LuckyMD",
"falexwolf",
"fbrundu",
"fidelram",
"flying-sheep",
"ivirshup"
],
"repo": "theislab/scanpy",
"url": "https://github.com/theislab/scanpy/pull/586",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
807389613
|
A bug in the rotate function
https://github.com/thejevans/mla/blob/cb6bb8d64c714bbfbbe6348cd0864bc3a4f6072f/mla/models.py#L57
I am looking into it. The old version works but the new one doesn't.
Never mind, I found it. It is in line 113: the old cross_matrix function returned the difference skv - skv.T, but here it is just skv. Check https://github.com/thejevans/mla/blob/a9b07aa6179c1ba343c01c500e5e2e2b8bdf02e4/mla/tools.py
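For reference, a minimal sketch of what the fixed helper computes (names follow the linked tools.py, but this is an illustration rather than the repository's exact code):

import numpy as np

def cross_matrix(v):
    # Skew-symmetric cross-product matrix [v]_x, so that cross_matrix(v) @ u == np.cross(v, u)
    skv = np.roll(np.roll(np.diag(np.asarray(v).flatten()), 1, 1), -1, 0)
    # The reported bug: returning just skv instead of the antisymmetric difference below
    return skv - skv.T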
|
gharchive/issue
| 2021-02-12T16:58:43 |
2025-04-01T06:40:36.934359
|
{
"authors": [
"jasonfan1997"
],
"repo": "thejevans/mla",
"url": "https://github.com/thejevans/mla/issues/28",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2763549073
|
No Conan default profile
Assuming Cupcake and Conan are installed, running:
cupcake new
cupcake build
yields:
ERROR: Profile not found: default
...
subprocess.CalledProcessError: Command '['conan', 'profile', 'path', 'default']' returned non-zero exit status 1.
Is this a dev environment thing? Or does the profile live in the repo?
Is it possible for cupcake new to initialize the default Conan profile (assuming it is in the repo)?
I think I understand now that this is something that is cached in the developer's environment...
So perhaps cupcake build should run conan profile detect if ~/.conan2/profiles/default does not exist.
Sorry, I'm assuming the user is already familiar with Conan. You need to configure the default Conan profile. I recommend looking at their tutorial to get acquainted.
|
gharchive/issue
| 2024-12-30T18:19:52 |
2025-04-01T06:40:36.940389
|
{
"authors": [
"jfoshee",
"thejohnfreeman"
],
"repo": "thejohnfreeman/cupcake.py",
"url": "https://github.com/thejohnfreeman/cupcake.py/issues/8",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
440515073
|
ESM
Enhancement: Add ESM distribution along with UMD and point to it in package.json;
apply Babel (preset-env)
Refactoring: ESM in source
npm: Add rollup script and add it to test script
Thanks for the help!
|
gharchive/pull-request
| 2019-05-06T01:42:05 |
2025-04-01T06:40:36.969089
|
{
"authors": [
"brettz9",
"thelonious"
],
"repo": "thelonious/kld-polynomial",
"url": "https://github.com/thelonious/kld-polynomial/pull/9",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2169954393
|
Sidebar in mobile
Sidebar doesn't hide in mobile
Can you please send an image or gif?
|
gharchive/issue
| 2024-03-05T19:00:18 |
2025-04-01T06:40:36.975996
|
{
"authors": [
"khalidmaquilang",
"ogzcode"
],
"repo": "themesberg/flowbite-vue",
"url": "https://github.com/themesberg/flowbite-vue/issues/275",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1450121478
|
Copy examples always in darkmode
https://flowbite.com/docs/components/card/
when the website defaults to dark mode, it is not possible to copy the light version of the code. Everything rendered is dark, and the toggle for the component and the website does not affect this.
Hello @matthewhutchings,
Thanks for the feedback - dark mode is deployed based on your configuration of Tailwind CSS, whether it looks for the default OS setting or by using a custom dark class on the body element.
Here's some more info on how the dark mode works with Flowbite and Tailwind CSS.
On the other hand, it is quite possible that we will develop a way to copy the code without the dark mode classes, even though they don't interfere with light mode in any way.
|
gharchive/issue
| 2022-11-15T17:14:58 |
2025-04-01T06:40:36.978476
|
{
"authors": [
"matthewhutchings",
"zoltanszogyenyi"
],
"repo": "themesberg/flowbite",
"url": "https://github.com/themesberg/flowbite/issues/333",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
737043349
|
Zoom option removed from the menu #415
#415
My account number: 1002530d2ef3da6207c5a5070d427984a8508a715ce18afb07f394aa47d73751
|
gharchive/pull-request
| 2020-11-05T15:41:53 |
2025-04-01T06:40:36.981589
|
{
"authors": [
"Hristijan95"
],
"repo": "thenewboston-developers/Account-Manager",
"url": "https://github.com/thenewboston-developers/Account-Manager/pull/428",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
243379523
|
Publish to Github Not Functional?
Expected Behaviour
I'm not certain what should happen - @pezholio any insight?
Current Behaviour (for problems)
Neither of the github features from the menu are functional
Steps to Reproduce (for problems)
open csv file
click `file > github > export to github` or `file > github > add file to github`
new windows open, but it is blank
Your Environment
Include as many relevant details about the environment you experienced the bug in - this will help us resolve the bug more expediently
Operating System and version MacOS
The branch that has incorporated [StandardJS](https://github.com/theodi/comma-chameleon/pull/177) has flagged this file with errors as follows
121:11 error Unexpected literal in error position of callback standard/no-callback-literal
125:9 error Unexpected literal in error position of callback standard/no-callback-literal
@pezholio realised you may not get a ping when I edit an existing comment so pinging anew here
Huh. That's weird. I get a new window that prompts me to login to Github, but when I try to submit a file, I get Dataset files is invalid. Were any changes made to the Octopub API since the FSA work was done in the new year?
I haven't got time at the mo to dig into this, but if it helps, the relevant backend code that talks to the Octopub API is here
Sidebar: When logging into Github to get my token, and trying to copy and paste my Github password from my password manager, I get this error.
Uncaught Exception:
ReferenceError: mainWindow is not defined
at exports.menu.submenu.click (/Applications/Comma Chameleon.app/Contents/Resources/app/main/menu.js:187:11)
at MenuItem.click (/Applications/Comma Chameleon.app/Contents/Resources/electron.asar/browser/api/menu-item.js:81:16)
at Function.delegate.executeCommand (/Applications/Comma Chameleon.app/Contents/Resources/electron.asar/browser/api/menu.js:119:40)
@pezholio - that issue should have been fixed in f21beb4
If you're using a binary dist, I think that commit hasn't made it into a tagged release yet.
If you're running from source, make sure you've got the latest commits fetched/pulled, although @quadrophobiac has been having some problems with that so I'd be interested to know if you have the same issues.
API shouldn't have changed, no. We should re-test with the current master and tag a release.
re-test Comma Chameleon you mean?
Yep
Yes - I've got the same problem. Tests pass, 'Export to GitHub' gives me a blank window with no errors logged to the console.
There's definitely something odd going on here though. Lets go back in time to a point where I know this feature definitely worked...
If I download the binary dist for, say, version 0.4.7, that feature does work... but then on my local copy, if I git checkout af7f60e (af7f60e is the commit tagged as 0.4.7), I've got the same problem on my working copy. I think whatever is going on is related to something in the build process, or a dependency, or a setting that is on in dev but not when we build a release, or something like that... I don't think the actual "export to GitHub" code is fundamentally broken.
Sorry I don't have more time to spend on it right now but maybe that gives you a clue?
|
gharchive/issue
| 2017-07-17T12:28:53 |
2025-04-01T06:40:36.996476
|
{
"authors": [
"Floppy",
"chris48s",
"pezholio",
"quadrophobiac"
],
"repo": "theodi/comma-chameleon",
"url": "https://github.com/theodi/comma-chameleon/issues/178",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
564952171
|
The basic example in README produce error
error: Cannot use keyword 'await' outside an async function.
const { GoogleSpreadsheet } = require('google-spreadsheet');
// spreadsheet key is the long id in the sheets URL
const doc = new GoogleSpreadsheet('<the sheet ID from the url>');
await doc.useServiceAccountAuth({
client_email: process.env.GOOGLE_SERVICE_ACCOUNT_EMAIL,
private_key: process.env.GOOGLE_PRIVATE_KEY,
});
await doc.loadInfo(); // loads document properties and worksheets
console.log(doc.title);
await doc.updateProperties({ title: 'renamed doc' });
const sheet = doc.sheetsByIndex[0]; // or use doc.sheetsById[id]
console.log(sheet.title);
console.log(sheet.rowCount);
// adding / removing sheets
const newSheet = await doc.addSheet({ title: 'hot new sheet!' });
await newSheet.delete();
Yeah you can't use async/await at the root level of a script you run. I didn't want to add that in the examples though as I thought it may cause more confusion than anything else, not to mention just being extra noise around the important bits.
I usually do something like this if just running a script and want to use async await at the root level:
(async function main() {
// your code...
})();
|
gharchive/issue
| 2020-02-13T21:11:19 |
2025-04-01T06:40:37.001865
|
{
"authors": [
"theoephraim",
"wuuzw-test"
],
"repo": "theoephraim/node-google-spreadsheet",
"url": "https://github.com/theoephraim/node-google-spreadsheet/issues/298",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
86970041
|
Certificate Issue in V3
I was working on a project locally with v2, but since I upgraded to v3 I am getting this error:
S3Exception in WrappedHttpHandler.php line 152:
Error executing "HeadObject" on "https://s3-sa-east-1.amazonaws.com/gambero/user_files"; AWS HTTP error: cURL error 60: Peer certificate cannot be authenticated with given CA certificates
The odd thing is that if I run the same thing on v2, on the same local server, it works pretty well.
Is there any way to disable the ssl check in v3?
The aws docs suggest updating the certificate: http://docs.aws.amazon.com/aws-sdk-php/v3/guide/faq.html#what-do-i-do-about-a-curl-ssl-certificate-error
From flysystem's end there's really not much I can do. Perhaps open an issue on the sdk repo?
I'll close this issue since it's not solvable from this end.
If anyone comes to this issue because of a similar problem in Laravel trying to connect to a local installation of MinIO with a self-signed certificate, you can update your filesystems.php like this:
's3' => [
...
'use_path_style_endpoint' => true,
'http_handler' => new App\GuzzleHandler(),
];
where GuzzleHandler.php is:
namespace App;
use GuzzleHttp\Client;
use Aws\Handler\GuzzleV6\GuzzleHandler as BaseAwsGuzzleHandler;
class GuzzleHandler extends BaseAwsGuzzleHandler
{
public function __construct()
{
parent::__construct(new Client(
['verify' => false]
));
}
}
For those coming to look for a Symfony alternative to develop locally, you can define your client this way:
Aws\S3\S3Client:
    arguments:
        - version: 'latest'
          endpoint: '%aws_s3_endpoint%'
          region: '%aws_s3_bucket_region%'
          use_path_style_endpoint: true
          credentials:
              key: '%aws_s3_access_key%'
              secret: '%aws_s3_secret_key%'
          http: { verify: false }
The important thing to note is the http: { verify: false }, which disables the peer verification. It is somewhat documented in the Aws\AwsClient constructor
Please, take into account that this configuration should only be enabled in a local environment, never in a production system.
|
gharchive/issue
| 2015-06-10T13:25:52 |
2025-04-01T06:40:37.023380
|
{
"authors": [
"apit",
"frankdejonge",
"rubenrubiob",
"thiduzz"
],
"repo": "thephpleague/flysystem-aws-s3-v3",
"url": "https://github.com/thephpleague/flysystem-aws-s3-v3/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2266422140
|
3.x release date
Hi!
I see that you're actively developing on the 3.x branch, do you have an approximative release date for 3?
Thanks!
There's no tentative date for a stable release yet. There's an open PR #394 which I want to get merged but haven't had the time to review.
In the meantime you can use the 3.0.0-beta1 release and report back if you have any issues.
Thanks for your feedback !
|
gharchive/issue
| 2024-04-26T19:56:36 |
2025-04-01T06:40:37.035061
|
{
"authors": [
"ADmad",
"nlemoine"
],
"repo": "thephpleague/glide",
"url": "https://github.com/thephpleague/glide/issues/395",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
711978842
|
Add SagePay Direct PayPal integration
This is a pull request that adds the PayPal integration that was built as part of the oilstone/omnipay-sagepay fork; however, that fork cannot be merged back in due to changes in the composer package definitions.
This is still a work in progress and will still need to be tested but it is a start to adding proper PayPal integration into this package.
Still a work in progress. No tests have been set up.
Some typo fixes have also been included in this pull request.
I've not managed to have a good look at this one yet, but it looks like the test failures are just formatting issues - missing terminating line endings on your new files.
I pulled out a few of these into a separate PR since this would be more readable with fewer changes & there is a lot of unrelated cleanup in it https://github.com/thephpleague/omnipay-sagepay/pull/162
|
gharchive/pull-request
| 2020-09-30T14:06:04 |
2025-04-01T06:40:37.039615
|
{
"authors": [
"bullenb",
"eileenmcnaughton",
"judgej"
],
"repo": "thephpleague/omnipay-sagepay",
"url": "https://github.com/thephpleague/omnipay-sagepay/pull/156",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1154036115
|
Support Status Code Ranges 2XX, 3XX, 4XX, 5XX
fixes #158
Looks good, but can you please provide a test case for the new feature?
Tests are failing
I will take care of fixing the issues and providing a test case.
@scaytrase added a test and it seems to be passing Scrutinizer; let me know if anything else is still missing.
Thanks @wolffc !
https://github.com/thephpleague/openapi-psr7-validator/releases/tag/0.18
|
gharchive/pull-request
| 2022-02-28T12:30:34 |
2025-04-01T06:40:37.041822
|
{
"authors": [
"scaytrase",
"wolffc"
],
"repo": "thephpleague/openapi-psr7-validator",
"url": "https://github.com/thephpleague/openapi-psr7-validator/pull/159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
113089868
|
Calendar spelt incorrectly
Hi,
Just thought I would let you know that you spelt it "calender", when it should be "calendar" on the preview website.
@AmarHV generated new docs :)
Thanks for reporting :)
|
gharchive/issue
| 2015-10-23T20:03:14 |
2025-04-01T06:40:37.060463
|
{
"authors": [
"AmarHV",
"thesabbir"
],
"repo": "thesabbir/simple-line-icons",
"url": "https://github.com/thesabbir/simple-line-icons/issues/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
688501332
|
Daemonize does not work with requests library
The following code fails on my Mac OS X in daemon but not in foreground mode:
from time import sleep

import requests
from daemonize import Daemonize

def main():
    while True:
        requests.get('https://httpbin.org/get')
        sleep(10)

daemon = Daemonize(app="demo_app", pid="/tmp/demo_app.pid", action=main)
daemon.start()
I'm using Python 3.8.5, daemonize 2.5.0 and requests 2.24.0
It would be beneficial to see how it fails, what error or traceback is produced.
https://github.com/thesharp/daemonize/issues/75#issuecomment-890829014
In general, fork() is very broken on OSX, so this issue might be caused by that. Can you look at Console.app and include the traceback, per the instructions in #75 ?
|
gharchive/issue
| 2020-08-29T09:53:52 |
2025-04-01T06:40:37.070905
|
{
"authors": [
"miigotu",
"nickodell",
"tomgross"
],
"repo": "thesharp/daemonize",
"url": "https://github.com/thesharp/daemonize/issues/76",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2400989252
|
put alias after core instance
The output for test_inject_before changes from
...
(alias core export 0 "memory" (core memory (;0;)))
(core instance (;0;) (instantiate 0))
...
to
...
(core instance (;0;) (instantiate 0))
(alias core export 0 "memory" (core memory (;0;)))
...
Merged by #46
|
gharchive/pull-request
| 2024-07-10T14:49:35 |
2025-04-01T06:40:37.141140
|
{
"authors": [
"ahuoguo"
],
"repo": "thesuhas/orca",
"url": "https://github.com/thesuhas/orca/pull/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
439042256
|
Px: Code samples formatted inconsistently
Most code samples are indented with tabs, some with spaces.
Matching opening/closing tags often aren't indented to the same level.
There are curly quotes throughout.
Sometimes the attributes have spaces around the =.
Some text is English while most is Latin. (I'd like to see more realistic text, b/c I find it more engaging.)
Rather than use the literal "©" character I'd use the HTML entity &copy;. (P1)
@MichellanneLi can you take a look at the formatting issues that Curtis mentioned?
@curtisblackwell I'm planning to go back in and re-write some of the content in English so that will get rid of the filler copy.
|
gharchive/issue
| 2019-05-01T01:38:20 |
2025-04-01T06:40:37.146144
|
{
"authors": [
"curtisblackwell",
"thetuttingtutor"
],
"repo": "thetuttingtutor/accessguide",
"url": "https://github.com/thetuttingtutor/accessguide/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1436428691
|
Allow generation of hash-prefixed versions of TargetFiles
Please fill in the fields below to submit an issue or feature request. The
more information that is provided, the better.
Description of issue or feature request:
Allow generation of hash-prefixed versions of TargetFiles.
Current behavior:
Currently, python-tuf does not have the ability to generate hash-prefixed path names for targets files. One suggestion from @jku is to add a utility function in Metadata API's TargetFile like:
TargetFile.get_prefixed_paths() -> List[str]
Expected behavior:
The expected behavior is to be able to get the hash-prefixed path names for any given targets file.
Example:
https://github.com/vmware-labs/repository-editor-for-tuf/blob/main/tufrepo/git_repo.py#L210
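For illustration, such a helper could look roughly like this (a sketch written as a free function for brevity, assuming the usual consistent-snapshot naming of dirname/HEXDIGEST.basename; not actual python-tuf code):

import posixpath
from typing import Dict, List

def get_prefixed_paths(path: str, hashes: Dict[str, str]) -> List[str]:
    # One hash-prefixed path per recorded digest, e.g.
    # "public/path/file.ext" -> "public/path/<hexdigest>.file.ext"
    dirname, basename = posixpath.split(path)
    return [posixpath.join(dirname, f"{digest}.{basename}") for digest in hashes.values()]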
Great idea! This might be a quick, easy win... 🙂
I think it makes sense (although the result is likely still not as simple as you might hope: dealing with files vs URLs is complicated and python-tuf can't really make it easier without making a lot of assumptions).
usage in a repository that wants to store actual target files and symlinks in the same directory would look like this (this is untested):
import os
from tuf.api.metadata import TargetFile

TARGET_PATH = "public/path/file.ext"  # URI path fragment
LOCALTARGETS = "/home/jkukkonen/local_copy_of_targets_repo"  # filesystem path

with open(f"{LOCALTARGETS}/{TARGET_PATH}", "w") as f:
    f.write("target data")

target = TargetFile.from_file(TARGET_PATH, f"{LOCALTARGETS}/{TARGET_PATH}")
for prefixed_path in target.get_prefixed_paths():
    # prefixed_path is e.g. "public/path/5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03.file.ext"
    os.symlink(TARGET_PATH, f"{LOCALTARGETS}/{prefixed_path}")
it's still not super simple, but there is a couple of good reasons for that
python-tuf does not (and should not) translate between URLs and filesystem paths. The code above does this, with the understanding that it's the repository app's responsibility to make sure that works.
targets can in some situations change their content over time -- but the old symlinks can't then just refer to the new content, that would be wrong: with the API we have this is possible to handle (although my example above does not do it)
|
gharchive/issue
| 2022-11-04T17:42:14 |
2025-04-01T06:40:37.152136
|
{
"authors": [
"jku",
"trishankatdatadog",
"yzhan289"
],
"repo": "theupdateframework/python-tuf",
"url": "https://github.com/theupdateframework/python-tuf/issues/2166",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
203769503
|
Add CLI to the Repository Management Tools
Implemented CLI for the Repository Management Tools.
Added tuf_repository_cli.py file
CLI of TuF repository tools
Create RSA keys
Import RSA keys (public and private)
Create Ed25519 keys
Import Ed25519 keys (public and private)
Create Top-level Metadata (create repository, verification of key, load signing key,...)
Add target files
Remove target files.
Thanks for working on this pull request, @baloyan.
We continue to work on the new getting started guides, including the CLI and quickstart docs. You can review them here.
|
gharchive/pull-request
| 2017-01-27T23:58:09 |
2025-04-01T06:40:37.155504
|
{
"authors": [
"baloyan",
"vladimir-v-diaz"
],
"repo": "theupdateframework/tuf",
"url": "https://github.com/theupdateframework/tuf/pull/424",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2545040078
|
Bug: New update to Vibe causes crashes when attempting to transcribe
What happened?
Tested Vibe this morning. An .m4a audio had 20 minutes of silence and afterwards 50 minutes of audio, but was transcribing "silence" for 30-40 minutes, even after audio appeared.
I then tried updating Vibe to solve the problem as there was an update. The app now crashes 2-3 minutes after attempting to transcribe the audio, making 0% progress every time.
Updated the Medium Language model. tested transcription. No change, still crashes.
Have now used the "Reset Vibe" button in the settings. No change, still crashes.
Will attempt to reinstall Vibe as final resort. But sending this bug report just so you know.
Steps to reproduce
See above.
What OS are you seeing the problem on?
No response
Relevant log output
App Version: vibe 2.5.4
Commit Hash: 0639a81cf382add6d51908098fafa2be0e72dc00
Arch: x86_64
Platform: windows
Kernel Version: 10.0.19045
OS: windows
OS Version: 10.0.19045
Cuda Version: n/a
Models: ggml-medium.bin
Default Model: "C:\\Users\\Daniel\\AppData\\Local\\github.com.thewh1teagle.vibe\\ggml-medium.bin"
Cargo features: vulkan
{
"avx": {
"enabled": true,
"support": true
},
"avx2": {
"enabled": true,
"support": true
},
"f16c": {
"enabled": true,
"support": true
},
"fma": {
"enabled": true,
"support": true
}
}
Vibe Test Command Prompt details.txt
The size of the original log from Vibe is a 263 MB text file - Currently unable to upload the file via my means.
I have also tested a 1-minute audio; it crashed immediately.
Thanks for the report. It seems like the GPU is out of memory; there's not enough memory to use Whisper medium with it.
You can try install the small model instead.
https://github.com/thewh1teagle/vibe/blob/main/docs/MODELS.md
Is there anyone who can tell me how to run it locally on my computer? I am not talking about the exe file; I am talking about how I can run its code and customize it.
|
gharchive/issue
| 2024-09-24T10:45:56 |
2025-04-01T06:40:37.177779
|
{
"authors": [
"Daniel-J-Barrows",
"faheemop",
"thewh1teagle"
],
"repo": "thewh1teagle/vibe",
"url": "https://github.com/thewh1teagle/vibe/issues/293",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
636645370
|
Add Bash Engine and SDK support
Include Odin Engine support to allow the execution of Bash jobs. Along with this, write a bash command line tool which emulates an Odin SDK which allows for the logging of live values.
I've written the SDK - no tests yet. Testing with pure Bash is proving to be difficult so I may need to write some tests in Python that calls the odin-bash script using os.system().
|
gharchive/issue
| 2020-06-11T01:10:56 |
2025-04-01T06:40:37.188059
|
{
"authors": [
"theycallmemac"
],
"repo": "theycallmemac/odin",
"url": "https://github.com/theycallmemac/odin/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1933217519
|
Layout refactoring
Removed the slash sign from void tags and added alt text to images.
PR Checklist
[ ] No broken links found using link-checker.
Linkchecker
Use the following command to check the broken links.
docker run --rm -it ghcr.io/linkchecker/linkchecker --check-extern http://172.16.1.16:4000
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
gharchive/pull-request
| 2023-10-09T14:28:13 |
2025-04-01T06:40:37.215782
|
{
"authors": [
"CLAassistant",
"iraznatovskyi"
],
"repo": "thingsboard/thingsboard.github.io",
"url": "https://github.com/thingsboard/thingsboard.github.io/pull/1174",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
164805557
|
Added WoT.setTimeout and WoT.setInterval
Uses WotAPI executor to schedule function calls.
From JS, use WoT object similar to window object in the browser.
Apparently a lambda with a variable number of arguments requires a heavy workaround with an Interface for each number of arguments :(
|
gharchive/pull-request
| 2016-07-11T10:24:23 |
2025-04-01T06:40:37.249665
|
{
"authors": [
"mkovatsc"
],
"repo": "thingweb/thingweb",
"url": "https://github.com/thingweb/thingweb/pull/17",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1853924501
|
Combine PRs #25, #26, #27, #28 and bump release to 1.2.4
This PR represents the internal build we plan to use. It combines features added or fixed by earlier feature-based PRs:
#25
#26
#27
#28
...including resolution of conflicts, since many of them touch the same lines, and adding some synergy (the exception class is included in two ways, building on top of PR #26; using the exception when handling HTTPS arguments).
It is recommended to merge this PR (and hopefully score the earlier four automatically) in one simple swoop :)
Thanks for the merge! Some more extensions are brewing; not sure I want to tediously separate them by subject however :)
no worries :)
|
gharchive/pull-request
| 2023-08-16T21:23:38 |
2025-04-01T06:40:37.261912
|
{
"authors": [
"jimklimov",
"thinksabin"
],
"repo": "thinksabin/DTrackAuditor",
"url": "https://github.com/thinksabin/DTrackAuditor/pull/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2297385427
|
Store function name in URL so we can share the link
The main purpose is to help the CS team reference the function in a contract with ease
PR-Codex overview
This PR introduces dynamic loading of contract functions based on URL parameters and sets the active tab accordingly in a React component.
Detailed summary
Added useRouter from Next.js for URL handling
Dynamically load contract functions based on URL parameter
Set the active tab based on the state mutability of the selected function
Added URL query parameter handling for function selection
Improved user experience by updating URL on function selection
✨ Ask PR-Codex anything about this PR by commenting with /codex {your question}
Having an issue with router.push for the contract intro page
Cannot run that page locally
will have to revisit this PR later
|
gharchive/pull-request
| 2024-05-15T09:45:31 |
2025-04-01T06:40:37.269659
|
{
"authors": [
"kien-ngo"
],
"repo": "thirdweb-dev/dashboard",
"url": "https://github.com/thirdweb-dev/dashboard/pull/2571",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1916425282
|
Downgrade sdk to patch update
Problem solved
Some packages were set as minor updates but they are minor upgrades
Changes made
[ ] Public API changes: list the public API changes made if any
[ ] Internal API changes: explain the internal logic changes
How to test
[ ] Automated tests: link to unit test file
[ ] Manual tests: step by step instructions on how to test
/release-pr
|
gharchive/pull-request
| 2023-09-27T22:28:14 |
2025-04-01T06:40:37.272085
|
{
"authors": [
"iketw"
],
"repo": "thirdweb-dev/js",
"url": "https://github.com/thirdweb-dev/js/pull/1677",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2352975860
|
refactor: inject storage and session handler in wallet connect connector [part 5]
PR-Codex overview
This PR updates wallet connection logic in React and React Native. It introduces webLocalStorage for web wallets and nativeLocalStorage for mobile wallets. The changes also include session handling updates.
Detailed summary
Introduces webLocalStorage for web wallets and nativeLocalStorage for mobile wallets.
Updates session handling for wallet connections.
Adds sessionHandler for handling URI redirection.
Enhances wallet creation process for different platforms.
The following files were skipped due to too many changes: packages/thirdweb/src/wallets/wallet-connect/controller.ts
✨ Ask PR-Codex anything about this PR by commenting with /codex {your question}
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#3314 👈
#3312
#3304
#3300
#3298
main
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @joaquim-verges and the rest of your teammates on Graphite
Merge activity
Jun 14, 3:05 PM EDT: Graphite rebased this pull request after merging its parent, because this pull request is set to merge when ready.
|
gharchive/pull-request
| 2024-06-14T09:42:49 |
2025-04-01T06:40:37.281111
|
{
"authors": [
"jnsdls",
"joaquim-verges"
],
"repo": "thirdweb-dev/js",
"url": "https://github.com/thirdweb-dev/js/pull/3314",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2552611140
|
parseDescription
Problem solved
Short description of the bug fixed or feature added
PR-Codex overview
This PR focuses on removing the parseDescription utility function from the BatchTable component and replacing its usage with a new approach that wraps the description in a ToolTipLabel component.
Detailed summary
Deleted the import statement for parseDescription from utils/parseDescription.
Replaced the accessor for the "Description" column in BatchTable:
Changed from using parseDescription(row.description) to a new JSX structure that includes ToolTipLabel and a paragraph element.
✨ Ask PR-Codex anything about this PR by commenting with /codex {your question}
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#4826 👈
#4825
main
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @kien-ngo and the rest of your teammates on Graphite
|
gharchive/pull-request
| 2024-09-27T10:45:46 |
2025-04-01T06:40:37.287777
|
{
"authors": [
"kien-ngo"
],
"repo": "thirdweb-dev/js",
"url": "https://github.com/thirdweb-dev/js/pull/4826",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2639021919
|
fix: DASH-410
FIXES: DASH-410
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#5328 👈
#5318
main
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @jnsdls and the rest of your teammates on Graphite
|
gharchive/pull-request
| 2024-11-06T19:26:46 |
2025-04-01T06:40:37.292263
|
{
"authors": [
"jnsdls"
],
"repo": "thirdweb-dev/js",
"url": "https://github.com/thirdweb-dev/js/pull/5328",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1761502449
|
[Pods] - Update card designs for kits
We have decided to go with Option 1 for the design using the cards with logos (please see the team channel for the link to the figma file)
Acceptance criteria
[x] Make the card titles more prominent
[x] Add the showcases implementation from Figma including the link that jumps to the showcases list
We have some more requested feedback to handle for this ticket before the PR can be merged.
Homepage
All these tasks relate to the cards on the homepage
[ ] Increase font weight of showcase title to be bold (at least 500)
[ ] The title and icon list should both be left-aligned / in line with each other - can increase the icon size or spacing as needed to make it fit the card box better
[ ] Need to investigate how the order of the icons for each card on the homepage shows up. Ideally we want the key tech items to show first, then the smaller tooling / other items.
Logos and Tech Stacks
The Tanstack logo is too big for its containing boxes and the edges bleed outside of the containing box - size needs to be shrunk
[ ] If we can, update/replace the BullMQ logo? It's a little hard to read / tell what it is
[ ] Solid and SolidStart - should use the full blue logo instead of the blue & green one it currently has
[ ] DenoDB has their own icon in their repo, get and replace the current one
[ ] Remove Oak from the tech stack list entirely (not really needed)
Kit Page
[ ] The card heading on mobile - the showcase and source links should be the full width of the card (will need to hide the spacer div)
[ ] On tablet sizes - when you click the "view showcases" link, the sticky header is over the section title, need to add scroll padding
Kit Page - Showcases Section (bottom of page)
[ ] GitHub text should have a capital H
[ ] Title should be base font size
[ ] Repo and App links should be small font size, no font weight
[ ] Remove the hr
[ ] Light mode: set card background to be gray-100 and border to be gray-400
[ ] Light mode: set icon border to be gray-400
|
gharchive/issue
| 2023-06-16T23:38:07 |
2025-04-01T06:40:37.301126
|
{
"authors": [
"jdwilkin4",
"lindakatcodes"
],
"repo": "thisdot/starter.dev",
"url": "https://github.com/thisdot/starter.dev/issues/1284",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2644418375
|
[Bug] in version 3.0.17 getting 'Could not copy C:\N\adame.firebase.ios.core\10.29.0.1' error
Description
On windows when deploying to Android device (release mode) I getting a series of this and like this warnings in the build console
Could not copy C:\N\adame.firebase.ios.core\10.29.0.1\lib\net6.0-ios16.1\Firebase.Core.resources\FirebaseCoreInternal.xcframework\ios-arm64_x86_64-simulator\FirebaseCoreInternal.framework\Modules\FirebaseCoreInternal.swiftmodule\arm64-apple-ios-simulator.private.swiftinterface
Also when I try to update via Nuget Package Manager I get similar error
Could not find a part of the path 'C:\N\adame.firebase.ios.core\10.29.0.1\lib\net6.0-ios16.1\Firebase.Core.resources\FirebaseCoreInternal.xcframework\ios-arm64_x86_64-simulator\FirebaseCoreInternal.framework\Modules\FirebaseCoreInternal.swiftmodule\arm64-apple-ios-simulator.private.swiftinterface'.
I tried to delete bin/obj, delete nuget packages then 'dotnet restore'
The long path limit is disabled, my nuget,temp and project folders are all in the root of C:/
I reverted the package version to 2.5.35 and now its working fine like before the update.
Workload versions:
maccatalyst 18.0.8303/8.0.100 VS 17.11.35327.3
android 34.0.113/8.0.100 VS 17.11.35327.3
ios 18.0.8303/8.0.100 VS 17.11.35327.3
aspire 8.1.0/8.0.100 VS 17.11.35327.3
maui-windows 8.0.82/8.0.100 VS 17.11.35327.3
Steps to Reproduce
Update to 3.0.17 and try to deploy to Android device
Expected Behavior
Build done, Deploy done
Actual Behavior
Build failed
Basic Information
Version with issue: 3.0.17
Last known good version: 2.5.35
Strange, because the replacement of the firebase ios sdk, which is the major change in version 3.0.x, should not have any influence on the Android side.
Does the error also occur if you‘re connected to the mac build host?
Is your app using .NET 7 or 8?
Which MauiVersion do you use?
On Mac I can update to 3.0.17, but on startup the app throws an initialization error (from a different package? I'm using AdMob too)
Loaded assembly: /private/var/containers/Bundle/Application/4C0C72EC-A844-4DC0-BC07-57C407C885EC/Maui.app/hu/Maui.resources.dll [External]
2024-11-09 20:05:25.572 Maui[3411:699026] *** Terminating app due to uncaught exception 'GADInvalidInitializationException', reason: 'The Google Mobile Ads SDK was initialized without AppMeasurement. Google AdMob publishers, follow instructions here: https://googlemobileadssdk.page.link/admob-ios-update-plist to include the AppMeasurement framework and set the -ObjC linker flag. Google Ad Manager publishers, follow instructions here: https://googlemobileadssdk.page.link/ad-manager-ios-update-plist'
*** First throw call stack:
(0x1a9a6540c 0x1a2d41c28 0x1a9bbee8c 0x1003c1608 0x100370ecc 0x1b080b7a8 0x1b080c780 0x1b07e39f8 0x1b07f0c68 0x1b07f1430 0x1f480ab94 0x1f480a720)
When I revert the package to 2.5.35 everything works fine on Mac/iOS too
On windows I checked the update again via package manager console and ran this command:
Install-Package Plugin.FirebasePushNotifications
Got this error:
NotFound https://nuget.pkg.github.com/Th3L0x/download/adame.firebase.ios.core/index.json 295ms
OK https://nuget.devexpress.com/api/FindPackagesById()?id='AdamE.Firebase.iOS.Core'&semVerLevel=2.0.0 719ms
Install-Package : Could not find a part of the path 'C:\N\adame.firebase.ios.core\10.29.0.1\lib\net6.0-ios16.1\Firebase.Core.resources\FirebaseCoreInternal.xcframework\ios-arm64_x86_
64-simulator\FirebaseCoreInternal.framework\Modules\FirebaseCoreInternal.swiftmodule\arm64-apple-ios-simulator.private.swiftinterface'.
Poject version: .NET 8
MAUI version: maui 8.0.82/8.0.100 SDK 8.0.400
Do you have any news or progress on this?
No news right now. I'm working on a Macos with Jetbrains Rider as IDE at the moment. I can try to install the nuget on a Windows pc, but it will take time until I can do so. The one thing I can clearly say about this issue is, that it is probably the Firebase iOS SDK (Nuget adame.firebase.ios.core) that causes the troubles. If you have a proposal on how to change the code in this repository to make it work, let me know. Unfortunately, I cannot change code inside adame.firebase.ios.core or how MAUI bindings are used/compiled.
Okay, so the problem is with the AdamEssenmacher/GoogleApisForiOSComponents package
(adame.firebase.ios.core)
This is a bug or something like that in Visual Studio (Windows) itself; it cannot handle long paths no matter where you put the NuGet package folder, the temp folder, or the main project path.
I call these commands in PowerShell:
dotnet nuget locals all -c # Clear all Nuget cache
cd $env:localappdata # Go to your AppData\Local folder
Get-ChildItem -Filter "*Xamarin*" # Check for your XamarinBuildDownloadCache folder
rm -Force -Recurse XamarinBuildDownloadCache # Delete that folder
Get-ChildItem -Filter "*Xamarin*" # Confirm that it has been deleted
then from cmd run
dotnet restore
dotnet build
From cmd I can build debug version, release version, deploy to device, publish.
From VisualStudio only the debug version can be built, but at least I can debug the code from Windows.
So I had to write some PowerShell script to automate the deploy/publish process until someone fixes this.
This iOS error on Mac
Loaded assembly: /private/var/containers/Bundle/Application/4C0C72EC-A844-4DC0-BC07-57C407C885EC/Maui.app/hu/Maui.resources.dll [External]
was solved by adding this key to the Info.plist:
<key>GADIsAdManagerApp</key>
<true/>
Thanks for taking the time and documenting the solution!
|
gharchive/issue
| 2024-11-08T15:47:24 |
2025-04-01T06:40:37.345377
|
{
"authors": [
"Th3L0x",
"thomasgalliker"
],
"repo": "thomasgalliker/Plugin.FirebasePushNotifications",
"url": "https://github.com/thomasgalliker/Plugin.FirebasePushNotifications/issues/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2244561131
|
⚠️ Syncthing-KA-BI has degraded performance
In e223790, Syncthing-KA-BI ($URL_SYNCTHING_MERZ_NIMBUS_BI) experienced degraded performance:
HTTP code: 200
Response time: 9765 ms
Resolved: Syncthing-KA-BI performance has improved in 379c24d after 4 minutes.
|
gharchive/issue
| 2024-04-15T20:32:48 |
2025-04-01T06:40:37.368828
|
{
"authors": [
"thomasmerz"
],
"repo": "thomasmerz/upptime",
"url": "https://github.com/thomasmerz/upptime/issues/2645",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
367533632
|
provide Haïti to France mapping
Currently, only France is reactive to user input. Haïti is not.
Situation :
Select bottom part of the interface (Haïti part). Zoom in on Haïti land. Click on any location in Haïti.
Problem :
Nothing happens (in fact, the closest territory in France is computed)
Expected behavior :
The territory in Haïti is computed and its border is drawn.
The twin territory is drawn in France.
To implement this enhancement, it is necessary to compute what is the closest country, France or Haïti.
To implement this enhancement, one needs to understand the control flow when the user clicks on the map
|
gharchive/issue
| 2018-10-07T08:39:33 |
2025-04-01T06:40:37.381303
|
{
"authors": [
"thomaspeugeot"
],
"repo": "thomaspeugeot/tkv",
"url": "https://github.com/thomaspeugeot/tkv/issues/11",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
657640271
|
Whitelist multiple hosts using docker-compose
Hi!.
This is mostly a question as I'm getting to know the application. The README specifies that it is possible to add multiple whitelisted users. Unfortunately it doesn't clarify what happens if I want to add a list of whitelisted users from an env var. Is it allowed? Should I use comma-separated values?
This is my docker-compose:
traefik-forward-auth:
  image: thomseddon/traefik-forward-auth:2.1.0
  container_name: traefik-forward-auth
  networks:
    - main
  environment:
    PROVIDERS_GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
    PROVIDERS_GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET}
    PROVIDERS_OIDC_ISSUER_URL: ${OIDC_ISSUER}
    SECRET: secret_key
    AUTH_HOST: ${AUTH_HOST}
    COOKIE_DOMAIN: ${COOKIE_DOMAINS}
    WHITELIST: ${WHITELIST}
  labels:
    - "traefik.enable=true"
    - "traefik.backend=traefik-forward-auth"
    - "traefik.frontend.entryPoints=http,https"
    - "traefik.frontend.rule=Host:auth.${DOMAIN_URL}"
    - "traefik.port=4181"
    - "traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181"
    - "traefik.frontend.auth.forward.trustForwardHeader=true"
My whitelist:
export WHITELIST=adam@example.com,john@example.com
Expected behaviour:
Allow adam and john to log in.
Actual result:
Not Authorized.
Have a look at https://github.com/thomseddon/traefik-forward-auth/wiki/v2-Upgrade-Guide#option-changes. I've not set it using an environment variable, but as a command line parameter --whitelist=adam --whitelist=john should work.
Well actually looking at the code it seems you can use comma separated strings:
https://github.com/thomseddon/traefik-forward-auth/blob/174353743876301fa42a27d9e42ec528bcf06ebb/internal/config.go#L41
And here they test the two strings parsed with comma.
https://github.com/thomseddon/traefik-forward-auth/blob/174353743876301fa42a27d9e42ec528bcf06ebb/internal/config_test.go#L121
So yeah, following the code it should work. It even has a unit test...
There is a test for using the comma separated values via an env var that's currently passing: https://github.com/thomseddon/traefik-forward-auth/blob/master/internal/config_test.go#L218
Could you post the full debug log from startup (it will print out the config it's running with)
Think I've found the issue. I'm ussing version 2.1.0 but the comma separated whitelist has been added in version 2.2.0. I'll upgrade and test again.
It's working correctly in 2.2.0. Closing.
|
gharchive/issue
| 2020-07-15T20:40:36 |
2025-04-01T06:40:37.398090
|
{
"authors": [
"dantebarba",
"tdorsey",
"thomseddon"
],
"repo": "thomseddon/traefik-forward-auth",
"url": "https://github.com/thomseddon/traefik-forward-auth/issues/150",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1099465591
|
Difficulties testing locally (on linux) - keystore issues
Trying to build the source code and run it locally, but it doesn't seem to load the keystore correctly.
Issue description
Note, I'm using a very old keystore, I think from April 2021.
I followed the instructions in the readme. yarn, yarn prebuild etc.
I copied an existing keystore I'm using with thorswap and the release version of asgardex-electron, as follows (mkdir was necessary, as the storage subfolder was missing): mkdir ~/.config/Electron/storage/ && cp my-keystore.txt ~/.config/Electron/storage/keystore.json
I created in the same folder (git repo folder) a .env file containg REACT_APP_WALLET_PASSWORD=my_keystore_password
I run yarn dev
I browse to http://localhost:3000/
I get this error:
Thanks for letting me know what to try to use my keystore locally... cheers
Your Environment
Ubuntu Linux 20.04
NodeJS 16
I browse to http://localhost:3000/
Since ASGARDEX runs within an Electron environment, using Node behind the scenes to get access to the file system, you can't run it in a browser. Please follow the instructions in the README on how to run the Electron app locally. If you still have issues, feel free to join THORChain's Discord (#asgardex-desktop) https://discord.gg/4AcdkEBQ to have a chat there...
|
gharchive/issue
| 2022-01-11T17:56:34 |
2025-04-01T06:40:37.419705
|
{
"authors": [
"cryptotester",
"veado"
],
"repo": "thorchain/asgardex-electron",
"url": "https://github.com/thorchain/asgardex-electron/issues/2016",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
380872032
|
THORN-2190: bumped SmallRye OpenApi
[X] Have you followed the guidelines in our Contributing document?
[X] [v2] Have you created a JIRA and used it in the commit message?
[ ] [v4] Have you created a GitHub Issue and used it in the commit message?
[X] Have you checked to ensure there aren't other open Pull Requests for the same issue?
[ ] Have you built the project locally prior to submission with mvn clean install?
Windows build failed because of disk space.
|
gharchive/pull-request
| 2018-11-14T20:25:00 |
2025-04-01T06:40:37.440376
|
{
"authors": [
"Ladicek",
"michalszynkiewicz"
],
"repo": "thorntail/thorntail",
"url": "https://github.com/thorntail/thorntail/pull/1172",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
67426362
|
Webfont smoothing for FireFox
In _typography.scss, it would probably make sense to include -moz-osx-font-smoothing: grayscale; right after -webkit-font-smoothing: antialiased;, to support FireFox.
Related: http://maximilianhoffmann.com/posts/better-font-rendering-on-osx & http://stackoverflow.com/questions/11459746
Actually, we are looking to remove all font smoothing. https://github.com/thoughtbot/bitters/pull/180 A lot of font smoothing behavior is dependent on a number of factors, including what app you are using and the display you're using to view the site.
One of the bigger factors for bitters is that we don't know what fonts a designer/dev is going to want to use with their site. They may use a font that has good hinting so it is difficult to make blanket rules like 'all antialiased all the time'. For now we've decided to remove all smoothing.
@drtimofey let me know what you think.
With #180 merged, can this be closed? Should we add -moz-osx-font-smoothing just to the button styles, as we do with -webkit-font-smoothing?
Closing
|
gharchive/issue
| 2015-04-09T18:46:28 |
2025-04-01T06:40:37.560276
|
{
"authors": [
"drtimofey",
"tysongach",
"whmii"
],
"repo": "thoughtbot/bitters",
"url": "https://github.com/thoughtbot/bitters/issues/166",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
122148380
|
Defining $all-text-inputs-invalid ?
I'm just getting started with Bourbon, so there may be a better way to do this. But it seems to me useful to define a new all-text-inputs-* variable for styling invalid input. Along the lines of Bourbon Text Inputs, I have done this:
$all-text-inputs-invalid: assign-inputs($text-inputs-list, invalid);
which enables me to write this:
#{$all-text-inputs-invalid} {
border: 1px solid #f00;
}
I was surprised not to find something like this in Bourbon already. Would it be a useful addition?
That does seem like an oversight. Go for it!
|
gharchive/issue
| 2015-12-14T22:40:05 |
2025-04-01T06:40:37.562388
|
{
"authors": [
"joshuaogle",
"tesujimath"
],
"repo": "thoughtbot/bourbon",
"url": "https://github.com/thoughtbot/bourbon/issues/804",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
200563204
|
Remove unused nested create routes
In the routes file, we have two create routes defined for the password resource. One on the password resources and another one as nested member routes for users, but we only use the resources one to reset user's password.
This commit removes the unused nested create routes for the password resource.
Ref:
https://github.com/thoughtbot/clearance/blob/master/config/routes.rb#L3-L5
https://github.com/thoughtbot/clearance/blob/master/config/routes.rb#L16
duplicate of #720
|
gharchive/pull-request
| 2017-01-13T07:17:02 |
2025-04-01T06:40:37.565091
|
{
"authors": [
"abunashir",
"derekprior"
],
"repo": "thoughtbot/clearance",
"url": "https://github.com/thoughtbot/clearance/pull/729",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
355410492
|
How do I install the missing dependency, and after deleting the heroku file how do I redo the "brew link"?
==> Downloading https://homebrew.bintray.com/bottles/heroku-7.12.1.high_sierra.bottle.tar.gz
==> Pouring heroku--7.12.1.high_sierra.bottle.tar.gz
Error: The brew link step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/heroku
Target /usr/local/bin/heroku
already exists. You may want to remove it:
rm '/usr/local/bin/heroku'
To force the link and overwrite all conflicting files:
brew link --overwrite heroku
To list all files that would be deleted:
brew link --overwrite --dry-run heroku
Possible conflicting files are:
/usr/local/bin/heroku -> /usr/local/lib/heroku/bin/heroku
==> Summary
🍺 /usr/local/Cellar/heroku/7.12.1: 18,332 files, 53.9MB
Installing heroku has failed!
Installing parity
Installing hub
Installing imagemagick
Installing qt@5.5
Installing libyaml
Installing coreutils
Using yarn
Password:
Installing gpg-suite
Installing postgres
Installing redis
Homebrew Bundle failed! 1 Brewfile dependency failed to install.
failed
brew link --overwrite --dry-run heroku
Possible conflicting files are:
/usr/local/bin/heroku -> /usr/local/lib/heroku/bin/heroku
==> Summary
🍺 /usr/local/Cellar/heroku/7.12.1: 18,332 files, 53.9MB
Installing heroku has failed!
Installing parity
Installing hub
Installing imagemagick
Installing qt@5.5
Installing libyaml
Installing coreutils
Using yarn
Installing gpg-suite
Installing postgres
Installing redis
Homebrew Bundle failed! 1 Brewfile dependency failed to install.
failed
I think this is like issue #547, which was resolved by #549. I'll close for now, but feel free to reopen if you see this issue again. Thanks!
|
gharchive/issue
| 2018-08-30T03:30:23 |
2025-04-01T06:40:37.572127
|
{
"authors": [
"composerinteralia",
"dacasanovat"
],
"repo": "thoughtbot/laptop",
"url": "https://github.com/thoughtbot/laptop/issues/544",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
143769165
|
Is it necessary to include clear:both to an outer-container?
I have a section underneath my header and I am noticing that, when viewing the section in Chrome's inspector, it touches the top of the viewport.
When I add clear:both; to the section, the section goes properly underneath the header.
Inside your outer-container you should also have an element with span-columns, even if it's span-columns(12 of 12) or another full-width grid column. The outer-container already contains a clearfix, and that will give height to the container equal to the size of the floated columns within.
@christiantype does that help?
Closing issue, reopen if you still need help.
|
gharchive/issue
| 2016-03-27T02:23:24 |
2025-04-01T06:40:37.574309
|
{
"authors": [
"christiantype",
"wardpenney",
"whmii"
],
"repo": "thoughtbot/neat",
"url": "https://github.com/thoughtbot/neat/issues/434",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
55024935
|
Is there a specific version of JQuery required?
Additionally, if there is a specific version of JQuery can you please add it to the README? Thanks.
@ravenwilde did you run into trouble with a specific version of jQuery? Our intent is that Refills should always work with the latest version, or at least the version used on the site!
No problems, I'm just currently working on backwards compatibility for IE 8 and contemplating my options. Any advice?
@ravenwilde You should use 1.x of jQuery, since 2.x intentionally doesn't support IE8. http://jquery.com/browser-support/
yeah that's about what I'd determined... thanks for the confirmation though :)
|
gharchive/issue
| 2015-01-21T14:24:53 |
2025-04-01T06:40:37.579651
|
{
"authors": [
"Magnus-G",
"cllns",
"ravenwilde"
],
"repo": "thoughtbot/refills",
"url": "https://github.com/thoughtbot/refills/issues/218",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1890385780
|
configure: put the "advanced mode" in a new tab instead of a link
Suggestion from Lance. On the patch selection page, instead of having just the list of languages with a small and scary "advanced mode..." link, have two tabs: the default tab would be "Languages", with the current default UI, and the other tab could be "Other patches", "Gameplay patches", "All patches", or something else that tells right away that non-language patches exist, and that could invite a curious user to try some of them. Inside this tab, we would have the same advanced UI, just with a more enticing name.
The aim of this change is to improve discoverability of non-translation patches.
Because switching between the two tabs feels more natural with this change, the simple view (with only the languages) should better reflect any change made in the advanced view. For example, below the radio buttons, write the list of patches added from advanced mode, and have no radio button checked if the user removed the language from the advanced view.
If we want to try it, we need to implement it in a branch and to do some user testing before merging it:
How does a casual user tasked with installing a translation patch react? Ideally, they should either ignore the new tab, or go to the new tab and quickly go back to the original one (if they go to the new tab and use the advanced mode successfully and without struggle, they're probably too advanced for this test).
How does an user without any set goal act? Do they want to explore that part? Do they ignore it? Should probably be tested on several users of various skill level.
We tell the test user "you can install the patch xxx by xxx using thcrap" (that's the kind of scenario that a Touhou fan seeing a patch creator advertising their patch on Twitter would encounter), and let them try to install it without any further help.
Scenario 3 can be done right after scenario 1 by the same user - they did something simple, we ask for a more advanced task, their prior knowledge won't help. Scenario 2 should be done by a different user / group of users, because both 1 and 2 need a user who never saw the new UI with tabs (or even someone who never used thcrap configure v3 at all).
The part about making sure the simple tab shows the changes in the advanced tab don't need to exist for the user testing, I'm mostly concerned on how they will react to the tabbed UI.
See https://github.com/tudi20/thcrap/tree/page2-tabbed
|
gharchive/issue
| 2023-09-11T12:19:28 |
2025-04-01T06:40:37.607320
|
{
"authors": [
"Tudi20",
"brliron"
],
"repo": "thpatch/thcrap",
"url": "https://github.com/thpatch/thcrap/issues/237",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
835221893
|
ETCD reachability is only checked if there is a single endpoint
If more than one endpoint is set, there is no check to see if the cluster is reachable. Since we rely on library behavior for this, it might be best to somehow add a check of our own for this
Closing as wontfix, ETCD is deprecated with the introduction of #46, and support for it will be dropped in the next full release
|
gharchive/issue
| 2021-03-18T20:24:49 |
2025-04-01T06:40:37.625706
|
{
"authors": [
"LeeSmet"
],
"repo": "threefoldtech/0-stor_v2",
"url": "https://github.com/threefoldtech/0-stor_v2/issues/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1663705657
|
Support for hex based TF Chain secrets
Many farms have been created with the TF Connect App, where the farmer only receives a hex based secret for the TF Chain account controlling the twin and the farm. It would be nice to support these farms in the farmerbot.
[x] rmb-rs - already supports this
[x] grid3_client_ts - feature request created here
With support in both, it would just be a matter of adding some logic to detect which form is provided and invoke the services appropriately.
It actually already works, just by subbing the hex secret into the MNEMONIC field in the .env file. All that's left, in that case is to add a note in the documentation that a "mnemonic or hex secret" is accepted.
Ok. I will update the documentation. I'll rename the MNEMONIC field to SECRET, I think it will be more appropriate.
|
gharchive/issue
| 2023-04-12T04:07:51 |
2025-04-01T06:40:37.628574
|
{
"authors": [
"brandonpille",
"scottyeager"
],
"repo": "threefoldtech/farmerbot",
"url": "https://github.com/threefoldtech/farmerbot/issues/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
885561978
|
nginx configurations shouldn't use StoredFactory
It needs to keep getting regenerated from the packages, always. We hit this issue after migrating all the packages to a single bottle server: all of the existing old, preconfigured locations still exist on the nginx locations object, meaning the config gets regenerated every time, with the old locations too. The suggestion is to make sure to delete the locations in 3bot start (as an upgrade step) and not use a StoredFactory for the nginx SAL.
verifying this
Verified
Branch
development
Commit ID
73616bac6fad9857c0f915ecba2e6a7f54307412
|
gharchive/issue
| 2021-05-11T01:23:38 |
2025-04-01T06:40:37.630754
|
{
"authors": [
"RafyAmgadBenjamin",
"xmonader"
],
"repo": "threefoldtech/js-sdk",
"url": "https://github.com/threefoldtech/js-sdk/issues/3083",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
776125013
|
init stellar gui
Description
Advanced desktop wallet for stellar for the sdk framework
entrypoint: poetry run stellargui
jsng 'j.tools.stellargui.run()'
Codecov Report
Merging #2055 (ce0d280) into development (57fb5fe) will decrease coverage by 0.01%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## development #2055 +/- ##
===============================================
- Coverage 32.47% 32.45% -0.02%
===============================================
Files 136 136
Lines 13192 13198 +6
===============================================
Hits 4284 4284
- Misses 8908 8914 +6
Impacted Files                                            Coverage Δ
jumpscale/packages/tfgrid_solutions/chats/flist.py        33.57% <0.00%> (ø)
...umpscale/packages/tfgrid_solutions/chats/ubuntu.py     40.20% <0.00%> (ø)
jumpscale/sals/marketplace/apps_chatflow.py               13.00% <0.00%> (-0.17%) ↓
jumpscale/sals/marketplace/deployer.py                    12.35% <0.00%> (-0.04%) ↓
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e02fc1a...cd2a24d. Read the comment docs.
|
gharchive/pull-request
| 2020-12-29T23:06:55 |
2025-04-01T06:40:37.640656
|
{
"authors": [
"codecov-io",
"xmonader"
],
"repo": "threefoldtech/js-sdk",
"url": "https://github.com/threefoldtech/js-sdk/pull/2055",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
446114901
|
Builders : create sandbox
create sandbox for the following builders :
BuilderBitcoin
BuilderEthereum
BuilderRipple
BuilderMinio
BuilderRestic
BuilderSyncthing
BuilderOpenResty
@Dinaamagdy , please update this card
Finished; the code can be found here:
8fe86d202d74fe4e38e614a1924ae4f6ae3ce89c
|
gharchive/issue
| 2019-05-20T13:21:40 |
2025-04-01T06:40:37.642592
|
{
"authors": [
"Dinaamagdy",
"rkhamis"
],
"repo": "threefoldtech/jumpscaleX",
"url": "https://github.com/threefoldtech/jumpscaleX/issues/514",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
421078818
|
Big fail to upload properly
see: https://docs.grid.tf/threefold/proj_bancadati/issues/90
Ok, so this is actually not a bug, but just a limitation of the archive with its current config. Let me explain: when uploading a file, we process it through a pipeline that splits the file and then erasure-codes it over multiple shards. All of this generates metadata so we can later reconstruct the file from that metadata.
What happens here is that we reach a metadata size that is bigger than what 0-db can accept. 0-db has a limit of around 8 MiB of data per write call. So here, when the whole file is uploaded, we try to write the metadata to the tlog, but 0-db refuses because this metadata block is too big. Since the write to the tlog failed, minio signals the write of the file as failed too.
An easy way to solve this is to change the configuration of minio itself. If you need to store bigger files, you can for example set the BlockSize of the minio configuration to a higher value. With a bigger BlockSize, minio will generate less metadata and thus you can store bigger files.
I've created a small sheet where you can play with the different configurations to see what fits your needs.
https://docs.google.com/spreadsheets/d/1M8lTpN00yFul4NH2el3gJ0bN-JJC4-o5HT721GWUYV4/edit?usp=sharing On this sheet, you can edit the blue cells and it will compute the size of the metadata generated for the config and file size you specified. If the result is highlighted in orange, that means your config doesn't support the file size and you need to tweak it a bit.
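For a rough feel of the trade-off, here is a back-of-the-envelope sketch in Python (the per-block overhead below is a made-up placeholder; the real figure depends on the erasure-coding and shard configuration, which is what the sheet computes):

# Bigger BlockSize -> fewer blocks per file -> less metadata to write to the tlog.
def estimated_metadata_bytes(file_size, block_size, per_block_overhead=512):
    # per_block_overhead is an assumed placeholder, not the real 0-db figure
    n_blocks = -(-file_size // block_size)  # ceiling division
    return n_blocks * per_block_overhead

ZDB_WRITE_LIMIT = 8 * 1024 * 1024  # roughly 8 MiB per 0-db write call

for block_size_mib in (1, 4, 16, 64):
    meta = estimated_metadata_bytes(50 * 1024**3, block_size_mib * 1024**2)
    status = "fits" if meta < ZDB_WRITE_LIMIT else "too big for a single tlog write"
    print(f"{block_size_mib} MiB blocks -> ~{meta} bytes of metadata ({status})")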
|
gharchive/issue
| 2019-03-14T15:14:24 |
2025-04-01T06:40:37.645597
|
{
"authors": [
"zaibon"
],
"repo": "threefoldtech/minio",
"url": "https://github.com/threefoldtech/minio/issues/78",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1721943960
|
Incorrect resource packages for multiple solutions
Description
The resource packages for multiple solutions are incorrectly configured.
Caprover
according to the docs the packages should be:
Minimum: {cpu: 1, memory: 1024, diskSize: 50 }
Standard: {cpu: 2, memory: 1024 * 2, diskSize: 100 }
Recommended: {cpu: 4, memory: 1024 * 4, diskSize: 250 }
and currently they are
Funkwhale
according to the docs the packages should be:
Minimum: {cpu: 2, memory: 1024, diskSize: 50 }
Standard: {cpu: 2, memory: 1024 * 2, diskSize: 100 }
Recommended: {cpu: 4, memory: 1024 * 4, diskSize: 250 }
and currently they are
Mattermost
according to the docs the packages should be:
Minimum: {cpu: 1, memory: 1024 * 2, diskSize: 10 }
Standard: {cpu: 2, memory: 1024 * 4, diskSize: 50 }
Recommended: {cpu: 4, memory: 1024 * 4, diskSize: 100 }
and currently they are
Discourse
according to the docs the packages should be:
Minimum: {cpu: 1, memory: 1024 * 2, diskSize: 10 }
Standard: {cpu: 2, memory: 1024 * 2, diskSize: 50 }
Recommended: {cpu: 4, memory: 1024 * 4, diskSize: 100 }
and currently they are
Taiga
according to the docs the packages should be:
Minimum: { cpu: 2, memory: 1024 * 2, diskSize: 100 }
Standard: { cpu: 2, memory: 1024 * 4, diskSize: 150 }
Recommended: { cpu: 4, memory: 1024 * 4, diskSize: 250 }
and currently they are
ownCloud
according to the docs the packages should be:
Minimum: { cpu: 2, memory: 1024 * 16, diskSize: 250 }
Standard: { cpu: 2, memory: 1024 * 16, diskSize: 500 }
Recommended: { cpu: 4, memory: 1024 * 16, diskSize: 1000 }
and currently they are
Subsquid
according to the docs the packages should be:
Minimum: { cpu: 1, memory: 1024 , diskSize: 50 }
Standard: { cpu: 2, memory: 1024 * 2, diskSize: 100 }
Recommended: { cpu: 4, memory: 1024 * 4, diskSize: 250 }
and currently they are
Casperlabs
according to the docs the packages should be:
Minimum: {cpu: 1, memory: 1024 * 4, diskSize: 100 }
Standard: {cpu: 2, memory: 1024 * 16, diskSize: 500 }
Recommended: {cpu: 4, memory: 1024 * 32, diskSize: 1000 }
and currently they are
Wordpress
according to the docs the packages should be:
Minimum: { cpu: 1, memory: 2048 , diskSize: 10 }
Standard: { cpu: 2, memory: 2048 , diskSize: 50 }
Recommended: { cpu: 4, memory: 4096 , diskSize: 100 }
and currently they are
Umbrel
according to the docs the packages should be:
Minimum: { cpu: 2, memory: 2048 , diskSize: 10 }
Standard: { cpu: 2, memory: 4096 , diskSize: 50 }
Recommended: { cpu: 4, memory: 4096 , diskSize: 100 }
and currently they are
Status: pr #351 ready for review
|
gharchive/issue
| 2023-05-23T11:27:37 |
2025-04-01T06:40:37.659607
|
{
"authors": [
"0oM4R",
"mohamedamer453"
],
"repo": "threefoldtech/tfgrid-sdk-ts",
"url": "https://github.com/threefoldtech/tfgrid-sdk-ts/issues/255",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
167178125
|
alternatives to 'pretty' classnames
because we use the data attribute name purely as an indexing scheme and don't 'accept' names in our function calls, we don't generate 'pretty' classnames, which might make debugging hard. keeping this issue open for alternate solutions.
first attempt - a special prop label which gets added as the data-* attribute's value. not too bad!
have something better now, with tests. closing!
|
gharchive/issue
| 2016-07-23T10:19:21 |
2025-04-01T06:40:37.669516
|
{
"authors": [
"threepointone"
],
"repo": "threepointone/react-css",
"url": "https://github.com/threepointone/react-css/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2764111066
|
Reward model training implementation fails
Hello. The following is my training script for the reward model, which follows the settings in the paper (also 8 A100 GPUs). By the way, training took me at most 20 min, but you report 1.5 h, which is weird.
My average statistics for MN, CR, CD & FA are 0.59, 0.54, 0.57, 0.74; there seems to be a large gap compared to your results.
Would you mind pointing out the differences or sharing your training settings? Thanks.
### model
model_name_or_path: meta-llama/Llama-3.1-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]
### dataset
dataset_dir: /dataset/reward_data/
dataset: trainset_reward_llama
template: llama3
cutoff_len: 2048
max_samples: 2000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b_sft/full/sft
logging_steps: 10
save_steps: 28
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 8
gradient_accumulation_steps: 1
learning_rate: 1.0e-5
num_train_epochs: 3
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
ddp_timeout: 180000000
save_only_model: true
There are several problems with your configuration:
The cutoff_len should be set to 8196 to avoid cut-off (Though some data may exceed the 8196 limitation).
Try to enlarge the max_samples if the trainset contains more than 2000 samples.
If you run out of memory, try to set per_device_train_batch_size to 1.
Set the num_train_epochs to a bigger one like 5.
Besides, could you report which framework (like vLLM or SGLang) you used to host the reward model?
Thanks for your reply.
I have made changes based on your advice, but made little progress.
I followed all the settings you mentioned in the paper, using llama_factory.
The result after 3 epochs are:
{'MN': 0.48, 'CR': 0.6333333333333334, 'CD': 0.5933333333333333, 'FA': 0.7333333333333334}
after 5 epochs are:
{'MN': 0.6249999999999999, 'CR': 0.42500000000000004, 'CD': 0.7166666666666666, 'FA': 0.8}
I thought the key point was the cutoff_len; however, it does not work even though I have set it to 8196. I also investigated performance at different inference temperatures, and it shows that temperature has little influence.
As I mentioned, you use 1.5 h for training, but mine only took 20 min, so I wonder whether there is any technical gap between us that could cause the difference.
Here is the manipulated file.
### model
model_name_or_path: meta-llama/Llama-3.1-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]
### dataset
dataset_dir: /dataset/reward_data/
dataset: trainset_reward_llama
template: llama3
cutoff_len: 8096
max_samples: 5000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3_1-8b_bsft/full/sft
logging_steps: 10
save_steps: 84
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 4
gradient_accumulation_steps: 1
learning_rate: 1.0e-5
num_train_epochs: 5
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
ddp_timeout: 180000000
save_only_model: true
I would appreciate it if you could help.
Happy New Year First!
Here are the settings we used to train our reward model.
### model
model_name_or_path: meta-llama/Meta-Llama-3.1-8B-Instruct/
### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json
### dataset
dataset: active_rmllama
template: llama3
cutoff_len: 8192
max_samples: 100000
overwrite_cache: true
preprocessing_num_workers: 16
## output
output_dir: saves/llama3-8b-re/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-5
num_train_epochs: 5.0
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
ddp_timeout: 180000000
We rerun the training and get following scores:
Final result:
Missed-Need (MN) 0.8
Correct-Rejection (CR) 0.9333333333333333
Correct-Detection (CD) 0.9666666666666667
False-Alarm (FA) 1.0
Average: 0.925
Confusion Matrix:
{'TP': 57, 'FP': 6, 'TN': 54, 'FN': 3}
Accuracy: 0.925
Precision: 0.9047619047619048
Recall: 0.95
F1: 0.926829268292683
Besides, we are uploading the reward model to the huggingface.
As for the training time, the 1.5 hours is the worst-case estimate, with CPU offloading.
Thanks for your reply. I have made changes based on your advice. But have little progress. I follow all settings you mentioned in the paper by llama_factory.
The result after 3 epochs are: {'MN': 0.48, 'CR': 0.6333333333333334, 'CD': 0.5933333333333333, 'FA': 0.7333333333333334} after 5 epochs are: {'MN': 0.6249999999999999, 'CR': 0.42500000000000004, 'CD': 0.7166666666666666, 'FA': 0.8}
I thought the key point is the cutoff_len, however it does not work even I have set it as 8196. I also investigate performance in different inference temperatures, it shows that temperature has little influence.
As I mentioned that you use 1.5h for training, however I only cost 20 min, so I wonder whether there is any technique gap between us, which could cause the difference.
Here is the manipulated file.
### model
model_name_or_path: meta-llama/Llama-3.1-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]
### dataset
dataset_dir: /dataset/reward_data/
dataset: trainset_reward_llama
template: llama3
cutoff_len: 8096
max_samples: 5000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3_1-8b_bsft/full/sft
logging_steps: 10
save_steps: 84
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 4
gradient_accumulation_steps: 1
learning_rate: 1.0e-5
num_train_epochs: 5
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
ddp_timeout: 180000000
save_only_model: true
I would appreciate it if you could help.
We reran the experiment with your settings; it seems that the major problem is the large batch size. As the training set is relatively small, to better optimize the model, we recommend using a smaller batch size and having more optimization steps. We will fix the parameters in our paper. Thanks very much for helping us find the mismatched content.
Happy new year and thanks for your detailed explanation. :)
Sorry to bother you in the first day of the year. Please enjoy your first day of 2025 first!
Changing the batch size does help, but I still cannot reach the experimental results you show.
This brings me to the reward evaluation difference. For this part, my code is adapted from eval/reward_model_scoring.py, which is only available for evaluating closed-source LLMs.
For the prompt template, I simply use the "system: {SYSTEM}\n user: {user_prompt}\n assistant:" format. May I ask whether there is any difference in the evaluation part?
I would appreciate it if you were willing to share detailed code for the reward-scoring script for open-source LLMs!
By the way, I have found a tip:
You mentioned the prompt is
< Task >
Evaluate the task proposed by the proactive assistant as the user.
</ Task >
< Rule >
0. Analyze the current observation to understand your current situation
and requirements.
1. If the proposed task is ‘ null ‘ ( indicating no task is proposed under
the current observation ), follow these steps :
- Accept the ‘ null ‘ task if you believe there is no need for a task.
- Reject the ‘ null ‘ task if you believe a task is needed .
2. Minimize interruptions from the assistant by only accepting tasks that
are valuable.
3. Evaluate the current observation and make a judgment on the proposed
task accordingly.
</ Rule >
< Format >
You should answer with the following JSON format :
{
" thought ": " Give your thoughts first , then provide the judgment of the
task ." ,
" judgment ": " accepted or rejected "
}
</ Format >
However, the LLM sometimes misunderstands the meaning of accepted and rejected as whether a proactive suggestion is needed or not. So I changed the prompt by simply adding
- Accept the ‘ null ‘ task if you believe there is no need for a task.
- Reject the ‘ null ‘ task if you believe a task is needed .
at the end of the " judgment ": " accepted or rejected " line. The performance increases; this is quite helpful for un-tuned open-source LLMs.
Happy new year and thanks for your detailed explanation. :) Sorry to bother you in the first day of the year. Please enjoy your first day of 2025 first!
Change of batch size does work, but can not reach the experimental result as you display. This takes me to the reward evaluation difference. For this part, my code is manipulated based on eval/reward_model_scoring.py, which is only available for the evaluation of the close-sourced LLM.
For the prompt template, I simply use the "system: {SYSTEM}\n user: {user_prompt}\n assistant:" format. May I ask whether there is any difference for the evaluation part?
I would be appreciate it if you were willing to share detailed code for the reward_scoring script for open-source llm!
The script eval/reward_model_scoring.py should work for both closed-source and open-source LLMs. We organize the context in the format of multi-turn conversations. You can host an OpenAI-compatible server for open-source models using frameworks like vLLM or SGLang. See https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html
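For example, a minimal sketch of querying a locally hosted reward model through such a server (the model path, port, and prompt contents are placeholders, not values from this repo):

# Serve the model first, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model /path/to/reward-model --port 8000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key
resp = client.chat.completions.create(
    model="/path/to/reward-model",  # placeholder, must match the served model name
    messages=[
        {"role": "system", "content": "<reward model system prompt>"},
        {"role": "user", "content": "<current observation and proposed task>"},
    ],
    temperature=0.0,
)
print(resp.choices[0].message.content)  # expected to contain the JSON judgment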
|
gharchive/issue
| 2024-12-31T08:34:51 |
2025-04-01T06:40:37.749754
|
{
"authors": [
"Elysia-afk",
"luyaxi"
],
"repo": "thunlp/ProactiveAgent",
"url": "https://github.com/thunlp/ProactiveAgent/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
281668556
|
Question about the code
Hello, I am running your code on Windows. Starting training using file E:\C++Code\Word2Vector\datasets\Sougo-T(sample).txt
Vocab size: 462667
Words in train file: 2655924606
462667
1983
success load data
success InitNet
success InitUnigramTable
start train
During training, at the // BP step:
g /= total;
there are cases where total equals 0. The parameters are those set in the readme. Could you please help explain this? Thanks.
Is there still a problem? Sorry, I didn't receive the email notification earlier; I only got the notification that the issue was closed.
|
gharchive/issue
| 2017-12-13T08:51:20 |
2025-04-01T06:40:37.752524
|
{
"authors": [
"heyLinsir",
"quan3401"
],
"repo": "thunlp/SE-WRL",
"url": "https://github.com/thunlp/SE-WRL/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2469522810
|
🛑 Nyasama is down
In 852260a, Nyasama (https://bbs.nyasama.com/) was down:
HTTP code: 403
Response time: 83 ms
Resolved: Nyasama is back up in a3f390d after 1 day, 2 hours, 33 minutes.
|
gharchive/issue
| 2024-08-16T05:39:45 |
2025-04-01T06:40:37.754994
|
{
"authors": [
"hakureirukoto"
],
"repo": "thwiki/status",
"url": "https://github.com/thwiki/status/issues/736",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2183870738
|
Search-TssSecret error
Verified issue does not already exist?
Yes
What error did you receive
Search-TssSecret -TssSession $session -SecretName 'synapsevd'
Cannot convert value "@{id=111452; name=AH-Atriuishealth.net\synapsevd; secretTemplateId=6191; secretTemplateName=AH-AD_Service_Accounts; folderId=5290; folderPath=\AH\IT\Management Accounts\Service Accounts;
siteId=46; active=True; checkedOut=False; isRestricted=False; isOutOfSync=False; outOfSyncReason=; lastHeartBeatStatus=Success; lastPasswordChangeAttempt=0001-01-01T00:00:00; responseCodes=; lastAccessed=;
extendedFields=; checkOutEnabled=False; autoChangeEnabled=False; doubleLockEnabled=False; requiresApproval=False; requiresComment=False; inheritsPermissions=False; hidePassword=False;
createDate=2022-08-08T17:46:23; daysUntilExpiration=106; hasLauncher=True; checkOutUserId=-1; checkOutUserName=}" to type "Thycotic.PowerShell.Secrets.Summary". Error: "Cannot convert the "@{id=111452;
name=AH-Atriuishealth.net\synapsevd; secretTemplateId=6191; secretTemplateName=AH-AD_Service_Accounts; folderId=5290; folderPath=\AH\IT\Management Accounts\Service Accounts; siteId=46; active=True;
checkedOut=False; isRestricted=False; isOutOfSync=False; outOfSyncReason=; lastHeartBeatStatus=Success; lastPasswordChangeAttempt=0001-01-01T00:00:00; responseCodes=; lastAccessed=; extendedFields=;
checkOutEnabled=False; autoChangeEnabled=False; doubleLockEnabled=False; requiresApproval=False; requiresComment=False; inheritsPermissions=False; hidePassword=False; createDate=2022-08-08T17:46:23;
daysUntilExpiration=106; hasLauncher=True; checkOutUserId=-1; checkOutUserName=}" value of type "System.Management.Automation.PSCustomObject" to type "Thycotic.PowerShell.Secrets.Summary"."
At C:\Program Files\WindowsPowerShell\Modules\Thycotic.SecretServer\0.61.0\functions\secrets\Search-TssSecret.ps1:274 char:17
... [Thycotic.PowerShell.Secrets.Summary[]]$restResponse.reco ...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CategoryInfo : InvalidArgument: (:) [], RuntimeException
FullyQualifiedErrorId : InvalidCastConstructorException
Please run the command using -Verbose
VERBOSE: Command invocation: Search-TssSecret -TssSession:TssSessionObject -SearchText:synapsevd -Verbose:True
VERBOSE: Filters: filter.searchText=synapsevd
VERBOSE: Performing the operation GET https://oct.secretservercloud.com/api/v1/secrets?sortBy[0].direction=asc&sortBy[0].name=Name&take=2147483647&filter.includeRestricted=true&filter.searchText=synapsevd
Cannot convert value "@{id=111452; name=AH-Atriuishealth.net\synapsevd; secretTemplateId=6191; secretTemplateName=AH-AD_Service_Accounts; folderId=5290; folderPath=\AH\IT\Management Accounts\Service Accounts;
siteId=46; active=True; checkedOut=False; isRestricted=False; isOutOfSync=False; outOfSyncReason=; lastHeartBeatStatus=Success; lastPasswordChangeAttempt=0001-01-01T00:00:00; responseCodes=; lastAccessed=;
extendedFields=; checkOutEnabled=False; autoChangeEnabled=False; doubleLockEnabled=False; requiresApproval=False; requiresComment=False; inheritsPermissions=False; hidePassword=False;
createDate=2022-08-08T17:46:23; daysUntilExpiration=105; hasLauncher=True; checkOutUserId=-1; checkOutUserName=}" to type "Thycotic.PowerShell.Secrets.Summary". Error: "Cannot convert the "@{id=111452;
name=AH-Atriuishealth.net\synapsevd; secretTemplateId=6191; secretTemplateName=AH-AD_Service_Accounts; folderId=5290; folderPath=\AH\IT\Management Accounts\Service Accounts; siteId=46; active=True;
checkedOut=False; isRestricted=False; isOutOfSync=False; outOfSyncReason=; lastHeartBeatStatus=Success; lastPasswordChangeAttempt=0001-01-01T00:00:00; responseCodes=; lastAccessed=; extendedFields=;
checkOutEnabled=False; autoChangeEnabled=False; doubleLockEnabled=False; requiresApproval=False; requiresComment=False; inheritsPermissions=False; hidePassword=False; createDate=2022-08-08T17:46:23;
daysUntilExpiration=105; hasLauncher=True; checkOutUserId=-1; checkOutUserName=}" value of type "System.Management.Automation.PSCustomObject" to type "Thycotic.PowerShell.Secrets.Summary"."
At C:\Program Files\WindowsPowerShell\Modules\Thycotic.SecretServer\0.61.0\functions\secrets\Search-TssSecret.ps1:274 char:17
... [Thycotic.PowerShell.Secrets.Summary[]]$restResponse.reco ...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CategoryInfo : InvalidArgument: (:) [], RuntimeException
FullyQualifiedErrorId : InvalidCastConstructorException
Provide a test case or steps to reproduce
Trying to run a Search on a secret that we have access to and is a valid secret.
Expected behavior
To return details on the secret searched for.
What Edition of Secret Server?
Cloud Subscription
What version of Secret Server
Secret Server Cloud
What PowerShell host was used when producing this error
Windows PowerShell ISE (powershell_ise)
PowerShell Host Version
Name Value
PSVersion 5.1.14393.6343
PSEdition Desktop
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 10.0.14393.6343
CLRVersion 4.0.30319.42000
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
Cannot validate error using module 0.61.8 and SSC US
Sadly this is happening with our version of the module. We are also stuck with using this version for the time being too as it is tied to our PS version. Happy to provide what I can on this end if you think any additional information will help.
We had the same issue with version 0.60.6 of the module, upgrading to 0.61.8 fixed it for us.
$PSversiontable
Name Value
---- -----
PSVersion 7.4.4
PSEdition Core
GitCommitId 7.4.4
OS Microsoft Windows 10.0.17763
Platform Win32NT
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
We're using on prem with version 11.7.000002
Can confirm - I had to update to PS7 and module version 0.61.3 to resolve, but that is going to be painful in some places. It would be really great if someone found a workaround for older modules.
|
gharchive/issue
| 2024-03-13T12:21:18 |
2025-04-01T06:40:37.784477
|
{
"authors": [
"enphyniti",
"jagger",
"jmackxiii",
"kjetils-labs"
],
"repo": "thycotic-ps/thycotic.secretserver",
"url": "https://github.com/thycotic-ps/thycotic.secretserver/issues/399",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1438163171
|
Low VRAM after running
My 2GB GPU is capable of running Stable Diffusion.
This script's GPU support also works just fine after generating an image.
After using the script, however, subsequent Stable Diffusion runs fail due to VRAM unavailability. Is the model persisting in VRAM after use? If so, can it be freed between uses?
Hi, thanks for reporting. I'm still pretty new at python and was assuming everything would be garbage collected when it goes out of scope, including the pytorch model in vram.
Some googling seems to confirm this. (https://pytorch.org/docs/stable/notes/cuda.html#cuda-memory-management)
I'm looking into it further ..
I will specifically delete the model before returning from the script in the next version, so we don't need to wait for the GC to kick in.
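For reference, a minimal sketch of freeing the model explicitly rather than waiting for garbage collection (the loader name below is a placeholder, not the script's actual function):

import gc
import torch

def run_depth_estimation(inputs):
    model = load_midas_model().cuda()  # placeholder loader for the depth model
    try:
        with torch.no_grad():
            result = model(inputs)
    finally:
        # Drop the reference and release cached VRAM right away
        del model
        gc.collect()
        torch.cuda.empty_cache()
    return result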
In v0.1.7 the model is deleted (freed) at the end of the script now, did that fix the problem ?
I had a similar issue while using your script in txt2img on an 8 GB VRAM card yesterday (just an out-of-memory error, and I couldn't generate any images anymore). Just tested it with v1.7 and it does not seem to happen anymore.
I'm closing this issue as the problem seems to be solved.
|
gharchive/issue
| 2022-11-07T10:59:21 |
2025-04-01T06:40:37.789505
|
{
"authors": [
"Lektro9",
"hacker1024",
"thygate"
],
"repo": "thygate/stable-diffusion-webui-depthmap-script",
"url": "https://github.com/thygate/stable-diffusion-webui-depthmap-script/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
825276409
|
feat: tiup: not marking required checks but wait for all to success
Close https://github.com/ti-community-infra/tichi/issues/393
As some PRs in the TiUP repo may not run all of the CI jobs, just remove the required mark for them and let the bot wait for all checks to pass before merging.
/ok-to-test
|
gharchive/pull-request
| 2021-03-09T04:03:46 |
2025-04-01T06:40:37.790976
|
{
"authors": [
"AstroProfundis",
"Mini256"
],
"repo": "ti-community-infra/configs",
"url": "https://github.com/ti-community-infra/configs/pull/176",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1083193203
|
config: rename pingcap/ticdc to pingcap/tiflow
This is part of step 4 in https://github.com/pingcap/ticdc/issues/3749
The change is triggered by:
sed -i "s/pingcap\/ticdc/pingcap\/tiflow/g" $(find . -type f)
/ok-to-test
/ok-to-test
/merge
|
gharchive/pull-request
| 2021-12-17T12:03:55 |
2025-04-01T06:40:37.792934
|
{
"authors": [
"Mini256",
"amyangfei"
],
"repo": "ti-community-infra/configs",
"url": "https://github.com/ti-community-infra/configs/pull/482",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
874571106
|
OpenAPI deepObject serialization for query parameters
First check
[x] I added a very descriptive title to this issue.
[x] I used the GitHub search to find a similar issue and didn't find it.
[x] I searched the FastAPI documentation, with the integrated search.
[x] I already searched in Google "How to X in FastAPI" and didn't find any information.
[x] I already read and followed all the tutorial in the docs and didn't find an answer.
[x] I already checked if it is not related to FastAPI but to Pydantic.
[x] I already checked if it is not related to FastAPI but to Swagger UI.
[x] I already checked if it is not related to FastAPI but to ReDoc.
[x] After submitting this, I commit to:
Implement a Pull Request for a confirmed bug - or implement the feature described in this issue :)
Description
Allow usage of deepObject serialization from OpenAPI 3.
What it could look like
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class UserQuery(BaseModel):
    role: str
    firstName: str

@app.get("/")
def read_root(id: Optional[UserQuery]):
    return {"Hello": "World"}
Query:
/users?id[role]=admin&id[firstName]=Alex
Related
https://github.com/tiangolo/fastapi/issues/245#issuecomment-762729917
Links to a good example: https://stackoverflow.com/questions/48491688/how-to-define-parameters-with-square-brackets-in-openapi-swagger
https://github.com/tiangolo/fastapi/issues/203
See also #283 for broader support of encoding styles.
This is the sort of issue where it's not entirely clear if the responsibility of supporting this should be on FastAPI or Pydantic. On one hand, it has to do with value parsing and encoding, which is something FastAPI tends to fully delegate to Pydantic, but on the other hand it's not part of JSON Schema, it's an OpenAPI extension that only applies to route parameter objects, and most of these encoding styles don't make sense in a JSON context (which is what pydantic is generally meant for).
I would like to see these supported as well.
I'm keen to pursue this issue as I think it is a fairly common use-case (for example what to sort by or filter by)
See links on the different supported options below:
OpenAPI
Swagger
The current solution is to create your own parser if you want to support it. However, the only way I have found to represent this in the OpenAPI spec is to use a regex pattern. See the link below for an example of a rough implementation of custom parsing for sortby and filterby.
https://gist.github.com/ghandic/21c27470f6797dd856208a2c68f3e43a
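For illustration, a minimal hand-rolled parser along those lines might look like this (a sketch with hypothetical names, not the gist's actual code):

import re
from fastapi import FastAPI, Request

app = FastAPI()
DEEP_KEY = re.compile(r"^(\w+)\[(\w+)\]$")  # matches e.g. filter[name]

def parse_deep_object(request: Request, prefix: str) -> dict:
    # Collect ?prefix[field]=value pairs into {"field": "value"}
    out = {}
    for key, value in request.query_params.items():
        match = DEEP_KEY.match(key)
        if match and match.group(1) == prefix:
            out[match.group(2)] = value
    return out

@app.get("/search")
async def search(request: Request):
    sort = parse_deep_object(request, "sort")      # e.g. ?sort[name]=created&sort[by]=asc
    filters = parse_deep_object(request, "filter")
    return {"sort": sort, "filter": filters}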
Ideally this could look more as follows
from fastapi import QueryMixIn
from pydantic import BaseModel
...

class FilterBy(BaseModel, QueryMixIn):
    name: str
    q: str

    class Config:
        query_style = "deepObject"

class SortBy(BaseModel, QueryMixIn):
    name: str
    by: str

    class Config:
        query_style = "deepObject"

@router.get("/search")
async def search(
    sort: Optional[SortBy] = Query(None),
    filter: List[FilterBy] = Query(None),
    limit: int = Query(10, ge=1, le=50),
    page: int = Query(1, ge=1),
) -> List[Record]:
    ...
Then the user could standardize their Query style across their API using inheritance
from fastapi import QueryMixIn
from pydantic import BaseModel

class QueryModel(BaseModel, QueryMixIn):
    class Config:
        query_style = "deepObject"

from custom import QueryModel
...

class FilterBy(QueryModel):
    name: str
    q: str

class SortBy(QueryModel):
    name: str
    by: str

@router.get("/search")
async def search(
    sort: Optional[SortBy] = Query(None),
    filter: List[FilterBy] = Query(None),
    limit: int = Query(10, ge=1, le=50),
    page: int = Query(1, ge=1),
) -> List[Record]:
    ...
The tough problem I see would be validation, but since we're using pydantic, couldn't the user use their validators on their custom QueryModel-inherited class?
Thoughts?
Please see example implementation here: https://github.com/ghandic/FastAPI-deepObject
Ideally, I'd like to not define the types twice; I would need to understand more about how the routes get registered for that implementation.
Here is an example of what the OpenAPI spec should look like:
openapi: 3.0.2
info:
title: FastAPI
version: 0.1.0
paths:
/:
post:
summary: Demo
operationId: demo__post
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/Sort'
required: true
responses:
'200':
description: Successful Response
content:
application/json:
schema: {}
'422':
description: Validation Error
content:
application/json:
schema:
$ref: '#/components/schemas/HTTPValidationError'
get:
summary: Demo
operationId: demo__get
parameters:
- in: query
name: sort
schema:
$ref: '#/components/schemas/Sort'
style: deepObject # ENABLE THIS
explode: true # ENABLE THIS
responses:
'200':
description: Successful Response
content:
application/json:
schema: {}
'422':
description: Validation Error
content:
application/json:
schema:
$ref: '#/components/schemas/HTTPValidationError'
components:
schemas:
Direction:
title: Direction
enum:
- asc
- desc
type: string
description: An enumeration.
HTTPValidationError:
title: HTTPValidationError
type: object
properties:
detail:
title: Detail
type: array
items:
$ref: '#/components/schemas/ValidationError'
Sort:
title: Sort
required:
- by
- direction
type: object
properties:
by:
title: By
pattern: ^[a-z]+$
type: string
example: name
direction:
$ref: '#/components/schemas/Direction'
ValidationError:
title: ValidationError
required:
- loc
- msg
- type
type: object
properties:
loc:
title: Location
type: array
items:
type: string
msg:
title: Message
type: string
type:
title: Error Type
type: string
For some demo app (proposed solution to be worked on)
from enum import auto

from fastapi import FastAPI, Query
from pydantic import BaseModel, Field
from fastapi_utils.enums import StrEnum

app = FastAPI()

class Direction(StrEnum):
    asc = auto()
    desc = auto()

class Sort(BaseModel):
    by: str = Field(..., example="name", regex="^[a-z]+$")
    direction: Direction

@app.post("/")
def demo(sort: Sort):
    return {}

# NOTE: Currently says:
# AssertionError: Param: sort can only be a request body, using Body(...)
# .... /lib/python3.8/site-packages/fastapi/dependencies/utils.py", line 331, in get_dependant
# This is where we should allow using deepObject
@app.get("/")
def demo(sort: Sort = Query(...)):
    return {}
@tiangolo would you be supportive of this if the effort was put in to build the PR? It would take a good amount of effort and even more testing.
For what it's worth I was able to support the full range of options in Xpresso, maybe that can serve as inspiration for FastAPI: https://xpresso-api.dev/0.42.3/tutorial/query_params/#customizing-deserialization
How about just Query(style="deepObject")?
Rejected in #9768
How about just Query(style="deepObject")?
Rejected in #9768
I don't think Sebastián closed the issue actually, I'll reply there
|
gharchive/issue
| 2021-05-03T13:22:47 |
2025-04-01T06:40:37.842208
|
{
"authors": [
"FlorianLudwig",
"adriangb",
"commonism",
"ghandic",
"sm-Fifteen"
],
"repo": "tiangolo/fastapi",
"url": "https://github.com/tiangolo/fastapi/issues/3163",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
489166104
|
[Question] File downloading
I'd like the user to be able to download a file I have within the /app directory via an API call. What is the best way to do this, given that this file can be considerably big?
you can use https://www.starlette.io/responses/#fileresponse
Yes, I had been trying this but got stuck... I don't get where to use async def, or whether I even have to use it. If someone has done this and would be glad to share a bit more detail about the implementation, I'd be very appreciative :)
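A minimal sketch of the FileResponse approach (the path and filename below are placeholders; no async def is required for the handler itself):

from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

@app.get("/download")
def download_report():
    # FileResponse streams the file from disk in chunks, so even a large
    # file is not loaded into memory all at once.
    return FileResponse(
        path="/app/reports/big-file.zip",       # placeholder path inside /app
        media_type="application/octet-stream",
        filename="big-file.zip",                # name suggested to the browser
    )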
Thanks for the help here @euri10 ! :clap: :bow:
Thanks for reporting back and closing the issue @driribarne :+1:
|
gharchive/issue
| 2019-09-04T13:37:38 |
2025-04-01T06:40:37.848576
|
{
"authors": [
"driribarne",
"euri10",
"tiangolo"
],
"repo": "tiangolo/full-stack-fastapi-postgresql",
"url": "https://github.com/tiangolo/full-stack-fastapi-postgresql/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2501812477
|
RuntimeError: Error(s) in loading state_dict for CondScoreModel:
I followed the readme:
Training the GraspGF.
Pretrain the pointnet for RL. Then, I got the pt.pt and score.pt. And I put pt.pt and score.pt in the folder: '/home/lq/codes/human-assisting-dex-grasp/Ckpt/gf_lq'.
However, when I ran sh ./rl_train.sh, the error occurred:
Traceback (most recent call last):
File "./Runners/TrainGFPPO.py", line 77, in
runner = GFPPO(vec_env=envs,
File "/home/lq/codes/human-assisting-dex-grasp/./Algorithms/ppo/gf_ppo_update.py", line 178, in init
self.score.load_state_dict(model_dict)
File "/home/lq/miniconda3/envs/dexgrasp-py38-new/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2215, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CondScoreModel:
Missing key(s) in state_dict: "obj_enc.sa1.mlps.0.layer0.conv.weight", "obj_enc.sa1.mlps.0.layer0.bn.bn.weight", "obj_enc.sa1.mlps.0.layer0.bn.bn.bias", "obj_enc.sa1.mlps.0.layer0.bn.bn.running_mean", "obj_enc.sa1.mlps.0.layer0.bn.bn.running_var", "obj_enc.sa1.mlps.0.layer1.conv.weight", "obj_enc.sa1.mlps.0.layer1.bn.bn.weight", "obj_enc.sa1.mlps.0.layer1.bn.bn.bias", "obj_enc.sa1.mlps.0.layer1.bn.bn.running_mean", "obj_enc.sa1.mlps.0.layer1.bn.bn.running_var", "obj_enc.sa1.mlps.0.layer2.conv.weight", "obj_enc.sa1.mlps.0.layer2.bn.bn.weight", "obj_enc.sa1.mlps.0.layer2.bn.bn.bias", "obj_enc.sa1.mlps.0.layer2.bn.bn.running_mean", "obj_enc.sa1.mlps.0.layer2.bn.bn.running_var", "obj_enc.sa2.mlps.0.layer0.conv.weight", "obj_enc.sa2.mlps.0.layer0.bn.bn.weight", "obj_enc.sa2.mlps.0.layer0.bn.bn.bias", "obj_enc.sa2.mlps.0.layer0.bn.bn.running_mean", "obj_enc.sa2.mlps.0.layer0.bn.bn.running_var", "obj_enc.sa2.mlps.0.layer1.conv.weight", "obj_enc.sa2.mlps.0.layer1.bn.bn.weight", "obj_enc.sa2.mlps.0.layer1.bn.bn.bias", "obj_enc.sa2.mlps.0.layer1.bn.bn.running_mean", "obj_enc.sa2.mlps.0.layer1.bn.bn.running_var", "obj_enc.sa2.mlps.0.layer2.conv.weight", "obj_enc.sa2.mlps.0.layer2.bn.bn.weight", "obj_enc.sa2.mlps.0.layer2.bn.bn.bias", "obj_enc.sa2.mlps.0.layer2.bn.bn.running_mean", "obj_enc.sa2.mlps.0.layer2.bn.bn.running_var", "obj_enc.sa3.mlps.0.layer0.conv.weight", "obj_enc.sa3.mlps.0.layer0.bn.bn.weight", "obj_enc.sa3.mlps.0.layer0.bn.bn.bias", "obj_enc.sa3.mlps.0.layer0.bn.bn.running_mean", "obj_enc.sa3.mlps.0.layer0.bn.bn.running_var", "obj_enc.sa3.mlps.0.layer1.conv.weight", "obj_enc.sa3.mlps.0.layer1.bn.bn.weight", "obj_enc.sa3.mlps.0.layer1.bn.bn.bias", "obj_enc.sa3.mlps.0.layer1.bn.bn.running_mean", "obj_enc.sa3.mlps.0.layer1.bn.bn.running_var", "obj_enc.sa3.mlps.0.layer2.conv.weight", "obj_enc.sa3.mlps.0.layer2.bn.bn.weight", "obj_enc.sa3.mlps.0.layer2.bn.bn.bias", "obj_enc.sa3.mlps.0.layer2.bn.bn.running_mean", "obj_enc.sa3.mlps.0.layer2.bn.bn.running_var".
Unexpected key(s) in state_dict: "obj_enc.stn.conv1.weight", "obj_enc.stn.conv1.bias", "obj_enc.stn.conv2.weight", "obj_enc.stn.conv2.bias", "obj_enc.stn.conv3.weight", "obj_enc.stn.conv3.bias", "obj_enc.stn.fc1.weight", "obj_enc.stn.fc1.bias", "obj_enc.stn.fc2.weight", "obj_enc.stn.fc2.bias", "obj_enc.stn.fc3.weight", "obj_enc.stn.fc3.bias", "obj_enc.conv1.weight", "obj_enc.conv1.bias", "obj_enc.conv2.weight", "obj_enc.conv2.bias", "obj_enc.conv3.weight", "obj_enc.conv3.bias", "obj_enc.conv4.weight", "obj_enc.conv4.bias".
This error indicates that, when loading the state_dict, the model structure does not match the saved weights. Specifically, some keys expected by the model are missing from the state_dict, while others in the state_dict are unexpected.
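A minimal way to surface the mismatch before deciding how to fix it (a sketch; pass in whatever model instance and checkpoint path you are loading, and it assumes the checkpoint file is a raw state_dict):

import torch

def diff_checkpoint_keys(model, ckpt_path):
    ckpt = torch.load(ckpt_path, map_location="cpu")
    model_keys = set(model.state_dict().keys())
    ckpt_keys = set(ckpt.keys())
    print("missing from checkpoint:", sorted(model_keys - ckpt_keys))
    print("unexpected in checkpoint:", sorted(ckpt_keys - model_keys))
    # strict=False skips mismatched keys, but the key names above (plain
    # conv/STN layers in the checkpoint vs. PointNet++-style SA modules in the
    # model) suggest the checkpoint was trained with a different object
    # encoder, so the mismatched parts would stay randomly initialized.
    return model.load_state_dict(ckpt, strict=False)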
The command in shell "rl_train.sh" is:
python ./Runners/TrainGFPPO.py
--seed=0
--headless
--num_envs=100
--dataset_type='train'
--score_model_path='/home/lq/codes/human-assisting-dex-grasp/Ckpt/gf_lq'
--t0=0.005
--exp_name="ours"
--run_device_id=0
--constrained \
I can successfully run sh ./rl_train.sh with the pretrained checkpoint (https://drive.google.com/drive/folders/1-_AcEFnnVO9g-CeCPyjleuDYrUdxXtyd) you gave in the readme. And the command in shell "rl_train.sh" is:
python ./Runners/TrainGFPPO.py
--seed=0
--headless
--num_envs=100
--dataset_type='train'
--score_model_path='/home/lq/codes/human-assisting-dex-grasp/Ckpt/gf'
--t0=0.005
--exp_name="ours"
--run_device_id=0
--constrained \
The rl_train process is successful with your pretrained checkpoint:
Can you give some suggestions?
I ran it successfully.
|
gharchive/issue
| 2024-09-03T02:38:59 |
2025-04-01T06:40:37.863704
|
{
"authors": [
"liuqi8827"
],
"repo": "tianhaowuhz/human-assisting-dex-grasp",
"url": "https://github.com/tianhaowuhz/human-assisting-dex-grasp/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
839626149
|
an error when running train.py.
My environment: Python3.6, cuda 10.2, pytorch 1.6.
When I ran train.py, an ImportError occurred: libcudart.so.9.0: No such file or directory.
Why does it need CUDA 9.0? Is there any problem with my environment?
The system CUDA version is 10.2.
it doesn't need cuda 9.0. when do you get this import error? Import which library?
it doesn't need cuda 9.0. when do you get this import error? Import which library?
In the file det3d/ops/iou3d_nms/__init__.py, after the statement from det3d.ops.iou3d_nms import iou3d_nms_cuda, iou3d_nms_utils, the ImportError occurred.
ic, seem to be some compilation error for the nms.
export PATH=/usr/local/cuda-10.0/bin:$PATH
export CUDA_PATH=/usr/local/cuda-10.0
export CUDA_HOME=/usr/local/cuda-10.0
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
bash setup.sh
could you change this path information to your specific Cuda version and then recompile the nms function?
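As a quick sanity check before recompiling, it can help to confirm which CUDA runtime the installed PyTorch was built against (a small sketch):

import torch

print(torch.__version__)         # e.g. 1.6.0
print(torch.version.cuda)        # CUDA version PyTorch was built with; should match nvcc on PATH
print(torch.cuda.is_available())
# The iou3d_nms extension must be compiled with the same CUDA toolkit that is
# on PATH when running setup.sh, otherwise it can end up linked against a
# libcudart version that is not installed (like the 9.0 error above).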
ic, seem to be some compilation error for the nms.
export PATH=/usr/local/cuda-10.0/bin:$PATH
export CUDA_PATH=/usr/local/cuda-10.0
export CUDA_HOME=/usr/local/cuda-10.0
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
bash setup.sh
could you change this path information to your specific Cuda version and then recompile the nms function?
Yes, you are right. I had changed the CUDA version after compiling the CUDA extension.
Recompiled the CUDA extension, and it works! Thank you!
|
gharchive/issue
| 2021-03-24T11:28:00 |
2025-04-01T06:40:37.917853
|
{
"authors": [
"as3382246",
"tianweiy"
],
"repo": "tianweiy/CenterPoint",
"url": "https://github.com/tianweiy/CenterPoint/issues/102",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
294994350
|
Fixed typo
before before --> before
Thanks.
|
gharchive/pull-request
| 2018-02-07T03:27:31 |
2025-04-01T06:40:37.918897
|
{
"authors": [
"Cipherwraith",
"treeowl"
],
"repo": "tibbe/unordered-containers",
"url": "https://github.com/tibbe/unordered-containers/pull/184",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1090194262
|
Added release scripts.
Overview
This change adds a set of scripts to be able to cross-compile for different CPU architectures, to support the changes in #81. For example, we can compile both MacOS and MacOS M1, or Linux amd64 and Linux arm64, on each platform respectively. These scripts make the assumption that the compilation is happening on the target platform. In an ideal world, we would cross-compile both architecture and platform, but it would require more work to set up our build agents to handle that.
Changes
Added release scripts.
This commit adds release scripts that can cross-compile with CGO support for MacOS and Linux, for both x86_64 and arm64 based CPU architectures. It assumes that each build is being built on the target platform though. Cross-compiling the world on Linux would be ideal, but it would require all of those toolchains to be set up on our build hosts. Doable, but I'll leave it for a future exercise.
Requirements
If you want to release on a Debian-based system, you will need to install the ARM compiler aarch64-linux-gnu-gcc, which is included in crossbuild-essential-arm64:
sudo apt install crossbuild-essential-arm64
In order to release on MacOS, you'll need to install gpg to verify signatures:
brew install gnupg
In addition, you will need Xcode 12.2 or later to be able to support the M1 compilation:
Xcode 12.2 and later is a requirement for building universal binaries. Earlier versions of Xcode don’t contain the support needed to build and test universal versions of your macOS code.
Tests
Linux
mark@dev-mark:~/code/src/tidbyt.dev/pixlet|mark/release-scripts ⇒ make release-linux
rm -f pixlet
rm -rf ./build
./scripts/release-linux.sh
Fetching WebP Binaries
Fetched /tmp/libwebp-1.2.2-rc1/linux-x86-64 successfully
Fetched /tmp/libwebp-1.2.2-rc1/linux-arm64 successfully
Building linux_amd64
Built ./build/linux_amd64/pixlet successfully
Building linux_arm64
Built ./build/linux_arm64/pixlet successfully
mark@dev-mark:~/code/src/tidbyt.dev/pixlet|mark/release-scripts ⇒ ./build/linux_amd64/pixlet
Pixlet renders graphics for pixel devices, like Tidbyt
Usage:
pixlet [command]
Available Commands:
completion Generate the autocompletion script for the specified shell
help Help about any command
push Pushes a webp image to a Tidbyt device
render Runs script with provided config parameters.
serve Serves a starlark render script over HTTP.
Flags:
-h, --help help for pixlet
Use "pixlet [command] --help" for more information about a command.
mark@dev-mark:~/code/src/tidbyt.dev/pixlet|mark/release-scripts ⇒ ./build/linux_arm64/pixlet
zsh: exec format error: ./build/linux_arm64/pixlet
MacOS
mspicer@americano:~/code/src/tidbyt.dev/pixlet|mark/release-scripts ⇒ make release-macos
rm -f pixlet
rm -rf ./build
./scripts/release-macos.sh
Fetching WebP Binaries
Fetched /tmp/libwebp-1.2.2-rc1/mac-arm64 successfully
Fetched /tmp/libwebp-1.2.2-rc1/mac-x86-64 successfully
Building darwin_arm64
Built ./build/darwin_arm64/pixlet successfully
Building darwin_amd64
Built ./build/darwin_amd64/pixlet successfully
mspicer@americano:~/code/src/tidbyt.dev/pixlet|mark/release-scripts ⇒ ./build/darwin_amd64/pixlet
Pixlet renders graphics for pixel devices, like Tidbyt
Usage:
pixlet [command]
Available Commands:
completion Generate the autocompletion script for the specified shell
help Help about any command
push Pushes a webp image to a Tidbyt device
render Runs script with provided config parameters.
serve Serves a starlark render script over HTTP.
Flags:
-h, --help help for pixlet
Use "pixlet [command] --help" for more information about a command.
mspicer@americano:~/code/src/tidbyt.dev/pixlet|mark/release-scripts ⇒ ./build/darwin_arm64/pixlet
zsh: bad CPU type in executable: ./build/darwin_arm64/pixlet
These are currently dynamically linked. I think we probably want this to be statically linked. Working on that now.
Ok, some updates here.
MacOS
Currently, our released versions of pixlet are dynamically linked for MacOS. We release them through Homebrew, which requires webp as a dependency. This will work for both releases on MacOS since the brew formulae exist.
I vote we keep the MacOS binaries dynamically linked for now since this appears to be working well.
Linux
For linux, they are dynamically linked with webp being statically linked (-Wl,-Bstatic -lwebp -lwebpdemux -lwebpmux -Wl,-Bdynamic). What's a bit weird is we also release it through Homebrew for Linux that still has the WebP dependency. A Linux ARM version of WebP is not available through Homebrew so this will not work. This doesn't feel like a big deal because who is installing Linux binaries through Homebrew 😄 .
When looking to statically link webp for Linux ARM, I couldn't seem to find a modern version of libwebp statically compiled for Linux arm64 (the one in this PR from Arch Linux is dynamically linked). The one available for Ubuntu is 0.6.1-2.1 whereas the one for Arch is at 1.2.1-2. I then realized this is the version we are currently linking against for Linux builds 😅 .
I vote we stick with the old version of WebP for linux for both CPU architectures so they're at least on the same version. In addition, they both should static link the WebP library so it's portable and can be curled onto a Raspberry Pi V4.
Static everything
One other thing I considered was producing a completely static binary. The main gotcha here is that this really isn't supported on MacOS, which came as a surprise to me.
|
gharchive/pull-request
| 2021-12-29T00:32:01 |
2025-04-01T06:40:37.944494
|
{
"authors": [
"betterengineering"
],
"repo": "tidbyt/pixlet",
"url": "https://github.com/tidbyt/pixlet/pull/93",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1375995513
|
[LOOP-4349] updates to Alert to allow muting
https://tidepool.atlassian.net/browse/LOOP-4349
@ps2 At this point, just looking for an initial conceptual review. Formal review will come later.
It seems like isMuted should be a property of the alert delivery system, not an individual alert. The same alert, when the system is muted, would be delivered muted, and in a non-muted context, not be muted. The alert itself doesn't change. What am I missing?
There needs to be a way to get all the muted scheduled alerts and reschedule them as unmuted alerts. So I think there needs to be something in the alert to indicate that it is muted. What alternative are you suggesting?
UNUserNotificationCenter.getPendingNotificationRequests should do that. There's also removePendingNotificationRequests, and removeAllPendingNotificationRequests
Also, there seems to be a need to know how the alerts were delivered (or the system state when the alert was delivered). For example, was critical alerts disabled when a critical alert was delivered. Was Tidepool Loop temp mute enabled when an alert was delivered.
You don't need to know which alerts were muted. You just need to unschedule any scheduled ones, and reschedule with appropriate sound.
As for your second question, about how the alerts were delivered, could you explain the need there?
If repeated alerts are blindly updated, this has a negative impact on the repeat interval. Also, this will unnecessarily clutter the alert store. It seems better to only reschedule specific alerts.
As for the second, the product/risk needs will be discussed at standup. Let's continue there.
They don't need to be re-issued, cluttering up the alert store, just rescheduled with iOS.
This will still have the same negative impact on the repeat schedule.
The in-app alert will need to be rescheduled as well. The best way is to use the existing alert system interface. Maybe we can chat more about this in dev sync? I don't think we are on the same page.
Making backgroundContent non-mutable is good. Adding a mute flag to SettingsStore is good, as we discussed, but I'm not sure the type is right. Lastly, making the sound non-optional I think is incorrect. If you want to make it non-optional, we'd need to add a '.default' sound.
The .default sound is .vibrate based on https://tidepool.atlassian.net/browse/LOOP-4229
LGTM. This is some good cleanup. I do wonder about removing "vibrate", as all iOS notifications should vibrate. Or renaming 'vibrate' to default, as having sound == .default to indicate the default sound seems to be more clear than sound == .vibrate, as even all the other sounds should also vibrate.
There is a default critical alert sound and default non-critical alert sound on top of vibrate. So, I see it is possible to use the default sound when not muted and only vibrate when muted.
@ps2 looking for a formal review this time.
|
gharchive/pull-request
| 2022-09-16T14:00:51 |
2025-04-01T06:40:37.952695
|
{
"authors": [
"nhamming",
"ps2"
],
"repo": "tidepool-org/LoopKit",
"url": "https://github.com/tidepool-org/LoopKit/pull/473",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
89692731
|
"bootstrap" to UTC for all devices where possible + data model updates
This will be the real PR for bootstrapping to UTC. Supersedes #106, which I will close shortly.
#131 will replace this.
|
gharchive/pull-request
| 2015-06-20T00:35:32 |
2025-04-01T06:40:37.953906
|
{
"authors": [
"jebeck"
],
"repo": "tidepool-org/chrome-uploader",
"url": "https://github.com/tidepool-org/chrome-uploader/pull/128",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|