id (string, lengths 4–10) | text (string, lengths 4–2.14M) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
1847405034
|
global scope
https://github.com/radix-ui/themes/blob/39f5a438db0d8496134d4bc35d71899fee1b2562/packages/radix-ui-themes/src/styles/tokens.css#L8C16-L8C16
These tokens should be in the :root namespace.
Not sure I follow, is there a reason they need to be in the root namespace?
If they were placed in the root namespace, then nesting of theme scaling wouldn't be possible.
Take a look at this example which shows the ability to do nested scaling:
https://codesandbox.io/s/intelligent-framework-2xqjp8?file=/src/App.tsx
Note: the sizes are a little difficult to tell apart (they look similar), but if you right-click and inspect the actual font-size of the headings, you'll see the difference
agreed.
I'm a bit confused... Shouldn't I be able to set --scaling: 2 anywhere in the DOM and have all nested values use that scaling value?
:root {
  --scaling: 1;
  --spacing-5: calc(2rem * var(--scaling));
}
.my-node {
  --scaling: 2;
}
Why is .my-node's scaling not applying to child elements... Am I missing something?
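For what it's worth, this follows from how custom properties resolve: a var() inside a custom property's value is substituted at computed-value time on the element where the property is declared, and descendants inherit the already-resolved value. A minimal sketch of one workaround, re-declaring the derived token alongside each --scaling override:

```css
/* Descendants inherit the resolved value of --spacing-5 from :root,
   so overriding --scaling alone has no effect on them. Re-declaring
   the derived token wherever --scaling changes makes nesting work. */
:root, .my-node {
  --spacing-5: calc(2rem * var(--scaling));
}
:root { --scaling: 1; }
.my-node { --scaling: 2; }
```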
|
gharchive/issue
| 2023-08-11T20:42:15 |
2025-04-01T06:40:10.260030
|
{
"authors": [
"jd-carroll",
"magicspon",
"needim"
],
"repo": "radix-ui/themes",
"url": "https://github.com/radix-ui/themes/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
744811523
|
Cannot build on iOS
As the title states, I cannot build my app using your library on iOS.
I do not use the lib on iOS at all; I just need it for Android and cannot exclude it from the iOS build.
Environment:
MacOS 10.15.7
Flutter 1.22.4
XCode 12.1
lib version 0.0.5+2
It throws the following error:
awesome_notifications-0.0.5+2/ios/Classes/lib/SwiftAwesomeNotificationsPlugin.swift:104:17: error: parameter of 'messaging(_:didReceiveRegistrationToken:)' has different optionality than required by protocol 'MessagingDelegate'
public func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String) {
                                                                                    ^
                                                                                    ?
FirebaseMessaging.MessagingDelegate:2:19: note: requirement 'messaging(_:didReceiveRegistrationToken:)' declared here
optional func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String?)
This suggests the parameter must be optional. I made it optional, unwrapped it inside your Swift plugin, and the build succeeded. Maybe you can give more insight.
Hey there.
Is this a problem specific to me, or does anyone else run into this?
I was unable to reproduce your error. What steps did you take to create your iOS app?
Try modifying the messaging method below in your local code:
https://github.com/rafaelsetragni/awesome_notifications/blob/b3e4da96f0302fef43f4c801d819dad044021332/ios/Classes/lib/SwiftAwesomeNotificationsPlugin.swift#L97
Hi. Thanks for your answer.
I was just building my app normally via "flutter build ios" and the error occurred.
I already did modify the following code:
https://github.com/rafaelsetragni/awesome_notifications/blob/b3e4da96f0302fef43f4c801d819dad044021332/ios/Classes/lib/SwiftAwesomeNotificationsPlugin.swift#L104
public func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String) {
    print("Firebase registration token: \(fcmToken)")
    let dataDict: [String: String] = ["token": fcmToken]
    NotificationCenter.default.post(name: Notification.Name("FCMToken"), object: nil, userInfo: dataDict)
    // TODO: If necessary send token to application server.
    // Note: This callback is fired at each app startup and whenever a new token is generated.
}
to the following
public func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String?) {
    if let unwrapped = fcmToken {
        print("Firebase registration token: \(unwrapped)")
        let dataDict: [String: String] = ["token": unwrapped]
        NotificationCenter.default.post(name: Notification.Name("FCMToken"), object: nil, userInfo: dataDict)
        // TODO: If necessary send token to application server.
        // Note: This callback is fired at each app startup and whenever a new token is generated.
    }
}
But I just don't know why I had to do this.
You are right. My messaging method is deprecated; that's why you couldn't compile the final version in your app. I'm going to update it.
Did you use any other Firebase services in your app? What versions are they?
I use FirebaseMessaging 7.1.0. That is the version installed by cocoapods.
Better! Do you want to do a fork and send me this fix? Your name will be included as a contributor to this project.
Does this mean you experience the same problem after updating to the above version?
If so, I'll gladly provide the fix. I just don't want to break it for somebody else.
I did not experience this issue. For me, when I applied your changes, Xcode complained that the override was not possible because the messaging methods are different.
But send me your fork. That way I can merge your source, see every change you made, and figure out what's going on.
In fact, I merged your changes into my local files, and the messaging Firebase method that you sent always complained that the override was not possible. So I could not reproduce your error.
But I think I'm figuring out what's going on. You said that you're using FirebaseMessaging 7.1.0, but did you mean firebase_messaging: ^7.1.0 instead?
That plugin is not necessary to send push notifications using awesome_notifications. Everything you need is inside the plugin, and you only need to follow the steps in the Using Firebase Services (Optional) topic exactly. It is not necessary to use any other plugin or implement any extra Kotlin or Java script.
Maybe the firebase_messaging version is conflicting with my awesome_notifications Firebase library, because firebase_messaging is using another library version that overrides mine.
Bingo! That was exactly what happened!
After updating my libraries with pod update, I got your error!
In the newer Firebase version, the Firebase team made a lot of changes, including changing the messaging method without keeping a deprecated version, so all sources below 7.1.0 will break. They also changed a lot of things in the Android SDK, such as deprecating the entire FirebaseInstanceId.
You got the newer Firebase version after installing a Firebase plugin or updating your pod files. That's why you have those issues.
All the other awesome_notifications developers are going to face the same after updating the Firebase package, so your change is totally legit.
The problem I'm facing is how to keep both methods working at the same time so as not to break older projects.
Can you cancel the current pull request to the master branch and send the same pull request to the update-firebase-sdk branch instead? There are a lot of changes to do on the Android side as well.
Yeah, that is exactly what I was hoping was not happening.
But now that we know the problem, it is maybe fixable.
If you need anything else from me, don't hesitate to ask. Thanks for your effort :)
I've merged your pull request and included support for older versions, as in the code below. This way both library versions are compatible, with the core code being reusable:
// For Firebase Messaging versions older than 7.0
// https://github.com/rafaelsetragni/awesome_notifications/issues/39
public func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String) {
    didReceiveRegistrationToken(messaging, fcmToken: fcmToken)
}

public func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String?) {
    if let unwrapped = fcmToken {
        didReceiveRegistrationToken(messaging, fcmToken: unwrapped)
    }
}

private func didReceiveRegistrationToken(_ messaging: Messaging, fcmToken: String) {
    print("Firebase registration token: \(fcmToken)")
    let dataDict: [String: String] = ["token": fcmToken]
    NotificationCenter.default.post(name: Notification.Name("FCMToken"), object: nil, userInfo: dataDict)
}
|
gharchive/issue
| 2020-11-17T14:58:19 |
2025-04-01T06:40:10.325417
|
{
"authors": [
"derBeukatt",
"rafaelsetragni"
],
"repo": "rafaelsetragni/awesome_notifications",
"url": "https://github.com/rafaelsetragni/awesome_notifications/issues/39",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1216678173
|
Error with extra step in 0.7.0-beta.2
I'm trying to compile my app on iOS but I was getting MissingImplementationException in AwesomeNotification.initialize.
I then noticed that this plugin has a beta version and that it requires an extra step for the iOS platform. However, I keep getting errors.
If I copy the following lines:
SwiftAwesomeNotificationsPlugin.setPluginRegistrantCallback { registry in
    SwiftAwesomeNotificationsPlugin.register(
        with: registry.registrar(forPlugin: "io.flutter.plugins.awesomenotifications.AwesomeNotificationsPlugin")!)
    FLTSharedPreferencesPlugin.register(
        with: registry.registrar(forPlugin: "io.flutter.plugins.sharedpreferences.SharedPreferencesPlugin")!)
}
to AppDelegate.swift, then I get:
.../ios/Runner/AppDelegate.swift:11:5: error: cannot find 'SwiftAwesomeNotificationsPlugin' in scope
But according to the plugin guide: "And you can check how to correctly call each plugin opening the file GeneratedPluginRegistrant.m". And in my GeneratedPluginRegistrant.m, I cannot find SwiftAwesomeNotificationsPlugin, but I do find AwesomeNotificationsPlugin.
So I rewrote the code to:
AwesomeNotificationsPlugin.setPluginRegistrantCallback { registry in
    AwesomeNotificationsPlugin.register(
        with: registry.registrar(forPlugin: "io.flutter.plugins.awesomenotifications.AwesomeNotificationsPlugin")!)
    FLTSharedPreferencesPlugin.register(
        with: registry.registrar(forPlugin: "io.flutter.plugins.sharedpreferences.SharedPreferencesPlugin")!)
}
But I just get the same error, updated accordingly:
.../ios/Runner/AppDelegate.swift:11:5: error: cannot find 'AwesomeNotificationsPlugin' in scope
Am I missing an import?
Fix
import shared_preferences_ios
import flutter_background_service_ios
Did the above fix solve your problem? @TimeLord2010
It did. But I had to make some changes after:
import UIKit
import Flutter
import awesome_notifications
import shared_preferences_ios
import flutter_background_service_ios

@UIApplicationMain
@objc class AppDelegate: FlutterAppDelegate {
    override func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
    ) -> Bool {
        GeneratedPluginRegistrant.register(with: self)

        FlutterBackgroundServicePlugin.setPluginRegistrantCallback { registry in
            GeneratedPluginRegistrant.register(with: registry)
        }

        SwiftAwesomeNotificationsPlugin.setPluginRegistrantCallback { registry in
            SwiftAwesomeNotificationsPlugin.register(
                with: registry.registrar(forPlugin: "io.flutter.plugins.awesomenotifications.AwesomeNotificationsPlugin")!)
            FLTSharedPreferencesPlugin.register(
                with: registry.registrar(forPlugin: "io.flutter.plugins.sharedpreferences.SharedPreferencesPlugin")!)
        }

        return super.application(application, didFinishLaunchingWithOptions: launchOptions)
    }
}
Hi, suddenly out of nowhere, I can't build my apps on iOS anymore; I suspect it's because of this step. Here are the error logs:
Swift Compiler Error (Xcode): Cannot find 'FLTSharedPreferencesPlugin' in scope
/Users/hazbiy/Data/Kerja/Freelance/Project/Risearrow/Flutter/ios/Runner/AppDelegate.swift:17:12
Encountered error while building for device.
I think it's because I removed shared_preferences from my main package when I moved it into my secondary packages. When I check pubspec.lock, I can't find any line stating that shared_preferences_ios is installed (meanwhile every other shared_preferences platform package is installed, even for web and Linux). Does that mean I need to manually include it in my app? I thought it would be included by the awesome_notifications package.
|
gharchive/issue
| 2022-04-27T01:58:32 |
2025-04-01T06:40:10.332235
|
{
"authors": [
"Divish1032",
"TimeLord2010",
"nurhazbiy"
],
"repo": "rafaelsetragni/awesome_notifications",
"url": "https://github.com/rafaelsetragni/awesome_notifications/issues/480",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
621575314
|
Transaction list fixes
Thank you for submitting this pull request :)
Fixes #1510
Short description
Definition of Done
[x] Steps to manually test the change have been documented
[x] Acceptance criteria are met
[x] Change has been manually tested by the reviewer (dApp)
Steps to manually test the change (dApp)
https://github.com/raiden-network/light-client/issues/1510
I thought that's how it was supposed to look? So all borders should be round?
I thought that's how it was supposed to look? So all borders should be round?
I think you might be right. The icon Sash gave me actually looks like that. I just viewed it in Illustrator.
Oh, OK. It was a bit confusing to fix because I didn't have access to the design.
Thanks for the review 🙏
|
gharchive/pull-request
| 2020-05-20T08:41:46 |
2025-04-01T06:40:10.396965
|
{
"authors": [
"nephix",
"taleldayekh"
],
"repo": "raiden-network/light-client",
"url": "https://github.com/raiden-network/light-client/pull/1550",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
413987241
|
Parallel execution of tests pytest -n 8 leaves partial gas measurement
Currently, when tests are run in parallel, gas measurements happen concurrently and the resulting gas measurement file is corrupt.
Typically the corrupt file contains only part of the measurements:
{
"SecretRegistry.registerSecret": 45757,
"TokenNetwork.closeChannel": 111236,
"TokenNetwork.openChannel": 97555,
"TokenNetwork.setTotalDeposit": 44509,
"TokenNetwork.settleChannel": 123338,
"TokenNetwork.unlock 1 locks": 32128,
"TokenNetwork.unlock 6 locks": 66029,
"TokenNetwork.updateNonClosingBalanceProof": 93752
}
This issue keeps track of, at the least, detecting this in CI and blocking merges with incomplete gas measurements.
This was fixed in 12961bea391e74aacd9f1193bfd08d7a74d349e2
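A common way to avoid clobbered output with pytest-xdist (a sketch under the assumption of per-worker output files; not necessarily what the linked commit does) is to have each worker write its own file and merge them afterwards:

```python
import glob
import json
import os

def write_measurements(results_dir, measurements):
    """Each pytest-xdist worker writes its own file, so concurrent
    workers never clobber one another's output."""
    worker = os.environ.get("PYTEST_XDIST_WORKER", "master")
    path = os.path.join(results_dir, f"gas_{worker}.json")
    with open(path, "w") as f:
        json.dump(measurements, f, indent=2, sort_keys=True)

def merge_measurements(results_dir):
    """Combine the per-worker files into the complete measurement dict."""
    merged = {}
    for path in sorted(glob.glob(os.path.join(results_dir, "gas_*.json"))):
        with open(path) as f:
            merged.update(json.load(f))
    return merged
```

CI can then fail the build if the merged dict is missing any expected measurement key.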
|
gharchive/issue
| 2019-02-25T08:32:56 |
2025-04-01T06:40:10.398737
|
{
"authors": [
"pirapira"
],
"repo": "raiden-network/raiden-contracts",
"url": "https://github.com/raiden-network/raiden-contracts/issues/603",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
406117837
|
HDD Temperature Not Functioning
In the screenshots below you can see that the correct values are selected and that it is displaying temperature values. However, in Rainmeter the temperatures are not updating at all and seem to be stuck. The D: storage is showing the temp for my C: drive, and updating the values in SMV does not seem to do anything either.
Modern Gadgets Version: 1.4.1
HWInfo64 Version: 6.000-3620
https://i.imgur.com/YjHD1Pv.png
https://i.imgur.com/vwkFtTc.png
https://i.imgur.com/GDEQ63O.png
From the screenshots you sent me, it appears that you configured the values for the A: and B: disks, rather than C: and D:. Go back to the SMV and configure the temperatures for Disks C: and D:, and everything should work.
Yup, that seemed to work! My bad; I figured drives A: and B: were just going in order, but it's the letter assigned to each disk. Makes sense now that I'm thinking about it.
|
gharchive/issue
| 2019-02-03T18:50:42 |
2025-04-01T06:40:10.410393
|
{
"authors": [
"XiCynx",
"raiguard"
],
"repo": "raiguard/ModernGadgets",
"url": "https://github.com/raiguard/ModernGadgets/issues/130",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
99365522
|
Heartbeat improvement
Hi @dhh,
From the server side, there's a heartbeat sent from the server to all clients every 3 seconds. I think this is a bit excessive, and it may cause performance issues when dealing with finance or heavy load on the WebSocket.
From the client side, connection_monitor sends the subscribe message every 4-8 seconds, which would increase the server load, especially when all of the clients send the subscribe message at the same time.
The heartbeat and connection_monitor effectively reintroduce the asynchronous notification polling that WebSocket tries to eliminate. I reckon it could be improved by using the socket.onclose event and socket.readyState for reconnection instead of checking for a stale connection:
socket.onclose = function() {
  if (socket.readyState === WebSocket.CLOSED) consumer.connection.reopen();
};
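That onclose idea can be expanded into a client-side reconnection sketch with capped exponential backoff (createSocket is a hypothetical factory; this is not Action Cable's actual ConnectionMonitor):

```javascript
// Reconnect on close with capped exponential backoff instead of a
// polling heartbeat. `createSocket` is a hypothetical factory that
// returns a WebSocket-like object with onopen/onclose hooks.
function monitorConnection(createSocket, { baseDelayMs = 500, maxDelayMs = 30000 } = {}) {
  let attempts = 0;
  function connect() {
    const socket = createSocket();
    socket.onopen = () => { attempts = 0; };  // healthy again: reset backoff
    socket.onclose = () => {                  // connection cut: schedule a retry
      const delay = Math.min(baseDelayMs * 2 ** attempts, maxDelayMs);
      attempts += 1;
      setTimeout(connect, delay);
    };
    return socket;
  }
  return connect();
}
```

The catch, as discussed below, is that close events are not delivered reliably in every browser and network situation, which is exactly why a heartbeat was chosen.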
connection_monitor is sending the subscribe message every 4-8 seconds
Because your users change pages every 4-8 seconds?
We've found that those callbacks were not reliable across all situations
and browsers. The only reliable method we've found has been using a
heartbeat.
How is a heartbeat every 3s per connection a performance problem in your
case?
Hi @javan,
not sure whether it's a bug or not, but if you:
1. Clone the actioncable-example and https://github.com/leckylao/em-websocket-example
2. Change actioncable-example/app/assets/javascripts/channels/index.coffee to point to ws://localhost:3001
3. Run rails s in actioncable-example as the client and ruby app.rb in em-websocket-example as the server
Then you will see "Received: {"command":"subscribe","identifier":"{"channel":"CommentsChannel"}"}" every 4-8 seconds, which I think is due to connection_monitor reopening the connection.
Thanks @dhh for the explanation.
How is a heartbeat every 3s per connection a performance problem in your case?
Some metrics I found:
https://mrotaru.wordpress.com/2013/06/20/12-million-concurrent-connections-with-migratorydata-websocket-server/
In this benchmark scenario, MigratoryData scales up to 12 million concurrent users on a single Dell PowerEdge R610 server while pushing up to 1.015 Gbps of live data (each user receives a 512-byte message every minute). Result: at 12M connections, average latency 268 ms, maximum latency 2024 ms, network utilization 1.015 Gbps.
http://colobu.com/2015/05/22/implement-C1000K-servers-by-spray-netty-undertow-and-node-js/ (sorry, it's in Chinese; opening it in Chrome should translate it)
Netty server: sending a message (the current server time) to all 1.2 million WebSocket connections once per minute, from a single sending thread, takes about 15 seconds in total.
Spray server: the same broadcast, with high CPU usage and fast sending at up to 46 MB of bandwidth, takes about 8 seconds per pass.
Undertow: the same broadcast takes about 15 seconds per pass.
A heartbeat would create a similar load on the server and bandwidth, which I think would cause a performance issue with large numbers of concurrent connections.
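As a rough sanity check on the MigratoryData numbers (my own arithmetic, not from the article):

```javascript
// 12M clients, each receiving one 512-byte message per minute.
const clients = 12_000_000;
const payloadBytes = 512;
const bitsPerSecond = clients * payloadBytes * 8 / 60;
console.log(`payload rate: ${(bitsPerSecond / 1e9).toFixed(3)} Gbps`); // → payload rate: 0.819 Gbps
```

The reported 1.015 Gbps presumably includes WebSocket framing and TCP/IP overhead on top of the raw payload.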
I think you're going to be disappointed if you actually try to do any Ruby
work in these channels and expect that level of performance :). The
heartbeat is just a single piece of text data that requires nothing to
generate. Actual apps will be doing actual things in these channels that
are far slower than what a heartbeat overhead will be.
In any case, I haven't found it to be optional. If you can't reliably tell
when the connection has been cut, then the app might be fast, but it won't
work well.
I think you're going to be disappointed if you actually try to do any Ruby
work in these channels and expect that level of performance :)
Nah, those are just some metrics I found that show the impact better. Another thought: I don't think they implement something like a heartbeat, yet they can still achieve stability at that level :blush:
The heartbeat is just a single piece of text data that requires nothing to
generate. Actual apps will be doing actual things in these channels that
are far slower than what a heartbeat overhead will be.
In most cases WebSocket is used for notifications, which only happen when an event occurs. In that case, I think the heartbeat generates more traffic than the actual jobs.
In any case, I haven't found it to be optional. If you can't reliably tell
when the connection has been cut, then the app might be fast, but it won't
work well.
In that case, how about moving the heartbeat and connection_monitor into an optional configuration, so that they are not enabled by default? If people have trouble with the connection, they can enable them.
Optimizing for correctness out of the box over peak performance seems like a better trade-off to me. If someone actually hits a point in a real application where heartbeat traffic proves to be an issue, we can consider offering a way to turn off heartbeats. Thanks for your consideration, and I look forward to hearing about your Action Cable deployment!
We've found that those callbacks were not reliable across all situations and browsers
In order to further understand the issue, could you provide an example/demo or explain more about it, please? How is it not reliable? Thank you.
http://caniuse.com/#search=websocket shows WebSocket now has 87.23% support, so maybe the issue should be, and has been, resolved natively.
This was in testing across a variety of browsers, restore situations, and
mobile scenarios. Don't have a test suite available for reproduction. Would
be nice to have, but it's unlikely that we're going to invest any time into
this, given that we have a working solution. All this testing was done in
the last 3-4 months, though. Feel free to work on such a suite if you'd
like to advance this.
|
gharchive/issue
| 2015-08-06T06:10:17 |
2025-04-01T06:40:10.452838
|
{
"authors": [
"dhh",
"javan",
"leckylao"
],
"repo": "rails/actioncable",
"url": "https://github.com/rails/actioncable/issues/55",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
101583888
|
Double request
Hi!
I have a strange problem: when I navigate by clicking links between certain pages, Turbolinks processes a double request (FF, Safari, Chrome). It occurs only on two pages.
When I tried to debug, I found strange behavior: the processResponse() method returned a valid doc inside itself, but 'undefined' was assigned to the doc variable inside the fetchReplacement() method.
When I remove the //= require lines for all my scripts, this problem still exists.
When I remove //= require turbolinks, the problem is solved.
Please fix this problem. I have no clue why processResponse() returns 'undefined' when the doc is valid inside that method.
Solved.
It was caused by these two pages having different templates with different assets.
When I added data-no-turbolink to the link, the problem was gone.
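For reference, the opt-out attribute goes on the link itself (the standard Turbolinks Classic escape hatch):

```html
<!-- Turbolinks skips links marked with data-no-turbolink and
     falls back to a full page load. -->
<a href="/other-page" data-no-turbolink>Other page</a>
```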
|
gharchive/issue
| 2015-08-18T06:46:44 |
2025-04-01T06:40:10.580985
|
{
"authors": [
"sedx"
],
"repo": "rails/turbolinks",
"url": "https://github.com/rails/turbolinks/issues/594",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2147983175
|
[bug] wallet_addEthereumChain or useSwitchChain on Rainbow Wallet not working
Is there an existing issue for this?
[X] I have searched the existing issues
RainbowKit Version
2.0.0
wagmi Version
2.5.6
Current Behavior
I already added a custom chain to getDefaultConfig.
When I used switchChain from wagmi's useSwitchChain for Rainbow wallet to switch to this chain, an error occurred like this:
When I used switchChain for other wallets like MetaMask, it worked, but Rainbow didn't.
When I tried using wallet_addEthereumChain to add this chain to Rainbow wallet, the error occurred even though I approved the addition of the chain. I checked the Rainbow wallet networks; the custom chain was already added, but the error still showed:
Expected Behavior
When I use switchChain from wagmi's useSwitchChain for Rainbow wallet to switch to a custom chain, it should add the custom chain to Rainbow wallet and switch to it.
If I use wallet_addEthereumChain, Rainbow wallet should add the chain and not show any error.
Steps To Reproduce
No response
Link to Minimal Reproducible Example (CodeSandbox, StackBlitz, etc.)
No response
Anything else?
No response
@kingnight153 Could you please tell what chain you're switching to ?
Also would be super helpful if you can show your RainbowKit and Wagmi configuration setup 🙏
Here is the chain info: https://chainlist.org/?search=carbon
Yeah, so that's a network unsupported by Rainbow wallet. We are working on improving custom network support, so hopefully it should be fixed soon. What you can do for now is add the network manually and then try to switch to the chain.
@kingnight153 Thanks for reporting. Bumping this to our browser-extension repo to further investigate
|
gharchive/issue
| 2024-02-22T01:14:33 |
2025-04-01T06:40:10.607467
|
{
"authors": [
"DanielSinclair",
"KosmosKey",
"kingnight153"
],
"repo": "rainbow-me/rainbowkit",
"url": "https://github.com/rainbow-me/rainbowkit/issues/1791",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1689461544
|
🛑 RiotSkin4CN-Download is down
In bac859d, RiotSkin4CN-Download (https://s-cd-4307-general.oss.dogecdn.com/riotskin4cn/live.json) was down:
HTTP code: 0
Response time: 0 ms
Resolved: RiotSkin4CN-Download is back up in f5d1eb9.
|
gharchive/issue
| 2023-04-29T08:55:07 |
2025-04-01T06:40:10.618199
|
{
"authors": [
"rainzee"
],
"repo": "rainzee/RiotSkin4CN-Status",
"url": "https://github.com/rainzee/RiotSkin4CN-Status/issues/1144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
35391960
|
Canvas upon player control
The slideshow-recorder.js plugin puts a new canvas on top of the player control of reveal.js. The plugin has to be updated if reveal.js changes the way it draws the player control.
Redesigned the plugin to use the default audio player instead of auto-sliding (commit 8b5fa5dc12)
|
gharchive/issue
| 2014-06-10T15:12:16 |
2025-04-01T06:40:10.620552
|
{
"authors": [
"rajgoel"
],
"repo": "rajgoel/reveal.js",
"url": "https://github.com/rajgoel/reveal.js/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2557018497
|
🛑 Raju Prasad is down
In 7bc3934, Raju Prasad (https://www.rajuprasad.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Raju Prasad is back up in 5635478 after 11 minutes.
|
gharchive/issue
| 2024-09-30T15:31:53 |
2025-04-01T06:40:10.635278
|
{
"authors": [
"rajuprasad-dev"
],
"repo": "rajuprasad-dev/upptime-monitor",
"url": "https://github.com/rajuprasad-dev/upptime-monitor/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
179072212
|
without method doesn't use equals like it says in the documentation
I'm on version 0.21.
While trying to use the without function to remove exact strings from an array of strings, some items are being removed (mistakenly?) due to the usage of the flip + contains methods.
Example:
R.without("ab", ["a","ab"])
// output: []
I would expect the output to be ["a"] because it's a completely different item (although it is contained in the string "ab")
You can run this code here
You have a type error:
without('ab', ['a', 'ab']);
// ! TypeError: Invalid value
//
// without :: Array a -> Array a -> Array a
// ^^^^^^^
// 1
//
// 1) "ab" :: String
//
// The value at position 1 is not a member of ‘Array a’.
Correct usage:
without(['ab'], ['a', 'ab']);
// => ['a']
See #1912.
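The documented value-equality semantics can be sketched in plain JavaScript (equalish is a simplified stand-in for R.equals, not Ramda's implementation):

```javascript
// Simplified model of R.without: remove every list element that is
// value-equal to some member of the exclusion array. "ab" is never
// equal to "a", so only exact members are removed.
const equalish = (a, b) => JSON.stringify(a) === JSON.stringify(b); // stand-in for R.equals
const without = (xs, list) => list.filter(x => !xs.some(y => equalish(y, x)));

console.log(without(['ab'], ['a', 'ab'])); // → [ 'a' ]
```

Because comparison is by value rather than by reference, this also removes structurally equal objects, which is the behaviour the real R.equals provides.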
|
gharchive/issue
| 2016-09-25T06:57:08 |
2025-04-01T06:40:10.739161
|
{
"authors": [
"davidchambers",
"portons"
],
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/issues/1918",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
110014370
|
#1417: Added dispatchables to docstrings
See #1417
Right now it is undocumented behaviour that some functions actually dispatch to a method. This PR adds a sentence to the docstrings noting that this dispatching happens.
:deciduous_tree:
Can a native English speaker explain why not:
Dispatches to the takeWhile method of the second argument, if present
Can a native English speaker explain why not:
Dispatches to the takeWhile method of the second argument, if present
I like your version better, @raine! What do you think, @branneman?
Updated!
One of my commits snuck into your branch. Could you remove it?
Yeah, the squashing went wrong somehow. Gimme a minute.
Ugh. Think it's ok now. Is it?
Looks good! Merging.
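To illustrate what "dispatches to the takeWhile method of the second argument" means in practice, here is a minimal, hypothetical dispatcher in plain JavaScript (not Ramda's internal _dispatchable helper):

```javascript
// Hypothetical sketch of Ramda-style method dispatching: if the second
// argument exposes a method with the given name, delegate to it;
// otherwise fall back to the default implementation.
function dispatchable(methodName, fn) {
  return function (pred, list) {
    if (list != null && typeof list[methodName] === 'function') {
      return list[methodName](pred); // dispatch to the object's own method
    }
    return fn(pred, list);
  };
}

const takeWhile = dispatchable('takeWhile', function (pred, list) {
  const out = [];
  for (const x of list) {
    if (!pred(x)) break;
    out.push(x);
  }
  return out;
});

// Plain array: uses the fallback implementation.
console.log(takeWhile(x => x < 3, [1, 2, 3, 4])); // [1, 2]

// Custom collection: dispatches to its own takeWhile method.
console.log(takeWhile(x => x < 3, { takeWhile: pred => 'dispatched' })); // 'dispatched'
```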
|
gharchive/pull-request
| 2015-10-06T14:02:51 |
2025-04-01T06:40:10.743359
|
{
"authors": [
"branneman",
"davidchambers",
"raine"
],
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/pull/1428",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2509878655
|
feat: add lowerFirst string
Converts the first character of a string to lowercase.
IMHO: It looks too specific to add this to the library.
IMHO: It looks too specific to add this to the library.
Agreed. I think this comment on another recent PR is relevant:
https://github.com/ramda/ramda/pull/3488#issuecomment-2335287734
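One reason it reads as too specific: a userland version is a one-liner (hypothetical sketch, not the PR's actual implementation):

```javascript
// Lowercase only the first character, leaving the rest untouched.
const lowerFirst = s => (s.length ? s[0].toLowerCase() + s.slice(1) : s);

console.log(lowerFirst('Hello World')); // 'hello World'
console.log(lowerFirst(''));            // ''
```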
|
gharchive/pull-request
| 2024-09-06T08:49:24 |
2025-04-01T06:40:10.745114
|
{
"authors": [
"CrossEye",
"nhannt201",
"yurkimus"
],
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/pull/3487",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1065760620
|
Add packages
Adding packages while I tried to get OdroidC2 working locally. Sadly I don't get eth0 after boot (although I can see u-boot trying to get an IP), so I cannot test upgrades, resets and the full cOS feature set. I'm suspecting the kernel, but it requires quite some time to experiment. This change adds firmware packages, firmware and dtbs to the example image.
Another approach would be to test with the old u-boot and old kernel, but it would be suboptimal to release with those (for the record, here is a spec that builds the Odroid kernel: https://github.com/rancher-sandbox/cOS-toolkit/commit/c31ebbd1ac151883b49a74fbf90955c38841eb3f).
Also, this PR adds packages to the toolchain image in order to run the arm image script successfully. For reference, that's where I was experimenting with : https://github.com/mudler/cos-embedded-images/
shouldn't affect CI, and OdroidC2 images are built only on master.. merging!
|
gharchive/pull-request
| 2021-11-29T08:46:44 |
2025-04-01T06:40:10.797083
|
{
"authors": [
"mudler"
],
"repo": "rancher-sandbox/cOS-toolkit",
"url": "https://github.com/rancher-sandbox/cOS-toolkit/pull/900",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
343236358
|
default service_event for ipsec to up if global flag is true
https://github.com/rancher/rancher/issues/14668
LGTM. TF is unrelated
|
gharchive/pull-request
| 2018-07-20T20:23:04 |
2025-04-01T06:40:10.798239
|
{
"authors": [
"alena1108",
"kinarashah"
],
"repo": "rancher/cattle",
"url": "https://github.com/rancher/cattle/pull/3226",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
113095032
|
Bug fixes
Cleanup instance from planner on its removal
When an instance with missing dependencies gets removed by the deployment planner, we should remove it from the planner's list of instances. Otherwise it would participate in further deployments, and further down the line would hit the timeout waiting for this instance to reach a valid state (Running/Stopped, etc.)
Don't schedule service config.update on service in updating-active/activating state. Reconcile process invoked by update/activate operation, can result in some instances being stopped/destroyed. And stop/destroy processes have config.upgrade hooks resulting in subsequent reconcile requests.
At the end of every activate/update process, we double-check that the reconcile has finished successfully, and if not, the process gets rescheduled.
These 2 fixes above are supposed to fix the bugs related to service reconcile slowness, like:
https://github.com/rancher/rancher/issues/2044
Validate service selector on instance.restore
https://github.com/rancher/rancher/issues/2338
I feel like you need better PR titles. More uplifting like "Vast improvement in functional correctness"
@ibuildthecloud :) will improve the naming in the future PRs
|
gharchive/pull-request
| 2015-10-23T20:37:56 |
2025-04-01T06:40:10.801721
|
{
"authors": [
"alena1108",
"ibuildthecloud"
],
"repo": "rancher/cattle",
"url": "https://github.com/rancher/cattle/pull/984",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1503340465
|
Unable to Deactivate multiple Cluster Drivers and Node Drivers
Setup
Rancher version: v2.7-head (5df9701)
Browser type & version: Firefox
Describe the bug
Nothing happens when attempting to deactivate Cluster Drivers or Node Drivers.
To Reproduce
The issue applies to both Cluster Drivers and Node Drivers
Navigate to Cluster Management => Drivers => Cluster/Node Drivers
Select all active drivers
Click Deactivate
Result
Nothing happens, no error in console.
Expected Result
Confirmation dialog is shown, positive selection results in Cluster/Node drivers becoming deactivated.
Additional context
This looks to be an issue with the confirmation dialog.
This only happens when attempting to delete multiple cluster/node drivers.
Holding ctrl while clicking delete (which skips the confirmation dialog) will perform the action as expected.
@rak-phillip, can you update this ticket to include both Cluster Drivers and Node Drivers so that they are both fixed and tested?
@jameson-mcghee I updated to call out both cluster & node drivers
During testing on v2.7-head (Commit ID: e54432e) I was able to verify that the confirmation prompt is now being generated when attempting to Deactivate multiple Cluster Drivers or Node Drivers, and users are able to successfully Deactivate multiple Cluster Drivers or Node Drivers. Therefore I am closing this ticket as Done.
v2.7-head (Commit ID: e54432e):
Cluster Drivers:
Node Drivers:
|
gharchive/issue
| 2022-12-19T17:55:57 |
2025-04-01T06:40:10.810532
|
{
"authors": [
"jameson-mcghee",
"rak-phillip"
],
"repo": "rancher/dashboard",
"url": "https://github.com/rancher/dashboard/issues/7762",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1405852152
|
Remaining shell items for Elemental in 2.7.0
Adds the remaining shell changes, which are completely isolated from any reference to Elemental, in 2.7.0
Codecov Report
Base: 36.86% // Head: 36.87% // Increases project coverage by +0.00% :tada:
Coverage data is based on head (ef6a419) compared to base (316889a).
Patch has no changes to coverable lines.
Additional details and impacted files
@@ Coverage Diff @@
## master #7168 +/- ##
=======================================
Coverage 36.86% 36.87%
=======================================
Files 986 986
Lines 17633 17633
Branches 4541 4541
=======================================
+ Hits 6501 6502 +1
+ Misses 11132 11131 -1
Flag | Coverage Δ
e2e | 47.41% <ø> (+<0.01%) :arrow_up:
merged | 36.87% <ø> (+<0.01%) :arrow_up:
unit | 5.21% <ø> (ø)
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files | Coverage Δ
shell/components/GrowlManager.vue | 0.00% <ø> (ø)
shell/components/Import.vue | 0.00% <ø> (ø)
shell/components/form/MatchExpressions.vue | 100.00% <ø> (ø)
shell/detail/provisioning.cattle.io.cluster.vue | 0.00% <ø> (ø)
...dit/provisioning.cattle.io.cluster/MachinePool.vue | 0.00% <ø> (ø)
shell/plugins/steve/steve-class.js | 33.33% <0.00%> (-16.67%) :arrow_down:
shell/utils/socket.js | 48.52% <0.00%> (-12.75%) :arrow_down:
shell/store/growl.js | 23.33% <0.00%> (-3.34%) :arrow_down:
shell/plugins/steve/subscribe.js | 65.93% <0.00%> (-1.93%) :arrow_down:
shell/models/provisioning.cattle.io.cluster.js | 63.72% <0.00%> (-0.32%) :arrow_down:
... and 9 more
|
gharchive/pull-request
| 2022-10-12T09:09:01 |
2025-04-01T06:40:10.829094
|
{
"authors": [
"aalves08",
"codecov-commenter"
],
"repo": "rancher/dashboard",
"url": "https://github.com/rancher/dashboard/pull/7168",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
305415677
|
Missing fields when create a storageClass
rancher/server:master 03/14
allowVolumeExpansion and parameters of storageClass are not populated in rancher API.
UI sent the allowVolumeExpansion and parameters to backend for creating a storageClass.
But rancher server doesn't populate them.
The name is the ID so it's not going to be updateable. Disable the field in edit
parameters is present in latest master.
I can't see the allowVolumeExpansion field in the UI, and @loganhz found the code related to allowVolumeExpansion is commented out. @vincent99 do we still need this field?
As far as I remember volume expansion is alpha and off by default and there's no obvious way to tell if it's on to show the option, so I commented it out.
Version - 2.0 master 3/29
Verified fixed
Are there plans to add this option?
|
gharchive/issue
| 2018-03-15T04:57:17 |
2025-04-01T06:40:10.859365
|
{
"authors": [
"loganhz",
"vincent99",
"walkafwalka",
"zionwu"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/12116",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
311022076
|
Unable to add Ingress when removing second rule
Rancher versions: 2.0
Steps to Reproduce:
Add Ingress
Fill out all fields for first rule (Request Host, Path, Target, Port)
Add second rule.
Remove second rule.
Click Save.
Other information:
For some reason this only occurs if there is any information in the path field. If that value is empty, I am unable to save new rule after performing the listed steps.
Results:
Unable to save new rule. Error message states port is still required.
Version - 2.0 master 4/11
Verified fixed
|
gharchive/issue
| 2018-04-03T22:35:30 |
2025-04-01T06:40:10.861817
|
{
"authors": [
"adingilloRancher",
"tfiduccia"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/12478",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
356347307
|
The port that I exposed was also accessible at the beginning
Rancher versions:
rancher/rancher:v2.0.8
rancher/rancher-agent:v2.0.8
Docker version:
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
Experimental: false
Operating system and kernel: (cat /etc/os-release, uname -r preferred)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
3.10.0-514.el7.x86_64
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
VirtualBox
Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB)
single node rancher
Environment Template: (Cattle/Kubernetes/Swarm/Mesos)
Kubernetes
Steps to Reproduce:
The port that I exposed was also accessible at the beginning. All other namespaces are normal.
Might be the same as https://github.com/rancher/rancher/issues/15372 ?
If you can access your port after you have deleted the network policy in your namespace, then it is the same issue, I guess.
Then how can we solve this problem? Our situation is the same.
Can you try it in 2.1.1, please?
If you can reproduce it, can you provide the reproduce steps and the logs, please?
|
gharchive/issue
| 2018-09-03T03:29:05 |
2025-04-01T06:40:10.871280
|
{
"authors": [
"PeterZhai00",
"loganhz",
"sebastiansirch"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/15374",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
389871912
|
Ensure Rancher system pods are not killed by kubelet eviction manager
What kind of request is this (question/bug/enhancement/feature request):
Enhancement
Steps to reproduce (least amount of steps as possible):
Starve the free space in the root filesystem on a node that runs system-level pods (e.g. ingress-controller, pipeline jenkins or minio, prometheus, etc) by allocating disk space until kubelet's hard eviction threshold is crossed (by default nodefs.available<10%).
E.g. Create a large file with $ dd bs=1M if=/dev/zero of=/bigfile.tmp count=...
Result:
System-level pods are evicted/killed even before user pods (e.g. nginx-ingress-controller, Pipeline's Minio or Jenkins) resulting in service disruption (ingress) or data loss (pipeline build logs).
Warning Evicted Container nginx-ingress-controller was using 252Ki, which exceeds its request of 0.
The node was low on resource: ephemeral-storage.
Killing container with id docker://nginx-ingress-controller:Need to kill Pod nginx-ingress-controller-r7q7s.156f4e7f716145fe
Reason: Kubelet might evict a pod from its node when the node’s ephemeral storage is exhausted. Since these system-level pods have no resource requests/limits specified for ephemeral storage, they are targeted first for eviction.
Expected Result:
System-level pods should never be killed/evicted by kubelet when resources are starved on a node or at least they should be selected for eviction with lower priority than user pods.
The kubelet ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests, then by Priority, and then by the consumption of the starved compute resource relative to the Pods’ scheduling requests.
(...)
Guaranteed pods and Burstable pods whose usage is beneath requests are evicted last. Guaranteed Pods are guaranteed only when requests and limits are specified for all the containers and they are equal. Such pods are guaranteed to never be evicted because of another Pod’s resource consumption.
source
In general, it is strongly recommended that DaemonSet not create BestEffort Pods to avoid being identified as a candidate Pod for eviction. Instead DaemonSet should ideally launch Guaranteed Pods.
source
Solutions that should be explored to address this:
Configure the pods of system-level services such that they are assigned a QOS class of Guaranteed and are thus "guaranteed to never be evicted because of another Pod’s resource consumption." This would imho require specifying the following requests/limits for the pods:
spec.containers[].resources.limits.memory
spec.containers[].resources.requests.memory
spec.containers[].resources.limits.ephemeral-storage
spec.containers[].resources.requests.ephemeral-storage
Mark these pods as critical so they are protected from eviction: https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
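A minimal sketch of the first option, as a hypothetical pod spec fragment (values are illustrative only and would need tuning per deployment; Kubernetes requires equal CPU and memory requests/limits on every container for the Guaranteed class):

```yaml
# Hypothetical fragment -- equal requests and limits on every container
# yield the Guaranteed QoS class; the ephemeral-storage request additionally
# protects against the "exceeds its request of 0" eviction ranking.
containers:
  - name: nginx-ingress-controller
    resources:
      requests:
        cpu: "100m"
        memory: "256Mi"
        ephemeral-storage: "512Mi"
      limits:
        cpu: "100m"
        memory: "256Mi"
        ephemeral-storage: "512Mi"
```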
Environment information
Rancher version 2.1.3
Kubernetes version v1.11.3
Is there any update on this? I think I'm having the same issue.
I'm getting such error for the cattle-system pods
The node was low on resource: ephemeral-storage. Container nginx-ingress-controller was using 56Ki, which exceeds its request of 0.
The node was low on resource: ephemeral-storage. Container agent was using 60Ki, which exceeds its request of 0.
|
gharchive/issue
| 2018-12-11T17:26:12 |
2025-04-01T06:40:10.880348
|
{
"authors": [
"jama707",
"janeczku"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/17049",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
784833308
|
[RFE] vSphere node driver - Need option to set 'numCoresPerSocket' in node template
What kind of request is this:
Feature request
Description
As of now, we can set the number of CPUs while defining the node template
But in some cases, the Cores Per Socket needs to be tuned (details are in the below blog post)
https://blogs.vmware.com/performance/2017/03/virtual-machine-vcpu-and-vnuma-rightsizing-rules-of-thumb.html
The Go library provided by VMware has the option to change this parameter.
govc vm.change -vm master-01 -e cpuid.coresPerSocket=1
gz#14317
Hello,
I also have the same need.
It's strange that few people are interested in this subject, which would allow better use of virtualization.
Does someone have a tip?
|
gharchive/issue
| 2021-01-13T06:37:01 |
2025-04-01T06:40:10.883524
|
{
"authors": [
"Eric-TAS",
"ansilh"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/30814",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1028264490
|
Import cluster command - Bug when more than first apply
Rancher Server Setup
Rancher version: 2.6.1
Installation option (Docker install/Helm Chart): Helm Chart
If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): EKS 1.20
Information about the Cluster
Kubernetes version: EKS version 1.21
Cluster Type (Local/Downstream): downstream
If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): imported
NB It's a simple Generic cluster import and not managed type EKS import
Describe the bug
The cluster import command cannot be applied more than once, so it is no longer idempotent, as a Kubernetes deployment should be.
To Reproduce
Step 1) EKS Cluster creation OK
Step 2) Rancher Cluster creation using Generic Cluster Type OK
Step 3) Execute the import command : OK
kubectl apply -f https://rancher-instance.com/v3/import/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxj.yaml
and the imported cluster is well configured in Rancher
Step 4) Re-execute the import command: NOK
kubectl apply -f https://rancher-instance.com/v3/import/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxj.yaml
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver unchanged
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master unchanged
namespace/cattle-system unchanged
serviceaccount/cattle unchanged
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding unchanged
secret/cattle-credentials-2735ae9 unchanged
clusterrole.rbac.authorization.k8s.io/cattle-admin unchanged
deployment.apps/cattle-cluster-agent configured
service/cattle-cluster-agent unchanged
If the import command is applied a second time, the imported cluster stays active, but the cluster can no longer be explored within Rancher:
As seen in the previous command output, the cattle-cluster-agent is reconfigured, and perhaps this is what is causing the problem. Either way, we have a problem, as the import command is not idempotent.
Result
Impossible to explore the cluster anymore
Expected Result
The cluster should stay accessible via Rancher
Still valid
Still valid
|
gharchive/issue
| 2021-10-17T09:09:34 |
2025-04-01T06:40:10.891680
|
{
"authors": [
"all4innov"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/35151",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2699716183
|
[BUG] rancher-monitoring ignoreNamespaceSelectors has wrong default value
The value of prometheus.prometheusSpec.ignoreNamespaceSelectors was set to "true" by accident during an old rebase. This diverges from upstream's default behavior seemingly for no special reason so it must be changed back to "false".
Additional context
SURE-9374
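For reference, the corrected default corresponds to this fragment of the chart's values (a sketch of the relevant keys, not the full values file):

```yaml
prometheus:
  prometheusSpec:
    # Reverted to upstream's default so namespace selectors on
    # ServiceMonitors/PodMonitors are honored again.
    ignoreNamespaceSelectors: false
```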
The bug related to the accidental setting of prometheus.prometheusSpec.ignoreNamespaceSelectors to true has been verified as resolved. After conducting tests on an RKE2 cluster, I can confirm that the value has been correctly reverted to false, aligning with upstream defaults.
Details:
Rancher Version: v2.10-head
Cluster Version: v1.31.3-rc1+rke2r1
Monitoring Chart Version: 105.1.1-rc.1+up61.3.2
The attached screenshot verifies the corrected configuration. The cluster and monitoring behavior are now consistent with the expected standards, ensuring no deviations from the upstream specifications.
|
gharchive/issue
| 2024-11-27T20:27:14 |
2025-04-01T06:40:10.895929
|
{
"authors": [
"deepakpunia-suse",
"jbiers"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/48199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
170288189
|
Enhancement Support CPU Quota and CPU period from compose/UI
Cattle doesn't support cpu_quota and cpu_period from compose. We need to add these to our launch configs.
It looks like libcompose is already able to support CPUQuota, but it doesn't make it to the host.
Closing in favor of the PRs that will allow us to support the missing cattle fields:
Cattle Changes: https://github.com/rancher/rancher/issues/4708
Rancher-compose: https://github.com/rancher/rancher/issues/6280
Exporting yml: https://github.com/rancher/rancher/issues/6281
UI: https://github.com/rancher/rancher/issues/6282
|
gharchive/issue
| 2016-08-09T22:39:59 |
2025-04-01T06:40:10.898899
|
{
"authors": [
"cloudnautique",
"deniseschannon"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/5677",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
233712608
|
Port etcd-operator to Rancher
Port CoreOS etcd-operator to Rancher orchestration platform. This will enable fully autonomous etcd clusters that self-heal, even in disaster situations such as majority member failure.
With the release of Rancher 2.0, development on v1.6 is only limited to critical bug fixes and security patches.
|
gharchive/issue
| 2017-06-05T21:13:19 |
2025-04-01T06:40:10.900187
|
{
"authors": [
"LLParse",
"loganhz"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/8968",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
252383077
|
EC2 tag UI issues
Rancher versions: v1.6.8-rc5
Results:
[x] Rename Tags (EC2) to EC2 Tags
[x] Add more space between Tags section and IAM Profile sections
[x] Make the Tags section full width like Labels is below it
[x] Should not allow comma in the EC2 Tag sections (no commas in Tag or Value sections)
Version - v1.6.8-rc6
Verified fixed
|
gharchive/issue
| 2017-08-23T18:59:19 |
2025-04-01T06:40:10.903884
|
{
"authors": [
"tfiduccia"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/9744",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
448020176
|
Support self-signed server CA chain on Windows
Problem:
Currently only a CA chain with a single CA is considered
Solution:
Split the multiple CAs out of $SSL_CERT_DIR\serverca, then import them one by one
Issue:
https://github.com/rancher/rancher/issues/20436
LGTM
|
gharchive/pull-request
| 2019-05-24T07:30:33 |
2025-04-01T06:40:10.905835
|
{
"authors": [
"loganhz",
"thxCode"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/pull/20441",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1004738095
|
Make impersonation service account more resilient
Currently, the impersonation path checks for the existence of a matching
impersonation clusterrole on the downstream cluster and then assumes
there is an impersonation service account to go with it. If for some
reason there was an interruption in between creating the clusterrole and
creating the service account, the service account won't exist and
further requests will be perpetually stuck. This change ensures that
won't happen by checking for a Not Found error when retrieving the
service account, and continuing to create the needed resources in such
an event. This also fixes an unclear error message to clarify that the
missing resource is the service account, not the secret, and to
distinguish it from another similar error message.
https://github.com/rancher/rancher/issues/34824
The function checks the role first; if the role does not exist, then this whole section is skipped https://github.com/rancher/rancher/pull/34861/files#diff-fe913e39db39d26629ffb0d1e83c60678ce415c81af03283120afeec3233f011R57-R63
|
gharchive/pull-request
| 2021-09-22T20:18:42 |
2025-04-01T06:40:10.908811
|
{
"authors": [
"cmurphy"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/pull/34861",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
360090180
|
large numbers in yaml output are showing up in scientific notation
Steps:
rio run -n tstk1/tservice1 --memory-limit 150m nginx
rio inspect --format yaml tstk1/tservice1
Results: Notice that all large numbers are shown in scientific notation (for instance memoryLimitBytes is 1.572864e+08) instead of as regular numbers.
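As a side note, the two spellings denote the same value; the issue is purely one of formatting. A quick check (illustrative JavaScript; Rio's YAML marshaling is Go, not JS):

```javascript
// 1.572864e+08 and 157286400 are the same number (150m = 150 * 1024 * 1024 bytes).
const bytes = 1.572864e8;
console.log(bytes === 157286400); // true

// Emitting it as a fixed integer string avoids scientific notation:
console.log(bytes.toFixed(0)); // '157286400'
```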
no longer valid
|
gharchive/issue
| 2018-09-13T22:30:56 |
2025-04-01T06:40:10.910371
|
{
"authors": [
"tfiduccia"
],
"repo": "rancher/rio",
"url": "https://github.com/rancher/rio/issues/50",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
124264760
|
Fixing various style bugs
Added a new container-flex and col-flex to replace the weird multi-stat container on the hosts, VMs, and container details pages. Flex works much better, as it gives us a full-height column without a lot of hackery.
@westlywright Can you do the same on the service view? Or whatever view this is http://localhost:8000/apps/1e16/services/1s121/containers
@ibuildthecloud taken care of
I'm not sure I see how flexbox is an improvement...
vs old:
|
gharchive/pull-request
| 2015-12-29T20:33:01 |
2025-04-01T06:40:10.929926
|
{
"authors": [
"ibuildthecloud",
"vincent99",
"westlywright"
],
"repo": "rancher/ui",
"url": "https://github.com/rancher/ui/pull/419",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
64390045
|
[UI][Private Registry] When adding a credential, there is no email format validation done for the email field.
Version - V013.0
There should be some basic email format validation check done when entering email for in Credential creation in "Add Registry" and "Add Credential" page.
The field is called "email" but the registry can do whatever it wants with it and Docker does not require to actually be an email.
It's marked required in the API so you have to enter something in it now, but I don't want to get into does-it-look-like-a-valid-email when that isn't even a requirement from Docker.
v0.17.0
Verified fixed
|
gharchive/issue
| 2015-03-25T22:47:47 |
2025-04-01T06:40:10.931841
|
{
"authors": [
"sangeethah",
"tfiduccia",
"vincent99"
],
"repo": "rancherio/rancher",
"url": "https://github.com/rancherio/rancher/issues/307",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
68985005
|
Deleting and restoring a service's container loses association with service
v0.16.0
Steps:
Create a Environment
Add a service
Go to Container page
Delete and restore service container
Go back to Service page
Results:
Service container no longer shows up. If I delete and restore from service page, it stays. If I delete from service page, but restore from container, it's removed.
Expected:
Should stay until purged.
Even when we attempt to delete and restore from the service page, the container is removed from the service. Notice that when you refresh, the service no longer has this container associated with it.
Removing UI tag from this bug , since it reproduces from both the container view and service view.
v0.18.1
Verified fixed
|
gharchive/issue
| 2015-04-16T18:40:07 |
2025-04-01T06:40:10.935273
|
{
"authors": [
"sangeethah",
"tfiduccia"
],
"repo": "rancherio/rancher",
"url": "https://github.com/rancherio/rancher/issues/527",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
251953080
|
Update Dependencies via Dependable
Update Dependencies via Dependable
Dependency updates
Gem | Previous Version | New Version | Source | Diff
byebug | 9.0.6 | 9.1.0 | deivid-rodriguez/byebug | view
Hi randalldrjr/roth I need to contact to discuss something important for
you about your platform
please contact to me at cnanita@peerbanks.org
|
gharchive/pull-request
| 2017-08-22T13:15:44 |
2025-04-01T06:40:10.952524
|
{
"authors": [
"peerbanks",
"randallreedjr"
],
"repo": "randallreedjr/roth_ira",
"url": "https://github.com/randallreedjr/roth_ira/pull/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
253103594
|
Run tests in multiple threads
I've been using a multithreaded test runner I wrote for a while, and I was trying to see if there was any way to use it with, or maybe contribute it to, this library. My runner runs each test case in its own thread; I get very nice speedups with it, and it can also be used to uncover some race conditions by allowing test cases to access shared state in a test class. I was poking around in here and found this comment in RandomizedRunner.runSuite:
// NOTE: this effectively means we can't run concurrent randomized runners.
final UncaughtExceptionHandler previous = Thread.getDefaultUncaughtExceptionHandler();
Does this mean it doesn't (and can't) run tests in multiple threads? I'm not sure where the restriction takes effect: is it global, per-suite, per-class or per-case?
Hi Michael. The reason we don't do multi-threaded parallel test execution is very simple: the JVM doesn't allow full sandboxing of tests and our primary goal is full reproducibility from a single seed. Think of the globals that can be conflicting -- thread management (as you quoted), security policies, system properties, class loading order (and static initializers). For very simple tests that don't interact with the environment parallel execution is great, but anything beyond that is a problem.
A half-baked solution to this is implemented in the ANT runner for randomized testing -- it splits the set of test suites into multiple JVMs (load-balancing tests execution between them). This requires multiple JVMs to start (overhead), but if you have many test suites, the concurrency is quite all right.
In short: it is possible to run tests in parallel within a single JVM, but there are deep-rooted potential more problems with it, so it won't be implemented as part of this project.
I understand the concern about repeatability. It's certainly true that you cannot guarantee a failure will be reproducible in the face of multi-threading in the general case. However I think that's also true of tests which themselves spawn multiple threads, so in some sense this is an impossibly high standard.
In addition to the perf gain for simpler tests, I've found the multithreaded test runner to be useful in cases where I explicitly want to test some supposedly thread-safe class by running tasks that call its methods from multiple threads. In such a case I'm not sure how one could ever guarantee reproducibility though?
Anyway, I still think this could be useful as a selectable option, say by annotating the test class in cases where the test writer explicitly want to enable it.
Tests that fork multiple threads are non-reproducible if they don't have a state. But still -- if something fails you at least get a chance to repeat the same test execution (with the same randomization seed), even if you re-run it multiple times just to see whether the failure is intermittent or permanent for a given seed. Randomized runner tries to ensure every thread forked from the same test has the same initial randomization seed, so it's not really dumb -- it does its best.
bq. I've found the multithreaded test runner to be useful in cases where I explicitly want to test some supposedly thread-safe class by running tasks that call its methods from multiple threads.
I think a better design here would be to create a single test, then fork a number (randomized!) of threads and pound your business logic from those threads (but still within a single test!).
Believe me I've been down the route of trying to come up with a clean multithreaded separation of tests within a single JVM, but it quickly hits a showstopper -- be it system property-dependent initialization of something (which many third-party libraries use), a race condition on a singleton somewhere, configured once... you name it.
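The single-test pattern suggested above — fork a randomized number of threads inside one test and pound the business logic from all of them — can be sketched roughly as follows. This is a Python illustration of the idea only, not the RandomizedRunner API; the `stress` helper and `Counter` class are made up for the example:

```python
import random
import threading

def stress(target, n_threads=None, ops_per_thread=1000, seed=42):
    """Call `target` from many threads at once and collect any exceptions."""
    rng = random.Random(seed)              # fixed seed -> reproducible thread count
    n_threads = n_threads or rng.randint(2, 8)
    errors = []

    def worker():
        try:
            for _ in range(ops_per_thread):
                target()
        except Exception as exc:           # surface worker failures to the test
            errors.append(exc)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return errors

# Pound a (supposedly) thread-safe counter from several threads at once.
class Counter:
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        with self._lock:
            self.value += 1

counter = Counter()
assert stress(counter.increment, n_threads=4, ops_per_thread=500) == []
assert counter.value == 2000
```

The thread count and per-thread workload are drawn from a seeded RNG, so the *shape* of the run is reproducible even though the interleaving is not.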
bq. Anyway, I still think this could be useful as a selectable option, say by annotating the test class in cases where the test writer explicitly want to enable it.
Useful -- maybe. But misleading -- for sure. I bet people would abuse this and then be stuck at how to reproduce a particular problem. If you're looking for difficult-to-reproduce issues even in the current setup, look at Apache Solr tests -- they are notorious for failing on one machine, but not on another...
The better you can isolate a single test run, the easier it is to debug. If you're unhappy about it -- please go ahead and fork the project or create your own runner! I bet somebody wrote something like this already (scan JUnit's mailing list).
Yeah, OK. I do have my own runner already. It's not very complicated to write. I'm working in a project that is borrowing LuceneTestCase so trying to find a way to adapt to that. Perhaps we'll just use this runner in some classes and LuceneTestCase in others. Anyway, thanks for the thoughtful response.
You're welcome!
|
gharchive/issue
| 2017-08-26T15:59:23 |
2025-04-01T06:40:10.982099
|
{
"authors": [
"dweiss",
"msokolov"
],
"repo": "randomizedtesting/randomizedtesting",
"url": "https://github.com/randomizedtesting/randomizedtesting/issues/252",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
192905200
|
Search for tooltip in the grid (maybe even NBT)
I was trying to find a location to put a feature request, but haven't been able to find a place. Hope this is OK to put here.
I think it would be nice to be able to search NBT data, so when you're looking for, say, an enchanted book, you could search for the enchant you want, and it will find them in the system :)
Can't you just do #efficiency to find for example efficiency enchanted tools?
Holy crap that works :) I didn't realize!!! Thanks!
|
gharchive/issue
| 2016-12-01T17:05:02 |
2025-04-01T06:40:11.037197
|
{
"authors": [
"DigitalSketch",
"raoulvdberge"
],
"repo": "raoulvdberge/refinedstorage",
"url": "https://github.com/raoulvdberge/refinedstorage/issues/698",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
197093057
|
Autocrafting .. craft with oredict
Issue description:
When i request to craft 64 sticky pistons .. I have the oredictionary for the pattren.
What happens:
it craft only one sticky pistion and continue with the crafting without giving the result
What you expected to happen:
I expect to have 64 sticky pistons in my RS system, but what happened is I have only one.
Steps to reproduce:
Make the pattern for it with oredict
Request it from the RS system
Voilà ... only one sticky piston is crafted
...
Version (Make sure you are on the latest version before reporting):
Minecraft: 1.10.2
Forge:Latest
Refined Storage:Latest
Does this issue occur on a server? [yes/no]
If a (crash)log is relevant for this issue, link it here:
[pastebin/gist/etc link here]
Related to #766, which has been fixed in dev.
|
gharchive/issue
| 2016-12-22T04:38:47 |
2025-04-01T06:40:11.040740
|
{
"authors": [
"Haddadmj",
"way2muchnoise"
],
"repo": "raoulvdberge/refinedstorage",
"url": "https://github.com/raoulvdberge/refinedstorage/issues/776",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1745848292
|
Multiple question / MissionBio Tapestri Data
Hello,
Thank you for this tool !
I'm really interested but I'm struggling to understand how you obtain the variant read counts matrix. My guess is that you paired VAF variants with the total read counts of their amplicons? If so, won't it be biased by the non-uniformity of the read coverage of the so-called amplicon? If it has been computed with another method, would it be possible to share it?
You also seem to recommend adding the clustering (with or without the mutation matrix?). What is the mutation matrix in Tapestri? Are you referring to the genotype matrix (where in each cell/barcode, 0: is wildtype, 1: one allele is alternate, 2: both alleles are alternate, 3: missing genotype)?
Also, I'm not sure I understand what the command argument -k does; "maximum number of losses for an SNV" is a bit vague.
Also, would it be possible to provide a conda yaml environment or even a Singularity/Docker image? It would greatly simplify the use of your tool!
I'm aware it is a lot of questions, so thank you in advance for your time and kindness.
Hello,
Thank you for your interest in using this tool and sorry for the late response.
Variant read count matrix -- yes, you are correct. This matrix contains the number of variant reads for each mutation in each cell. You can generate this matrix by multiplying the VAF with the total read counts (reference reads + variant reads) for each mutation.
Mutation matrix -- I have defined the mutation matrix in the paper (I updated the README to contain a link to the paper). The mutation matrix has an entry of 0 if the mutation is absent, 1 if it is present, and -1 if there is no information about the mutation in that cell.
-k argument -- Yes, this is the number of losses for an SNV in the phylogeny. More information can be found in the paper I have linked in the README (https://www.biorxiv.org/content/10.1101/2023.01.05.522408v1.abstract)
conda yaml -- Thank you for the suggestion. I will upload a yaml file for using this tool in the next few days. The package requirements for ConDoR are very simple. We only need numpy, pandas, networkx and gurobipy.
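For the first point above, the construction of the variant read count matrix can be sketched like this. This is a toy illustration with made-up numbers (in practice you would do the same thing vectorized with numpy/pandas); the rounding policy is an assumption:

```python
# Toy per-cell matrices: rows = cells, columns = mutations (made-up numbers).
vaf = [[0.50, 0.0],
       [0.25, 1.0]]
ref_reads = [[10, 8],
             [30, 0]]
alt_reads = [[10, 0],
             [10, 6]]

# Total reads = reference reads + variant reads, per cell and mutation.
total_reads = [[r + a for r, a in zip(rrow, arow)]
               for rrow, arow in zip(ref_reads, alt_reads)]

# Variant reads = VAF * total reads, rounded back to integer counts.
variant_reads = [[round(f * t) for f, t in zip(frow, trow)]
                 for frow, trow in zip(vaf, total_reads)]

# On this toy example the construction recovers the variant counts exactly.
assert variant_reads == alt_reads
```

With real data the recovered counts are only as accurate as the reported VAF, so small rounding discrepancies are expected.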
|
gharchive/issue
| 2023-06-07T13:00:11 |
2025-04-01T06:40:11.050764
|
{
"authors": [
"reJELIN",
"sashitt2"
],
"repo": "raphael-group/constrained-Dollo",
"url": "https://github.com/raphael-group/constrained-Dollo/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2069357777
|
Loader v2
Loader was not showing on the V2
Also, I changed the behavior to show from the start of submission, rather than later for showing real time feedback.
Thanks @dipta007 for your contribution! Great work! I will build a new version.
|
gharchive/pull-request
| 2024-01-07T23:59:34 |
2025-04-01T06:40:11.051989
|
{
"authors": [
"dipta007",
"raphaelheinz"
],
"repo": "raphaelheinz/LeetHub-3.0",
"url": "https://github.com/raphaelheinz/LeetHub-3.0/pull/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1227475410
|
encoding.explode:true not working for schema properties
Hi Team,
I have several api endpoints that allow for a parameter to be specified multiple times. See example of one below.
{
"openapi": "3.0.1",
"info": {
"title": "TEST",
"version": "1.0.0"
},
"servers": [{
"url": "http://localhost:8080/api/",
"description": "TEST"
}],
"paths": {
"/deleteImage": {
"post": {
"tags": ["Images"],
"summary": "Delete Image",
"description": "The delete image service deletes the specified image. Multiple imageName parameters can be specified to delete multiple images.",
"operationId": "deleteImage",
"requestBody": {
"content": {
"application/x-www-form-urlencoded": {
"schema": {
"$ref": "#/components/schemas/DeleteImageRequest"
},
"encoding": {
"imageName": {
"explode": true
}
}
},
"multipart/form-data": {
"schema": {
"$ref": "#/components/schemas/DeleteImageRequest"
},
"encoding": {
"imageName": {
"explode": true
}
}
}
}
},
"responses": {
"200": {
"description": "Request succeeded"
}
}
}
}
},
"components": {
"schemas": {
"DeleteImageRequest": {
"required": ["accessKey", "imageName"],
"type": "object",
"properties": {
"accessKey": {
"type": "string",
"description": "Your unique access key that you can find on the Account page in the Cloud Console."
},
"imageName": {
"minItems": 1,
"type": "array",
"items": {
"type": "string",
"description": "The name of the image. This parameter can be specified multiple times to delete multiple files.",
"example": "/MyImageFolder/MyImage.png"
}
}
},
"description": "Delete Image Request Model"
}
}
}
}
In RapiDoc the explode:true property seems to be ignored and the imageName parameter is interpreted as a single parameter with comma-delimited values and an erroneous leading delimiter. If I test the example spec above from RapiDoc I get the following raw request bodies:
Multipart:
------WebKitFormBoundaryXXVgV2Xnct1AW3tT
Content-Disposition: form-data; name="accessKey"
XXX
------WebKitFormBoundaryXXVgV2Xnct1AW3tT
Content-Disposition: form-data; name="imageName"
,abc.png,def.jpg
------WebKitFormBoundaryXXVgV2Xnct1AW3tT--
Form Data:
accessKey=XXX&imageName=,abc.png,def.jpg
The example api above works correctly in Swagger UI though.
I also note that Parameters with explode:true seem to work fine in RapiDoc.
We have run into this problem as well. And the way I read the OpenAPI documentation it is supposed to explode arrays by default, without even requiring an explode:true setting:
Form fields can contain primitives values, arrays and objects. By default, arrays are serialized as array_name=value1&array_name=value2 and objects as prop1=value1&prop2=value2, but you can use other serialization strategies as defined by the OpenAPI 3.0 Specification.
But we are happy to supply the setting, we just need it to be supported.
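For reference, the expected "exploded" serialization of the form body above can be reproduced with Python's standard library (a sketch; the field names come from the example spec in this issue):

```python
from urllib.parse import urlencode

form = {"accessKey": "XXX", "imageName": ["abc.png", "def.jpg"]}

# doseq=True repeats the field once per array element, which matches the
# OpenAPI default serialization for form data (style=form, explode=true):
exploded = urlencode(form, doseq=True)
assert exploded == "accessKey=XXX&imageName=abc.png&imageName=def.jpg"
```

The broken behavior described above instead collapses the array into a single `imageName` field with comma-joined values.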
|
gharchive/issue
| 2022-05-06T06:35:18 |
2025-04-01T06:40:11.061598
|
{
"authors": [
"Rauert",
"brunchboy"
],
"repo": "rapi-doc/RapiDoc",
"url": "https://github.com/rapi-doc/RapiDoc/issues/747",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
172737452
|
bytes_stored is broken when --nodes are passed
When you pass nodes the search for bytes_stored is broken.
/Users/athompson/code/personal/elasticsearch-drain/lib/elasticsearch/drain/cli.rb:80:in `block in remove_nodes': undefined method `bytes_stored' for nil:NilClass (NoMethodError)
from /Users/athompson/code/personal/elasticsearch-drain/lib/elasticsearch/drain/cli.rb:78:in `each'
from /Users/athompson/code/personal/elasticsearch-drain/lib/elasticsearch/drain/cli.rb:78:in `remove_nodes'
from /Users/athompson/code/personal/elasticsearch-drain/lib/elasticsearch/drain/cli.rb:33:in `asg'
from /Users/athompson/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
from /Users/athompson/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'
from /Users/athompson/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'
from /Users/athompson/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
from bin/drain:3:in `<main>'
This feature was introduced in #18
This is fixed by #23.
|
gharchive/issue
| 2016-08-23T15:53:15 |
2025-04-01T06:40:11.063485
|
{
"authors": [
"athompson-r7",
"dgreene-r7"
],
"repo": "rapid7/elasticsearch-drain",
"url": "https://github.com/rapid7/elasticsearch-drain/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1991449293
|
Fix skbuild arguments in build.sh
The previous method was incorrect and did not work as expected; for example, specifying -g would not result in a Cython debug build.
To be entirely honest, I'm not always sure about those build tools either; this time I borrowed from cuDF. In any case, I've tested this to be working currently, so I'm gonna go ahead and merge it as is. Thanks @wence- !
/merge
|
gharchive/pull-request
| 2023-11-13T20:41:42 |
2025-04-01T06:40:11.176463
|
{
"authors": [
"pentschev"
],
"repo": "rapidsai/ucxx",
"url": "https://github.com/rapidsai/ucxx/pull/124",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
107339635
|
Fix parsing of IPv4 address in SOCKS4A
According to the protocol a domain is used instead of an IPv4 address if the address is 0.0.0.x with x non-zero.
Since (0x00000000 & 0x000000ff) == 0x00000000 the IP address 0.0.0.0 was considered to be a domain as well, so we need to check if not only the first 3 octets are zero but if the last octet is non-zero as well.
I'm not sure if any client sets 0.0.0.0 as IP address. I didn't see a real life issue with this, but nonetheless let's try to be true to the specification.
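The fixed check can be expressed as follows (a Python sketch of the logic in this PR, not the actual Qt/C++ code; the function name is made up):

```python
def is_socks4a_domain_marker(ip: int) -> bool:
    """SOCKS4A: a destination address of 0.0.0.x with x non-zero means a
    domain name follows; 0.0.0.0 is NOT such a marker."""
    return (ip & 0xFFFFFF00) == 0 and (ip & 0x000000FF) != 0

assert is_socks4a_domain_marker(0x00000001)      # 0.0.0.1 -> domain follows
assert not is_socks4a_domain_marker(0x00000000)  # 0.0.0.0 -> the bug this PR fixes
assert not is_socks4a_domain_marker(0x7F000001)  # 127.0.0.1 -> plain IPv4
```

The original code only checked `(ip & 0x000000ff)` against the first three octets being zero, which wrongly classified 0.0.0.0 as a domain marker.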
Good find - thanks.
|
gharchive/pull-request
| 2015-09-19T15:33:28 |
2025-04-01T06:40:11.178227
|
{
"authors": [
"blochbergermax",
"raptorswing"
],
"repo": "raptorswing/Qt-Socks-Server",
"url": "https://github.com/raptorswing/Qt-Socks-Server/pull/11",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
764058063
|
zInvokeClient+208 when no MI Row selected and Registry set to send FormObject
Describe the bug
When the registry setting 'Include FormObject for sections without a current row selected' is set to 'Y', multiple-iteration FormObjects are sent in the ScriptLink payload with no CurrentRow. After sending the ScriptLink request, myAvatar shows the error 'zInvokeClient+208' when processing the response. - Reported by Compass Health.
To Reproduce
Steps to reproduce the behavior:
Set the Registry Setting 'Include FormObject for sections without a current row selected' to 'Y'
Open a myAvatar form with existing multiple iteration content and a ScriptLink call configured.
Trigger ScriptLink call without selecting a row in the multiple iteration table.
Error 'zInvokeClient+208' appears to user and shows in logging.
Expected behavior
Either no error should occur, or a message should be provided by the ScriptLink API. The 'zInvokeClient+208' error should not occur.
Screenshots
Example Code
In this example, no modifications are made to the incoming OptionObject2015, so no error message should be displayed.
public class ParentGuardian : ICommand
{
private OptionObject2015 _optionObject2015;
private string _parameter;
public ParentGuardian(OptionObject2015 optionObject2015, string parameter)
{
_optionObject2015 = optionObject2015;
_parameter = parameter;
}
public OptionObject2015 Execute()
{
return _optionObject2015.ToReturnOptionObject();
}
}
This issue is caused by a missing null check when cloning the FormObject. This will be corrected in the next release.
|
gharchive/issue
| 2020-12-12T17:05:23 |
2025-04-01T06:40:11.184288
|
{
"authors": [
"scottolsonjr"
],
"repo": "rarelysimple/RarelySimple.AvatarScriptLink",
"url": "https://github.com/rarelysimple/RarelySimple.AvatarScriptLink/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1225129444
|
Feature request: pioasm: multiple programs
Let's say I want to use the PIO for something UART-like. Now obviously this requires a send program and a receive program. I want to pack both into one PIO device because they're both small enough.
My problem is that there seems to be no clean way to do that: pioasm doesn't tell me how long the first program is, thus I have no way to set the .offset for the second program correctly.
Have a look at some of the examples in https://github.com/raspberrypi/pico-examples/tree/master/pio/
https://github.com/raspberrypi/pico-examples/tree/master/pio/ir_nec/nec_transmit_library is an example that uses multiple PIO programs (and there might be others if you dig around? :man_shrugging: )
And I've not tried it myself, but you might want to look at https://github.com/harrywalsh/pico-hw_and_pio-uart-gridge/tree/HW_and_pio_uarts
|
gharchive/issue
| 2022-05-04T09:29:48 |
2025-04-01T06:40:11.248112
|
{
"authors": [
"lurch",
"smurfix"
],
"repo": "raspberrypi/pico-sdk",
"url": "https://github.com/raspberrypi/pico-sdk/issues/804",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
798611072
|
enforce that resulting binaries dont have arm opcodes, fixes #50
The original fault is that the arm-none-gcc on my host was built with multi-lib disabled.
As a result, it could generate thumb binaries at compile time, but the crtbegin.o and the newlib libc.a had been pre-built in arm32 mode only.
pico-sdk would silently link the arm32 variants, and then the code would just fault out at runtime, and only SWD could reveal why.
This PR will cause the build to fail if such arm32 opcodes wind up in the binary, forcing you to fix the tooling before you can flash the Pico.
if we're going to do anything here, we should check in elf2uf2 (which should be able to tell the difference... it already checks a lot of things)...
@kilograham should I just throw in some C code and have cmake compile it the same way it does the other host utils?
ah, elf2uf2 does sound like a good place to add the check
yup add something here https://github.com/raspberrypi/pico-sdk/blob/master/tools/elf2uf2/main.cpp#L92
@kilograham is that a new git repo, or do you just mean a new PR against the elf2uf2 file?
I suspect the latter. (as you can see above, elf2uf2 is maintained in this repo)
yes in this repo, but this PR seems a bit obsolete
|
gharchive/pull-request
| 2021-02-01T18:40:04 |
2025-04-01T06:40:11.252032
|
{
"authors": [
"cleverca22",
"kilograham",
"lurch"
],
"repo": "raspberrypi/pico-sdk",
"url": "https://github.com/raspberrypi/pico-sdk/pull/70",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1137510116
|
🛑 NJU Mirror (mirrors.nju.edu.cn) is down
In 578b6ef, NJU Mirror (mirrors.nju.edu.cn) (https://mirrors.nju.edu.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NJU Mirror (mirrors.nju.edu.cn) is back up in a5c68ab.
|
gharchive/issue
| 2022-02-14T16:02:38 |
2025-04-01T06:40:11.267124
|
{
"authors": [
"ryanfortner"
],
"repo": "raspbian-addons/status",
"url": "https://github.com/raspbian-addons/status/issues/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2174378585
|
CLN update to v24.02
Testing the RECKLESS CLN UPDATE option on Raspiblitz v0.11.0 rc3 + the backup plugin patch in #4446
Got a warning from poetry, but otherwise looking good:
Cloning into 'lightning'...
remote: Enumerating objects: 139424, done.
remote: Counting objects: 100% (18553/18553), done.
remote: Compressing objects: 100% (1446/1446), done.
remote: Total 139424 (delta 17579), reused 17165 (delta 17101), pack-reused 120871
Receiving objects: 100% (139424/139424), 97.20 MiB | 1.23 MiB/s, done.
Resolving deltas: 100% (91450/91450), done.
Updating files: 100% (27334/27334), done.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is re
commended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is re
commended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Warning: The current project could not be installed: No file/folder found for package cln-meta-project
If you do not want to install the current project use --no-root.
If you want to use Poetry only for dependency management but not for packaging, you can set the operating mode to "non-package" in your pypr
oject.toml file.
In a future version of Poetry this warning will become an error!
2024-03-07 17:17:20,872 [DEBUG] Primitive[path=Getinfo.id, required=True, type=pubkey]
2024-03-07 17:17:20,873 [DEBUG] Primitive[path=Getinfo.alias, required=True, type=string]
Should be ok for v0.11.0
Then could ship the next raspiblitz release with the CLN v24.02 from start.
At the moment in dev CLN version is ...
https://github.com/raspiblitz/raspiblitz/blob/4687506d12f958bbba03b05bcb54b0a4ec432596/home.admin/config.scripts/cl.install.sh#L5
.. closing issue.
|
gharchive/issue
| 2024-03-07T17:29:03 |
2025-04-01T06:40:11.270209
|
{
"authors": [
"openoms",
"rootzoll"
],
"repo": "raspiblitz/raspiblitz",
"url": "https://github.com/raspiblitz/raspiblitz/issues/4455",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1251790854
|
Bitcoin Core install issues
Describe the issue
I'm at the installation step of Bitcoin client, have downloaded the latest Bitcoin Core binary along with the hash and signatures. Reference checksum was ok, and signature check said there were good signatures, but with warnings. I went ahead and extracted and installed Bitcoin Core anyway, but it fails the version check. Not sure if I should try to proceed with Steps 2a and following.
Location in guide
https://raspibolt.org/guide/bitcoin/bitcoin-client.html
Screenshots
Here are some of the signature warnings:
Here's the error when I tried to check the version after installing:
I tried to change to the bitcoin-23.0/bin/ directory where I can clearly see the file "bitcoind" listed (as shown in the screenshot) but it came back with the same error "no such file or directory"
Environment
Hardware platform: Raspberry Pi 4
Operating system: Raspberry Pi OS 64bit Lite
Version: Debian GNU/Linux 11 (bullseye)
Any suggestions for how to resolve the issues I reported? Can I safely ignore the warnings on the signatures and assume the bitcoin core client I downloaded is safe to use? And why doesn't the version check work? If it's a simple noob error, please advise - I don't know linux well enough to troubleshoot it myself.
I installed btc core 23.0 yesterday on my raspi and it worked.
Why did you use the arm-linux version and not the suggested version from the guide?
$ tar -xvf bitcoin-23.0-aarch64-linux-gnu.tar.gz
I dont know if that is a problem but maybe you can try that version instead
@zynos I was going by the instruction in the guide to download the latest version in case there had been an update, as it says here:
But looking at it again, I see the latest version is in fact 23.0, so I'll retry it like you did. Thanks for the suggestion!
@zynos I went ahead and took your suggestion and just downloaded the version listed in the guide, and when I checked the signatures, got similar warnings as last time.
But I went ahead and installed the bitcoin core client and this time it worked as expected, so will close this issue. Still not sure if those warnings should be heeded though - makes this noob a bit uncertain. If they're not important, I feel like the guide should mention that they can be safely ignored.
|
gharchive/issue
| 2022-05-29T05:16:02 |
2025-04-01T06:40:11.277045
|
{
"authors": [
"kwikslvr",
"zynos"
],
"repo": "raspibolt/raspibolt",
"url": "https://github.com/raspibolt/raspibolt/issues/1032",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
211735146
|
Update vue-events version
I fixed a bug in vue-events that was introduced in Vue 2.2.x, just updating this projects dependency.
https://github.com/cklmercer/vue-events/issues/8
@cklmercer Thanks, really appreciated. :)
|
gharchive/pull-request
| 2017-03-03T16:25:14 |
2025-04-01T06:40:11.287928
|
{
"authors": [
"cklmercer",
"ratiw"
],
"repo": "ratiw/vuetable-2-tutorial-bootstrap",
"url": "https://github.com/ratiw/vuetable-2-tutorial-bootstrap/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
343364338
|
Documentation links to lessons are outdated
That page and I believe all other pages referencing lessons contain outdated links, as lessons moved from blob/master to wiki.
Please, fix.
Sorry for that. Those pages will soon be replaced when v2.0 is released (currently in beta). The document for the v2.0-beta is available here.
|
gharchive/issue
| 2018-07-21T23:55:16 |
2025-04-01T06:40:11.289653
|
{
"authors": [
"RomanDavlyatshin",
"ratiw"
],
"repo": "ratiw/vuetable-2",
"url": "https://github.com/ratiw/vuetable-2/issues/501",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2159285471
|
sendfile is not supported on SSL
Right now I just FFI out to file:sendfile for both TCP and SSL sockets. However, sendfile does not work with SSL.
I'll need to scope this per-transport, and probably have some default implementation for SSL. Currently, ThousandIsland just does a file:pread and ssl:send... is it worth trying to do something a little less memory intensive? Not sure how likely this situation is. Maybe just add the basic one and see what people think.
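The pread-and-send fallback can be kept memory-bounded by chunking, roughly like this. This is a Python sketch of the pattern only (not Gleam/Erlang); `sock_send` stands in for `ssl:send` and the chunk size is an arbitrary choice:

```python
import io

def send_file_chunked(sock_send, fileobj, offset, length, chunk_size=64 * 1024):
    """Fallback for TLS sockets where sendfile(2) is unavailable:
    read the file in bounded chunks and push each chunk through the
    encrypting send path, so memory use stays at one chunk."""
    fileobj.seek(offset)
    remaining = length
    while remaining > 0:
        chunk = fileobj.read(min(chunk_size, remaining))
        if not chunk:            # EOF before `length` bytes were available
            break
        sock_send(chunk)
        remaining -= len(chunk)

# Demo with an in-memory "file" and a list standing in for the send path.
sent = []
send_file_chunked(sent.append, io.BytesIO(b"x" * 100),
                  offset=10, length=50, chunk_size=16)
assert b"".join(sent) == b"x" * 50
```

Compared with reading the whole range into memory at once, the only cost is one syscall-plus-send round trip per chunk.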
Whoops, this should be on mist.
|
gharchive/issue
| 2024-02-28T15:54:38 |
2025-04-01T06:40:11.321595
|
{
"authors": [
"rawhat"
],
"repo": "rawhat/glisten",
"url": "https://github.com/rawhat/glisten/issues/15",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2740153122
|
[CI] TestAutoscalingRayService is flaky
Search before asking
[X] I searched the issues and found no similar issues.
KubeRay Component
ci
What happened + What you expected to happen
https://buildkite.com/ray-project/ray-ecosystem-ci-kuberay-ci/builds/5672#0193c1f9-ac0c-4a19-88eb-aee9aa146f14/3769
Reproduction script
https://buildkite.com/ray-project/ray-ecosystem-ci-kuberay-ci/builds/5672#0193c1f9-ac0c-4a19-88eb-aee9aa146f14/3769
Anything else
No response
Are you willing to submit a PR?
[ ] Yes I am willing to submit a PR!
Let's temporarily not address this issue, as I ran it many times on Buildkite and my local machine but couldn't reproduce the error. This is the PR where I tried running it 10 times: https://github.com/ray-project/kuberay/pull/2682
|
gharchive/issue
| 2024-12-14T21:34:17 |
2025-04-01T06:40:11.325273
|
{
"authors": [
"MortalHappiness",
"kevin85421"
],
"repo": "ray-project/kuberay",
"url": "https://github.com/ray-project/kuberay/issues/2654",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1345648155
|
[Feature] Add CI tests for Helm charts
Search before asking
[X] I had searched in the issues and found no similar feature requirement.
Description
Test Helm configuration in the CI.
The Helm charts are very easy to break.
We could take a look at testing strategies used by similar projects; no need to innovate.
This is actually a duplicate of https://github.com/ray-project/kuberay/issues/184.
|
gharchive/issue
| 2022-08-22T00:42:45 |
2025-04-01T06:40:11.327451
|
{
"authors": [
"DmitriGekhtman"
],
"repo": "ray-project/kuberay",
"url": "https://github.com/ray-project/kuberay/issues/500",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1320184953
|
[Autoscaler] Match autoscaler image to Ray head image for Ray >= 2.0.0
Why are these changes needed?
This PR updates the logic that selects a default autoscaler image.
For Ray versions at least 2.0.0, use the same image for the autoscaler as for the Ray container.
This eliminates the possibility of autoscaler/Ray incompatibility and reduces docker pull time.
For earlier Ray versions, use rayproject/ray:2.0.0 to guarantee up-to-date autoscaler functionality.
As of the Ray 2.0.0 branch cut earlier today, an image tagged rayproject/ray:2.0.0 exists on Dockerhub.
Until the official Ray release in two weeks, this image is unofficial and its actual contents are a moving target -- but I think we can live with the inconsistency in the short term. (I'm open to other opinions on this choice.)
Related issue number
Closes https://github.com/ray-project/kuberay/issues/360
Checks
[ ] I've made sure the tests are passing.
Testing Strategy
[ ] Unit tests
[ ] Manual tests
[ ] This PR is not tested :(
so rayproject/ray:2.0.0 will be overridden once the official image is out?
/cc @akanso to take a look
so rayproject/ray:2.0.0 will be overridden once the official image is out?
rayproject/ray:2.0.0 is updated each time we push to the Ray 2.0.0 release branch
The "official image is out" after the last commit is made to the release branch and we announce the release.
Maybe not the best way of doing things but that's the way it is at the moment.
#424 is merged to help improve test stability. You can rebase the change to see if the nightly version passes.
Let's merge this one and it's time to cut rc.0 release. If other reviewers have further feedback, feel free to leave it here.
|
gharchive/pull-request
| 2022-07-27T22:27:17 |
2025-04-01T06:40:11.334058
|
{
"authors": [
"DmitriGekhtman",
"Jeffwan"
],
"repo": "ray-project/kuberay",
"url": "https://github.com/ray-project/kuberay/pull/423",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
779822281
|
xgboost_ray fails on Python 3.8 on macOS
In Python 3.8 on macOS, multiprocessing uses a spawn strategy instead of a fork strategy for process creation. This change is not compatible with our subclassed RabitTracker, which uses a process internally rather than a thread, and it causes xgboost_ray training to fail.
Python 3.8.6 (default, Nov 20 2020, 18:29:40)
[Clang 12.0.0 (clang-1200.0.32.27)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ray
>>> from ray.services import get_node_ip_address
>>> import xgboost
>>> from xgboost import RabitTracker
>>> node_ip = get_node_ip_address()
>>> rabit_tracker = RabitTracker(hostIP=node_ip, nslave=2)
>>> rabit_tracker.start(nslave=2)
Fails using xgboost_ray internal _RabitTracker
Python 3.8.6 (default, Nov 20 2020, 18:29:40)
[Clang 12.0.0 (clang-1200.0.32.27)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ray
>>> from ray.services import get_node_ip_address
>>> from xgboost_ray.main import _RabitTracker
>>> node_ip = get_node_ip_address()
>>> rabit_tracker = _RabitTracker(hostIP=node_ip, nslave=2)
>>> rabit_tracker.start(nslave=2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/amog/dev/xgboost_ray/xgboost_ray/main.py", line 106, in start
self.thread.start()
File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object '_RabitTracker.start.<locals>.run'
Once we fix this we should include 3.8 and macOS tests in the CI and make another release.
cc @krfricke
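The failure above comes from Python's "spawn" start method (the default on macOS since 3.8), which must pickle the process target; a function defined inside a method, like `_RabitTracker.start.<locals>.run`, is not picklable. A minimal sketch of the failure mode (function names here are illustrative, not the actual xgboost_ray internals):

```python
import pickle

def make_local_target():
    # Mirrors a method defining its process target as a nested function.
    def run():
        return 42
    return run

# Nested (local) functions cannot be pickled, which is exactly what
# the "spawn" start method needs to do to launch the child process.
try:
    pickle.dumps(make_local_target())
    local_picklable = True
except (pickle.PicklingError, AttributeError):
    local_picklable = False

def run_top_level():
    # A module-level target pickles by reference and works with spawn.
    return 42

top_level_picklable = pickle.dumps(run_top_level) is not None
```

The usual fix is to move the process target to module level (or avoid spawning a process for it), so that spawn-based multiprocessing can serialize it.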
Current master with ray 1.2.0 works fine for me:
> python
Python 3.8.6 (default, Apr 14 2021, 14:07:57)
[Clang 12.0.0 (clang-1200.0.32.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ray
>>> from ray.services import get_node_ip_address
>>> from xgboost_ray.main import _RabitTracker
>>> node_ip = get_node_ip_address()
>>> rabit_tracker = _RabitTracker(hostIP=node_ip, nslave=2)
>>> rabit_tracker.start(nslave=2)
>>> del rabit_tracker
>>>
> python ../../xgboost_ray/examples/simple.py
File descriptor limit 2560 is too low for production servers and may result in connection errors. At least 8192 is recommended. --- Fix with 'ulimit -n 8192'
2021-04-14 14:11:37,731 INFO services.py:1264 -- View the Ray dashboard at http://127.0.0.1:8265
2021-04-14 14:11:39,656 INFO main.py:817 -- [RayXGBoost] Created 4 new actors (4 total actors). Waiting until actors are ready for training.
2021-04-14 14:11:40,582 INFO main.py:860 -- [RayXGBoost] Starting XGBoost training.
(pid=89787) [14:11:40] task [xgboost.ray]:4598953872 got new rank 3
(pid=89777) [14:11:40] task [xgboost.ray]:4556349584 got new rank 2
(pid=89790) [14:11:40] task [xgboost.ray]:4554084240 got new rank 0
(pid=89782) [14:11:40] task [xgboost.ray]:4535186576 got new rank 1
2021-04-14 14:11:41,440 INFO main.py:1304 -- [RayXGBoost] Finished XGBoost training on training data with total N=143 in 1.84 seconds (0.85 pure XGBoost training time).
Final validation error: 0.0210
> python --version
Python 3.8.6
> uname -a
Darwin Kais-MacBook-Pro.local 19.6.0 Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.16~1/RELEASE_X86_64 x86_64
@amogkam could you check if it works for you, too, and close the issue if it does?
Ah seems this was fixed in #42 and we only left this issue open. Closing this for now then.
|
gharchive/issue
| 2021-01-06T01:07:20 |
2025-04-01T06:40:11.486017
|
{
"authors": [
"amogkam",
"krfricke"
],
"repo": "ray-project/xgboost_ray",
"url": "https://github.com/ray-project/xgboost_ray/issues/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2149419903
|
[URL Unshortener] Unable to Expand URL from https://is.gd
Extension
https://www.raycast.com/SebDanielsson/url-unshortener
Description
URL Unshortener shows an error when expanding URLs from https://is.gd.
System: Raycast 1.68.1 on macOS 14.3.1
Example URL: https://is.gd/putPEo
Steps To Reproduce
Tried different URLs from https://is.gd; same result.
Current Behaviour
No response
Expected Behaviour
No response
I just tried unshortening https://is.gd/putPEo with Raycast 1.68.1 running on macOS 14.3.1 and didn't get any error messages. Do you still have this issue?
It works. I'm wondering if it's also possible to expand/clean a URL like the one below; I use www.expandurl.net to get the canonical URL.
https://redirect.viglink.com/?key=0ab71bd42c2ca312a536dac167978a13&u=https%3A%2F%2Fwww.amazon.com%2FApple-MU8F2AM-A-Pencil-Generation%2Fdp%2FB07K1WWBJK%2F%3Ftag%3Dtoysb-20&type=ap&loc=https%3A%2F%2F9to5toys.com%2F2024%2F02%2F17%2Fapple-pencil-2-drops-to-to-79-at-amazon-its-second-best-price-reg-129%2F&ref=https%3A%2F%2F9to5toys.com%2F
Sorry for the late reply!
I'm not sure how to implement that functionality as sometimes the parameters are used to redirect the user to the final page. Preferably I would like to avoid depending on an online service. PRs are welcome of course😉
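Decoding the redirect target locally (without an online service) is often possible when the destination rides in a query parameter, as in the viglink link above; a minimal sketch, where the parameter names checked are assumptions about common redirectors:

```python
from typing import Optional
from urllib.parse import urlparse, parse_qs

def extract_redirect_target(url: str) -> Optional[str]:
    # parse_qs already percent-decodes values, so the embedded
    # destination comes back as a plain URL.
    params = parse_qs(urlparse(url).query)
    for key in ("u", "url", "dest", "target"):  # common redirector params
        if key in params:
            return params[key][0]
    return None
```

This only handles redirectors that carry the destination in a parameter; services that resolve the destination server-side still require an HTTP request.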
|
gharchive/issue
| 2024-02-22T16:09:38 |
2025-04-01T06:40:11.492739
|
{
"authors": [
"alwinsamson",
"sebdanielsson"
],
"repo": "raycast/extensions",
"url": "https://github.com/raycast/extensions/issues/10886",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1182613661
|
[Extension Bug] Home Assistant extension can't connect to web-socket
Extension – Home Assistant
Author: @tonka3000
Description
Extension can't connect to Home Assistant Websocket API.
I have tried manually connecting to the WebSocket API using a WebSocket extension and managed to connect successfully, whereas this extension cannot.
I wanted to see logs of what is going on, but thanks to https://github.com/raycast/extensions/issues/1224 I did not manage.
Maybe it is incompatible with the latest Home Assistant version? I have noticed that the extension is not using the latest home-assistant-js-websocket package.
Raycast version
Version: 1.31.0
Home Assistant info
version: core-2022.3.6
installation_type: Home Assistant Core
dev: false
hassio: false
docker: false
user: homeassistant
virtualenv: true
python_version: 3.9.2
os_name: Linux
os_version: 5.4.182
arch: armv7l
timezone: Europe/Prague
Home Assistant Cloud
logged_in: false
can_reach_cert_server: ok
can_reach_cloud_auth: ok
can_reach_cloud: ok
Lovelace
dashboards: 1
resources: 0
mode: auto-gen
Hey @AuHau ,
I have also the latest version of HA (2022.3.7) and it works for me.
Do you have a trailing whitespace in your url e.g. https://myhainstance.org/. If yes: Remove it (https://myhainstance.org) and try again.
If this does not work maybe you need to upgrade to 2022.3.7.
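The trailing-slash/whitespace advice can be folded into a small normalization step before opening the socket; a minimal sketch (Home Assistant's WebSocket endpoint lives at /api/websocket):

```python
def websocket_url(base_url: str) -> str:
    # Strip stray whitespace and trailing slashes first; both are
    # common copy-paste artifacts that break the handshake URL.
    base = base_url.strip().rstrip("/")
    # Map https -> wss and http -> ws for the WebSocket scheme.
    scheme, _, host = base.partition("://")
    ws_scheme = "wss" if scheme == "https" else "ws"
    return f"{ws_scheme}://{host}/api/websocket"
```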
Hmmm, nothing trailing.
I have no idea what has changed, but today it works as expected 😅 Most probably it was a problem with my Home Assistant deployment, so I'm closing this. Thanks for the ideas!
@AuHau Great to hear that it works now. Let me know if you're missing something from Home Assistant in the extension.
|
gharchive/issue
| 2022-03-27T18:05:35 |
2025-04-01T06:40:11.504130
|
{
"authors": [
"AuHau",
"tonka3000"
],
"repo": "raycast/extensions",
"url": "https://github.com/raycast/extensions/issues/1225",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1644625130
|
[Bear Notes] On the new Bear Beta, Search Note doesn't seem to work
Extension
https://www.raycast.com/hmarr/bear
Description
I tried to search and I get an error:
here is the log
Error: no such table: Z_7TAGS
ZT:index.js:116:8406
113: ,this.lastOut=
114: ;for(var e=this.indentLevel;e>0;e--)this.buffer+=this.indent}}function Ure(e,t,r){var n="<"+e;if(t&&t.length>0)for(var u=0,a;(a=t[u])!==void 0;)n+=" "+a[0]+'="'+this.esc(a[1])+'"',u++;return r&&(n+=" /"),n+=">",n}hr.prototype=Object.create(On.prototype);hr.prototype.render=Yre;hr.prototype.out=Wre;hr.prototype.cr=Hre;hr.prototype.tag=Ure;hr.prototype.esc=vr;var Q_=He(wn());var Je=require("@raycast/api"),Y_=He(s_()),qn=require("react");var Mt=require("react/jsx-runtime");function W_({note:e}){let[t,r]=Gc(),[n,u]=(0,qn.useState)(),[a,i]=(0,qn.useState)();return(0,qn.useEffect)(()=>{t!=null&&(u(t.getBacklinks(e.id)),i(t.getNoteLinks(e.id)))},[t]),r&&(0,Je.showToast)(Je.Toast.Style.Failure,"Something went wrong",r.message),(0,Mt.jsxs)(Je.List,{isLoading:n==null,navigationTitle:e.title,children:[(0,Mt.jsx)(Je.List.Section,{title:"Note Backlinks",children:n?.map(s=>(0,Mt.jsx)(Je.List.Item,{title:s.title===""?"Untitled Note":s.encrypted?"\u{1F512} "+s.title:s.title,subtitle:s.formattedTags,icon:{source:"command-icon.png"},keywords:[s.id],actions:(0,Mt.jsx)(Tt,{isNotePreview:!1,note:s}),accessoryTitle:edited ${(0,Y_.formatDistanceToNowStrict)(s.modifiedAt,{addSuffix:!0})}},s.id))}),(0,Mt.jsx)(Je.List.Section,{title:"Note Links",children:a?.map(s=>(0,Mt.jsx)(Je.List.Item,{title:s.title===""?"Untitled Note":s.encrypted?"\u{1F512} "+s.title:s.title,subtitle:s.formattedTags,icon:{source:"command-icon.png"},keywords:[s.id],actions:(0,Mt.jsx)(Tt,{isNotePreview:!1,note:s}),accessoryTitle:edited ${(0,Y_.formatDistanceToNowStrict)(s.modifiedAt,{addSuffix:!0})}},s.id))})]})}var $T=require("@raycast/api"),zT=require("os");var UT=require("react/jsx-runtime"),HT=(0,zT.homedir)()+"/Library/Group Containers/9K33E3U3T4.net.shinyfrog.bear/Application Data/Local Files";function ud(e,t=!0){if(e===null)return"";let r=e,n=r.matchAll(/\[(?<type>file|image):(?<path>.+)\]/g);for(let u of n){if(u.groups===void 0)return r;let a="";if(u.groups.type==="image"&&!t)a=;else{let 
i=encodeURI(file://${HT}/${u.groups.type==="file"?"Note Files":"Note Images"}/${u.groups.path});a=Show attached ${u.groups.type}}r=r.replace(u[0],a)}return r}function H_({note:e}){let t=e.encrypted?# ${e.title}
115:
116: This note's content is encrypted:e.text;return(0,UT.jsx)($T.Detail,{markdown:ud(t),navigationTitle:e.title,actions:(0,UT.jsx)(Tt,{isNotePreview:!0,note:e})})}var xe=require("@raycast/api"),QT=He(wn()),At=require("react/jsx-runtime");function jre(){async function e({url:t,tags:r,pin:n}){if(!t){(0,xe.showToast)(xe.Toast.Style.Failure,"URL is required");return}(0,QT.default)(bear://x-callback-url/grab-url?url=${encodeURIComponent(t)}&tags=${encodeURIComponent(r)}&pin=${n?"yes":"no"}),await(0,xe.popToRoot)({clearSearchBar:!0})}return(0,At.jsx)(xe.Action.SubmitForm,{icon:xe.Icon.Globe,title:"Capture in New Note",onSubmit:e})}function U_(){return(0,At.jsxs)(xe.Form,{navigationTitle:"Grab URL",actions:(0,At.jsx)(xe.ActionPanel,{children:(0,At.jsx)(jre,{})}),children:[(0,At.jsx)(xe.Form.TextField,{id:"url",title:"URL",placeholder:"URL of web page to capture (eg. http://raycast.com)"}),(0,At.jsx)(xe.Form.TextField,{id:"tags",title:"Tags",placeholder:"comma,separated,tags"}),(0,At.jsx)(xe.Form.Checkbox,{id:"pin",label:"Pin note to top of note list"})]})}var Q=require("@raycast/api"),GT=He(wn()),ad=require("react"),le=require("react/jsx-runtime");function ene(e){let t=/(^|\n)(?<level>#{1,6})[^\S\t\n\r](?<text>.*)\n/g,r=e.matchAll(t),n=[];for(let u of r){let a={level:u.groups?.level.length??1,text:u.groups?.text??"Some Header"};n.push(a)}return n}function $_({note:e}){let[t,r]=(0,ad.useState)([]);(0,ad.useEffect)(()=>{r(ene(e.text))},[]);let n=()=>{let u=async a=>{if(!a.text){(0,Q.showToast)(Q.Toast.Style.Failure,"Please enter 
text");return}(0,GT.default)(bear://x-callback-url/add-text?id=${e.id}${a.header==="none"?"":"&header="+encodeURIComponent(a.header)}&mode=${a.mode}&new_line=${a.newLine?"yes":"no"}&tags=${encodeURIComponent(a.tags)}&open_note=${a.openNote!=="no"?"yes":"no"}&new_window=${a.openNote==="new"?"yes":"no"}&show_window=${a.openNote!=="no"?"yes":"no"}&edit=${a.openNote==="no"?"no":"yes"}×tamp=${a.timestamp?"yes":"no"}&text=${encodeURIComponent(a.text)},{background:a.openNote==="no"}),await(0,Q.closeMainWindow)(),await(0,Q.popToRoot)({clearSearchBar:!0})};return(0,le.jsx)(Q.Action.SubmitForm,{icon:Q.Icon.Plus,title:"Append Text",onSubmit:u})};return(0,le.jsxs)(Q.Form,{navigationTitle:Add Text To: ${e.title},actions:(0,le.jsx)(Q.ActionPanel,{children:(0,le.jsx)(n,{})}),children:[(0,le.jsxs)(Q.Form.Dropdown,{id:"mode",title:"Mode",children:[(0,le.jsx)(Q.Form.Dropdown.Item,{value:"prepend",title:"Prepend"}),(0,le.jsx)(Q.Form.Dropdown.Item,{value:"append",title:"Append"}),(0,le.jsx)(Q.Form.Dropdown.Item,{value:"replace",title:"Replace"}),(0,le.jsx)(Q.Form.Dropdown.Item,{value:"replace_all",title:"Replace All"})]}),(0,le.jsx)(Q.Form.TextArea,{id:"text",title:"Text",placeholder:"Text to add to note ..."}),(0,le.jsx)(Q.Form.Separator,{}),(0,le.jsx)(Q.Form.TextField,{id:"tags",title:"Tags",placeholder:"comma,separated,tags"}),(0,le.jsxs)(Q.Form.Dropdown,{id:"header",title:"Append To Header",children:[(0,le.jsx)(Q.Form.Dropdown.Item,{value:"none",title:"-"}),t.map(({level:u,text:a})=>(0,le.jsx)(Q.Form.Dropdown.Item,{value:a,title:h${u}: ${a}},a))]}),(0,le.jsxs)(Q.Form.Dropdown,{id:"openNote",title:"Open Note",children:[(0,le.jsx)(Q.Form.Dropdown.Item,{value:"no",title:"Don't Open Note"}),(0,le.jsx)(Q.Form.Dropdown.Item,{value:"main",title:"In Main Window"}),(0,le.jsx)(Q.Form.Dropdown.Item,{value:"new",title:"In New Window"})]}),(0,le.jsx)(Q.Form.Checkbox,{id:"newLine",label:"Force new line"}),(0,le.jsx)(Q.Form.Checkbox,{id:"timestamp",label:"Prepend time and 
date"}),(0,le.jsx)(Q.Form.Checkbox,{id:"pin",label:"Pin note in notes list"})]})}var te=require("@raycast/api"),VT=He(wn()),Te=require("react/jsx-runtime");function tne(){async function e(t){if(!t.text){(0,te.showToast)(te.Toast.Style.Failure,"Please enter text");return}(0,VT.default)(bear://x-callback-url/create?title=${encodeURIComponent(t.title)}&tags=${encodeURIComponent(t.tags)}&open_note=${t.openNote!=="no"?"yes":"no"}&new_window=${t.openNote==="new"?"yes":"no"}&show_window=${t.openNote!=="no"?"yes":"no"}&edit=${t.openNote==="no"?"no":"yes"}×tamp=${t.timestamp?"yes":"no"}&text=${encodeURIComponent(t.text)},{background:t.openNote==="no"}),await(0,te.closeMainWindow)(),await(0,te.popToRoot)({clearSearchBar:!0})}return(0,Te.jsx)(te.Action.SubmitForm,{icon:te.Icon.Document,title:"Create Note",onSubmit:e})}function z_(){return(0,Te.jsxs)(te.Form,{navigationTitle:"Create Note",actions:(0,Te.jsx)(te.ActionPanel,{children:(0,Te.jsx)(tne,{})}),children:[(0,Te.jsx)(te.Form.TextField,{id:"title",title:"Title",placeholder:"Note Title ..."}),(0,Te.jsx)(te.Form.TextArea,{id:"text",title:"Text",placeholder:"Text to add to note ..."}),(0,Te.jsx)(te.Form.TextField,{id:"tags",title:"Tags",placeholder:"comma,separated,tags"}),(0,Te.jsx)(te.Form.Separator,{}),(0,Te.jsxs)(te.Form.Dropdown,{id:"openNote",title:"Open Note",children:[(0,Te.jsx)(te.Form.Dropdown.Item,{value:"no",title:"Don't Open Note"}),(0,Te.jsx)(te.Form.Dropdown.Item,{value:"main",title:"In Main Window"}),(0,Te.jsx)(te.Form.Dropdown.Item,{value:"new",title:"In New Window"})]}),(0,Te.jsx)(te.Form.Checkbox,{id:"timestamp",label:"Prepend time and date"}),(0,Te.jsx)(te.Form.Checkbox,{id:"pin",label:"Pin note in notes list"})]})}var re=require("react/jsx-runtime");function rne(e){let t=new S_,r=new C_,n;try{let u=ud(e,!1);return n=r.render(t.parse(u)),n}catch(u){return console.log(Error rendering with commonmark: ${String(u)}),(0,N.showToast)(N.Toast.Style.Failure,"Error rendering markdown"),""}}function 
nne({note:e}){return(0,re.jsx)(N.Action.Push,{title:"Show Note Preview",target:(0,re.jsx)(H_,{note:e}),icon:N.Icon.Text,shortcut:{modifiers:["cmd"],key:"p"}})}function Tt({isNotePreview:e,note:t}){let{focusCursorAtEnd:r}=(0,N.getPreferenceValues)(),n=r?"yes":"no";return(0,re.jsxs)(N.ActionPanel,{children:[(0,re.jsxs)(N.ActionPanel.Section,{title:"Open",children:[(0,re.jsx)(N.Action.Open,{title:"Open in Bear",target:bear://x-callback-url/open-note?id=${t.id}&edit=${n},icon:N.Icon.Sidebar}),t.encrypted?null:(0,re.jsx)(N.Action.Open,{title:"Open in New Bear Window",target:bear://x-callback-url/open-note?id=${t.id}&new_window=yes&edit=${n},icon:N.Icon.Window})]}),t.encrypted?null:(0,re.jsxs)(N.ActionPanel.Section,{title:"Edit",children:[(0,re.jsx)(N.Action.Push,{title:"Add Text",icon:N.Icon.Plus,shortcut:{modifiers:["cmd"],key:"t"},target:(0,re.jsx)($_,{note:t})}),(0,re.jsx)(N.Action,{title:"Move to Archive",onAction:()=>{(0,Q_.default)(bear://x-callback-url/archive?id=${t.id}&show_window=yes,{background:!0}),(0,N.showToast)(N.Toast.Style.Success,"Moved note to archive")},icon:{source:N.Icon.List,tintColor:N.Color.Orange},shortcut:{modifiers:["ctrl","shift"],key:"x"}}),(0,re.jsx)(N.Action,{title:"Move to Trash",onAction:()=>{(0,Q_.default)(bear://x-callback-url/trash?id=${t.id}&show_window=yes,{background:!0}),(0,N.showToast)(N.Toast.Style.Success,"Moved note to trash")},icon:{source:N.Icon.Trash,tintColor:N.Color.Red},shortcut:{modifiers:["ctrl"],key:"x"}})]}),(0,re.jsxs)(N.ActionPanel.Section,{title:"Show in Raycast",children:[e?null:(0,re.jsx)(nne,{note:t}),(0,re.jsx)(N.Action.Push,{title:"Show Note Links",target:(0,re.jsx)(W_,{note:t}),icon:N.Icon.Link,shortcut:{modifiers:["cmd"],key:"l"}})]}),(0,re.jsxs)(N.ActionPanel.Section,{title:"Copy",children:[t.encrypted?null:(0,re.jsx)(N.Action.CopyToClipboard,{title:"Copy 
Markdown",content:ud(t.text,!1),icon:N.Icon.Clipboard,shortcut:{modifiers:["cmd"],key:"c"}}),t.encrypted?null:(0,re.jsx)(N.Action.CopyToClipboard,{title:"Copy HTML",content:rne(t.text),icon:N.Icon.Terminal,shortcut:{modifiers:["cmd","shift"],key:"c"}}),(0,re.jsx)(N.Action.CopyToClipboard,{title:"Copy Link to Note",content:bear://x-callback-url/open-note?id=${t.id},icon:N.Icon.Link,shortcut:{modifiers:["cmd","opt"],key:"c"}}),t.encrypted?null:(0,re.jsx)(N.Action.CopyToClipboard,{title:"Copy Unique Identifier",content:"note.id",icon:N.Icon.QuestionMark,shortcut:{modifiers:["cmd","opt","shift"],key:"c"}})]}),(0,re.jsxs)(N.ActionPanel.Section,{title:"Create",children:[(0,re.jsx)(N.Action.Push,{title:"New Note",icon:N.Icon.Document,shortcut:{modifiers:["cmd"],key:"n"},target:(0,re.jsx)(z_,{})}),(0,re.jsx)(N.Action.Push,{title:"New Web Capture",icon:N.Icon.Globe,shortcut:{modifiers:["cmd","shift"],key:"n"},target:(0,re.jsx)(U_,{})})]})]})}var Re=require("react/jsx-runtime");function ZT(){let[e,t]=(0,Pn.useState)(""),[r,n]=Gc(),[u,a]=(0,Pn.useState)();(0,Pn.useEffect)(()=>{r!=null&&a(r.getNotes(e))},[r,e]),n&&(0,ye.showToast)(ye.Toast.Style.Failure,"Something went wrong",n.message);let i=(u??[]).length>0&&(0,ye.getPreferenceValues)().showPreviewInListView;return(0,Re.jsx)(ye.List,{isLoading:u==null,onSearchTextChange:t,searchBarPlaceholder:"Search note text or id ...",isShowingDetail:i,children:u?.map(s=>(0,Re.jsx)(ye.List.Item,{title:s.title===""?"Untitled Note":s.encrypted?"\u{1F512} "+s.title:s.title,subtitle:i?void 0:s.formattedTags,icon:{source:"command-icon.png"},keywords:[s.id],actions:(0,Re.jsx)(Tt,{isNotePreview:!1,note:s}),accessoryTitle:i?void 0:edited ${(0,id.formatDistanceToNowStrict)(s.modifiedAt,{addSuffix:!0})},detail:(0,Re.jsx)(ye.List.Item.Detail,{markdown:s.encrypted?"*This note's content is encrypted*":s.text,metadata:(0,Re.jsx)(une,{note:s})})},s.id))})}function 
une({note:e}){return(0,Re.jsxs)(ye.List.Item.Detail.Metadata,{children:[(0,Re.jsx)(ye.List.Item.Detail.Metadata.TagList,{title:"Tags",children:e.tags.length===0?(0,Re.jsx)(ye.List.Item.Detail.Metadata.TagList.Item,{text:"Untagged"}):e.tags.map(t=>(0,Re.jsx)(ye.List.Item.Detail.Metadata.TagList.Item,{text:t,color:ye.Color.Yellow},t))}),(0,Re.jsx)(ye.List.Item.Detail.Metadata.Label,{title:"Last modified",text:(0,id.formatDistanceToNowStrict)(e.modifiedAt,{addSuffix:!0})}),(0,Re.jsx)(ye.List.Item.Detail.Metadata.Label,{title:"Created",text:(0,id.formatDistanceToNowStrict)(e.createdAt,{addSuffix:!0})}),(0,Re.jsx)(ye.List.Item.Detail.Metadata.Label,{title:"Word count",text:${e.wordCount} words`})]})}0&&(module.exports={});
117: /*! http://mths.be/fromcodepoint v0.2.1 by @mathias /
118: /! http://mths.be/repeat v0.2.0 by @mathias */
119:
Nr:index.js:5:2490
at ray-navigation-stack
ko:index.js:5:2088
Who will benefit from this feature?
Everyone
Anything else?
No response
Also looking for Bear 2 Beta x Raycast support :)
I opened pull request #5706 adding Bear 2 support to the extension 🙂
@raycastbot close this issue
|
gharchive/issue
| 2023-03-28T20:38:30 |
2025-04-01T06:40:11.521815
|
{
"authors": [
"lucaschultz",
"ohong",
"roamnovice"
],
"repo": "raycast/extensions",
"url": "https://github.com/raycast/extensions/issues/5622",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2363822272
|
Update qrcode-generator extension
Description
Screencast
Checklist
[ ] I read the extension guidelines
[ ] I read the documentation about publishing
[ ] I ran npm run build and tested this distribution build in Raycast
[ ] I checked that files in the assets folder are used by the extension itself
[ ] I checked that assets used by the README are placed outside of the metadata folder
Hi @Melvynx and @darmiel 👋
I updated the extension, do you mind checking it out 🙂
|
gharchive/pull-request
| 2024-06-20T08:16:04 |
2025-04-01T06:40:11.526319
|
{
"authors": [
"pernielsentikaer"
],
"repo": "raycast/extensions",
"url": "https://github.com/raycast/extensions/pull/13060",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2561317500
|
[Typeform Navigator] choose endpoint + update logo + cache + add icons + move to hooks + show basic responses
Description
I wanted a basic overview of Responses in a certain form which is why we have the following CHANGELOG:
Choose EU or Default API
Updated Typeform logo
Items are now cached for a better experience
Added more icons for list items
Added a very basic view for Responses
Updated dependencies + removed got
By utilizing useFetch, we have 2 slightly generic hooks which provide us with pagination and error handling. We might need to bring back got in the future if any "POST" endpoints are added but that's a problem to discuss later.
Screencast
Unfortunately, I can not share responses from a real form so I am sharing a screencast from a sample account:
https://github.com/user-attachments/assets/ba51424e-acde-4ecc-97bc-396a32307146
Checklist
[x] I read the extension guidelines
[x] I read the documentation about publishing
[x] I ran npm run build and tested this distribution build in Raycast
[x] I checked that files in the assets folder are used by the extension itself
[x] I checked that assets used by the README are placed outside of the metadata folder
Thanks for the update @xmok
@jdvr do you want to check this?
|
gharchive/pull-request
| 2024-10-02T11:25:21 |
2025-04-01T06:40:11.532219
|
{
"authors": [
"pernielsentikaer",
"xmok"
],
"repo": "raycast/extensions",
"url": "https://github.com/raycast/extensions/pull/14762",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2672718947
|
Remove duplicate bookmarks from search results.
The bookmark search was showing duplicate entries when a URL was bookmarked
multiple times. This was caused by the query returning all bookmark entries
for a URL, including tag-related entries.
The fix uses a window function (ROW_NUMBER) to partition bookmarks by their
foreign key (URL reference) and selects only the entry with the lowest ID
for each URL. This ensures each bookmarked URL appears only once in the results while preserving all bookmark metadata including optional titles.
fixes #15442
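The window-function approach described above can be sketched against an in-memory SQLite database (table and column names here are illustrative, not the extension's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bookmarks (id INTEGER PRIMARY KEY, url_id INTEGER, title TEXT);
INSERT INTO bookmarks VALUES
  (1, 10, 'docs'),   -- first bookmark entry for url 10
  (2, 10, NULL),     -- tag-related duplicate entry for the same url
  (3, 20, 'blog');
""")

# Partition by the URL foreign key and keep only the lowest-id row,
# so each bookmarked URL appears exactly once in the results.
rows = conn.execute("""
SELECT id, url_id, title FROM (
  SELECT id, url_id, title,
         ROW_NUMBER() OVER (PARTITION BY url_id ORDER BY id) AS rn
  FROM bookmarks
)
WHERE rn = 1
ORDER BY id
""").fetchall()
```

Note that SQLite supports window functions only from version 3.25 onward, which any recent Python build satisfies.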
Description
Screencast
Checklist
[x] I read the extension guidelines
[x] I read the documentation about publishing
[x] I ran npm run build and tested this distribution build in Raycast
[x] I checked that files in the assets folder are used by the extension itself
[x] I checked that assets used by the README are placed outside of the metadata folder
I was also getting some db locked and read issues periodically, so I added some retry logic.
Awesome stuff man! Thanks for the contribution.
|
gharchive/pull-request
| 2024-11-19T16:09:21 |
2025-04-01T06:40:11.536961
|
{
"authors": [
"Keyruu",
"theherk"
],
"repo": "raycast/extensions",
"url": "https://github.com/raycast/extensions/pull/15443",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1379957202
|
Updated Any Website Search extension
Extension: extensions/any-website-search
Description
The initial release had a bug that ignored the selected search suggestion; this PR fixes it. Also added a setting for the search suggestions provider and improved how DDG bang-related intelligence works.
Can you also make it so that when I change to another engine in the dropdown, the search results are refreshed?
The suggestions always come from the user's configured search suggestions provider: Google or DuckDuckGo (or none). The selected site in the dropdown is only the "destination" and is not used to get suggestions. So there is nothing to refresh when changing the search engine.
|
gharchive/pull-request
| 2022-09-20T20:28:08 |
2025-04-01T06:40:11.538748
|
{
"authors": [
"rben01"
],
"repo": "raycast/extensions",
"url": "https://github.com/raycast/extensions/pull/2965",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1876451216
|
Update obsidian-smart-capture extension
Description
Adding support to fetch links from more browsers: "Arc", "Edge", "Firefox", "Brave" & "Opera"
Checklist
[x] I read the extension guidelines
[x] I read the documentation about publishing
[x] I ran npm run build and tested this distribution build in Raycast
[x] I checked that files in the assets folder are used by the extension itself
[x] I checked that assets used by the README are placed outside of the metadata folder
@pernielsentikaer I planned to use them in Readme but forgot to update it.
|
gharchive/pull-request
| 2023-08-31T23:54:18 |
2025-04-01T06:40:11.542331
|
{
"authors": [
"trillhause"
],
"repo": "raycast/extensions",
"url": "https://github.com/raycast/extensions/pull/8179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1209128957
|
rayon thread pool coredump
hey, I have a problem with rayon thread pool,Probabilistic coredump. the core like:
let thread_pool = rayon::ThreadPoolBuilder::new()
    .stack_size(8 * 1024 * 1024)
    .num_threads((num_cpus::get() * 6 / 8).min(32))
    .panic_handler(rayon_panic_handler)
    .build()
    .expect("Failed to initialize a thread pool for worker");
thread_pool.install(move || {
loop {
//debug!("mine_one_unchecked");
let block_header =
BlockHeader::mine_once_unchecked(&block_template, &terminator_clone, &mut thread_rng())?;
//debug!("mine_one_unchecked end");
// Ensure the share difficulty target is met.
if N::posw().verify(
block_header.height(),
share_difficulty,
&[*block_header.to_header_root().unwrap(), *block_header.nonce()],
block_header.proof(),
) {
return Ok::<(N::PoSWNonce, PoSWProof<N>, u64), anyhow::Error>((
block_header.nonce(),
block_header.proof().clone(),
block_header.proof().to_proof_difficulty()?,
));
}
}
})
the backtrace:
(gdb) bt
#0  <alloc::vec::Vec<T,A> as core::ops::deref::Deref>::deref (self=0xf58017d4cd80e144) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs:2402
#1  <alloc::vec::Vec<T,A> as core::ops::index::Index<I>>::index (self=0xf58017d4cd80e144, index=1) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs:2496
#2  rayon_core::sleep::Sleep::wake_specific_thread (self=0xf58017d4cd80e134, index=1) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/sleep/mod.rs:355
#3  0x000055e3c542dbe0 in rayon_core::sleep::Sleep::notify_worker_latch_is_set (self=0xf58017d4cd80e134, target_worker_index=1) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/sleep/mod.rs:245
#4  rayon_core::registry::Registry::notify_worker_latch_is_set (target_worker_index=1, self=<optimized out>) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:544
#5  <rayon_core::latch::SpinLatch as rayon_core::latch::Latch>::set (self=0x7facd9bec448) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/latch.rs:214
#6  <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute (this=0x7facd9bec448) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/job.rs:123
#7  0x000055e3c53da4b1 in rayon_core::job::JobRef::execute (self=<optimized out>) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/job.rs:59
#8  rayon_core::registry::WorkerThread::execute (self=<optimized out>, job=...) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:749
#9  rayon_core::registry::WorkerThread::wait_until_cold (self=<optimized out>, latch=<optimized out>) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:726
#10 0x000055e3c5633534 in rayon_core::registry::WorkerThread::wait_until (self=0x7facd8be4800, latch=<optimized out>) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:700
#11 rayon_core::registry::main_loop (registry=..., index=9, worker=...) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:833
#12 rayon_core::registry::ThreadBuilder::run (self=...) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:55
#13 0x000055e3c5635581 in <rayon_core::registry::DefaultSpawn as rayon_core::registry::ThreadSpawn>::spawn::{{closure}} () at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:100
#14 std::sys_common::backtrace::__rust_begin_short_backtrace (f=...) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/sys_common/backtrace.rs:123
#15 0x000055e3c5630994 in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/thread/mod.rs:483
#16 <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/core/src/panic/unwind_safe.rs:271
#17 std::panicking::try::do_call (data=<optimized out>) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panicking.rs:403
#18 std::panicking::try (f=...) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panicking.rs:367
#19 std::panic::catch_unwind (f=...) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panic.rs:133
#20 std::thread::Builder::spawn_unchecked::{{closure}} () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/thread/mod.rs:482
#21 core::ops::function::FnOnce::call_once{{vtable-shim}} () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/core/src/ops/function.rs:227
#22 0x000055e3c585ce05 in <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/boxed.rs:1691
#23 <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/boxed.rs:1691
#24 std::sys::unix::thread::Thread::new::thread_start () at library/std/src/sys/unix/thread.rs:106
#25 0x00007fb41d7696db in start_thread (arg=0x7facd8be5700) at pthread_create.c:463
#26 0x00007fb41cef061f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
https://github.com/rayon-rs/rayon/blob/v1.5.1/rayon-core/src/sleep/mod.rs#L355 index out of range
CPU: 3970x and 7542, with hyperthreading enabled
This seems similar to #913 and #919, but I was never able to reproduce those myself. Do you have a full example that you can share for your case?
My code is based on a complex library (https://github.com/AleoHQ/snarkVM); it is the client part of a complex system. I don't know if the code will help, as it cannot run alone. I will try to reproduce the issue in simpler code, and if I succeed, I will share the code with you. Thanks for your reply!!!
Interesting, #913 also happens to be related to Aleo... but #919 is not, so there still might be a more general bug here.
diff --git a/rayon-core/src/registry.rs b/rayon-core/src/registry.rs
index 4dd7971..06f21da 100644
--- a/rayon-core/src/registry.rs
+++ b/rayon-core/src/registry.rs
@@ -437,8 +437,10 @@ impl Registry {
unsafe {
let worker_thread = WorkerThread::current();
if worker_thread.is_null() {
println!("in_worker_cold");
self.in_worker_cold(op)
} else if (*worker_thread).registry().id() != self.id() {
println!("in_worker_cross");
self.in_worker_cross(&*worker_thread, op)
} else {
// Perfectly valid to give them a `&T`: this is the
diff --git a/rayon-core/src/thread_pool/mod.rs b/rayon-core/src/thread_pool/mod.rs
index 5edaedc..23f7ab7 100644
--- a/rayon-core/src/thread_pool/mod.rs
+++ b/rayon-core/src/thread_pool/mod.rs
@@ -108,6 +108,7 @@ impl ThreadPool {
OP: FnOnce() -> R + Send,
R: Send,
{
println!("install");
self.registry.in_worker(|_, _| op())
}
I can't find any extra pool.install invocation in the function "BlockHeader::mine_once_unchecked".
ThreadPool::join, scope, and scope_fifo each call install as well.
If this is specifically an issue between multiple pools, per in_worker_cross, that's new information.
Can you capture a backtrace of all threads? In gdb that's thread apply all backtrace (t a a bt for short), and you can use logging to write that to a text file.
By the way, I have edited your comments to use fenced code blocks for readability, as described here:
https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks
ThreadPool::join, scope, and scope_fifo each call install as well.
If this is specifically an issue between multiple pools, per in_worker_cross, that's new information.
Can you capture a backtrace of all threads? In gdb that's thread apply all backtrace (t a a bt for short), and you can use logging to write that to a text file.
Thanks, I will try to capture a backtrace.
I tested the patch below; the coredump frequency has decreased.
diff --git a/rayon-core/src/job.rs b/rayon-core/src/job.rs
index a71f1b0..9dd1aa4 100644
--- a/rayon-core/src/job.rs
+++ b/rayon-core/src/job.rs
@@ -4,6 +4,7 @@ use crossbeam_deque::{Injector, Steal};
use std::any::Any;
use std::cell::UnsafeCell;
use std::mem;
+use std::sync::Mutex;
pub(super) enum JobResult<T> {
None,
@@ -73,6 +74,7 @@ where
pub(super) latch: L,
func: UnsafeCell<Option<F>>,
result: UnsafeCell<JobResult<R>>,
+ m: Mutex<bool>,
}
impl<L, F, R> StackJob<L, F, R>
@@ -86,6 +88,7 @@ where
latch,
func: UnsafeCell::new(Some(func)),
result: UnsafeCell::new(JobResult::None),
+ m: Mutex::new(false),
}
}
@@ -114,6 +117,15 @@ where
}
let this = &*this;
+ {
+ let mut guard = this.m.lock().unwrap();
+ if *guard {
+ println!("job has been comsumed.");
+ return
+ }
+ *guard = true;
+ }
+ let _holder = this.latch.hold();
let abort = unwind::AbortIfPanic;
let func = (*this.func.get()).take().unwrap();
(*this.result.get()) = match unwind::halt_unwinding(call(func)) {
diff --git a/rayon-core/src/latch.rs b/rayon-core/src/latch.rs
index 1d573b7..c4f3275 100644
--- a/rayon-core/src/latch.rs
+++ b/rayon-core/src/latch.rs
@@ -41,6 +41,9 @@ pub(super) trait Latch {
/// and it's typically better to read all the fields you will need
/// to access *before* a latch is set!
fn set(&self);
+ fn hold(&self) -> Option<Arc<Registry>> {
+ None
+ }
}
pub(super) trait AsCoreLatch {
@@ -214,6 +217,11 @@ impl<'r> Latch for SpinLatch<'r> {
registry.notify_worker_latch_is_set(target_worker_index);
}
}
+
+ #[inline]
+ fn hold(&self) -> Option<Arc<Registry>> {
+ Some(Arc::clone(self.registry))
+ }
}
On the 3970X platform, Coredump disappeared for 2 days +
#!/bin/bash
export RUST_BACKTRACE=full
ulimit -c unlimited
x=1
name="sh-94"
if [ $# == 1 ];then
x=$1
echo $x-$1
fi
while true
do
for ((i=1; i<=$x; i++))
do
echo "process $i"
./target/debug/worker --address aleo1nuhl5vf8xdldzxsfnzsnsdgfvqkuufzex2598fzjuxkq2qrl5qzqupr666 --tcp_server "36.189.234.195:4133" --ssl_server "36.189.234.195:4134" --ssl --custom_name $name --parallel_num 6 >> log_$i 2>&1 &
# sudo prlimit --core=unlimited --pid $!
done
wait
cur_dateTime="`date +"%Y-%m-%d %H:%M:%S"`"
echo "restart $cur_dateTime" >> running_history
done
aleo1@x3970x-94:~/snarkOS$ cat running_history
restart 2022-04-24 15:06:30
restart 2022-04-24 18:58:54
restart 2022-04-25 10:59:33
restart 2022-04-25 17:47:02
restart 2022-04-26 03:13:02
restart 2022-04-26 13:33:20
restart 2022-04-26 14:40:14
restart 2022-04-26 17:18:15
aleo1@x3970x-94:~/snarkOS$
aleo1@x3970x-94:~/snarkOS$
aleo1@x3970x-94:~/snarkOS$ date
Fri Apr 29 09:59:04 CST 2022
On the 3990X platform, Coredump occurred once in two days. Unfortunately, I did not grab the Coredump file when the coredump occurred.
At first I suspected that the StackJob was consumed twice, but I didn't catch the log "job has been comsumed" when the coredump occurred on the 3990x, so my guess was wrong.
Can you try it with just the "hold" addition?
There are comments in SpinLatch::set about making sure the pool is kept alive, dating back to #739. In review, it still seems like we're doing the right thing there, but I could be wrong.
the coredump frequency has decreased in my case.
Coredump occurred once in two days.
Oh, it's still not completely fixed? In that case, it might be that the Mutex is adding enough synchronization to make some race condition harder to fail, but that's just a guess.
Can you try it with just the "hold" addition?
There are comments in SpinLatch::set about making sure the pool is kept alive, dating back to #739. In review, it still seems like we're doing the right thing there, but I could be wrong.
I've tried that. The mutex lock I added didn't work, but the "holder" seems to be working, because the coredump frequency has dropped significantly on both of my platforms. However, it is still not completely fixed, for the same reason.
## 3970x
aleo1@x3970x-94:~/snarkOS$ cat running_history
restart 2022-04-24 15:06:30
restart 2022-04-24 18:58:54
restart 2022-04-25 10:59:33
restart 2022-04-25 17:47:02
restart 2022-04-26 03:13:02
restart 2022-04-26 13:33:20
restart 2022-04-26 14:40:14
restart 2022-04-26 17:18:15
restart 2022-05-01 08:31:29
restart 2022-05-04 02:25:54
aleo1@x3970x-94:~/snarkOS$ date
Thu May 5 09:51:18 CST 2022
## 3990x since 2022-04-28
ps@filecoin-21891:~/6pool-worker-file$ cat running_history
restart 2022-05-03 05:15:09
restart 2022-05-04 08:54:47
ps@filecoin-21891:~/6pool-worker-file$ date
2022年 05月 05日 星期四 09:52:40 CST
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `./target/debug/worker --address aleo1nuhl5vf8xdldzxsfnzsnsdgfvqkuufzex2598fzjux'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 <alloc::vec::Vec<T,A> as core::ops::deref::Deref>::deref (self=0x1b8)
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs:2402
2402 /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs: No such file or directory.
[Current thread is 1 (Thread 0x7f92f87fb700 (LWP 118675))]
warning: Missing auto-load script at offset 0 in section .debug_gdb_scripts
of file /mnt/fstar/.aleo/aleo1/snarkOS/target/debug/worker.
Use `info auto-load python-scripts [REGEXP]' to list them.
(gdb) bt
#0 <alloc::vec::Vec<T,A> as core::ops::deref::Deref>::deref (self=0x1b8)
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs:2402
#1 <alloc::vec::Vec<T,A> as core::ops::index::Index<I>>::index (self=0x1b8, index=3)
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs:2496
#2 rayon_core::sleep::Sleep::wake_specific_thread (self=0x1a8, index=3)
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/sleep/mod.rs:357
#3 0x00005588f821a238 in rayon_core::sleep::Sleep::notify_worker_latch_is_set (self=0x1a8, target_worker_index=3)
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/sleep/mod.rs:247
#4 rayon_core::registry::Registry::notify_worker_latch_is_set (target_worker_index=3, self=<optimized out>)
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:546
#5 <rayon_core::latch::SpinLatch as rayon_core::latch::Latch>::set (self=0x7f92505f6138)
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/latch.rs:217
#6 <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute (this=0x7f92505f6138)
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/job.rs:135
#7 0x00005588f81fb4de in rayon_core::job::JobRef::execute (self=<optimized out>)
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/job.rs:60
#8 rayon_core::registry::WorkerThread::execute (self=<optimized out>, job=...)
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:751
#9 rayon_core::registry::WorkerThread::wait_until_cold (self=<optimized out>, latch=<optimized out>)
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:728
#10 0x00005588f845caf4 in rayon_core::registry::WorkerThread::wait_until (self=0x7f92f87fa880, latch=<optimized out>)
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:702
#11 rayon_core::registry::main_loop (registry=..., index=1, worker=...)
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:835
#12 rayon_core::registry::ThreadBuilder::run (self=...) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:55
#13 0x00005588f845a211 in <rayon_core::registry::DefaultSpawn as rayon_core::registry::ThreadSpawn>::spawn::{{closure}} ()
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:100
#14 std::sys_common::backtrace::__rust_begin_short_backtrace (f=...)
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/sys_common/backtrace.rs:123
#15 0x00005588f845db84 in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} ()
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/thread/mod.rs:483
#16 <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>)
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/core/src/panic/unwind_safe.rs:271
#17 std::panicking::try::do_call (data=<optimized out>)
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panicking.rs:403
#18 std::panicking::try (f=...) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panicking.rs:367
#19 std::panic::catch_unwind (f=...) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panic.rs:133
#20 std::thread::Builder::spawn_unchecked::{{closure}} ()
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/thread/mod.rs:482
#21 core::ops::function::FnOnce::call_once{{vtable-shim}} ()
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/core/src/ops/function.rs:227
#22 0x00005588f8679d05 in <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once ()
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/boxed.rs:1691
#23 <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once ()
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/boxed.rs:1691
#24 std::sys::unix::thread::Thread::new::thread_start () at library/std/src/sys/unix/thread.rs:106
#25 0x00007f940454d6db in start_thread (arg=0x7f92f87fb700) at pthread_create.c:463
#26 0x00007f9403cd461f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
I would still like to see thread apply all backtrace too.
I would still like to see thread apply all backtrace
gdb.txt
I am testing another patch:rayon-coredump.diff.txt
My basic idea is to keep the "SpinLatch instance (here)" alive until "latch.set() (here)" has finished executing. This ensures that the calling thread waits for the computing thread to complete the latch.set() call.
If the patch works, I think there is likely a real problem in how the code handles this race condition, although it is not easy to spot.
The 24-hour test results look normal so far. I will continue to test for a while, until either something goes wrong or the problem does not recur for a long time.
diff --git a/rayon-core/src/latch.rs b/rayon-core/src/latch.rs
index 1d573b7..0ecf381 100644
--- a/rayon-core/src/latch.rs
+++ b/rayon-core/src/latch.rs
@@ -143,6 +143,7 @@ pub(super) struct SpinLatch<'r> {
registry: &'r Arc<Registry>,
target_worker_index: usize,
cross: bool,
+ life_lock: Mutex<()>,
}
impl<'r> SpinLatch<'r> {
@@ -157,6 +158,7 @@ impl<'r> SpinLatch<'r> {
registry: thread.registry(),
target_worker_index: thread.index(),
cross: false,
+ life_lock: Mutex::new(()),
}
}
@@ -165,9 +167,16 @@ impl<'r> SpinLatch<'r> {
/// safely call the notification.
#[inline]
pub(super) fn cross(thread: &'r WorkerThread) -> SpinLatch<'r> {
+ //SpinLatch {
+ // cross: true,
+ // ..SpinLatch::new(thread)
+ //}
SpinLatch {
+ core_latch: CoreLatch::new(),
+ registry: thread.registry(),
+ target_worker_index: thread.index(),
cross: true,
- ..SpinLatch::new(thread)
+ life_lock: Mutex::new(()),
}
}
@@ -187,6 +196,7 @@ impl<'r> AsCoreLatch for SpinLatch<'r> {
impl<'r> Latch for SpinLatch<'r> {
#[inline]
fn set(&self) {
+ let _life = self.life_lock.lock().unwrap();
let cross_registry;
let registry = if self.cross {
@@ -216,6 +226,17 @@ impl<'r> Latch for SpinLatch<'r> {
}
}
+impl<'r> Drop for SpinLatch<'r> {
+ fn drop(&mut self) {
+ {
+ let _life = self.life_lock.lock().unwrap();
+ }
+ //if self.cross {
+ // println!("Drop SpinLatch");
+ //}
+ }
+}
+
/// A Latch starts as false and eventually becomes true. You can block
/// until it becomes true.
#[derive(Debug)]
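The "life_lock" idea in this patch can be sketched with a minimal std-only example. The `Latch` type below is hypothetical (not rayon's real `SpinLatch`): the setter does all of its work under a lock, and the latch's owner re-acquires that lock once before tearing the latch down, so teardown cannot race with a setter that has already flipped the flag but has not yet returned from `set()`.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical latch, only illustrating the "life_lock" pattern above.
struct Latch {
    flag: AtomicBool,
    life_lock: Mutex<()>,
}

fn run() -> bool {
    let latch = Arc::new(Latch {
        flag: AtomicBool::new(false),
        life_lock: Mutex::new(()),
    });
    let setter = {
        let latch = Arc::clone(&latch);
        thread::spawn(move || {
            // `set()` runs entirely under the lock, including any work that
            // happens after the flag becomes visible to the waiter.
            let _guard = latch.life_lock.lock().unwrap();
            latch.flag.store(true, Ordering::Release);
            // ...post-set bookkeeping would go here, still under the lock...
        })
    };
    // Waiter side: spin until the flag is set...
    while !latch.flag.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    // ...then take the lock once (as the patched `Drop` does), so the latch
    // cannot be destroyed while the setter is still inside `set()`.
    drop(latch.life_lock.lock().unwrap());
    setter.join().unwrap();
    true
}

fn main() {
    assert!(run());
    println!("teardown waited for the setter");
}
```

This is why the patch reduces the crash rate: the waiting thread can no longer free the latch while a setter is mid-flight.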
gdb.txt
Wow, 832 threads total, and 688 of them are rayon threads?!?
Then there are 141 tokio threads, 2 threads that are just starting, and the main thread.
Technically there's no reason why that many threads should cause memory problems, but you're severely oversubscribed unless most of those are idle. I do see 529 in rayon_core::sleep::Sleep::sleep, leaving 159 active rayon threads. If nothing else, this will cause a lot of context switching and exacerbate whatever race condition we're facing.
gdb.txt
Wow, 832 threads total, and 688 of them are rayon threads?!? Then there are 141 tokio threads, 2 threads that are just starting, and the main thread.
Technically there's no reason why that many threads should cause memory problems, but you're severely oversubscribed unless most of those are idle. I do see 529 in rayon_core::sleep::Sleep::sleep, leaving 159 active rayon threads. If nothing else, this will cause a lot of context switching and exacerbate whatever race condition we're facing.
Yes, I have 12 globally independent rayon pools concurrently executing "BlockHeader::mine_once_unchecked", and BlockHeader::mine_once_unchecked itself creates temporary pools during execution (although the number is fixed at 48?). My 3900X platform has 64/128 (hyperthreaded) cores.
I am testing another patch:rayon-coredump.diff.txt My basic idea is to keep the "SpinLatch instance (here)" alive until "latch.set() (here)" is executed. This ensures that the calling thread waits here for the computed thread to terminate the latch.set() call. If the patch works, I think there is a real possibility of a potential problem with the code handling race conditions, although it is not easy to spot. The 24 hour test results look normal so far, I will continue to test for a period of time until something goes wrong or the problem does not recur for a long time
This patch seems to work fine; the coredump has not appeared so far.
# 3970x platform
aleo1@x3970x-94:~/snarkOS$ cat running_history
restart 2022-04-24 15:06:30
restart 2022-04-24 18:58:54
restart 2022-04-25 10:59:33
restart 2022-04-25 17:47:02
restart 2022-04-26 03:13:02
restart 2022-04-26 13:33:20
restart 2022-04-26 14:40:14
restart 2022-04-26 17:18:15
restart 2022-05-01 08:31:29
restart 2022-05-04 02:25:54
start 2022-05-05 13:57:53
aleo1@x3970x-94:~/snarkOS$ date
Mon May 9 11:15:43 CST 2022
#3990x platform
restart 2022-05-03 05:15:09
restart 2022-05-04 08:54:47
start 2022-05-05 14:51:45
ps@filecoin-21891:~/6pool-worker-file$ date
2022年 05月 09日 星期一 11:17:02 CST
# running script
#!/bin/bash
cur_dateTime="`date +"%Y-%m-%d %H:%M:%S"`"
echo "start $cur_dateTime" >> running_history
export RUST_BACKTRACE=full
ulimit -c unlimited
x=1
name="sh-91"
# hk
#tcp_server="61.10.9.34:4133"
#ssl_server="61.10.9.34:4134"
# china
tcp_server="36.189.234.195:4133"
ssl_server="36.189.234.195:4134"
if [ $# == 1 ];then
x=$1
echo $x-$1
fi
while true
do
for ((i=1; i<=$x; i++))
do
echo "process $i"
./worker --address aleo1nuhl5vf8xdldzxsfnzsnsdgfvqkuufzex2598fzjuxkq2qrl5qzqupr666 --tcp_server $tcp_server --ssl_server $ssl_server --ssl --custom_name $name --parallel_num 12 >> log_$i 2>&1 &
# sudo prlimit --core=unlimited --pid $!
done
wait
cur_dateTime="`date +"%Y-%m-%d %H:%M:%S"`"
echo "restart $cur_dateTime" >> running_history
done
I think I found the issue -- could you see if this fixes it for you?
diff --git a/rayon-core/src/latch.rs b/rayon-core/src/latch.rs
index 1d573b781612..b84fbe371ee3 100644
--- a/rayon-core/src/latch.rs
+++ b/rayon-core/src/latch.rs
@@ -189,19 +189,21 @@ impl<'r> Latch for SpinLatch<'r> {
fn set(&self) {
let cross_registry;
- let registry = if self.cross {
+ let registry: &Registry = if self.cross {
// Ensure the registry stays alive while we notify it.
// Otherwise, it would be possible that we set the spin
// latch and the other thread sees it and exits, causing
// the registry to be deallocated, all before we get a
// chance to invoke `registry.notify_worker_latch_is_set`.
cross_registry = Arc::clone(self.registry);
- &cross_registry
+ &*cross_registry
} else {
// If this is not a "cross-registry" spin-latch, then the
// thread which is performing `set` is itself ensuring
- // that the registry stays alive.
- self.registry
+ // that the registry stays alive. However, that doesn't
+ // include this *particular* `Arc` handle if the waiting
+ // thread then exits, so we must completely dereference it.
+ &**self.registry
};
let target_worker_index = self.target_worker_index;
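The distinction the fix relies on can be shown with a small, safe sketch (the `Registry` type here is a hypothetical stand-in, not rayon's): a `&Arc<Registry>` is a reference to one particular Arc *handle*, valid only while that handle lives, whereas a fully dereferenced `&Registry` stays valid as long as *any* clone of the Arc keeps the allocation alive. In rayon's unsafe cross-thread setting, the waiting thread may drop its handle as soon as the latch is set, which is why `set()` must peel down to `&Registry` first.

```rust
use std::sync::Arc;

// Hypothetical stand-in for rayon's Registry.
struct Registry {
    id: usize,
}

// Notification only needs the allocation, not any particular Arc handle.
fn notify(registry: &Registry) -> usize {
    registry.id
}

fn main() {
    let pool_owner = Arc::new(Registry { id: 7 }); // another owner keeps the allocation alive
    let waiter_handle = Arc::clone(&pool_owner);   // the handle the waiting thread owns

    // On a field of type `&Arc<Registry>` this is the `&**` in the fix:
    // deref the reference, deref the Arc, then reborrow as `&Registry`.
    let registry: &Registry = &*waiter_handle;
    assert_eq!(notify(registry), 7);
    println!("notified registry {}", registry.id);
}
```

Safe Rust's borrow checker prevents demonstrating the actual use-after-free here; the bug only arises because rayon reaches the `&Arc` field through raw pointers across threads.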
Ok, I will test and give feedback.
Haha, "&**" is confusing in Rust
Yeah, that * invokes Deref, then & to capture an inner reference.
I have applied the patch above and started the test. I think you found the real problem: the lifetime of that particular `Arc` handle is not guaranteed.
aleo1@x3970x-94:~/rayon$ git diff
diff --git a/rayon-core/src/latch.rs b/rayon-core/src/latch.rs
index 1d573b7..b84fbe3 100644
--- a/rayon-core/src/latch.rs
+++ b/rayon-core/src/latch.rs
@@ -189,19 +189,21 @@ impl<'r> Latch for SpinLatch<'r> {
fn set(&self) {
let cross_registry;
- let registry = if self.cross {
+ let registry: &Registry = if self.cross {
// Ensure the registry stays alive while we notify it.
// Otherwise, it would be possible that we set the spin
// latch and the other thread sees it and exits, causing
// the registry to be deallocated, all before we get a
// chance to invoke `registry.notify_worker_latch_is_set`.
cross_registry = Arc::clone(self.registry);
- &cross_registry
+ &*cross_registry
} else {
// If this is not a "cross-registry" spin-latch, then the
// thread which is performing `set` is itself ensuring
- // that the registry stays alive.
- self.registry
+ // that the registry stays alive. However, that doesn't
+ // include this *particular* `Arc` handle if the waiting
+ // thread then exits, so we must completely dereference it.
+ &**self.registry
};
let target_worker_index = self.target_worker_index;
aleo1@x3970x-94:~/rayon$
aleo1@x3970x-94:~/snarkOS$ cat running_history
restart 2022-04-24 15:06:30
restart 2022-04-24 18:58:54
restart 2022-04-25 10:59:33
restart 2022-04-25 17:47:02
restart 2022-04-26 03:13:02
restart 2022-04-26 13:33:20
restart 2022-04-26 14:40:14
restart 2022-04-26 17:18:15
restart 2022-05-01 08:31:29
restart 2022-05-04 02:25:54
start 2022-05-05 13:57:53
start 2022-05-11 09:48:25
aleo1@x3970x-94:~/snarkOS$
ps@filecoin-21891:~/6pool-worker-file$ cat running_history
restart 2022-05-03 05:15:09
restart 2022-05-04 08:54:47
start 2022-05-05 14:51:45
start 2022-05-11 09:58:30
https://github.com/rayon-rs/rayon/issues/929#issuecomment-1123104488
All tests seem OK. In addition to the two machines previously tested, six more machines participated in the test.
ps@filecoin-21891:~/6pool-worker-file$ cat running_history
restart 2022-05-03 05:15:09
restart 2022-05-04 08:54:47
start 2022-05-05 14:51:45
start 2022-05-11 09:58:30
ps@filecoin-21891:~/6pool-worker-file$ date
2022年 05月 12日 星期四 09:27:28 CST
aleo1@x3970x-94:~/snarkOS$ cat running_history
restart 2022-04-24 15:06:30
restart 2022-04-24 18:58:54
restart 2022-04-25 10:59:33
restart 2022-04-25 17:47:02
restart 2022-04-26 03:13:02
restart 2022-04-26 13:33:20
restart 2022-04-26 14:40:14
restart 2022-04-26 17:18:15
restart 2022-05-01 08:31:29
restart 2022-05-04 02:25:54
start 2022-05-05 13:57:53
start 2022-05-11 09:48:25
aleo1@x3970x-94:~/snarkOS$ date
Thu May 12 09:27:26 CST 2022
@cuviper
|
gharchive/issue
| 2022-04-20T03:57:48 |
2025-04-01T06:40:11.582730
|
{
"authors": [
"cuviper",
"scuwan"
],
"repo": "rayon-rs/rayon",
"url": "https://github.com/rayon-rs/rayon/issues/929",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1469009342
|
[Cleanup] use C code that is also compatible with C++
While the template is valid C code, it does not work well when converted to C++. Some users would like to use the template as the starting point for C++ code.
This PR makes a few small changes to the C code to make it also compatible with C++.
use the scene enums everywhere, not a mix of enums and numbers (also makes the code read better)
don't pass the address of unnamed temp variables into functions, allocate an actual structure.
These changes allow the C files to simply be renamed to C++ and compiled without issues.
@JeffM2501 Thanks! I'm used to using integers for enum types; maybe it's about time to change to stronger typing... :)
|
gharchive/pull-request
| 2022-11-30T03:56:08 |
2025-04-01T06:40:11.587401
|
{
"authors": [
"JeffM2501",
"raysan5"
],
"repo": "raysan5/raylib-game-template",
"url": "https://github.com/raysan5/raylib-game-template/pull/12",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
2610048868
|
[RTEXTURES] Remove the panorama cubemap layout option
The panorama cube map option was not implemented, so this PR removes it from the enumeration in raylib.h so that users do not get confused and try to use it.
I left a todo in the code for some aspiring developer to finish.
@JeffM2501 thanks for the review!
|
gharchive/pull-request
| 2024-10-23T23:38:53 |
2025-04-01T06:40:11.588582
|
{
"authors": [
"JeffM2501",
"raysan5"
],
"repo": "raysan5/raylib",
"url": "https://github.com/raysan5/raylib/pull/4425",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
461897166
|
Personal monthly task plan | Fang Zilong
https://www.raysonblog.cn/monthplan/
test gitment
|
gharchive/issue
| 2019-06-28T07:26:23 |
2025-04-01T06:40:11.589540
|
{
"authors": [
"raysonfang",
"wave-gbt"
],
"repo": "raysonfang/blog-comments",
"url": "https://github.com/raysonfang/blog-comments/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2184801440
|
Optimize memory usage in ChipTuneEngine.
Optimize memory usage in ChipTuneEngine so that, instead of generating all waveforms per note first and then playing them, we create a waveform one or a few notes or beats before it is about to be played, and then free up the memory just as a note has finished playing.
Optionally group together sounds that are identical for more efficient memory usage.
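A minimal sketch of the just-in-time idea (the names are illustrative, not ChipTuneEngine's real API): generate each note's samples only right before it is "played", and let the buffer drop immediately afterwards, so peak memory is one note's buffer instead of the whole tune.

```rust
// Hypothetical sketch of just-in-time waveform generation.
fn synthesize_note(freq_hz: f32, num_samples: usize, sample_rate: f32) -> Vec<f32> {
    (0..num_samples)
        .map(|i| (2.0 * std::f32::consts::PI * freq_hz * i as f32 / sample_rate).sin())
        .collect()
}

fn play(samples: &[f32]) -> usize {
    // Stand-in for handing the buffer to the audio device.
    samples.len()
}

fn play_tune(notes: &[f32]) -> usize {
    let mut played = 0;
    for &freq in notes {
        // The buffer exists only for the duration of this iteration...
        let samples = synthesize_note(freq, 4410, 44_100.0);
        played += play(&samples);
        // ...and is freed here, before the next note is generated.
    }
    played
}

fn main() {
    let total = play_tune(&[440.0, 494.0, 523.0]);
    assert_eq!(total, 3 * 4410);
    println!("played {} samples", total);
}
```

Grouping identical sounds (the optional part of the issue) would amount to memoizing `synthesize_note` on its arguments.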
|
gharchive/issue
| 2024-03-13T20:07:48 |
2025-04-01T06:40:11.599189
|
{
"authors": [
"razterizer"
],
"repo": "razterizer/8Beat",
"url": "https://github.com/razterizer/8Beat/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2640170137
|
Add editor-in-editor textel editor
To make freehand painting much easier.
Also indicate if the supplied material index already exists or not.
Use key 'E' (for edit).
|
gharchive/issue
| 2024-11-07T07:45:24 |
2025-04-01T06:40:11.600389
|
{
"authors": [
"razterizer"
],
"repo": "razterizer/TextUR",
"url": "https://github.com/razterizer/TextUR/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
676033631
|
docs
I am really new to Hydra and JSON as a whole.
I managed to install the package and I get:
{
"@context": {
"@vocab": "http:\/\/politiebureaus.localhost\/docs",
"hydra": "http:\/\/www.w3.org\/ns\/hydra\/core#",
"rdf": "http:\/\/www.w3.org\/1999\/02\/22-rdf-syntax-ns#"
},
"@id": "http:\/\/politiebureaus.localhost\/docs",
"@type": "hydra:ApiDocumentation",
"hydra:title": "Politie Hydra API",
"hydra:description": "Politie API that conforms to the Hydra specification",
"hydra:entrypoint": null,
"hydra:supportedClass": "[]"
}
As shown, I can set the description etc. in .env.
I had expected my App\Models classes in hydra:supportedClass?
Or something in the entrypoint?
Is there some documentation telling me what to do next?
Or is there an example project using this package?
Otherwise, any suggestions?
thanks,
Noud
Sorry, I didn't get too far with this project, I was trying something as a proof of concept and it hasn't really worked out. I'm going to archive the project as I don't really have too much time to invest in it at the moment. If you're looking for a resource to build a Hydra API then I'd recommend you get started with API Platform, if you haven't seen it already. If it has to be Laravel I don't think there's much out there, and my personal feeling after trying this project is that Laravel's opinionated nature makes working with Hydra very difficult. By all means fork the repo and see how you get on, though. Sorry I can't be more helpful right now.
|
gharchive/issue
| 2020-08-10T10:31:05 |
2025-04-01T06:40:11.605401
|
{
"authors": [
"noud",
"rb1193"
],
"repo": "rb1193/laravel-hydra",
"url": "https://github.com/rb1193/laravel-hydra/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2624885902
|
need aprove and pull request
need aprove and pull request
Originally posted by @MorgunovE in https://github.com/rbabari/Test-1277-A13/issues/3#issuecomment-2447939967
pull request approved
|
gharchive/issue
| 2024-10-30T17:55:09 |
2025-04-01T06:40:11.606887
|
{
"authors": [
"MorgunovE"
],
"repo": "rbabari/Test-1277-A13",
"url": "https://github.com/rbabari/Test-1277-A13/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
558500980
|
Add development scripts
scripts/install - Install dependencies in a virtual environment.
scripts/test - Run the test suite.
scripts/lint - Run the code linting.
scripts/publish - Publish the latest version to PyPI.
Fixed by https://github.com/rbw/snow/pull/44
|
gharchive/issue
| 2020-02-01T07:33:20 |
2025-04-01T06:40:11.653092
|
{
"authors": [
"rbw"
],
"repo": "rbw/snow",
"url": "https://github.com/rbw/snow/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2000641896
|
bashunit does not provide test reports option yet
Currently bashunit does not have an option to output test report files.
As a workaround (see #12) this adapter runs a test command per test function name. This allows us to have individual results and individual outputs.
With test reports we would at least be able to:
Run all tests in a file with just 1 command
Run all test files in a directory with just 1 command
Show inline errors about what went wrong in running a test
Hi everyone,
After careful consideration and further usage of the adapter, as well as observing increased community adoption, I have decided that the proposed functionality to generate test report files is no longer necessary for our project. The same goals can be achieved without this specific integration.
Therefore, I am closing this issue. I greatly appreciate all the time and effort that everyone has put into this discussion and proposal.
|
gharchive/issue
| 2023-11-19T00:46:04 |
2025-04-01T06:40:11.659074
|
{
"authors": [
"rcasia"
],
"repo": "rcasia/neotest-bash",
"url": "https://github.com/rcasia/neotest-bash/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
263720497
|
Use Docusign for signing documents.
We now have several documents that need the signature of one or more persons:
SoW
W9 tax form
So we don't have to paste a Signature.jpg in documents anymore.
Evan is in takes care of the process
You do not need to use Docusign to sign documents. You can sign in a Google Doc via Insert Drawing and the Line/Scribble tool before exporting as PDF, or by using Adobe Acrobat, Tools, Sign.
On Sun, Oct 8, 2017 at 10:07 AM, HJ Hilbolling notifications@github.com
wrote:
Don't you think signing documents digitally would be easier to verify?
|
gharchive/issue
| 2017-10-08T14:07:08 |
2025-04-01T06:40:11.693034
|
{
"authors": [
"Ojimadu",
"jimscarver",
"lapin7"
],
"repo": "rchain/Members",
"url": "https://github.com/rchain/Members/issues/120",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2065358272
|
Errors in ploting dyads
Hello team,
I'm facing an issue while plotting dyad density (the occupancy plots come out fine).
My data is from MNase-seq in yeast.
The error messages are -
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘coverage’ for signature ‘"NULL"’
Calls: Main ... ComputeNormalizationFactors -> coverage ->
Could you please let me know what could be the issue?
I can provide other information as required.
Many thanks,
Jeff
Hi Jeffrey,
based on the error, it looks like the reads are NULL.
You get reads object from input file path, genome name, and annotations:
rawReads <- LoadReads(params$inputFilePath, params$genome, annotations)
reads <- CleanReads(rawReads, annotations$chrLen, params$lMin, params$lMax)
Then, you call ComputeNormalizationFactors with reads as parameter:
ComputeNormalizationFactors <- function(reads) {
occ <- coverage(reads)
If you want to share raw data and annotations with me, I can debug it.
Best,
Paula
Hi Paula,
Thanks for the reply.
I have managed to troubleshoot the problem. It was coming from an issue with the BAM file index. It had been overwritten by the script.
Thanks,
Jeff
|
gharchive/issue
| 2024-01-04T10:06:59 |
2025-04-01T06:40:11.703346
|
{
"authors": [
"jeff-godwin",
"paulati"
],
"repo": "rchereji/plot2DO",
"url": "https://github.com/rchereji/plot2DO/issues/17",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
61764552
|
User profile
Small fixes to enable users to click to see a profile and to make sure a user has a GitHub username when they create an account.
Looks good!
|
gharchive/pull-request
| 2015-03-15T07:04:37 |
2025-04-01T06:40:11.735840
|
{
"authors": [
"agundy",
"seveibar"
],
"repo": "rcos/Observatory3",
"url": "https://github.com/rcos/Observatory3/pull/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1638290558
|
Pharos-targets: Download sql file to separate dir
The changes to https://github.com/rcsb/py-rcsb_workflow/pull/7 did the trick for stashing Pharos-targets data; however, they unnecessarily included the downloaded latest.sql.gz and pharos-update.sql in the stash. This caused a 500 error from GitHub when trying to push 7 GB worth of 50 MB part files.
This change puts the downloaded SQL dump into a separate directory and keeps Pharos-targets for what will be stashed and used by PharosTargetActivityProvider.
This doesn't have any effect on the rest of the code because it specifically uses the tdd files in Pharos-targets. That isn't changing, just the location of the SQL file that is downloaded to generate those files (via MySQL).
|
gharchive/pull-request
| 2023-03-23T20:45:48 |
2025-04-01T06:40:11.746227
|
{
"authors": [
"aliciaaevans"
],
"repo": "rcsb/py-rcsb_utils_targets",
"url": "https://github.com/rcsb/py-rcsb_utils_targets/pull/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1411402746
|
Feature: Numeric Datatypes
Implements the remaining xsd and owl datatypes and makes the appropriate datatype arbitrary precision.
Additionally fixes the performance of numeric serialization by utilizing <charconv>
Some tests missing.
Tests failing because I didn't implement the FetchContent stuff yet
Also: will use the tests from the other, older numeric types PR
Btw, compile times are really suffering. Maybe we should include explicit template specializations in a cpp file after all. Although I'm not entirely certain that this would help
TODOs:
[ ] fix compile times
[ ] fix float,double to_string
[ ] make build without conan work
Summed up, I think we can just remove the extra CMake file for datatypes, right?
Yes, I just forgot to delete it 😅
|
gharchive/pull-request
| 2022-10-17T11:25:35 |
2025-04-01T06:40:11.763399
|
{
"authors": [
"Clueliss"
],
"repo": "rdf4cpp/rdf4cpp",
"url": "https://github.com/rdf4cpp/rdf4cpp/pull/82",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1788886333
|
RDKDEV-774 Add support for dm-verity based bundles
Adds support for dm-verity[1] based encrypted
bundles.
It uses OMI[2] service to mount encrypted bundles.
[1] https://docs.kernel.org/admin-guide/device-mapper/verity.html
[2] https://code.rdkcentral.com/r/plugins/gitiles/rdk/components/generic/rdk-oe/meta-cmf/+/refs/heads/rdk-next/recipes-support/omi/omi.bb
Signed-off-by: Damian Wrobel dwrobel.contractor@libertyglobal.com
Signed-off-by: Damian Wrobel dwrobel@ertelnet.rybnik.pl
Change-Id: Iac4e76b9dcbe411ee4226a24a9b1ea8f197df681
Internal ticket for tracking https://ccp.sys.comcast.net/browse/RDKCOM-4033
Counterpart for the main branch: https://github.com/rdkcentral/rdkservices/pull/4177.
|
gharchive/pull-request
| 2023-07-05T06:47:10 |
2025-04-01T06:40:11.782619
|
{
"authors": [
"dwrobel",
"pradeeptakdas"
],
"repo": "rdkcentral/rdkservices",
"url": "https://github.com/rdkcentral/rdkservices/pull/4185",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1738625129
|
InChI calculation for Sulfur with valence 4 and 6 omits hydrogens
Describe the bug
For molecules with sulfur with valence 4 and 6, valence is reduced to 2 when computing an InChI and hydrogens are lost. I looked through the InChI Standard Valences, and I don't believe this is a normalization change (Appendix I in InChI technical manual) https://www.inchi-trust.org/all-downloadable-versions/
To Reproduce
mol1 = Chem.MolFromSmiles('CN(C)[SH3]')
for atom in mol1.GetAtoms():
print(atom.GetSymbol(), atom.GetTotalValence(), atom.GetTotalNumHs())
C 4 3
N 3 0
C 4 3
S 4 3
Chem.MolToInchi(mol1)
InChI=1S/C2H7NS/c1-3(2)4/h4H,1-2H3
mol2 = Chem.MolFromSmiles('CN(C)[SH5]')
for atom in mol2.GetAtoms():
print(atom.GetSymbol(), atom.GetTotalValence(), atom.GetTotalNumHs())
C 4 3
N 3 0
C 4 3
S 6 5
print(Chem.MolToInchi(mol2))
InChI=1S/C2H7NS/c1-3(2)4/h4H,1-2H3
Expected behavior
Here are the InChIs that I was expecting:
For CN(C)[SH3] ---> InChI=1S/C2H9NS/c1-3(2)4/h1-2,4H3
For CN(C)[SH5] ---> InChI=1S/C2H11NS/c1-3(2)4/h1-2H3,4H5
Configuration (please complete the following information):
RDKit version: 2023.03.1
OS: Ubuntu 22.04
Python version (if relevant): 3.11.3
Are you using conda? Yes
If you are using conda, which channel did you install the rdkit from? conda-forge
If you are not using conda: how did you install the RDKit? N/A
This is more a question for the InChI library authors than for RDKit itself, but on the other hand the molecules you wrote are not correct: in the form you wrote them, a charge is missing.
|
gharchive/issue
| 2023-06-02T18:25:54 |
2025-04-01T06:40:11.788411
|
{
"authors": [
"vfscalfani",
"wopozka"
],
"repo": "rdkit/rdkit",
"url": "https://github.com/rdkit/rdkit/issues/6436",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
328812780
|
Install prop-types as peer, babel-plugin-dev-expression as dev
Hello!
I noticed this two errors in your package.json.
As prop-types is used in your codebase, it makes it impossible to use tools like BundlePhobia: https://bundlephobia.com/result?p=@reach/router
thanks, looks like you've got prop-types as a normal, not peer dependency?
@ryanflorence Yes, that was @sheepsteak's point; I changed it to follow the prop-types project documentation.
|
gharchive/pull-request
| 2018-06-03T09:12:46 |
2025-04-01T06:40:11.807238
|
{
"authors": [
"ryanflorence",
"zoontek"
],
"repo": "reach/router",
"url": "https://github.com/reach/router/pull/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
445694323
|
Consider creating a truffle framework box on this template
Is your feature request related to a problem? Please describe.
So dapps (decentralized applications) are growing and are using React to create web apps that connect to the Ethereum network. Truffle is the framework used to assist with this; there are templates called Truffle boxes for creating apps. I honestly think your React template is the best template out there, and I think you should integrate it into a box.
Describe the solution you'd like
I wanted to actually take a version of your boilerplate I have and create a box from it. But I thought I would let you know in case you want to contribute yourselves.
Describe alternatives you've considered
Here are the steps to create a box. If you don't want to do this, let me know and I'll attempt to create one based on your template.
Additional context
Here is the react box I am using at the moment.
Excellent! We'd love to be part of a Truffle box for dapps!
Personally, what little time I can dedicate to OSS work I try to dedicate to this project directly, but I'd be happy to help out if any questions/issues come up.
I'll keep this open as I'm sure the rbp+dapps intersection will be relevant to a few people who come here.
Closing for lack of activity.
|
gharchive/issue
| 2019-05-18T08:40:48 |
2025-04-01T06:40:11.811205
|
{
"authors": [
"jacques-trixta",
"julienben"
],
"repo": "react-boilerplate/react-boilerplate",
"url": "https://github.com/react-boilerplate/react-boilerplate/issues/2657",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
382891496
|
Update issue templates
We should consider adding 2 separate templates for bug and feature to further guide people when they visit with their ideas/problems. imo the more guidance the better.
Thanks @jwinn !
Coverage remained the same at 100.0% when pulling c236d01eba4e778dc048f6ce16e3cb5e65ea0fc2 on update-templates into 435e07b10b769da07c912ffb439dcd233fd3dbbb on master.
|
gharchive/pull-request
| 2018-11-20T23:32:36 |
2025-04-01T06:40:11.814451
|
{
"authors": [
"coveralls",
"gretzky"
],
"repo": "react-boilerplate/react-boilerplate",
"url": "https://github.com/react-boilerplate/react-boilerplate/pull/2482",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
228292234
|
Parent Navigator of Nested loses path params
Off the back of #1232
Simplified Example
const AppStackNavigation = StackNavigator({
  Profile: {
    path: 'profile/:profileId',
    screen: TabNavigator({
      overview: {
        path: 'overview',
        screen: () => <View/>,
      },
      activity: {
        path: 'activity',
        screen: () => <View/>,
      },
    }),
  },
});
Paths and their results
Using AppStackNavigation.router.getActionForPathAndParams :
profile/123 => {routeName: 'Profile', params: {profileId: undefined}} ❌
profile/123/ => {routeName: 'Profile', params: {profileId: undefined}} ❌
profile/123/anything => {routeName: 'Profile', params: {profileId: 123}} 👍
profile/123/activity => {routeName: 'Profile', params: {profileId: 123}, action: {routeName: 'activity'}} 👍
I've simplified the results a bit to make them easier to understand. But basically I would have thought that stackitem/someparam as a path would still work if the stack item is a navigator.
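To make the expected behavior concrete, here is a hedged, standalone sketch; matchPath is a hypothetical helper, not react-navigation's actual implementation, showing how a parent pattern like profile/:profileId can capture the param whether or not a child segment follows:

```javascript
// Hypothetical path matcher illustrating the expectation above.
// Names are made up for this sketch; this is not react-navigation code.
function matchPath(pattern, path) {
  const patParts = pattern.split('/').filter(Boolean);
  const pathParts = path.split('/').filter(Boolean);
  const params = {};
  for (let i = 0; i < patParts.length; i++) {
    const part = patParts[i];
    if (part.startsWith(':')) {
      // A parameter segment must be present, with or without children after it.
      if (pathParts[i] === undefined) return null;
      params[part.slice(1)] = pathParts[i];
    } else if (part !== pathParts[i]) {
      return null;
    }
  }
  // Whatever remains is the nested navigator's sub-path (may be empty).
  return { params, childPath: pathParts.slice(patParts.length).join('/') };
}
```

With this shape, profile/123 yields { profileId: '123' } and an empty child path, while profile/123/activity additionally hands activity to the nested TabNavigator.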
any update?
Hi there @onlydave ,
In an effort to cleanup this project and prioritize a bit, since this issue had no follow up since my last comment I'm going to close it.
If you are still having the issue (especially if it's a bug report) please check the current open issues, and if you can't find a duplicate, open a new one that uses the new Issue Template to provide some more details to help us solve it.
|
gharchive/issue
| 2017-05-12T13:29:20 |
2025-04-01T06:40:11.819898
|
{
"authors": [
"eduardoleal",
"kelset",
"onlydave"
],
"repo": "react-community/react-navigation",
"url": "https://github.com/react-community/react-navigation/issues/1489",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
252383756
|
How are we supposed to set this up when we don't use an EXPO project?
Your example illustrates the assumption that the user has an App.js file built from Expo.
How do we use the DrawerNavigator when we have index.ios.js and index.android.js?
All I get is Native module cannot be null.
Thanks
Let those files, index.ios.js and index.android.js render a component which eventually renders your navigator, or render your navigator directly inside the render() method of your root component. Navigators are React components and can be rendered like any other component.
|
gharchive/issue
| 2017-08-23T19:01:42 |
2025-04-01T06:40:11.821893
|
{
"authors": [
"limeytrader007",
"matthamil"
],
"repo": "react-community/react-navigation",
"url": "https://github.com/react-community/react-navigation/issues/2453",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
165239509
|
How to bind an array of objects
The data structure is similar to:
{
title: 'test',
items: [{
id: 1
title: 'xxx'
}, {
id: 2
title: 'xxxx'
}]
}
Following the input-array example, I first use
getFieldValue('items').map
but how do I bind the inputs inside the map to fields like id and title?
{...getFieldProps('item.title')}
does not take effect.
The first argument to getFieldProps must be unique, so it should be:
`item${index}.title`
Study these two examples:
http://react-component.github.io/form/examples/input-array.html
http://react-component.github.io/form/examples/checkbox-group.html
getFieldValue('items').map((item, index) => (
<Input
{...getFieldProps(`item${index}.title`)}
/>
));
There is still no value showing in the input. I printed the fields: { name: item0.title }, and there is no value property.
I couldn't find a js bundle that can be referenced directly on CodePen, so it's hard to put together a demo.
Write it as below; you need initialValue to make it display in the input.
Then, in the onFieldsChange function (usually a redux reducer), update the related value:
e.g. split the prop string (item.0.title), get the index, then update.
getFieldValue('items').map((item, index) => (
<Input
{...getFieldProps(`item.${index}.title`, {
initialValue: item.title,
})}
/>
));
Set initialValue to make it display, then update the related value in onFieldsChange, and it will work.
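To make the onFieldsChange suggestion concrete, here is a minimal plain-JavaScript sketch; applyFieldChange is a hypothetical helper, and the field-name format item.${index}.title is assumed from the example above:

```javascript
// Hypothetical reducer helper: apply a change reported for a field name
// like "item.0.title" to the items array, immutably.
function applyFieldChange(items, fieldName, value) {
  const [, indexStr, key] = fieldName.split('.'); // e.g. ["item", "0", "title"]
  const index = Number(indexStr);
  return items.map((item, i) =>
    i === index ? { ...item, [key]: value } : item
  );
}
```

You would call this from onFieldsChange (or the redux reducer it dispatches to) for each changed field.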
|
gharchive/issue
| 2016-07-13T05:21:31 |
2025-04-01T06:40:11.829431
|
{
"authors": [
"alayii",
"benjycui"
],
"repo": "react-component/form",
"url": "https://github.com/react-component/form/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
664151336
|
How can I use this on a mobile phone?
Is your feature request related to a problem? Please describe.
It looks like your react-dnd library does not support mobile devices. Or is there any idea you can provide to me for implementing this on mobile?
Describe the solution you'd like
I think your event listeners do not listen for the touch events like touchstart, touchmove and touchend, right? Not sure.
Has anyone had a chance to look into this yet?
I'm not a maintainer but have you tried the TouchBackend?
It's documented on the first page, it might be helpful:
https://react-dnd.github.io/react-dnd/docs/overview#backends
The library currently ships with the HTML backend, which should be sufficient for most web applications. There is also a Touch backend that can be used for mobile web applications.
|
gharchive/issue
| 2020-07-23T01:48:48 |
2025-04-01T06:40:11.837281
|
{
"authors": [
"larissa-n",
"laurentsenta",
"zy-zero"
],
"repo": "react-dnd/react-dnd",
"url": "https://github.com/react-dnd/react-dnd/issues/2684",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
562203591
|
Contributor
Hi @bluebill1049 ,
I've been working with react-hook-forms for a while and the results are great. In return I would love to contribute to this repo in any way I can.
Cheers, sigfried.
aha that would be awesome! I will keep you in the loop on what we can work on together. You can take a preview of V1; there is a branch now.
It's going to be an interesting and useful project :)
It's going to be an interesting and useful project :)
Indeed :+1:
Hey @sigfriedCub1990 you want to help on the filter functionality?
the filter name part?
cd app
yarn
yarn start
everything lives inside the devTools folder.
Sure thing @bluebill1049 I'll look into it today's afternoon :)
@kotarella1110 if you've got some time, please help improve the types :) ❤️
@bluebill1049 yeah! i will improve types 💚
amazing! @kotarella1110 🙏
It's an ongoing thing; let's close this issue for now. Feel free to send a PR to improve this devtool.
|
gharchive/issue
| 2020-02-09T16:37:24 |
2025-04-01T06:40:11.848247
|
{
"authors": [
"bluebill1049",
"kotarella1110",
"sigfriedCub1990"
],
"repo": "react-hook-form/react-hook-form-devtools",
"url": "https://github.com/react-hook-form/react-hook-form-devtools/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
333162912
|
How To Get Refresh Token on Both IOS and Android
Is it possible to get a refresh token with this package? Apparently it is not included in the documentation, so I'm assuming it is not configured to return a refresh token.
Also, is it possible to get the expiration date for the access token on Android? I've read that it is only available on iOS.
Environment
react-native : 0.55.0
react-native-google-signin : 0.12.0
react : 16.3.0-alpha.0
I found that this was fixed in #398. Should I fork the repo, or is there a better solution that can pull the newest commits?
I might be able to answer this question if you can provide some context about what you're trying to achieve. Are you looking for a refresh token because you're trying to do some offline API access, server-side perhaps?
I also have this problem. I am using the Google Contacts API v3 to get the contact list (name, photo).
To get a contact photo I need the access_token, but it expires soon.
How can I get a refresh_token so I can access the Google API when I need to?
I'm also facing the same problem, and I'm wondering why I do not receive a refreshToken. Just like the post above, I'm looking for the refresh token in order to refresh the access token when it is found to be expired.
related: https://stackoverflow.com/questions/35154509/android-how-to-get-refresh-token-by-google-sign-in-api
Thanks for the quick reply! I sent the serverAuthCode in a request and only got a new accessToken.
I tried some other things and just remembered that I only get a refreshToken on the first request.
I got the refreshToken when I sent the serverAuthCode immediately after the first authentication.
Just for clarity: the access token you receive from GoogleSignin.getTokens() only lasts about an hour. To get a new access_token, first send the serverAuthCode to https://oauth2.googleapis.com/token with the fields client_id, client_secret, code (the serverAuthCode), grant_type (set to authorization_code) and redirect_uri (configurable in the developers console). Remember to only use the serverAuthCode obtained on the very first attempt, when the user first granted the app's permissions; otherwise you will get a grant error every time. After receiving the refresh_token, use it to get a new access_token: change grant_type from authorization_code to refresh_token, replace the code field with refresh_token (filled with its value), and send a POST request to the same URL. You will get a fresh access_token valid for one hour.
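As a sketch of the two requests described above: the field names follow Google's OAuth 2.0 token endpoint, the helper names are made up, and you would still POST these bodies with your HTTP client of choice:

```javascript
const TOKEN_URL = 'https://oauth2.googleapis.com/token';

// First exchange: the one-time serverAuthCode -> refresh_token (+ access_token).
function buildCodeExchangeBody({ clientId, clientSecret, serverAuthCode, redirectUri }) {
  return new URLSearchParams({
    client_id: clientId,
    client_secret: clientSecret,
    code: serverAuthCode, // the serverAuthCode from the FIRST sign-in
    grant_type: 'authorization_code',
    redirect_uri: redirectUri,
  }).toString();
}

// Later refreshes: refresh_token -> fresh access_token (valid for about an hour).
function buildRefreshBody({ clientId, clientSecret, refreshToken }) {
  return new URLSearchParams({
    client_id: clientId,
    client_secret: clientSecret,
    refresh_token: refreshToken,
    grant_type: 'refresh_token',
  }).toString();
}
```

Both bodies are sent as application/x-www-form-urlencoded POSTs to TOKEN_URL.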
|
gharchive/issue
| 2018-06-18T07:40:50 |
2025-04-01T06:40:11.883737
|
{
"authors": [
"Esxiel",
"Sebosam",
"anton128",
"deejaygeroso",
"thomasw",
"vonovak",
"webdevfarhan"
],
"repo": "react-native-community/google-signin",
"url": "https://github.com/react-native-community/google-signin/issues/425",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
509982630
|
modal close animation from bottom to top not working
animationOut={'slideOutUp'} not working.
Please reopen an issue following the issue template.
|
gharchive/issue
| 2019-10-21T13:59:53 |
2025-04-01T06:40:11.885125
|
{
"authors": [
"PrantikMondal",
"rewieer"
],
"repo": "react-native-community/react-native-modal",
"url": "https://github.com/react-native-community/react-native-modal/issues/363",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
544626950
|
Broken example
Is this lib still maintained? The example is broken on both Android/iOS and web.
What is the current way of loading an svg in react-native + web ?
Thanks and best!
Updated the example a bit, can you try again: https://snack.expo.io/@msand/react-native-svg-example
The best way depends a bit on the use case. It the content is static, then I would recommend using react-native-svg-transformer with plain .svg files. If you need dynamic content, then you can import the elements from react-native-svg and create normal react components. They should work in react-native, react-native-web and plain react-dom (requires aliasing react-native-web to react instead) based projects.
Thanks for updating the example. Many of the examples from the README.md produce errors, such as the non-existent SvgCss and SvgXml imported from react-native-svg.
I was not able to import from .svg files, even though I did expo install react-native-svg.
What worked the best for me was recreating the svg in react with Path from react-native-svg.
I found that animateTransform does not exist in the library; I wanted to use my svg for the splash screen, and the animation is required to make it spin.
Why are some imports not defined in the main library? If possible, I'd like to fix the import so I can do the animation.
This is an example of error I have when importing svg on the web:
bundle.js:69367 Warning: </static/media/splashScreen.0f923900.svg /> is using incorrect casing. Use PascalCase for React components, or lowercase for HTML elements.
in /static/media/splashScreen.0f923900.svg (at SplashScreen/index.js:34)
Seems more like it's react-native-svg-transformer related issue, or naming issue. Are you trying to use SvgCss / SvgXml in an Expo web project? They aren't defined for the web yet, as there are several processing modes to choose from, and you can easily implement your own: https://github.com/react-native-community/react-native-svg/issues/1230
Yes, I was using it for the web. But that was a fallback decision, after I noticed that Platform.OS from react-native-web in Expo SDK 36 always reports web, even when started in the Expo client on an iPhone or Android.
Otherwise, I would prefer that importing a .svg worked for the web; by 'work' I mean importing a React component instead of getting a path string.
|
gharchive/issue
| 2020-01-02T15:35:09 |
2025-04-01T06:40:11.891563
|
{
"authors": [
"kopax",
"msand"
],
"repo": "react-native-community/react-native-svg",
"url": "https://github.com/react-native-community/react-native-svg/issues/1234",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
214820811
|
How to configure tabs slide transition?
When I set a new index on the tab view's state (i.e. switch tabs programmatically, from a button click), the tabs switch too fast for me; I'd like a slower transition. The docs say there's a configureTransition callback, but don't say how it should be written or what the 'transition configuration' it returns should look like. Could you please shed some light on it and update the docs?
Hi! You can use configureTransition as following:
configureTransition={() => ({
timing: Animated.spring,
tension: 300,
friction: 35,
})}
If it can help you out, you also have TransitionConfigurator's type with Flow and you can check out how the callback is used under the hood right here :)
@CharlesMangwa does this still work as described? For me the configureTransition prop has no effect. It doesn't get executed at all.
|
gharchive/issue
| 2017-03-16T19:57:12 |
2025-04-01T06:40:11.894591
|
{
"authors": [
"CharlesMangwa",
"Druux",
"you-fail-me"
],
"repo": "react-native-community/react-native-tab-view",
"url": "https://github.com/react-native-community/react-native-tab-view/issues/173",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1654410429
|
feat(windows): implementation for useBatteryLevel & useBatteryLevelIsLow
Description
Fixes #1528
add subscription to Battery.ReportUpdated in Initialize and send event RNDeviceInfo_batteryLevelDidChange with latest battery level
check for low battery condition of <20 and send event RNDeviceInfo_batteryLevelIsLow with latest battery level
update readme to reflect added functionality
Compatibility
| OS | Existing | Implemented |
| --- | --- | --- |
| iOS | ✅ | ❌ |
| Android | ✅ | ❌ |
| Windows | ❌ | ✅ |
Checklist
[x] I have tested this on a device/simulator for each compatible OS
[x] I added the documentation in README.md
[ ] I updated the typings files (privateTypes.ts, types.ts)
[ ] I added a sample use of the API (example/App.js)
Code looks fine, thanks for the PR!
The failure seems unrelated to the functionality:
$ patch-package
'patch-package' is not recognized as an internal or external command,
This seems like some sort of systemic issue but why is it just popping up now? I use patch-package all the time :-) - this should be working...
I assume you have built + tested this locally? It all works for you?
@mikehardy I was also trying to figure this out, but it seems to be some issue with the GitHub workflow, as the same run passes on my fork of the repo. GitHub workflows are sometimes wacky though, so I can't say for sure. You can check the E2E run details here - https://github.com/sohail-p/react-native-device-info/actions/runs/4613458100/jobs/8155450012?pr=2
I have tested Android and Windows locally but not iOS
On the last re-run it failed on iOS because hashFiles took too long on the iOS worker 😆 - re-running, but obviously that has nothing to do with your PR. I will merge regardless of the status of the iOS run since Windows made it through this time, and it will auto-release.
Thanks @mikehardy
|
gharchive/pull-request
| 2023-04-04T18:54:54 |
2025-04-01T06:40:11.908811
|
{
"authors": [
"mikehardy",
"sohail-p"
],
"repo": "react-native-device-info/react-native-device-info",
"url": "https://github.com/react-native-device-info/react-native-device-info/pull/1529",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
829724866
|
fix part of #3: created separate css file for explore page
Refers: #3
Changes:
made a separate css file for the explore page.
@pranshuchittora sir please review this pr.
Pls pull the new changes
@pranshuchittora sir please review this pr.
|
gharchive/pull-request
| 2021-03-12T03:31:53 |
2025-04-01T06:40:11.910651
|
{
"authors": [
"nlok5923",
"pranshuchittora"
],
"repo": "react-native-elements/playground",
"url": "https://github.com/react-native-elements/playground/pull/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1404887915
|
fix: ios onRegionChange stops firing up when animateCamera is called …
Does any other open PR do the same thing?
No
What issue is this PR fixing?
[IOS] onRegionChangeComplete is not firing up at all
(please link the issue here) #4110
How did you test this PR?
On IOS, calling animate camera repeatedly, it never stops firing the event as before
Create a map, create a flow where animateCamera is called repeatedly, and print the result of onRegionChangeComplete
tested on simulator iPhone 13 iOs 15.4 and real device iPhone Xr iOs 15.5
Can a maintainer please look at this?
Hi @Simon-TechForm and @christopherdro, who is reviewing PR's on this lib currently? Could perhaps one of you be so kind as to review this one? We have a lot riding on this bug getting solved. Thanks!!
@kappu72 Thanks again for the PR. I forked the branch from your PR and I receive the following error in my app trying when opening the screen that contains the map:
View config getter callback for component AIRGoogleMap must be a function (received undefined)
Is this something you're familiar with?
|
gharchive/pull-request
| 2022-10-11T16:01:52 |
2025-04-01T06:40:11.915303
|
{
"authors": [
"WillyRamirez",
"bawb89",
"kappu72"
],
"repo": "react-native-maps/react-native-maps",
"url": "https://github.com/react-native-maps/react-native-maps/pull/4478",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
841931888
|
fix: Allow ',' in viewBox attribute
Hi! 👋
Firstly, thanks for your work on this project! 🙂
Today I used patch-package to patch react-native-svg@12.1.0 for the project I'm working on.
I have added a replace regex to remove commas (,) in viewBox. According to MDN, viewBox numbers are separated by whitespace and/or a comma.
Here is the diff that solved my problem:
diff --git a/node_modules/react-native-svg/src/lib/extract/extractViewBox.ts b/node_modules/react-native-svg/src/lib/extract/extractViewBox.ts
index c4515b5..8053617 100644
--- a/node_modules/react-native-svg/src/lib/extract/extractViewBox.ts
+++ b/node_modules/react-native-svg/src/lib/extract/extractViewBox.ts
@@ -38,7 +38,7 @@ export default function extractViewBox(props: {
const params = (Array.isArray(viewBox)
? viewBox
- : viewBox.trim().split(spacesRegExp)
+ : viewBox.trim().replace(/,/g, '').split(spacesRegExp)
).map(Number);
if (params.length !== 4 || params.some(isNaN)) {
This issue body was partially generated by patch-package.
Let me know if I you want me to create a pull request or not.
This patch does not account for numbers that are separated by commas but not whitespace (for example: 0, 0, 400,69.83316168898044); updating it to the following will address that issue.
viewBox.trim().replace(/,/g, ' ').replace(/ +/g, ' ').split(spacesRegExp)
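Putting the two suggestions together, here is a standalone sketch of the parsing step; spacesRegExp is assumed to be /\s+/ as in the library source, and parseViewBox is a simplified stand-in for extractViewBox:

```javascript
// Sketch of viewBox parsing with comma handling folded in. Per the SVG spec,
// viewBox numbers may be separated by whitespace and/or commas.
const spacesRegExp = /\s+/; // assumed to match the library's definition

function parseViewBox(viewBox) {
  const params = viewBox
    .trim()
    .replace(/,/g, ' ') // commas become spaces, even with no space after them
    .split(spacesRegExp)
    .map(Number);
  if (params.length !== 4 || params.some(isNaN)) {
    return null; // the real code warns and falls back; simplified here
  }
  return params;
}
```

Because splitting on /\s+/ already collapses runs of whitespace, the extra .replace(/ +/g, ' ') pass is not needed in this form.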
|
gharchive/issue
| 2021-03-26T12:52:34 |
2025-04-01T06:40:11.918777
|
{
"authors": [
"HSalaila",
"gp3gp3gp3"
],
"repo": "react-native-svg/react-native-svg",
"url": "https://github.com/react-native-svg/react-native-svg/issues/1555",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|