id | text | source | created | added | metadata
---|---|---|---|---|---
1828146641
|
User log out when changing account or network in wallet
Describe the bug
After login, when the user changes the network or account in the wallet, the user should be logged out.
To Reproduce
Steps to reproduce the behavior:
User logs in with the wallet.
User changes the account in the wallet.
User is not logged out.
Expected behavior
The user should be logged out.
Log out when changing account
|
gharchive/issue
| 2023-07-31T00:27:48 |
2025-04-01T04:35:41.606170
|
{
"authors": [
"realbits-lab"
],
"repo": "realbits-lab/prompt-nft",
"url": "https://github.com/realbits-lab/prompt-nft/issues/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1905314600
|
Xcode 15 Carthage release built with beta Xcode version
How frequently does the bug occur?
Always
Description
The Carthage release for 10.42.3 was built with Swift 5.9 (5.9.0.128.106), which is the Swift version from the Xcode 15 beta release. The current Swift version is Swift 5.9 (5.9.0.128.108).
Stacktrace & log output
No response
Can you reproduce the bug?
Always
Reproduction Steps
Run Carthage: carthage build --no-skip-current --use-xcframeworks --platform iOS
Output
*** Skipped installing realm-swift binary due to the error:
"Incompatible Swift version - framework was built with 5.9 (swiftlang-5.9.0.128.106 clang-1500.0.40.1) and the local version is 5.9 (swiftlang-5.9.0.128.108 clang-1500.0.40.1)."
Falling back to building from the source
Version
10.42.3
What Atlas Services are you using?
Local Database only
Are you using encryption?
Yes
Platform OS and version(s)
iOS 17.0
Build environment
No response
The same problem occurs with v10.42.4.
*** Downloading realm-swift binary at "v10.42.4"
*** Skipped installing realm-swift binary due to the error:
"Incompatible Swift version - framework was built with 5.9 (swiftlang-5.9.0.128.106 clang-1500.0.40.1) and the local version is 5.9 (swiftlang-5.9.0.128.108 clang-1500.0.40.1)."
Falling back to building from the source
swiftlang-5.9.0.128.106 clang-1500.0.40.1
swiftlang-5.9.0.128.108 clang-1500.0.40.1
It looks like Carthage does not approve of our xcframework having slices built with mixed compiler versions, so we'll need to drop the visionOS build until there's a non-beta Xcode version which supports it.
@tgoyne
still occurs with v10.43.0
*** Downloading realm-swift binary at "v10.43.0"
*** Skipped installing realm-swift binary due to the error:
"Incompatible Swift version - framework was built with 5.9 (swiftlang-5.9.0.128.106 clang-1500.0.40.1) and the local version is 5.9 (swiftlang-5.9.0.128.108 clang-1500.0.40.1)."
Falling back to building from the source
@tgoyne yep, issue is not fixed in latest version
still broken
|
gharchive/issue
| 2023-09-20T16:06:58 |
2025-04-01T04:35:41.781319
|
{
"authors": [
"aehlke",
"kawacho",
"mikhailmulyar",
"tgoyne"
],
"repo": "realm/realm-swift",
"url": "https://github.com/realm/realm-swift/issues/8370",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
496774597
|
ReasonML: Add list of reserved keywords
As discussed here, it is a good idea to have a list of reserved keywords.
It would also be interesting to discuss certain syntax desugaring (such as __x for first pipe placeholder and type_ vs. type conversion in object attributes).
It was mentioned in the thread that OCaml docs already maintain a list of keywords: http://caml.inria.fr/pub/docs/manual-ocaml-312/manual044.html
It might make sense to maintain just a list of Reason changes on top of that list. E.g., private -> pri, match -> switch, etc. Of course we would also point to the OCaml reserved words index and say something like, 'ReasonML reserved words are substantially the same as OCaml's, documented here: ... with the following changes: ...'
@yawaramin Is it bad to just copy paste it? I think it's better UX to have a complete list right inside the docs to prevent forwarding users to the OCaml manual (we can neither use OCaml nor Reason keywords for variable names, it would be great to have a total overview!)... I guess it's good for DRYness, but on the other hand, keywords won't change that often.
Copy-pasting with attribution and a link to the source should be fine. Although depending on how we want to show it in the Reason Association docs, it might not really be a straight copy-paste but rather a fairly complete re-organization of the index.
I'm of the mind that sending people outside of docs for other docs is bad practice. What needs to be known should exist in a single spot so it can be consumed, contemplated, and edited as a single creation. Reason is not OCaml.
@amartincolby I kind of refer to this as the 'curse of crossover technologies', you're always going to have to refer to something external :-)
That said, for the keyword list I agree it's a good idea to document it in one place since it's so intrinsic to Reason syntax.
We will find a way to make it one page! @yawaramin I really like your argument about attribution; I still need to figure out all the license details for colocating information from different sources.
Considering that the list is purely informative, there is no copyright, so no legal need for attribution. One cannot copyright facts. I still think it's a good idea to offer the attribution, though, since even though Reason and OCaml are not the same, understanding their similarities is important.
There are some additional words in Reason that get special treatment (e.g. not gets rewritten as !) due to the fact that the parser makes some assumptions about the stdlib. I realize that's not quite the same thing as being a "reserved word" but it would be really great if those were documented somewhere.
The correct list of keywords will always be in the source code: https://github.com/facebook/reason/blob/master/src/reason-parser/reason_declarative_lexer.mll#L85-L144
Fixed in #163
|
gharchive/issue
| 2019-09-22T12:44:50 |
2025-04-01T04:35:41.820080
|
{
"authors": [
"amartincolby",
"fhammerschmidt",
"mlms13",
"ryyppy",
"yawaramin"
],
"repo": "reason-association/reasonml.org",
"url": "https://github.com/reason-association/reasonml.org/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
384437302
|
fix: external extGetItemProps should bind to getItemProps
Fixes #8.
As far as I can tell this has been broken since #5, unless I'm misunderstanding something.
This is released as v1.0.1
|
gharchive/pull-request
| 2018-11-26T17:35:28 |
2025-04-01T04:35:41.821373
|
{
"authors": [
"emmenko",
"mattjbray"
],
"repo": "reasonml-community/bs-downshift",
"url": "https://github.com/reasonml-community/bs-downshift/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
708048086
|
Two problems with @ppxCustom
Problem 1: Parse errors
@ppxCustom as now implemented assumes that parsing can always be done successfully.
Consider a schema where
type Item {
id: ID!
name: String!
hasPriceRange: Boolean!
price: Int
minPrice: Int
maxPrice: Int
}
and our domain knowledge tells us that if hasPriceRange is false, then price should be non-null. If hasPriceRange is true then minPrice and maxPrice should be non-null.
The ideal client-side definitions don't work with @ppxCustom because it wants
module Custom = {
type t = someType;
let parse: raw => t;
let serialize: t => raw;
}
and thus doesn't work if parsing could fail. One could do something like:
module Custom = {
type t = result(someType, string);
let parse: raw => t;
let serialize: t => raw;
}
but making serialize accept a result is hardly ideal either.
Proposal
A new directive, e.g. @ppxParser, that accepts something like
module Custom = {
type t;
let parse: raw => result(t, 'error);
let serialize: t => raw;
}
Full client-side implementation of Item
type price =
| Price(int)
| PriceRange(int, int);
module Item = {
type t = {
id: string,
name: string,
price,
};
};
type raw = {
id: string,
name: string,
hasPriceRange: bool,
price: option(int),
minPrice: option(int),
maxPrice: option(int),
};
module ItemDecoder = {
type t = Item.t;
let parse: raw => result(t, 'error) =
({id, name, hasPriceRange, price, minPrice, maxPrice} as p) =>
switch (hasPriceRange, price, minPrice, maxPrice) {
| (false, Some(price), _, _) => Ok({id, name, price: Price(price)})
| (true, _, Some(minPrice), Some(maxPrice)) =>
Ok({id, name, price: PriceRange(minPrice, maxPrice)})
| _ => Error({j|Error parsing price: $p|j})
};
let serialize: t => raw =
({id, name, price}) => {
switch (price) {
| Price(price) => {
id,
name,
hasPriceRange: false,
price: Some(price),
minPrice: None,
maxPrice: None,
}
| PriceRange(minPrice, maxPrice) => {
id,
name,
hasPriceRange: true,
price: None,
minPrice: Some(minPrice),
maxPrice: Some(maxPrice),
}
};
};
};
Problem 2: Serialization isn't always possible or desired
@ppxCustom forces the programmer to implement serialize. That might be fine (if somewhat cumbersome) for small fragments and small queries, but for large top-level queries and more complicated types it will add a lot of extra work. The programmer knows from the application requirements whether they want to be able to serialize. @ppxCustom doesn't know, but still forces them to be able to serialize.
I can even imagine cases where it's impossible to write serialize or where, to make it possible, the programmer would have to write the types so that they work for serialization, instead of what is best for the application.
Proposal
Reintroducing the old @bsDecoder directive as @ppxDecoder. It accepts a function that parses the t_whatever type into what the application needs, but which most often would be either t or result(t, 'error).
This was suggested by @jfrolich in #187 as a measure to ease upgrading to 1.0, but I now think that such a directive is generally needed.
I would very much like to contribute PRs for this (and the other issues I've opened), but I need some help with getting up to speed with contributing (e.g. as detailed in #205).
@jfrolich Wrote in #208
Interesting. I wonder if you make a conversion that is not always possible if that's not better to be done in the application code.
and I think that could be right. I don't think that changes the problem: we need a directive with a parse function that's able to handle errors, like in the price example. The argument is laid out very convincingly in @lexi-lambda's excellent Parse, don't validate.
My two cents - for me, an invalid custom scalar is a protocol error, just as if the server had sent malformed graphql - I think it should show up to application code the same way. By the time it gets into application code I don't want to worry that there was some protocol error in my custom fields - just like I don't have to worry about built-in fields like Float. So, I do think that @ppxCustom should allow me to indicate that parsing failed, either through option or result.
You can parse into an option or result type, just make that part of the type that you are converting to.
BTW you don't need to implement a serialize function, this is only necessary if you make use of it. You can just return a dummy value and/or throw an exception.
|
gharchive/issue
| 2020-09-24T10:22:52 |
2025-04-01T04:35:41.830803
|
{
"authors": [
"jfrolich",
"mbirkegaard",
"russellmcc"
],
"repo": "reasonml-community/graphql-ppx",
"url": "https://github.com/reasonml-community/graphql-ppx/issues/209",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1561059516
|
Replace db.location with catalog.info analyzer
The db.location analyzer has always annoyed me a little bit. It's kind of a special case. I had the idea to have a recap.analyzers.catalog package with an info analyzer inside that contained:
class Info(BaseMetadataModel):
    path: str
    """
    Full catalog path.
    """
I thought I might also include some kind of insertion/updated timestamp, but I now think that should be done per-analyzer since analyzers can update independently of one another.
Note that db.location is used in a lot of examples, so the docs will need to be combed through and updated.
I've deleted db.location. No need to include path in the JSON since it's left to the catalog to store that information. For the default catalog (db), it's stored as parent and name.
|
gharchive/issue
| 2023-01-29T00:11:21 |
2025-04-01T04:35:41.865838
|
{
"authors": [
"criccomini"
],
"repo": "recap-cloud/recap",
"url": "https://github.com/recap-cloud/recap/issues/154",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1000947717
|
docker-compose up is showing error
When I run docker-compose up, it shows an error.
Thanks a lot, @Arvind644, for reporting! It seems like docker is not running on your system. Since none of us are working with windows it is a bit difficult for us to reproduce your error, but I attach a couple of links that hopefully point you in the right direction:
https://stackoverflow.com/questions/64952238/docker-errors-dockerexception-error-while-fetching-server-api-version
https://docs.docker.com/desktop/windows/install/
Hope I could help! 🙂
@dcfidalgo I have Docker Desktop installed with WSL2 Ubuntu. I just tried it with Windows PowerShell and it works. It didn't work in the VS Code terminal.
Great to hear that it works now! Have fun with Rubrix! 🙂
@dcfidalgo Thank you. But it is very slow in pulling the image. It's already been 15 minutes and not even half of it is done. Is there a way to make it faster?
|
gharchive/issue
| 2021-09-20T13:05:32 |
2025-04-01T04:35:41.880079
|
{
"authors": [
"Arvind644",
"dcfidalgo"
],
"repo": "recognai/rubrix",
"url": "https://github.com/recognai/rubrix/issues/369",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
89446936
|
Validator loads [ValidatorClassName.properties] as validation error message resource. This is for plugin validators.
Validator loads [ValidatorClassName.properties] as validation error message resource.
This is for plugin validators.
Fantastic!
Ref #425
LGTM
|
gharchive/pull-request
| 2015-06-19T02:21:24 |
2025-04-01T04:35:41.881548
|
{
"authors": [
"takahi-i",
"yusuke"
],
"repo": "recruit-tech/redpen",
"url": "https://github.com/recruit-tech/redpen/pull/442",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
55631116
|
The price is not changed on a subscription when the subscription plan is changed
When I update the subscription plan on a subscription, the price of the subscription and the add-ons are not updated to the price of the new plan and the new plan's add-ons. I can see that the plan is changed in Recurly, but not the prices.
This happens using the .NET library.
@dragankae Thanks for reporting this. We're looking into it internally.
Hello, can you update me about the progress of this issue.
Thanks
Let me explain the problem again:
I have 2 Plans
PlanA 10$
PlanB 20$
My subscription has PlanA 10$
If I change the plan of the subscription to PlanB through the API, the price is still 10$ and the addon prices are not updated according to the new plan.
If I change the plan of the subscription to PlanB through the user interface, the price is correctly updated to 20$, and so are all of the addons.
This is in recurly API documentation https://docs.recurly.com/api/subscriptions/upgrade-downgrade
Plan Code, Quantity, and Unit Amount
Values not specified will not be changed.
If the subscription plan changes, the unit amount will default to the new plan's unit amount. Of course, you are welcome to override the plan's unit amount by specifying it in the change request.
Hi @dragankae, can you provide us with sample code to reproduce the issue?
Hi @dragankae, can you please provide some code to reproduce the behavior you're seeing? Thanks.
This is what i do:
var subscription = Recurly.Subscriptions.Get(SubscriptionID);
subscription.Plan = Recurly.Plans.Get(newPlanCode); // Set the new plan
subscription.ChangeSubscription(Recurly.Subscription.ChangeTimeframe.Now);
The subscription update api call is somehow including a custom unit_amount_in_cents, so the process doesn't default to the plan's value. Because the custom value is the amount of the previous subscription, the amount appears to have never changed.
I'm very confused on what you're trying to explain.
A plan is a template. A subscription is created from that template. Before you save the subscription you can override the defaults that were specified in the template (unit price being one).
If you then change the template it would not change any of the subscriptions that were created previously.
If template price is $5.00
You then create a subscription using the template defaults.
You then change template price to $6.00
You then get previously created subscription and observe the price is $5.00
You then update subscription without changing anything and observe price is $5.00
I'm not sure where the confusion lies in that logic.
It looks like this issue is being caused because assigning a new Plan to a Subscription does not propagate the UnitAmountInCents from the Plan to the Subscription, and ChangeSubscription() submits the Subscription price, not the Plan price. It can be worked around like so:
var subscription = Recurly.Subscriptions.Get(SubscriptionID);
subscription.Plan = Recurly.Plans.Get(newPlanCode); // Set the new plan
subscription.UnitAmountInCents = subscription.Plan.UnitAmountInCents(subscription.Currency);
subscription.ChangeSubscription(Recurly.Subscription.ChangeTimeframe.Now);
Can we close this? It appears that there was confusion in how the system worked and not an actual bug. If you see my comment above I detail why this is not a concern of the client library.
@MuffinMan10110 yes, good point.
@dragankae feel free to re-open discussion
Maybe I should've reported a different bug, but in my test scenarios, no "template" prices are ever changed. Here's my test scenario:
Assume PlanA @ 149$, PlanB @ 1000$
Create new account
Create subscription using the new account & PlanA
Change subscription to PlanB
If I do this via the web interface, I see that the customer is charged the correct 1000$ rate, but when I go through the same steps with the API, the customer is only charged 149$.
My workaround, above, feels much more like a kludge than an intended pattern. If you'd like a complete, in-code illustration of the issue, I've got one of those from my original exploration.
As I did find that workaround, it's no longer a critical issue for us. I'm just putting in my two cents.
@BrainTrainDev I think the confusion lies in the fact that the Recurly Admin UI is doing one step automatically for you that you might not realize. When you choose a new plan from the drop-down while editing your subscription, the UI automatically populates the price field with the new plan's price, and you can choose to continue with it or override it. This action is equivalent to you setting your subscription object's price to the new plan's price when setting the new plan.
Thanks, I think I understand the design now.
|
gharchive/issue
| 2015-01-27T15:23:07 |
2025-04-01T04:35:41.903559
|
{
"authors": [
"BrainTrainDev",
"MuffinMan10110",
"austinheap",
"bhelx",
"dragankae",
"whiteotter"
],
"repo": "recurly/recurly-client-net",
"url": "https://github.com/recurly/recurly-client-net/issues/70",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2192968400
|
Option to install R core package in R-Studio Build
Fixes for https://issues.redhat.com/browse/RHOAIENG-2647
Option to install RStudio packages through the use of INSTALL_ADD_PKGS.
/retest
Did you try to build on the cluster? If so, could you demo it here?
|
gharchive/pull-request
| 2024-03-18T18:30:54 |
2025-04-01T04:35:41.915162
|
{
"authors": [
"atheo89",
"dibryant"
],
"repo": "red-hat-data-services/notebooks",
"url": "https://github.com/red-hat-data-services/notebooks/pull/190",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
926386703
|
Fix JupyterHub availability panel, add JupyterHub DB availability
Signed-off-by: Anish Asthana anishasthana1@gmail.com
/lgtm
|
gharchive/pull-request
| 2021-06-21T16:17:36 |
2025-04-01T04:35:41.916156
|
{
"authors": [
"accorvin",
"anishasthana"
],
"repo": "red-hat-data-services/odh-deployer",
"url": "https://github.com/red-hat-data-services/odh-deployer/pull/117",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
604056618
|
modified rgw IO test config parameters
Signed-off-by: sunilkumarn417 sunnagar@redhat.com
@sunilkumarn417 looks like a duplicate of this PR: 40
Yes @rakeshgm, these code changes are required for both rh-luminous and rh-nautilus.
|
gharchive/pull-request
| 2020-04-21T14:45:44 |
2025-04-01T04:35:41.918128
|
{
"authors": [
"rakeshgm",
"sunilkumarn417"
],
"repo": "red-hat-storage/ceph",
"url": "https://github.com/red-hat-storage/ceph/pull/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2711935545
|
controllers: watch for consumer annotation in mirroring controller
As the mirroring controller lists consumers based on the annotation to trigger maintenance mode, add a watch for storage consumer annotation changes in the mirroring controller.
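For illustration only, a controller-runtime watch that reacts to annotation changes might be wired up as in the sketch below. The StorageConsumer type, import path, and MirroringReconciler name are assumptions for the example and are not taken from this PR.
```go
// Hypothetical sketch, not the actual ocs-operator code: a controller that
// reconciles StorageConsumer objects only when their annotations change.
package controllers

import (
	"context"
	"reflect"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/builder"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"

	ocsv1alpha1 "github.com/red-hat-storage/ocs-operator/api/v1alpha1" // assumed import path
)

// MirroringReconciler is a placeholder name for the mirroring controller.
type MirroringReconciler struct {
	client.Client
}

// annotationChanged triggers reconciliation only when an update event changes
// the object's annotations.
var annotationChanged = predicate.Funcs{
	UpdateFunc: func(e event.UpdateEvent) bool {
		return !reflect.DeepEqual(e.ObjectOld.GetAnnotations(), e.ObjectNew.GetAnnotations())
	},
}

func (r *MirroringReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ... list consumers carrying the maintenance-mode annotation and act on them ...
	return ctrl.Result{}, nil
}

func (r *MirroringReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&ocsv1alpha1.StorageConsumer{}, builder.WithPredicates(annotationChanged)).
		Complete(r)
}
```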
/lgtm
/jira backport release-4.18
|
gharchive/pull-request
| 2024-12-02T13:31:08 |
2025-04-01T04:35:41.923765
|
{
"authors": [
"nb-ohad",
"rewantsoni"
],
"repo": "red-hat-storage/ocs-operator",
"url": "https://github.com/red-hat-storage/ocs-operator/pull/2915",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1530382775
|
App crashing
The app is crashing sometimes and it's not responding.
Please find the crash report attached.
Date/Time: 2023-01-12 14:34:14.208 +0530
End time: 2023-01-12 14:36:05.594 +0530
OS Version: macOS 13.1 (Build 22C65)
Architecture: x86_64h
Report Version: 40
Incident Identifier: 7560C2A3-4C07-454F-87E6-E3DE57A08534
Data Source: Stackshots
Shared Cache: EA7F9772-219E-3ECE-A4D9-20AEEE3BC80F slid base address 0x7ff81645a000, slide 0x1645a000 (System Primary)
Shared Cache: 73F7D64A-0E58-33BB-B79D-9CD6E9461CAE slid base address 0x7ff80b2fa000, slide 0xb2fa000 (DriverKit)
Command: Jamf Framework Redeploy
Path: /private/var/folders/*/Jamf Framework Redeploy.app/Contents/MacOS/Jamf Framework Redeploy
Identifier: co.uk.mallion.Jamf-Framework-Redeploy
Version: 1.0 (11)
Team ID: VR4GB7TBDP
Is First Party: No
Architecture: x86_64
Parent: launchd [1]
PID: 37390
Time Since Fork: 56224s
Note: Translocated
Event: hang
Duration: 111.39s
Duration Sampled: 1.10s (process was unresponsive for 110 seconds before sampling)
Steps: 11 (100ms sampling interval)
Hardware model: MacBookPro15,1
Active cpus: 12
HW page size: 4096
VM page size: 4096
Time Since Boot: 576474s
Time Awake Since Boot: 36608s
Time Since Wake: 31s
Fan speed: 2147 rpm
Total CPU Time: 2.216s (7.9G cycles, 12.0G instructions, 0.66c/i)
Advisory levels: Battery -> 2, User -> 2, ThermalPressure -> 0, Combined -> 2
Free disk space: 123.82 GB/465.63 GB, low space threshold 3072 MB
Vnodes Available: 78.80% (207370/263168)
Preferred User Language: en-GB
Country Code: GB
OS Cryptex File Extents: 327
--------------------------------------------------
Timeline format: stacks are sorted chronologically
Use -i and -heavy to re-report with count sorting
--------------------------------------------------
Heaviest stack for the main thread of the target process:
11 start + 2432 (dyld + 25360) [0x7ff8164f4310]
11 ??? (Jamf Framework Redeploy + 23006) [0x10a1cc9de]
11 ??? (SwiftUI + 10088335) [0x7ff91ec8df8f]
11 ??? (SwiftUI + 17689156) [0x7ff91f3cda44]
11 ??? (SwiftUI + 605061) [0x7ff91e382b85]
11 NSApplicationMain + 817 (AppKit + 13351) [0x7ff819a12427]
11 -[NSApplication run] + 586 (AppKit + 193527) [0x7ff819a3e3f7]
11 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1219 (AppKit + 249268) [0x7ff819a4bdb4]
11 _DPSNextEvent + 1738 (AppKit + 254576) [0x7ff819a4d270]
11 AEProcessAppleEvent + 54 (HIToolbox + 265791) [0x7ff820256e3f]
11 aeProcessAppleEvent + 419 (AE + 19485) [0x7ff81cdcdc1d]
11 ??? (AE + 46600) [0x7ff81cdd4608]
11 ??? (AE + 48541) [0x7ff81cdd4d9d]
11 _NSAppleEventManagerGenericHandler + 80 (Foundation + 215958) [0x7ff817730b96]
11 -[NSAppleEventManager dispatchRawAppleEvent:withRawReply:handlerRefCon:] + 308 (Foundation + 216356) [0x7ff817730d24]
11 -[NSApplication(NSAppleEventHandling) _handleCoreEvent:withReplyEvent:] + 665 (AppKit + 275272) [0x7ff819a52348]
11 -[NSApplication(NSAppleEventHandling) _handleAEReopen:] + 273 (AppKit + 1953296) [0x7ff819bebe10]
11 ??? (SwiftUI + 7336594) [0x7ff91e9ee292]
11 ??? (SwiftUI + 7336453) [0x7ff91e9ee205]
11 ??? (SwiftUI + 10417113) [0x7ff91ecde3d9]
11 ??? (SwiftUI + 10420110) [0x7ff91ecdef8e]
11 ??? (SwiftUI + 10433862) [0x7ff91ece2546]
11 ??? (SwiftUI + 17108030) [0x7ff91f33fc3e]
11 ??? (SwiftUI + 14024228) [0x7ff91f04ee24]
11 +[NSWindow windowWithContentViewController:] + 57 (AppKit + 2903759) [0x7ff819cd3ecf]
11 -[NSView layoutSubtreeIfNeeded] + 65 (AppKit + 589719) [0x7ff819a9ef97]
11 NSPerformVisuallyAtomicChange + 132 (AppKit + 562866) [0x7ff819a986b2]
11 -[NSView _layoutSubtreeIfNeededAndAllowTemporaryEngine:] + 72 (AppKit + 589800) [0x7ff819a9efe8]
11 NSPerformVisuallyAtomicChange + 132 (AppKit + 562866) [0x7ff819a986b2]
11 __56-[NSView _layoutSubtreeIfNeededAndAllowTemporaryEngine:]_block_invoke + 1170 (AppKit + 9865880) [0x7ff81a377a98]
11 -[NSView _layoutSubtreeWithOldSize:] + 75 (AppKit + 591037) [0x7ff819a9f4bd]
11 NSPerformVisuallyAtomicChange + 132 (AppKit + 562866) [0x7ff819a986b2]
11 __36-[NSView _layoutSubtreeWithOldSize:]_block_invoke + 341 (AppKit + 9862288) [0x7ff81a376c90]
11 _NSViewLayout + 65 (AppKit + 591109) [0x7ff819a9f505]
11 NSPerformVisuallyAtomicChange + 132 (AppKit + 562866) [0x7ff819a986b2]
11 ___NSViewLayout_block_invoke + 595 (AppKit + 9910020) [0x7ff81a382704]
11 ??? (SwiftUI + 17745734) [0x7ff91f3db746]
11 ??? (SwiftUI + 17744719) [0x7ff91f3db34f]
11 +[NSAnimationContext runAnimationGroup:] + 55 (AppKit + 360361) [0x7ff819a66fa9]
11 ??? (SwiftUI + 17698141) [0x7ff91f3cfd5d]
11 ??? (SwiftUI + 17792588) [0x7ff91f3e6e4c]
11 ??? (SwiftUI + 17744926) [0x7ff91f3db41e]
11 ??? (SwiftUI + 17371367) [0x7ff91f3800e7]
11 ??? (SwiftUI + 17384201) [0x7ff91f383309]
11 ??? (SwiftUI + 10977538) [0x7ff91ed67102]
11 ??? (SwiftUI + 6009020) [0x7ff91e8aa0bc]
11 ??? (SwiftUI + 10977538) [0x7ff91ed67102]
11 ??? (Jamf Framework Redeploy + 21892) [0x10a1cc584]
11 ??? (Jamf Framework Redeploy + 33133) [0x10a1cf16d]
11 SecItemCopyMatching + 338 (Security + 2478107) [0x7ff8191b301b]
11 SecItemCopyMatching_osx(__CFDictionary const*, void const**) + 336 (Security + 2463447) [0x7ff8191af6d7]
11 SecKeychainSearchCopyNext + 161 (Security + 157044) [0x7ff818f7c574]
11 Security::KeychainCore::KCCursorImpl::next(Security::KeychainCore::Item&) + 272 (Security + 157900) [0x7ff818f7c8cc]
11 Security::KeychainCore::KeychainImpl::performKeychainUpgradeIfNeeded() + 73 (Security + 2341701) [0x7ff819191b45]
11 _pthread_mutex_firstfit_lock_slow + 212 (libsystem_pthread.dylib + 6600) [0x7ff8168229c8]
11 __psynch_mutexwait + 10 (libsystem_kernel.dylib + 14502) [0x7ff8167ea8a6]
*11 psynch_mtxcontinue + 0 (com.apple.kec.pthread + 11289) [0xffffff80036c1c19]
Process: Jamf Framework Redeploy [37390] [unique pid 136843]
UUID: 7F301508-BB0D-3659-A6B8-16FEB804B168
Path: /private/var/folders/*/Jamf Framework Redeploy.app/Contents/MacOS/Jamf Framework Redeploy
Identifier: co.uk.mallion.Jamf-Framework-Redeploy
Version: 1.0 (11)
Team ID: VR4GB7TBDP
Is First Party: No
Shared Cache: EA7F9772-219E-3ECE-A4D9-20AEEE3BC80F slid base address 0x7ff81645a000, slide 0x1645a000 (System Primary)
Architecture: x86_64
Parent: launchd [1]
UID: 501
Footprint: 38.69 MB
Time Since Fork: 56224s
Num samples: 11 (1-11)
CPU Time: 0.005s (10.3M cycles, 3.0M instructions, 3.45c/i)
Note: Translocated
Note: Unresponsive for 110 seconds before sampling
Note: 1 idle work queue thread omitted
Thread 0x9179e DispatchQueue "com.apple.main-thread"(1) 11 samples (1-11) priority 47 (base 47)
<process frontmost, thread QoS user interactive (requested user interactive), process unclamped, process received importance donation from WindowServer [144], IO tier 0>
11 start + 2432 (dyld + 25360) [0x7ff8164f4310] 1-11
11 ??? (Jamf Framework Redeploy + 23006) [0x10a1cc9de] 1-11
11 ??? (SwiftUI + 10088335) [0x7ff91ec8df8f] 1-11
11 ??? (SwiftUI + 17689156) [0x7ff91f3cda44] 1-11
11 ??? (SwiftUI + 605061) [0x7ff91e382b85] 1-11
11 NSApplicationMain + 817 (AppKit + 13351) [0x7ff819a12427] 1-11
11 -[NSApplication run] + 586 (AppKit + 193527) [0x7ff819a3e3f7] 1-11
11 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1219 (AppKit + 249268) [0x7ff819a4bdb4] 1-11
11 _DPSNextEvent + 1738 (AppKit + 254576) [0x7ff819a4d270] 1-11
11 AEProcessAppleEvent + 54 (HIToolbox + 265791) [0x7ff820256e3f] 1-11
11 aeProcessAppleEvent + 419 (AE + 19485) [0x7ff81cdcdc1d] 1-11
11 ??? (AE + 46600) [0x7ff81cdd4608] 1-11
11 ??? (AE + 48541) [0x7ff81cdd4d9d] 1-11
11 _NSAppleEventManagerGenericHandler + 80 (Foundation + 215958) [0x7ff817730b96] 1-11
11 -[NSAppleEventManager dispatchRawAppleEvent:withRawReply:handlerRefCon:] + 308 (Foundation + 216356) [0x7ff817730d24] 1-11
11 -[NSApplication(NSAppleEventHandling) _handleCoreEvent:withReplyEvent:] + 665 (AppKit + 275272) [0x7ff819a52348] 1-11
11 -[NSApplication(NSAppleEventHandling) _handleAEReopen:] + 273 (AppKit + 1953296) [0x7ff819bebe10] 1-11
11 ??? (SwiftUI + 7336594) [0x7ff91e9ee292] 1-11
11 ??? (SwiftUI + 7336453) [0x7ff91e9ee205] 1-11
11 ??? (SwiftUI + 10417113) [0x7ff91ecde3d9] 1-11
11 ??? (SwiftUI + 10420110) [0x7ff91ecdef8e] 1-11
11 ??? (SwiftUI + 10433862) [0x7ff91ece2546] 1-11
11 ??? (SwiftUI + 17108030) [0x7ff91f33fc3e] 1-11
11 ??? (SwiftUI + 14024228) [0x7ff91f04ee24] 1-11
11 +[NSWindow windowWithContentViewController:] + 57 (AppKit + 2903759) [0x7ff819cd3ecf] 1-11
11 -[NSView layoutSubtreeIfNeeded] + 65 (AppKit + 589719) [0x7ff819a9ef97] 1-11
11 NSPerformVisuallyAtomicChange + 132 (AppKit + 562866) [0x7ff819a986b2] 1-11
11 -[NSView _layoutSubtreeIfNeededAndAllowTemporaryEngine:] + 72 (AppKit + 589800) [0x7ff819a9efe8] 1-11
11 NSPerformVisuallyAtomicChange + 132 (AppKit + 562866) [0x7ff819a986b2] 1-11
11 __56-[NSView _layoutSubtreeIfNeededAndAllowTemporaryEngine:]_block_invoke + 1170 (AppKit + 9865880) [0x7ff81a377a98] 1-11
11 -[NSView _layoutSubtreeWithOldSize:] + 75 (AppKit + 591037) [0x7ff819a9f4bd] 1-11
11 NSPerformVisuallyAtomicChange + 132 (AppKit + 562866) [0x7ff819a986b2] 1-11
11 __36-[NSView _layoutSubtreeWithOldSize:]_block_invoke + 341 (AppKit + 9862288) [0x7ff81a376c90] 1-11
11 _NSViewLayout + 65 (AppKit + 591109) [0x7ff819a9f505] 1-11
11 NSPerformVisuallyAtomicChange + 132 (AppKit + 562866) [0x7ff819a986b2] 1-11
11 ___NSViewLayout_block_invoke + 595 (AppKit + 9910020) [0x7ff81a382704] 1-11
11 ??? (SwiftUI + 17745734) [0x7ff91f3db746] 1-11
11 ??? (SwiftUI + 17744719) [0x7ff91f3db34f] 1-11
11 +[NSAnimationContext runAnimationGroup:] + 55 (AppKit + 360361) [0x7ff819a66fa9] 1-11
11 ??? (SwiftUI + 17698141) [0x7ff91f3cfd5d] 1-11
11 ??? (SwiftUI + 17792588) [0x7ff91f3e6e4c] 1-11
11 ??? (SwiftUI + 17744926) [0x7ff91f3db41e] 1-11
11 ??? (SwiftUI + 17371367) [0x7ff91f3800e7] 1-11
11 ??? (SwiftUI + 17384201) [0x7ff91f383309] 1-11
11 ??? (SwiftUI + 10977538) [0x7ff91ed67102] 1-11
11 ??? (SwiftUI + 6009020) [0x7ff91e8aa0bc] 1-11
11 ??? (SwiftUI + 10977538) [0x7ff91ed67102] 1-11
11 ??? (Jamf Framework Redeploy + 21892) [0x10a1cc584] 1-11
11 ??? (Jamf Framework Redeploy + 33133) [0x10a1cf16d] 1-11
11 SecItemCopyMatching + 338 (Security + 2478107) [0x7ff8191b301b] 1-11
11 SecItemCopyMatching_osx(__CFDictionary const*, void const**) + 336 (Security + 2463447) [0x7ff8191af6d7] 1-11
11 SecKeychainSearchCopyNext + 161 (Security + 157044) [0x7ff818f7c574] 1-11
11 Security::KeychainCore::KCCursorImpl::next(Security::KeychainCore::Item&) + 272 (Security + 157900) [0x7ff818f7c8cc] 1-11
11 Security::KeychainCore::KeychainImpl::performKeychainUpgradeIfNeeded() + 73 (Security + 2341701) [0x7ff819191b45] 1-11
11 _pthread_mutex_firstfit_lock_slow + 212 (libsystem_pthread.dylib + 6600) [0x7ff8168229c8] 1-11
11 __psynch_mutexwait + 10 (libsystem_kernel.dylib + 14502) [0x7ff8167ea8a6] 1-11
*11 psynch_mtxcontinue + 0 (com.apple.kec.pthread + 11289) [0xffffff80036c1c19] (blocked by turnstile waiting for Jamf Framework Redeploy [37390] [unique pid 136843] thread 0x9184e) 1-11
Thread 0x9184e 11 samples (1-11) priority 47 (base 47)
<process frontmost, thread QoS user interactive (requested background, promote user interactive), process unclamped, process received importance donation from WindowServer [144], IO tier 0 and passive>
11 <truncated backtrace> 1-11
11 __psynch_mutexwait + 10 (libsystem_kernel.dylib + 14502) [0x7ff8167ea8a6] 1-11
*11 psynch_mtxcontinue + 0 (com.apple.kec.pthread + 11289) [0xffffff80036c1c19] (blocked by turnstile waiting for Jamf Framework Redeploy [37390] [unique pid 136843] thread 0x9197f) 1-11
Thread 0x91852 11 samples (1-11) priority 4 (base 4)
<process frontmost, thread QoS background (requested background), process unclamped, thread darwinbg, process received importance donation from WindowServer [144], IO tier 2>
11 <truncated backtrace> 1-11
11 __psynch_mutexwait + 10 (libsystem_kernel.dylib + 14502) [0x7ff8167ea8a6] 1-11
*11 psynch_mtxcontinue + 0 (com.apple.kec.pthread + 11289) [0xffffff80036c1c19] (blocked by turnstile with priority 47 waiting for Jamf Framework Redeploy [37390] [unique pid 136843] thread 0x9197f) 1-11
Thread 0x91858 11 samples (1-11) priority 4 (base 4)
<process frontmost, thread QoS background (requested background), process unclamped, thread darwinbg, process received importance donation from WindowServer [144], IO tier 2>
11 <truncated backtrace> 1-11
11 __psynch_mutexwait + 10 (libsystem_kernel.dylib + 14502) [0x7ff8167ea8a6] 1-11
*11 psynch_mtxcontinue + 0 (com.apple.kec.pthread + 11289) [0xffffff80036c1c19] (blocked by turnstile with priority 47 waiting for Jamf Framework Redeploy [37390] [unique pid 136843] thread 0x9184e) 1-11
Thread 0x91868 Thread name "com.apple.NSEventThread" 11 samples (1-11) priority 47 (base 47) cpu time 0.005s (10.3M cycles, 3.0M instructions, 3.45c/i)
<process frontmost, thread QoS user interactive (requested user interactive), process unclamped, process received importance donation from WindowServer [144], IO tier 0>
11 thread_start + 15 (libsystem_pthread.dylib + 7291) [0x7ff816822c7b] 1-11
11 _pthread_start + 125 (libsystem_pthread.dylib + 25177) [0x7ff816827259] 1-11
11 _NSEventThread + 132 (AppKit + 1696409) [0x7ff819bad299] 1-11
11 CFRunLoopRunSpecific + 560 (CoreFoundation + 514944) [0x7ff816900b80] 1-11
11 __CFRunLoopRun + 1360 (CoreFoundation + 517962) [0x7ff81690174a] 1-11
11 __CFRunLoopServiceMachPort + 145 (CoreFoundation + 523486) [0x7ff816902cde] 1-11
11 mach_msg + 19 (libsystem_kernel.dylib + 6312) [0x7ff8167e88a8] 1-11
11 mach_msg_overwrite + 723 (libsystem_kernel.dylib + 34357) [0x7ff8167ef635] 1-11
11 mach_msg2_trap + 10 (libsystem_kernel.dylib + 5570) [0x7ff8167e85c2] 1-11
*11 ipc_mqueue_receive_continue + 0 (kernel + 972000) [0xffffff80003c94e0] 1-11
Thread 0x9197f 11 samples (1-11) priority 47 (base 47)
<process frontmost, thread QoS user interactive (requested background, promote user interactive), process unclamped, process received importance donation from WindowServer [144], IO tier 0 and passive>
11 <truncated backtrace> 1-11
11 __psynch_mutexwait + 10 (libsystem_kernel.dylib + 14502) [0x7ff8167ea8a6] 1-11
*11 psynch_mtxcontinue + 0 (com.apple.kec.pthread + 11289) [0xffffff80036c1c19] (blocked by turnstile waiting for Jamf Framework Redeploy [37390] [unique pid 1368'
appcrash.txt
Hi,
I was using this app yesterday and afterwards I just closed the lid (LCD).
Today when I tried to open the app, it caused this issue.
After killing the app and reopening it, it's fine.
On Thu, 12 Jan 2023 at 3:19 PM, red5coder @.***> wrote:
Could you try something. Can you deselect the check box to save the
password? Does this make a difference?
Hi, I still have not been able to recreate the issue, and so far no one else has reported it.
It might be worth trying another Mac in case there is some conflict going on.
Let me know if that helps, but for the time being I will close this ticket.
|
gharchive/issue
| 2023-01-12T09:27:38 |
2025-04-01T04:35:41.953714
|
{
"authors": [
"mani2care",
"red5coder"
],
"repo": "red5coder/Jamf-Framework-Redeploy",
"url": "https://github.com/red5coder/Jamf-Framework-Redeploy/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
639030718
|
Basic example of Filter/Service middleware
Protocol-specific middleware means writing O(protocol*middleware) adapters. By making protocol-agnostic middleware, we enable writing O(protocol+middleware) adapters. Fewer adapters makes maintenance of existing adapters easier, and makes it easier to add new middlewares or protocols.
For generic middleware, let's choose the Service/Filter model made popular by Finagle [1], and also adopted by Facebook [2] and others. A Service is anything that processes requests into responses, and a Filter is a wrapper around a Service that can transform requests/responses.
Using these primitives, it's possible to model both clients and servers as Services, and any middleware as a Filter.
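As a rough sketch of those primitives in Go (illustrative only; not necessarily the exact interfaces in this PR), a Service and a protocol-agnostic Filter could look like this:
```go
// Illustrative sketch of the Service/Filter model, assuming a generic
// request/response shape; the real PR may define these differently.
package filter

import "context"

// Service turns a request into a response.
type Service interface {
	Do(ctx context.Context, req interface{}) (interface{}, error)
}

// ServiceFunc lets ordinary functions satisfy Service.
type ServiceFunc func(ctx context.Context, req interface{}) (interface{}, error)

func (f ServiceFunc) Do(ctx context.Context, req interface{}) (interface{}, error) {
	return f(ctx, req)
}

// Filter wraps a Service and may transform the request and/or the response.
type Filter func(next Service) Service

// LoggingFilter is a protocol-agnostic middleware: it behaves the same whether
// the wrapped Service is an HTTP client, a Thrift server, or anything else.
func LoggingFilter(log func(msg string)) Filter {
	return func(next Service) Service {
		return ServiceFunc(func(ctx context.Context, req interface{}) (interface{}, error) {
			log("request started")
			resp, err := next.Do(ctx, req)
			log("request finished")
			return resp, err
		})
	}
}
```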
The downside of this is that when making a new Service, you have to provide the adapter to transform a service implementation into a Service and back. You can see this in the PR as we transform http.Client<->Service. But the upside is that you get Filter reuse for free. As the number of Filters and protocols expands, this reuse becomes incredibly useful and valuable.
This is a reference implementation for discussion. I imagine that we can slowly move existing middleware if we want to pursue this direction.
[1] https://twitter.github.io/finagle/guide/ServicesAndFilters.html
[2] https://github.com/facebook/wangle
I'm not super enthusiastic about this approach; this is basically what go-kit does with its Endpoint interface, and the end result is you just end up writing a bunch of boilerplate to convert your things to and from your interface in every service.
Go-kit gets things almost right, but in a way that introduces lots of boilerplate, which would not be present in this similar interface. Go-kit uses Endpoint to model a single RPC call. Service, which is a very similar interface to endpoint, models the entire service.
In order to have the maintenance gains associated with my O(protocol+middleware) argument, you should not vary on the interface of the service! If you have to vary on the interface of the service, by adding method-specific adapters, then you are now O(protocol*methods+middleware) adapters, which is a huge step backwards, especially with many methods.
So I think go-kit isn't quite the right thing, but I'm not quite ready to damn this approach for similarity, especially considering the success of similar models elsewhere.
I think the reality is things like this don't work that great in a language without generics.
This is true. However, based on past experience with similar libraries, the annoyance can be mild.
Netty, perhaps the most popular server library in Java, currently implements middleware while requiring similar type assertions to up/downcast requests as they move through the pipeline. For example, SSL termination is implemented as a middleware on a TCP stream, and later in the same pipeline you can process HTTP headers. To do so, you cast the input objects as they flow through the pipeline. In writing low-level Netty, this lack of generics is mildly annoying, but the ability to write in their pipeline abstraction is essential!
In our specific case, the burden of typecasting goes to the baseplate.go implementer, not the end-user, and it represents less burden than the alternatives. When implementing a new protocol, the burden would be 2 type assertions, which is less than implementing an adapter for every middleware.
So all-in-all, I share your apprehension based upon go-kit, but I think this approach hits a middle ground where we reap much of the benefits of go-kit, with very little of the cost.
That's a good point about the difference between this and go-kit's approach, I guess the thing that's really missing for me is seeing an actual example.
The current code is really just in a vacuum (and all new), what would the existing baseplate.go servers and clients (and their middlewares) look like with this approach and what would a service actually look like?
The current code is really just in a vacuum (and all new), what would the existing baseplate.go servers and clients (and their middlewares) look like with this approach and what would a service actually look like? To that end, it's probably worth at least looking into?
There is at least an http.Client here, and also you can see some sample filters in the test case to get an idea of what they would look like.
I'm trying not to rewrite all of the things immediately :) Is there some subset that I could translate that would help get the idea across?
I think this can make a lot of sense, especially with middlewares which clearly should apply across endpoints, i.e. logging middleware.
A majority of our middlewares are around header handling, and we do need different header handling both between thrift and http, and between client and server.
As we move forward, we will add more low-level middlewares. Circuit-breaking, retries, load-shedding are largely protocol-independent.
You cited code repetition as a reason to go with this approach, but in reality we actually implemented the things that's common between them (e.g. create the server span after we get all the headers) as shared, and the difference between the thriftbp and httpbp server middleware is just the different part (how to actually extract the headers). So I would say we actually already shared the code as much as we can, and this approach will not provide meaningful improvement on that front.
As the number of protocols/clients we would like to have nice network hygiene, spans, retries, etc expands (http client, redis, memcached, postgres, grpc, cassandra, aws client, kafka, rabbitmq, etc), not only will we want code reuse, but we will want to verify that the implementations are consistent across all of the protocols. We could easily end up with subtle differences in the way spans are wrapped across all of our clients, and this would be hard to verify, leading to subtle bugs.
As a practical matter, many of the protocols will be implemented by the trailblazers, and they may not understand the subtleties of the middlewares, or of the baseplate.go implementation itself. As an example, I'd like to implement an HTTP client, and I'd like to implement circuit-breaking, but I don't yet understand how our preferred client span double-wrapping is intended to behave.
If our middlewares were more generically factored, I could contribute without learning much more about the whole baseplate.go codebase. It's easier to create a circuitbreaking filter than it is to apply circuitbreaking to each of the disparate protocol interfaces that exist right now. Similarly, it's relatively easy to take a middleware chain from the thrift client, and apply the same middleware chain, perhaps with minor modifications, to the new http client. It's much harder to gather and port all of the middleware to a new interface.
The way things are factored today makes a ton of sense, especially as all of the contributors are baseplate.go experts, and we haven't had to scale to even a moderate number of protocols or middlewares. We've done an excellent job of keeping things simple!
My argument, much like any O(n) argument, is about scalability as the numbers get larger. We'll have more protocols, middlewares, and contributors in the future. As a new contributor to baseplate.go, I'm starting to feel some pain that is probably not felt by the original authors. And as someone who's worked for years in the environment I'm suggesting, I'm hopeful that it will help the project scale nicely into the region where we have, say, around 10 each of common protocols, contributors and middlewares.
but we will want to verify that the implementations are consistent across all of the protocols
I don't see how that works in reality. Blindly applying middlewares from one server (thrift) to another (http) is a very dangerous idea. For example, we extract deadline propagation headers on thrift servers, but don't do so in http, because thrift servers have a very different trust model from http servers. If we ended up sending span information and deadline propagation information to all of redis/aws/etc., that's both wasteful (we are sending headers the server is not going to read) and dangerous (we are leaking internal info to third parties).
Instead, that should be achieved by code reuse, e.g. the core functionality is implemented middleware agnostic, and the middlewares are just a very thin wrapper applying the things can't be shared with other middleware implementations. This is basically where we are today.
As a new contributor to baseplate.go, I'm starting to feel some pain that is probably not felt by the original authors
That will always be true, no matter how "good" our code base is. I would argue that we do have very good code documentation and comments, but you are not looking at them. (For example, your "how our preferred client span double-wrapping is intended to behave" question is answered here: https://github.com/reddit/baseplate.go/blob/8630c76dbfdea70160bc33febfc0f05d78a55984/thriftbp/client_middlewares.go#L75-L89)
I agree that middlewares shouldn't be applied across protocols unless they make sense but there are middlewares that are agnostic to their protocol, so I wonder if it's possible to support this pattern but also remain flexible by applying the middleware only to certain protocols without an override type of design.
@konradreiche sure, but at what price? The current situation is that the price is too high (we have to deal with a lot of newly added interface{}), while the benefit is too low (the middlewares that would be replaced by the new pattern are already very thin wrappers around already shared code).
Also, the current approach won't work with thrift server middlewares. It likely also doesn't work with thrift client middlewares. (If you don't believe me, try implementing it for thrift.)
I was hoping to have a little in-person discussion about this before it got into too much of a debate (hence why I draft mode'd and didn't invite reviewers), so I think I'm going to check out of the thread until we get a chance to discuss in-person.
And to be clear, I don't think there's anything wrong today. It's more of a "where do we want to evolve the system" sort of a discussion.
|
gharchive/pull-request
| 2020-06-15T17:50:55 |
2025-04-01T04:35:41.976799
|
{
"authors": [
"fishy",
"fizx",
"konradreiche",
"pacejackson"
],
"repo": "reddit/baseplate.go",
"url": "https://github.com/reddit/baseplate.go/pull/235",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1258464179
|
Add ability to login to AWS ECR repositories
Description
This PR adds the ability to log in to AWS ECR repositories. Unfortunately the AWS SDK will stop supporting Node 12 in November (see logs below), so making this action use Node.js 16 should be the next step.
I'm not sure how to test this; is it feasible for you to make an AWS account? But it works for me™.
Related Issue(s)
#23
Checklist
[X] This PR includes a documentation change
[ ] This PR does not need a documentation change
[ ] This PR includes test changes
[ ] This PR's changes are already tested
[ ] This change is not user-facing
[ ] This change is a patch change
[X] This change is a minor change
[ ] This change is a major (breaking) change
Changes made
Added a check to detect if the registry is an ECR URL
If it is, fetch the token from the AWS API
Log output
Run der-eismann/podman-login@3364306a59191107a98d7cc1f60ba9fedf15f5c5
with:
registry: 123456789012.dkr.ecr.eu-west-1.amazonaws.com
username: ***
password: ***
logout: true
💡 Detected 123456789012.dkr.ecr.eu-west-1.amazonaws.com as an ECR repository
(node:178952 NodeDeprecationWarning: The AWS SDK for JavaScript (v3) will
no longer support Node.js v12.22.7 on November 1, 2022.
To continue receiving updates to AWS services, bug fixes, and security
updates please upgrade to Node.js 14.x or later.
For details, please refer our blog post: https://a.co/48dbdYz
/bin/podman version
/bin/podman version
Client: Podman Engine
Version: 4.0.2
API Version: 4.0.2
Go Version: go1.17.7
Built: Tue May 10 16:40:19 2022
OS/Arch: linux/amd64
/bin/podman login 123456789012.dkr.ecr.eu-west-1.amazonaws.com -u AWS -p *** --verbose
Used: /tmp/podman-run-1001/containers/auth.json
Login Succeeded!
✅ Successfully logged in to 123456789012.dkr.ecr.eu-west-1.amazonaws.com as AWS
Exporting REGISTRY_AUTH_FILE=/tmp/podman-run-1001/containers/auth.json
✍️ Writing registry credentials to "/home/github/.docker/config.json"
Thanks for the quick reply! Unfortunately I am unable to update the bundles because executing the command locally doesn't produce a diff :/
$ git rev-parse HEAD
3364306a59191107a98d7cc1f60ba9fedf15f5c5
$ podman run --rm -it -v $(pwd):/project --entrypoint=/bin/bash -w /project node:12 -c "npm run bundle"
> podman-login@1.0.0 bundle /project
> ncc build src/index.ts --source-map --minify
ncc: Version 0.25.1
ncc: Compiling file index.js
ncc: Using typescript@4.2.3 (local user-provided)
122kB dist/sourcemap-register.js
727kB dist/index.js
2079kB dist/index.js.map
849kB [7917ms] - ncc 0.25.1
Also my index.js is 2 kB bigger than the compiled file in the CI check. I don't know where that is coming from because the commit is the same and my git status is clean. Could you maybe try to clone my PR and see if you get a diff?
I have cloned this PR and tried to run npm run bundle and the bundle was updated successfully.
Sometimes doing npm ci before npm run bundle helps. Could you please try that?
Sometimes doing npm ci before npm run bundle helps.
That helped indeed, thanks for the hint! The CI tests should succeed now.
It is still failing :confused:
Then I don't really know what else to do :confused: I tried running npm in podman and locally on my Fedora machine, versions 12 and 14, there are no changes to the index.js.
Actually, the contributor's bundle is probably the right one, because the runner is using an old version of Node (12) and an old version of npm. Maybe the setup-node action could be used to install Node 16, and the package.json engine version should be updated to 16 too.
@tetchel thanks for your hint! I used the node:12 image on purpose because v12 was requested in the files. After running it with node 16
podman run --rm -it -v $(pwd):/project --entrypoint=/bin/bash -w /project node:16 -c "npm ci && npm run bundle"
the dist files changed again and the checks are green now. So yeah, the switch to NodeJS 16 should be the next step.
|
gharchive/pull-request
| 2022-06-02T17:09:11 |
2025-04-01T04:35:42.002609
|
{
"authors": [
"der-eismann",
"divyansh42",
"tetchel"
],
"repo": "redhat-actions/podman-login",
"url": "https://github.com/redhat-actions/podman-login/pull/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1747305867
|
Appstudio update test-component-pac-vdez
Pipelines as Code configuration proposal
To start the PipelineRun, add a new comment with content /ok-to-test
For more detailed information about running a PipelineRun, please refer to Pipelines as Code documentation Running the PipelineRun
To customize the proposed PipelineRuns after merge, please refer to Build Pipeline customization
Pipelines as Code CI/test-component-pac-vdez-on-pull-request has successfully validated your commit.
Status | Duration | Name
---|---|---
✅ Succeeded | 11 seconds | init
✅ Succeeded | 38 seconds | clone-repository
✅ Succeeded | 1 minute | build-container
✅ Succeeded | 31 seconds | deprecated-base-image-check
✅ Succeeded | 32 seconds | inspect-image
✅ Succeeded | 3 minutes | clair-scan
✅ Succeeded | 1 minute | clamav-scan
✅ Succeeded | 40 seconds | sbom-json-check
✅ Succeeded | 13 seconds | label-check
✅ Succeeded | 6 seconds | show-sbom
✅ Succeeded | 7 seconds | show-summary
Pipelines as Code CI/test-component-pac-vdez-on-pull-request has successfully validated your commit.
Status | Duration | Name
---|---|---
✅ Succeeded | 18 seconds | init
✅ Succeeded | 20 seconds | clone-repository
✅ Succeeded | 25 seconds | build-container
✅ Succeeded | 15 seconds | deprecated-base-image-check
✅ Succeeded | 17 seconds | inspect-image
✅ Succeeded | 50 seconds | clamav-scan
✅ Succeeded | 10 seconds | clair-scan
✅ Succeeded | 11 seconds | sbom-json-check
✅ Succeeded | 6 seconds | label-check
✅ Succeeded | 6 seconds | show-summary
✅ Succeeded | 7 seconds | show-sbom
|
gharchive/pull-request
| 2023-06-08T07:52:53 |
2025-04-01T04:35:42.021282
|
{
"authors": [
"rhtap-qe-bots"
],
"repo": "redhat-appstudio-qe/devfile-sample-hello-world",
"url": "https://github.com/redhat-appstudio-qe/devfile-sample-hello-world/pull/10189",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1234423414
|
update HAS to 86571e38f52e6bbe67b945dd58b695af26ae834d
Signed-off-by: Stephanie yangcao@redhat.com
Pulls in changes from commit 86571e38f52e6bbe67b945dd58b695af26ae834d.
/test appstudio-e2e-deployment
/retest
|
gharchive/pull-request
| 2022-05-12T19:57:53 |
2025-04-01T04:35:42.022960
|
{
"authors": [
"flacatus",
"yangcao77"
],
"repo": "redhat-appstudio/infra-deployments",
"url": "https://github.com/redhat-appstudio/infra-deployments/pull/355",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2464942256
|
chore(KLUXINFRA-743): STAGE: Add "mlarge" platform
Adding the linux-mlarge/amd64 platform as a migration path from what was previously called linux/amd64 as that is being changed to run locally in the cluster.
For completeness' sake, also adding linux-mlarge/arm64, which is identical to the current linux/arm64 but lets users be explicit about running in a VM.
This change is only done in STAGE for now
Sorry, there's a bug in the CI that will be fixed by this PR
I'll rerun the tests here and on your other PR asap
/test appstudio-e2e-tests
|
gharchive/pull-request
| 2024-08-14T05:37:49 |
2025-04-01T04:35:42.025207
|
{
"authors": [
"dheerajodha",
"ifireball"
],
"repo": "redhat-appstudio/infra-deployments",
"url": "https://github.com/redhat-appstudio/infra-deployments/pull/4327",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
539692691
|
setup.py: set proper python requirements
Only py3.5+ is supported
Signed-off-by: Martin Bašti mbasti@redhat.com
Because of missing Python requirements in setup.py, release 0.1 is downloadable by pip from PyPI on py2. It should probably be removed from PyPI and replaced by a new release to avoid messing with py2.
podman run -ti --rm python:2 bash
root@dfbc76b4fa22:/# pip install wfepy # should error here with no available for py2 error
....
root@dfbc76b4fa22:/# python2 -m wfepy
Traceback (most recent call last):
File "/usr/local/lib/python2.7/runpy.py", line 163, in _run_module_as_main
mod_name, _Error)
File "/usr/local/lib/python2.7/runpy.py", line 111, in _get_module_details
__import__(mod_name) # Do not catch exceptions initializing package
File "/usr/local/lib/python2.7/site-packages/wfepy/__init__.py", line 3, in <module>
from .workflow import * # noqa: F401, F403
File "/usr/local/lib/python2.7/site-packages/wfepy/workflow.py", line 4, in <module>
import enum
ImportError: No module named enum
|
gharchive/pull-request
| 2019-12-18T13:49:51 |
2025-04-01T04:35:42.027747
|
{
"authors": [
"MartinBasti"
],
"repo": "redhat-aqe/wfepy",
"url": "https://github.com/redhat-aqe/wfepy/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
652234455
|
🏄♂️ Create a DEVELOPMENT.md file
Create instructions for contributing, testing and linting things etc
FYI @garethahealy
@springdo ; i've added a testing.md file via this PR:
https://github.com/redhat-cop/bats-library/pull/11
https://github.com/redhat-cop/bats-library/blob/06e630ff36be68f248b2bad9c6519aabf25b0ccb/TESTING.md
Look good for the first pass to close this issue?
|
gharchive/issue
| 2020-07-07T11:11:13 |
2025-04-01T04:35:42.030240
|
{
"authors": [
"garethahealy",
"springdo"
],
"repo": "redhat-cop/bats-library",
"url": "https://github.com/redhat-cop/bats-library/issues/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1695518498
|
Problem with high memory consumption on kube-apiserver
Hello,
We would like to install the Cert Utils Operator on OpenShift via OperatorHub. The OpenShift version we are using is 4.11.27 (Kubernetes: v1.24.6+263df15). The version of Cert Utils Operator is 1.3.10.
After the installation of the Cert Utils Operator we see increasing memory usage in kube-apiservers:
As you can see the memory gets filled up at the first kube-apiserver and after this gets unavailable the next one is filled up and so on.
The OpenShift Cluster has 164 worker nodes and 3 "etcd" Nodes. Each of the etcd Nodes where the kube-apiserver is running has 12 vCores and 64 GB of memory.
This cluster hosts several namespaces and applications which results in 12644 secrets and 5655 configmaps.
We stopped the attempt to install the Cert Utils Operator after two and a half hours because the cluster gets very unstable and it doesn't look like it will be healing soon.
After uninstalling the Cert Utils Operator the Cluster gets back to normal operations and stability.
We also tested the installation on a smaller cluster with same versions and here we saw also for some time increased memory consumption on the kube-apiserver but after some minutes the kube-apiserver gets back to normal operations. We also see a high CPU consumption (up to 2 cores) of the Cert Utils Operator during this time.
The described behavior is every time reproduceable but it looks like it depends on size of the cluster/configmaps/secrets.
We are running several other operators in the cluster but none of them shows a similar behavior.
For me it is not clear whether the Cert Utils Operator does something wrong or this is an issue with the large quantity of secrets and configmaps and inefficient processing in kube-apiserver?
when cert-utils starts it creates a watch on secrets and configmaps, so when you have a ton of them it will generate significant load on the kube api servers. But that should only happen at the beginning. Then the load should subside, unless your secrets are constantly changing.
We will do some investigation on this on our side.
Hello everyone,
I can confirm the behavior if the cert-utils operator changes configmaps and secrets, which are created or managed by other operators in the cluster.
Here is exactly the same problem: https://github.com/redhat-cop/cert-utils-operator/issues/120
The loops result in massive amounts of API calls....
And this is not an Openshift specific problem....
This is also not a problem or effect from a specific cluster size - here it only becomes significantly visible....
|
gharchive/issue
| 2023-05-04T08:31:53 |
2025-04-01T04:35:42.035724
|
{
"authors": [
"MacThrawn",
"ocpvkb",
"raffaelespazzoli"
],
"repo": "redhat-cop/cert-utils-operator",
"url": "https://github.com/redhat-cop/cert-utils-operator/issues/149",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1842901352
|
WIP: adding back 1-057 to test suite
What type of PR is this?
/kind failing-test
What does this PR do / why we need it:
Have you updated the necessary documentation?
[ ] Documentation update is required by this PR.
[ ] Documentation has been updated.
Which issue(s) this PR fixes:
Fixes 3076
Test acceptance criteria:
[ ] Unit Test
[ ] E2E Test
How to test changes / Special notes to the reviewer:
kubectl kuttl test ./test/openshift/e2e/ignore-tests/parallel --config ./test/openshift/e2e/parallel/kuttl-test.yaml --test 1-057_validate_notifications
/test v4.14-kuttl-parallel
/test v4.13-kuttl-parallel
/test v4.12-kuttl-parallel
/test v4.12-kuttl-parallel
/test v4.13-kuttl-parallel
/test v4.14-kuttl-parallel
1-066_validate_redis_secure_comm_no_autotls_no_ha test case failed on ci/prow/v4.14-kuttl-parallel
retesting
/test v4.12-kuttl-parallel
/test v4.13-kuttl-parallel
/test v4.14-kuttl-parallel
/test v4.14-kuttl-sequential
/retest
/retest
/lgtm
/lgtm
/approve
|
gharchive/pull-request
| 2023-08-09T10:04:09 |
2025-04-01T04:35:42.064833
|
{
"authors": [
"anandrkskd",
"mehabhalodiya",
"varshab1210"
],
"repo": "redhat-developer/gitops-operator",
"url": "https://github.com/redhat-developer/gitops-operator/pull/585",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2312281124
|
fix: sentinel liveness issue
What type of PR is this?
/kind bug
What does this PR do / why we need it:
Readiness probe failed: AUTH failed: ERR AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct?
/lgtm
|
gharchive/pull-request
| 2024-05-23T08:24:34 |
2025-04-01T04:35:42.066544
|
{
"authors": [
"iam-veeramalla",
"svghadi"
],
"repo": "redhat-developer/gitops-operator",
"url": "https://github.com/redhat-developer/gitops-operator/pull/716",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2658288804
|
[RHDHBUGS-106][RHIDP-4646] Clarify the behavior of the NO_PROXY rules
IMPORTANT: Do Not Merge - To be merged by Docs Team Only
Version(s):
1.4
Issue:
Relates to https://issues.redhat.com/browse/RHDHBUGS-106
Relates to https://issues.redhat.com/browse/RHIDP-4646
Link to docs preview:
https://redhat-developer.github.io/red-hat-developers-documentation-rhdh/pr-709/admin-rhdh/#understanding-no-proxy
Reviews:
[ ] SME: @ mention assignee
[ ] QE: @ mention assignee
[x] Docs review: @jmagak
[ ] Additional review: @mention assignee (by writer)
Additional information:
Follow-up to https://github.com/janus-idp/backstage-showcase/pull/1903
Added a few suggestions.
@jmagak Thanks for the suggestions. I've incorporated most of them, and changed some where I felt that the meaning could be confusing. Please take another look.
Sure, If you do not specify a port [...] looks good here. You can use that instead.
@jmagak Done in e18990c (#709). All the other suggestions have been incorporated as well. Please approve the PR if it looks good to you. Thanks.
/cc @albarbaro
@albarbaro Would you mind reviewing this PR on behalf of QE? Assigning this to you since you did the review on the initial PR documenting the support of corporate proxies. Thanks.
@themr0c @jmagak Do I need another SME review on this? Or can I consider myself the SME for this? :-)
/cc @subhashkhileri @zdrapela
FYI
Here are a few suggestions @rm3l.
@jmagak Suggestions applied. Thanks for the review. Can you or anyone else from the docs team merge this PR if it is okay for you?
@rm3l Does this PR need to be cherry-picked to release-1.3 branch?
@hmanwani-rh Nope, this is only for 1.4. The code change is currently only in the Showcase main branch.
|
gharchive/pull-request
| 2024-11-14T10:03:21 |
2025-04-01T04:35:42.076194
|
{
"authors": [
"rm3l"
],
"repo": "redhat-developer/red-hat-developers-documentation-rhdh",
"url": "https://github.com/redhat-developer/red-hat-developers-documentation-rhdh/pull/709",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2400632358
|
Add error or warning for GitHub links
OpenShift does not allow links to GitHub repositories unless they are in auto-generated API references. Other projects require project management approval. The RHSSG also does not allow links to GitHub. I think it would be a good idea to add a rule to detect GitHub links.
Great idea! Do you want to put this in? Something like this would work:
extends: substitution
ignorecase: false
level: error
link: https://redhat-documentation.github.io/vale-at-red-hat/docs/main/reference-guide/github/
message: "%s"
scope: raw
swap:
  # Start each error message with "Do not use ..."
  # Error messages must be single quoted.
  'https://github.com/': 'Do not use GitHub links in source.'
Feel free to assign me to this issue if that helps.
Note - repos inside the OpenShift org are Ok. https://github.com/openshift/* are fine.
Note - repos inside the OpenShift org are Ok. https://github.com/openshift/* are fine.
I think "owned and maintained by Red Hat" would cover that. Or did you want that URL in the 'valid' list?
I think we need clarification from @kalexand-rh maybe. Are all GitHub links forbidden? Or are GitHub links from inside the OpenShift organization allowed?
|
gharchive/issue
| 2024-07-10T12:29:35 |
2025-04-01T04:35:42.106618
|
{
"authors": [
"aireilly",
"apinnick"
],
"repo": "redhat-documentation/vale-at-red-hat",
"url": "https://github.com/redhat-documentation/vale-at-red-hat/issues/828",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1146863637
|
Docs on how to build rpm with application images in container store.
Signed-off-by: Parul parsingh@redhat.com
Which issue(s) this PR addresses:
How-to on adding application images to the container store, so MicroShift can access them on a limited or no-network host, by building an RPM using microshift-app-images.spec.
Closes #
LGTM. I think we should not use the var dir (volatile on ostree), but I can fix that in a follow-up PR; I don't want to bug you after so long. Thanks for documenting this!
|
gharchive/pull-request
| 2022-02-22T12:41:47 |
2025-04-01T04:35:42.108775
|
{
"authors": [
"husky-parul",
"mangelajo"
],
"repo": "redhat-et/microshift-documentation",
"url": "https://github.com/redhat-et/microshift-documentation/pull/133",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
55338726
|
Connect and Disconnect callbacks with an additional arg for private data !
Use Case:
If hiredis is being used in asynchronous mode, then callers need to update their data structures on disconnect/connect, for example updating flags or removing the redis context from their data structures. Today, there is no way to pass private data as an argument to these callbacks, so having this facility would be very handy and would reduce additional code in the caller modules.
For instance, each thread maintains its own global data structure, and the redis client context is also saved in that data structure. So when a disconnect happens, the callback needs to update this data structure, for example removing or invalidating the context. This can be addressed in two ways:
Maintaining an array of entries that contain the thread ID and the data structure specific to the thread.
In connect/disconnect callbacks, getting the thread's global data structure and updating the context.
The above overhead can simply be avoided by introducing an additional arg to pass private data to the connect and disconnect callbacks.
Maybe I'm misunderstanding, but can you just use the void* data inside redisAsyncContext to point to your data structure? That's worked for me in a multithreaded library using hiredis.
Closing the old issue but you are able to use the data pointer for arbitrary user-supplied data as mentioned.
|
gharchive/issue
| 2015-01-23T22:01:04 |
2025-04-01T04:35:42.120207
|
{
"authors": [
"hmartiro",
"michael-grunder",
"saivamsi75"
],
"repo": "redis/hiredis",
"url": "https://github.com/redis/hiredis/issues/298",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
323006658
|
Use cp -pPR instead of cp -a
cp does not support the -a flag on all operating systems, such as Mac OS X Leopard 10.5.x or earlier, resulting in:
cp: illegal option -- a
usage: cp [-R [-H | -L | -P]] [-fi | -n] [-pvX] source_file target_file
cp [-R [-H | -L | -P]] [-fi | -n] [-pvX] source_file ... target_directory
make: *** [install] Error 64
From man cp on macOS Sierra 10.12.6, cp -a is exactly the same as cp -pPR, so use that instead.
From man cp on macOS Sierra:
Historic versions of the cp utility had a -r option. This implementation supports that option; however, its use is strongly discouraged, as it does not correctly copy special files, symbolic links, or fifo's.
So don't ever use that.
Thanks, merged!
|
gharchive/pull-request
| 2018-05-14T22:40:53 |
2025-04-01T04:35:42.122864
|
{
"authors": [
"michael-grunder",
"ryandesign"
],
"repo": "redis/hiredis",
"url": "https://github.com/redis/hiredis/pull/596",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1954018871
|
feat: Add GearsCmdable to go-redis API adapter
This PR adds GearsCmdable to our go-redis API adapter, and copied the tests from go-redis to make sure that it's compatible.
Codecov Report
Attention: 19 lines in your changes are missing coverage. Please review.
Comparison is base (55961cc) 97.56% compared to head (53eef52) 97.51%.
Report is 1 commits behind head on main.
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## main #395 +/- ##
==========================================
- Coverage 97.56% 97.51% -0.06%
==========================================
Files 76 76
Lines 30737 30848 +111
==========================================
+ Hits 29990 30082 +92
- Misses 627 641 +14
- Partials 120 125 +5
| Files | Coverage Δ | |
| --- | --- | --- |
| internal/cmds/gen_triggers_and_functions.go | 100.00% <100.00%> (ø) | |
| rueidiscompat/adapter.go | 95.07% <92.30%> (-0.10%) | :arrow_down: |
| rueidiscompat/command.go | 78.83% <60.60%> (-0.39%) | :arrow_down: |

... and 2 files with indirect coverage changes
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
Thank you for your contribution! @unknowntpo
Thanks for reviewing my code, @rueian.
|
gharchive/pull-request
| 2023-10-20T10:39:42 |
2025-04-01T04:35:42.195801
|
{
"authors": [
"codecov-commenter",
"rueian",
"unknowntpo"
],
"repo": "redis/rueidis",
"url": "https://github.com/redis/rueidis/pull/395",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
444126698
|
RStream.readGroup doesn't work properly with TypedJsonJacksonCodec
Expected behavior
TypedJsonJacksonCodec should decode messages within stream to specific types.
Actual behavior
JSON is decoded to LinkedHashMap rather than to specific type.
Steps to reproduce or test case
RStream<String, Message> stream = redissonClient.getStream("stream", new TypedJsonJacksonCodec(String.class, Message.class));
Map<StreamMessageId, Map<String, Message>> messageMap = stream.readGroup("group", "consumer");
messageMap.forEach((key, value) -> {
    value.forEach((k, v) -> {
        log.info("Class: {}", v.getClass()); // prints LinkedHashMap
        v.getValue(); // ClassCastException
    });
});
Redis version
5.0
Redisson version
3.10.7
Redisson configuration
Config redisConfig = new Config();
redisConfig.useSingleServer().setAddress(url);
RedissonReactiveClient redissonClient = Redisson.create(redisConfig);
Fixed! Thanks for report
|
gharchive/issue
| 2019-05-14T21:09:54 |
2025-04-01T04:35:42.198494
|
{
"authors": [
"StasKolodyuk",
"mrniko"
],
"repo": "redisson/redisson",
"url": "https://github.com/redisson/redisson/issues/2107",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
620696569
|
org.redisson.client.RedisTimeoutException: Subscribe timeout: (30000ms)
Every once in a while this exception is thrown when acquiring a distributed lock. Before the exception appeared, the master node had gone down; the problem started after Sentinel elected a new master.
Increase subscription connections pool size
|
gharchive/issue
| 2020-05-19T05:48:12 |
2025-04-01T04:35:42.199579
|
{
"authors": [
"AlliumDuck",
"mrniko"
],
"repo": "redisson/redisson",
"url": "https://github.com/redisson/redisson/issues/2783",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1619649812
|
Getting java.lang.ClassNotFoundException: org.apache.catalina.session.ManagerBase upon adding Redisson jars
Hi,
I have an existing Java(using openj9 version 1.8.0_222) application which runs on Apache Tomcat 8.
I am trying to add Redisson jar to replicate the Tomcat Session in Redis.
I have downloaded the redisson-all-3.20.0.jar and redisson-tomcat-8-3.20.0.jar and put them in Tomcat's lib folder.
I have updated the Tomcat's context.xml file to include the RedissonSessionManager, as below:
I have also made the redisson.yml file with redis config information and put it in the Tomcat's catalina base's conf folder.
I built the project and on starting the Tomcat server, it is giving me error on ManagerBase class not found, as below:
Caused by: java.lang.NoClassDefFoundError: org.apache.catalina.session.ManagerBase
at java.lang.ClassLoader.defineClassImpl(Native Method)
at java.lang.ClassLoader.defineClassInternal(ClassLoader.java:389)
at java.lang.ClassLoader.defineClass(ClassLoader.java:358)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:682)
at java.net.URLClassLoader.access$400(URLClassLoader.java:89)
at java.net.URLClassLoader$ClassFinder.run(URLClassLoader.java:1086)
at java.security.AccessController.doPrivileged(AccessController.java:770)
at java.net.URLClassLoader.findClass(URLClassLoader.java:589)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:944)
at java.lang.ClassLoader.loadClass(ClassLoader.java:889)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:933)
at java.lang.ClassLoader.loadClass(ClassLoader.java:889)
at java.lang.ClassLoader.loadClass(ClassLoader.java:872)
at org.apache.tomcat.util.digester.ObjectCreateRule.begin(ObjectCreateRule.java:116)
at org.apache.tomcat.util.digester.Digester.startElement(Digester.java:1203)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:509)
at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:182)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2784)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1476)
at org.apache.catalina.startup.ContextConfig.processContextConfig(ContextConfig.java:537)
at org.apache.catalina.startup.ContextConfig.contextConfig(ContextConfig.java:475)
at org.apache.catalina.startup.ContextConfig.init(ContextConfig.java:738)
at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:310)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:395)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:108)
I have made the changes as per the readme files, as described above, but not sure what else could be the reason for the above issue.
Can you please take a look at it?
Thanks,
Unable to reproduce it
|
gharchive/issue
| 2023-03-10T21:34:57 |
2025-04-01T04:35:42.209963
|
{
"authors": [
"cverma",
"mrniko"
],
"repo": "redisson/redisson",
"url": "https://github.com/redisson/redisson/issues/4916",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
184385435
|
Configured Timeout in ElasticCacheServerConfig being ignored
Hello,
I am using Redisson library version 2.2.4. I have the following settings configured for when I want to create a Redisson Client object for connecting to the datastore.
"elasticacheServersConfig": {
"nodeAddresses": [
"//XXX.001.cache.amazonaws.com:6379",
"//XXX.002.cache.amazonaws.com:6379"
],
"timeout": 3000
}
As you can see from the above settings, I am setting the timeout explicitly to 3 seconds. But I am seeing exceptions in the log which mention a 1 second timeout. Here is the exception:
error: org.redisson.client.RedisTimeoutException: Redis server response timeout (1000 ms) occured for command: (GET) with params: [XXXXX] channel: [id: 0x002202e0, L:/172.17.0.160:43221 - R://XXX.002.cache.amazonaws.com/10.169.226.201:6379]
at org.redisson.command.CommandAsyncService$10.run(CommandAsyncService.java:528)
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:588)
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:662)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:385)
at java.lang.Thread.run(Thread.java:745)
I am wondering how this is possible. Also, I think there is an auto-retry feature in Redisson, which defaults to 3 attempts, so why would this happen?
Can I provide some more info to debug this issue?
It's strange. You could point a debugger to CommandAsyncService:453 to ensure that correct value is used.
@mrniko I actually printed the Config object before it is used to create the RedissonClient.Below is the output,
singleServerConfig:
  idleConnectionTimeout: 10000
  pingTimeout: 1000
  connectTimeout: 1000
  timeout: 1000
  retryAttempts: 3
  retryInterval: 1000
  reconnectionTimeout: 3000
  failedAttempts: 3
  subscriptionsPerConnection: 5
  address:
  - "//127.0.0.1:6379"
  subscriptionConnectionMinimumIdleSize: 1
  subscriptionConnectionPoolSize: 50
  connectionMinimumIdleSize: 5
  connectionPoolSize: 100
  database: 0
  dnsMonitoring: false
  dnsMonitoringInterval: 5000
threads: 0
useLinuxNativeEpoll: false
I know this says Single Server Config, but I get same for ElastiCacheServer Config.
@mrniko maybe this was fixed with https://github.com/redisson/redisson/commit/047aa60daebaa9e5920227cbd9693e052aae85b5
@tankchintan have you tried version from master branch?
@mrniko Yeah, actually we are pushing Redisson v3.0.0 to production today so we will know soon.
@tankchintan But Timer initialization problem was fixed in 3.0.1
@tankchintan highly recommend you don't put 3.0.0 in production, reconnection is broken for elasticache.
Ok @johnou @mrniko Will be deploying 3.0.1
@mrniko @johnou I dont think 3.0.1 has landed in maven repository yet.
[ERROR] Failed to execute goal on project pm-workflow: Could not resolve dependencies for project com.ooyala.pm.workflow:pm-workflow:pom:0.0.1-SNAPSHOT: Could not find artifact org.redisson:redisson:jar:3.0.1 in central (https://repo.maven.apache.org/maven2) -> [Help 1]
@tankchintan all we can do is just wait for its appearance. I published it several hours ago. Appearance in Maven Central might take up to 6-12 hours.
@tankchintan it's there https://repo1.maven.org/maven2/org/redisson/redisson/3.0.1/ try again?
@mrniko @johnou I think we can close this ticket for now & if I see the issue persists I will re-open. Thanks!
|
gharchive/issue
| 2016-10-21T02:27:26 |
2025-04-01T04:35:42.217975
|
{
"authors": [
"johnou",
"mrniko",
"tankchintan"
],
"repo": "redisson/redisson",
"url": "https://github.com/redisson/redisson/issues/677",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
239957794
|
Merge upstream
This includes changes I submitted to support symlinks.
Thanks for this, it will be very useful!
|
gharchive/pull-request
| 2017-07-01T16:08:25 |
2025-04-01T04:35:42.225043
|
{
"authors": [
"ids1024",
"jackpot51"
],
"repo": "redox-os/tar-rs",
"url": "https://github.com/redox-os/tar-rs/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
230680499
|
encryption support?
One of the things missing from the ZFS linux releases right now is encryption support.
The FAQ mentions SPECK and AES but I assume that is for checksums.
If so, Is that something planned for later versions of TFS or would the modular nature of the project make it easy to implement.
The FAQ mentions SPECK and AES but I assume that is for checksums.
Those are ciphers, not hash functions ☺
Are there any current file systems that provide an encrypted "write only" mode using public key cryptography? I know this proved tricky for systems like Tahoe-LAFS.
@burdges I don't think so. In fact, it might not be possible as sufficient disk access would mean that you could rewrite anything you wanted.
I haven't thought about it, but yes, it'd necessarily leak some file system metadata to avoid overwriting anything. There are steganographic filesystems that avoid this, but they hide only a small amount of data in a large filesystem and protect it using erasure coding, so that's kinda another situation.
I was thinking more about the problem of receiving email on a home machine, or the digital camera problem for journalists, where you want to record data, and maybe read it during that session, but you want to forget the key soonish and only be able to access it later. This could also be done with a hash iteration ratchet, not just public key crypto.
In fact, these use cases are probably best served by an application that writes the data into encrypted files on an ordinary filesystem, and corrupts their metadata like date, etc., not the file system itself.
|
gharchive/issue
| 2017-05-23T11:52:25 |
2025-04-01T04:35:42.228074
|
{
"authors": [
"burdges",
"robjtede",
"ticki"
],
"repo": "redox-os/tfs",
"url": "https://github.com/redox-os/tfs/issues/44",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2405127775
|
Feature Request: Pipeline-to-Pipeline communication
As a benthos (the core) user i want to be able to have a possibility to stitch together stream configs, so it acts as a pipeline-to-pipeline communication.
This would allow me to:
break-up the complexity of large pipelines.
implement various architectural patterns, such as the Distributor pattern, the Output Isolator pattern, the Forked Path pattern, and the Collector pattern.
have better manageability, because only individual pipelines needed to be changed instead of the big chonker.
This would be a Mock:
reader.yaml
input:
  generate:
    mapping: "hello"
pipeline:
  processors: []
output:
  pipeline:
    send_to: ["myVirtualAddress"]
writer.yaml
input:
  pipeline:
    address: "myVirtualAddress"
pipeline:
  processors: []
output:
  stdout: {}
Can you not already do this with the inproc connectors?
|
gharchive/issue
| 2024-07-12T09:17:54 |
2025-04-01T04:35:42.231253
|
{
"authors": [
"danriedl",
"jem-davies"
],
"repo": "redpanda-data/benthos",
"url": "https://github.com/redpanda-data/benthos/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1268958393
|
auth.tls.certs remaining tasks
# Authentication
auth:
  #
  # TLS configuration
  tls:
    # Enable global TLS, which turns on TLS by default for all listeners
    # If enabled, each listener must set a certificate name (ie. listeners.<group>.<listener>.tls.cert)
    # If disabled, listeners may still enable TLS individually (see listeners.<group>.<listener>.tls.enabled)
    enabled: false
    # list all certificates below, then reference a certificate's name in each listener (see listeners.<listener name>.tls.cert)
    # TODO consider switching certs to list
    certs:
      # This is the certificate name that is used to associate the certificate with a listener
      # See listeners.<listener group>.tls.cert for more information
      # TODO add custom cert support: https://github.com/redpanda-data/helm-charts/pull/51
      kafka:
        # issuerRef:
        #   name: redpanda-cert1-selfsigned-issuer
        # The caEnabled flag determines whether the ca.crt file is included in the TLS mount path on each Redpanda pod
        # TODO remove caEnabled (rely on requireClientAuth): https://github.com/redpanda-data/helm-charts/issues/74
        caEnabled: true
        # duration: 43800h
      rpc:
        caEnabled: true
      admin:
        caEnabled: true
      proxy:
        caEnabled: true
      schema:
        caEnabled: true
[ ] remove caEnabled
[ ] consider switching from map to list/array
Some open questions/comments:
is caEnabled only for selfsigned certificates? This may be able to be removed in favor of requireClientAuth on each listener. Maybe a better name would be isSelfSigned
right now the caEnabled flag determines whether the ca.crt file is included in the TLS mount path on each Redpanda pod
Not necessarily just self-signed. Any certificate that doesn't have its CA in the OS default root ca bundle. That may be an org-level internal ACME cert.
It's also just for external certs. Internal should always use self-signed.
@alejandroEsc How https://github.com/redpanda-data/helm-charts/issues/303 relates to this issue?
I agree with @alejandroEsc that this issue should be removed.
Closing in favor of #303
|
gharchive/issue
| 2022-06-13T06:18:18 |
2025-04-01T04:35:42.238723
|
{
"authors": [
"RafalKorepta",
"joejulian",
"vuldin"
],
"repo": "redpanda-data/helm-charts",
"url": "https://github.com/redpanda-data/helm-charts/issues/74",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
105312898
|
more plugin examples
add sample Validator with hard-coded messages.
add test cases for sample validators.
LGTM
|
gharchive/pull-request
| 2015-09-08T06:49:31 |
2025-04-01T04:35:42.281953
|
{
"authors": [
"takahi-i",
"yusuke"
],
"repo": "redpen-cc/redpen",
"url": "https://github.com/redpen-cc/redpen/pull/483",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
175109955
|
Add refs in every field created in ComponentFactory.
Hi @andrerpena, @JonatanSalas! In order to handle focus on a field in case of an error, I was thinking of adding a ref="<field_name>" when the field is created in the Factory. If I'm not mistaken, doing this lets us call this.refs.<field_name>.focus() inside the Group, because each field will be a child of the group. What do you think?
@andrerpena since you are working on the redux-form v6 solution, could you consider using the Field component prop withRef and setting a ref that points to the name of the field?
I'll take this into account on the RF6 refactoring
Thank you!
With v1 repo this has first class support.
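As a rough sketch of the withRef idea discussed above, assuming redux-form v6 and a class-based input component (names such as EmailInput and MyForm are only illustrative, not part of redux-autoform):

```tsx
import * as React from "react";
import { Field, reduxForm } from "redux-form";

class EmailInput extends React.Component<any> {
  private el: HTMLInputElement | null = null;

  focus() {
    if (this.el) this.el.focus();
  }

  render() {
    // redux-form passes the wired-up value/onChange handlers via props.input
    return <input {...this.props.input} ref={(el) => (this.el = el)} />;
  }
}

class MyForm extends React.Component<any> {
  private emailField: any = null;

  componentDidUpdate(prev: any) {
    // submitFailed is injected by reduxForm; withRef on the Field exposes
    // getRenderedComponent(), so the wrapped input can be focused on error.
    if (this.props.submitFailed && !prev.submitFailed && this.emailField) {
      this.emailField.getRenderedComponent().focus();
    }
  }

  render() {
    return (
      <form onSubmit={this.props.handleSubmit}>
        <Field
          name="email"
          component={EmailInput}
          withRef
          ref={(f: any) => (this.emailField = f)}
        />
      </form>
    );
  }
}

export default reduxForm({ form: "example" })(MyForm);
```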
|
gharchive/issue
| 2016-09-05T17:39:05 |
2025-04-01T04:35:42.287933
|
{
"authors": [
"JonatanSalas",
"andrerpena",
"danigomez"
],
"repo": "redux-autoform/redux-autoform",
"url": "https://github.com/redux-autoform/redux-autoform/issues/82",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
200802324
|
'CHANNEL_END' is not exported
In v0.14.2, when I use rollup.js to build the dist JS, it shows:
Error: 'CHANNEL_END' is not exported by 'H:\web\ruying\node_modules\redux-saga\es\internal\channel.js' (imported by 'H:\web\ruying\node_modules\redux-saga\es\utils.js');
I found the code export { CHANNEL_END } from './internal/channel'; at line 3 of redux-saga\es\utils.js, which errors:
'./internal/channel' does not export CHANNEL_END.
Gonna close the issue as this is a duplicate of this. Please read my comment in the other thread.
If you have any further questions about this, please put them in there.
|
gharchive/issue
| 2017-01-14T13:04:49 |
2025-04-01T04:35:42.323010
|
{
"authors": [
"Andarist",
"ok111net"
],
"repo": "redux-saga/redux-saga",
"url": "https://github.com/redux-saga/redux-saga/issues/765",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
484653222
|
remove browserify config from package.json
Since we use rollup now, we don't need any browserify config.
This isn't for our use of browserify, it's for those that use browserify with this package.
|
gharchive/pull-request
| 2019-08-23T18:16:57 |
2025-04-01T04:35:42.331753
|
{
"authors": [
"alsotang",
"timdorr"
],
"repo": "reduxjs/react-redux",
"url": "https://github.com/reduxjs/react-redux/pull/1385",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
522875085
|
Convert footer.js to function component
Description
The (maybe by now) classic stateless class - to - function component switcherino.
(Please close if not a priority!)
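For illustration, a minimal sketch of this kind of conversion (a generic component, not the actual footer.js from the example app):

```tsx
import React from "react";

// Before: a stateless class component that only renders from props.
class GreetingClass extends React.Component<{ name: string }> {
  render() {
    return <p>Hello, {this.props.name}</p>;
  }
}

// After: the same component as a plain function.
function Greeting({ name }: { name: string }) {
  return <p>Hello, {name}</p>;
}

export { GreetingClass, Greeting };
```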
Thanks!
|
gharchive/pull-request
| 2019-11-14T13:55:20 |
2025-04-01T04:35:42.333109
|
{
"authors": [
"thiskevinwang",
"timdorr"
],
"repo": "reduxjs/react-redux",
"url": "https://github.com/reduxjs/react-redux/pull/1466",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1419844837
|
Meta: Use new APIs in update-pr-from-base-branch
Test URLs
https://github.com/refined-github/sandbox/pull/9 (Conflict)
https://github.com/refined-github/sandbox/pull/11
Screenshot
NA
@busches will this affect GHE?
@yakov116 you can check the GHE docs to see what, if any, versions support it yet. Guessing it's not in any version yet.
@yakov116 we have probably never waited for GHE to use updated APIs
This will break the entire feature, since the api call will fail.
I don't remember in the past ever having a case where an API was not available on GHE.
However I do remember holding off on v4 until GHE had normal support.
I do use this feature all the time, so I appreciate the thought. 😃
|
gharchive/pull-request
| 2022-10-23T16:14:49 |
2025-04-01T04:35:42.363350
|
{
"authors": [
"busches",
"fregante",
"yakov116"
],
"repo": "refined-github/refined-github",
"url": "https://github.com/refined-github/refined-github/pull/6103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
691614241
|
Issue in build
9:18:19 AM: failed Building production JavaScript and CSS bundles - 69.173s
9:18:19 AM: error Generating JavaScript bundles failed
9:18:19 AM: /opt/build/repo/node_modules/@reflexjs/gatsby-plugin-image/src/image.js: Support for the experimental syntax 'jsx' isn't currently enabled (17:7):
9:18:19 AM: 15 | const Caption = () =>
9:18:19 AM: 16 | caption ? (
9:18:19 AM: > 17 | <Figcaption
9:18:19 AM: | ^
9:18:19 AM: 18 | dangerouslySetInnerHTML={{ __html: caption }}
9:18:19 AM: 19 | mt="2"
9:18:19 AM: 20 | textAlign="center"
9:18:19 AM: Add @babel/preset-react (https://git.io/JfeDR) to the 'presets' section of your Babel config to enable transformation.
9:18:19 AM: If you want to leave it as-is, add @babel/plugin-syntax-jsx (https://git.io/vb4yA) to the 'plugins' section to enable parsing.
9:18:19 AM: not finished Generating image thumbnails - 73.171s
9:18:20 AM: (sharp:1503): GLib-CRITICAL **: 03:48:20.481: g_hash_table_lookup: assertion 'hash_table != NULL' failed
9:18:20 AM:
9:18:20 AM: ┌─────────────────────────────┐
9:18:20 AM: │ "build.command" failed │
9:18:20 AM: └─────────────────────────────┘
9:18:20 AM:
9:18:20 AM: Error message
9:18:20 AM: Command failed with exit code 1: gatsby build
9:18:20 AM:
9:18:20 AM: Error location
9:18:20 AM: In Build command from Netlify app:
9:18:20 AM: gatsby build
9:18:20 AM:
9:18:20 AM: Resolved config
9:18:20 AM: build:
9:18:20 AM: command: gatsby build
9:18:20 AM: commandOrigin: ui
9:18:20 AM: publish: /opt/build/repo/public
This issue randomly comes and goes.
I've added @babel/preset-react and @babel/plugin-syntax-jsx to Gatsby's config file, but it's still happening.
I've seen this bug too. I'll take a look at it and get back to you. Thanks.
I'm having the same problem :(
Tried a fresh install of Reflex and it happened again.
I haven't been able to successfully reproduce this. This happens on gatsby build? Can you try updating gatsby-cli and try again? npm i -g gatsby-cli
@Abhikumar98 @mattmaker any luck? Did the above work?
Yes, this did the work for me.
Thanks @arshad
Thanks @Abhikumar98. Going to close this as fixed.
@mattmaker Please re-open if you're still seeing this issue. Thank you.
|
gharchive/issue
| 2020-09-03T03:56:26 |
2025-04-01T04:35:42.377223
|
{
"authors": [
"Abhikumar98",
"arshad",
"mattmaker"
],
"repo": "reflexjs/reflex",
"url": "https://github.com/reflexjs/reflex/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1537050620
|
Update zio, zio-streams, zio-test, ... to 2.0.6
Updates
dev.zio:zio
dev.zio:zio-streams
dev.zio:zio-test
dev.zio:zio-test-sbt
from 2.0.2 to 2.0.6.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "dev.zio" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "dev.zio" }
}]
labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1
Superseded by #424.
|
gharchive/pull-request
| 2023-01-17T21:19:37 |
2025-04-01T04:35:42.403974
|
{
"authors": [
"scala-steward"
],
"repo": "reibitto/command-center",
"url": "https://github.com/reibitto/command-center/pull/414",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1797701457
|
Update scalafmt-core to 3.7.8
About this PR
📦 Updates org.scalameta:scalafmt-core from 3.7.2 to 3.7.8
📜 GitHub Release Notes - Version Diff
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalameta", artifactId = "scalafmt-core" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "org.scalameta", artifactId = "scalafmt-core" }
}]
labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1
Superseded by #486.
|
gharchive/pull-request
| 2023-07-10T22:36:54 |
2025-04-01T04:35:42.408353
|
{
"authors": [
"scala-steward"
],
"repo": "reibitto/podpodge",
"url": "https://github.com/reibitto/podpodge/pull/484",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1148650145
|
🛑 BINUSMAYA (Academic Services) is down
In 60a61f9, BINUSMAYA (Academic Services) (https://binusmaya.binus.ac.id/LoginAD.php) was down:
HTTP code: 404
Response time: 922 ms
Resolved: BINUSMAYA (Academic Services) is back up in e4022d3.
|
gharchive/issue
| 2022-02-23T22:32:53 |
2025-04-01T04:35:42.420238
|
{
"authors": [
"reinhart1010"
],
"repo": "reinhart1010/binusmayadown",
"url": "https://github.com/reinhart1010/binusmayadown/issues/156",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2342472096
|
🛑 GreatNusa.com is down
In 136d1f6, GreatNusa.com (https://greatnusa.com) was down:
HTTP code: 403
Response time: 94 ms
Resolved: GreatNusa.com is back up in af1df62.
|
gharchive/issue
| 2024-06-09T20:21:44 |
2025-04-01T04:35:42.423368
|
{
"authors": [
"1010bots"
],
"repo": "reinhart1010/binusmayadown",
"url": "https://github.com/reinhart1010/binusmayadown/issues/7004",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2373429793
|
🛑 GreatNusa.com is down
In fb62c80, GreatNusa.com (https://greatnusa.com) was down:
HTTP code: 403
Response time: 77 ms
Resolved: GreatNusa.com is back up in 6d50d15.
|
gharchive/issue
| 2024-06-25T19:01:41 |
2025-04-01T04:35:42.426276
|
{
"authors": [
"1010bots"
],
"repo": "reinhart1010/binusmayadown",
"url": "https://github.com/reinhart1010/binusmayadown/issues/7708",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1549691753
|
Changing robot model
Hi Reinis,
How are you?
So finally mobile robot right now works pretty well.
As you remember I told that I want to change robot model itself and I couldn't find distinguishable place where robot model dynamics and kinematics are defined. I understand that it is within Gazebo environment but still did not find explicit location. I am new with Ros and Gazebo.
So where is it located and where I can change it, thank you beforehand!?
Originally posted by @Darkhan92 in https://github.com/reiniscimurs/DRL-robot-navigation/issues/44#issuecomment-1396446243
@Darkhan92
Robot that is used in the simulation is set in: https://github.com/reiniscimurs/DRL-robot-navigation/blob/c8321b22339115c983fb1d6575ba48ca8b57025a/TD3/assets/multi_robot_scenario.launch#L9
a launch file of the robot is referenced there. To see the call to the robot model and following sequence of the robot setup, it might be easier to look into the Noetic-Turtlebot branch. You will have to create a call to the launch file of your robot from your robot package.
catkin_ws/src/turtlebot3/turtlebot3_bringup/launch/turtlebot3_model.launch
In the robot launch file you can specify the xacro file of your robot description:
https://github.com/reiniscimurs/DRL-robot-navigation/blob/c8321b22339115c983fb1d6575ba48ca8b57025a/catkin_ws/src/turtlebot3/turtlebot3_bringup/launch/turtlebot3_model.launch#L7
Then you can start the required nodes.
To spawn the robot in Gazebo:
https://github.com/reiniscimurs/DRL-robot-navigation/blob/c8321b22339115c983fb1d6575ba48ca8b57025a/catkin_ws/src/turtlebot3/turtlebot3_bringup/launch/turtlebot3_model.launch#L10
Robot state publisher:
https://github.com/reiniscimurs/DRL-robot-navigation/blob/c8321b22339115c983fb1d6575ba48ca8b57025a/catkin_ws/src/turtlebot3/turtlebot3_bringup/launch/turtlebot3_model.launch#L12
Publish the joint state node:
https://github.com/reiniscimurs/DRL-robot-navigation/blob/c8321b22339115c983fb1d6575ba48ca8b57025a/catkin_ws/src/turtlebot3/turtlebot3_bringup/launch/turtlebot3_model.launch#L17
If you have the xacro file and urdf files of your robot, this change should not be too difficult.
Hi,
Yes, stl files are responsible for the visualization of each element of the robot. Changing the wheel size and distance in the description will effectively describe the kinematics, but will not update the stl files. You should change them or update them accordingly. Also, check if differential drive plugin values are consistent with your new wheel sizes.
Hi,
Sorry, I am not an expert on model creation and maintenance. This seems like a rather complex issue and I do not think I can help you with this task.
|
gharchive/issue
| 2023-01-19T18:11:50 |
2025-04-01T04:35:42.432794
|
{
"authors": [
"reiniscimurs"
],
"repo": "reiniscimurs/DRL-robot-navigation",
"url": "https://github.com/reiniscimurs/DRL-robot-navigation/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
667152856
|
Crash
Hi. We have a crash on application started. Is there any ideas how to fix it?
From what I can see it says ensureInitializationComplete. I'm not sure what your problem is exactly, but maybe calling WidgetsFlutterBinding.ensureInitialized(); before anything else can help.
|
gharchive/issue
| 2020-07-28T15:10:37 |
2025-04-01T04:35:42.437267
|
{
"authors": [
"DolgushevEvgeny",
"mehdok"
],
"repo": "rekab-app/background_locator",
"url": "https://github.com/rekab-app/background_locator/issues/112",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
681025526
|
Feature/shows background location indicator
iOS specific.
Add showBackgroundLocationIndicator to iOS settings to show a blue bar during background tracking.
This patch depend on #126
Cool feature;
@mehdok
Thanks man!
|
gharchive/pull-request
| 2020-08-18T13:04:02 |
2025-04-01T04:35:42.438597
|
{
"authors": [
"fnicastri",
"mehdok"
],
"repo": "rekab-app/background_locator",
"url": "https://github.com/rekab-app/background_locator/pull/127",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
111161339
|
Add documentation to repo retrieve
This action was added when repos got identifiers. It was not
documented, though.
PDC-1071
Looks good to me.
|
gharchive/pull-request
| 2015-10-13T11:32:18 |
2025-04-01T04:35:42.470610
|
{
"authors": [
"erichuanggit",
"lubomir"
],
"repo": "release-engineering/product-definition-center",
"url": "https://github.com/release-engineering/product-definition-center/pull/159",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1552053885
|
Plugin produces non critical error
When running the SonarLint plugin the following error is produced.
> Task :...:sonarlintMain
Errors occurred while build effective model from /home/meijer/.gradle/caches/modules-2/files-2.1/org.eclipse.platform/org.eclipse.equinox.preferences/3.10.100/43fe3c49d5a6a30090b7081015e4a57bd0a6cb98/org.eclipse.equinox.preferences-3.10.100.pom:
'modelVersion' must be one of [4.0.0] but is '4.0'. in org.eclipse.platform:org.eclipse.equinox.preferences:3.10.100
Errors occurred while build effective model from /home/meijer/.gradle/caches/modules-2/files-2.1/org.eclipse.platform/org.eclipse.core.contenttype/3.8.200/e2fdb068262514474d73f236adaa821d9c861786/org.eclipse.core.contenttype-3.8.200.pom:
'modelVersion' must be one of [4.0.0] but is '4.0'. in org.eclipse.platform:org.eclipse.core.contenttype:3.8.200
When running ./gradlew dependencies you can see this is a problem in a transitive dependency from SonarLint core. Any ideas about a fix? This probably needs to be fixed upstream, but I guess this issue can be used simply for a version bump of SonarLint core.
@Meijuh, a patch version of this plugin is automatically released when SonarLint artifacts get updated. There is some delay for such automatic releases, like 2-3 weeks, though.
To be honest, I don't think it's possible to do anything on this plugin's side. These Eclipse artifacts seem to have incorrect POMs.
Is there anything I can help with?
I could raise an issue upstream. Do you know where I should do that?
We have two options here:
Raise an issue somewhere in Eclipse project, and then, when POMs are fixed, raise another issue in SonarLint
Raise an issue in SonarLint asking to use other versions of Eclipse dependencies
I suppose the second option is easier. Unfortunately, I don't know where bug trackers of these projects are.
@Meijuh by the way, what Gradle and name.remal.sonarlint versions are you using? I'm asking because name.remal.sonarlint validates itself with the latest published version, and I can't see this warning. Maybe, I'm just missing something
I am using Gradle 7.5.1 and name.remal.sonarlint 3.0.3. However, this sonar plugin is not the only plugin I am using. I thought maybe that another plugin could override the eclipse pom version that this plugin requires. But running ./gradlew buildEnvironment does not even list the eclipse jar as a dependency for name.remal.sonarlint. This is a bit strange unless sonarlint is using a shadow jar?
Do you use Spring's dependency management Gradle plugin? If yes, could you please check that it isn't related to issues like https://github.com/gradle/gradle/issues/15339 or https://github.com/spring-gradle-plugins/dependency-management-plugin/issues/186?
This issue was originally reported at https://github.com/eclipse-platform/eclipse.platform/issues/180
I submitted a PR to the sonar project at https://github.com/SonarSource/sonar-java/pull/4421 to update the dependencies to fix this issue, so we need them to merge the PR and make a new release.
This project could override the offending dependency versions as a workaround until that release is made.
@candrews, I could make the project override dependency versions. However, I'm simply afraid that something can get broken.
My suggestions are:
Check if you use the latest version of name.remal.sonarlint plugin. If you don't, please try to update.
If the plugin update doesn't help, please create a minimal reproduction. I'm asking because I use this plugin myself and don't see these warnings. I assume that some other Gradle plugin interferes.
As a last resort, I can override dependencies. To do that, I'll need:
Full list of dependencies to override.
Versions that should be overridden and target versions. Please prefer patch updates.
Changelogs for the dependencies. I'd like to minimize risks as much as possible.
The problem is that Spring's Dependency management plugin sees the invalid pom in the dependency tree and errors out, as you noticed in https://github.com/remal-gradle-plugins/sonarlint/issues/79#issuecomment-1412985499
Here's a minimal reproducer:
plugins {
    id("io.spring.dependency-management") version "1.1.2"
    id("java")
}

repositories {
    gradlePluginPortal()
    mavenCentral()
}

dependencies {
    implementation("org.sonarsource.java:sonar-java-plugin:7.22.0.31918")
}
To workaround the issue, one can add this:
dependencyManagement {
    // workaround for https://github.com/spring-gradle-plugins/dependency-management-plugin/issues/365
    applyMavenExclusions = false
}
It sure would be nice to not have to use that workaround, but maybe there's nothing that can or should be done in your plugin :shrug:
I tried implementing a workaround but failed. Not sure if it's possible to fix this on my end. Any suggestions are welcome!
Hello @candrews,
I found a potential way to fix this. Is it still reproducible with the latest version of the plugin?
If yes, could you please create a minimal reproduction? What task should be executed? I'm asking because I couldn't reproduce it.
Hello @candrews,
I found a potential way to fix this. Is it still reproducible with the latest version of the plugin?
If yes, could you please create a minimal reproduction? What task should be executed? I'm asking because I couldn't reproduce it.
I can still reproduce it with version 3.3.5 of this project.
You can find a full example project that reproduces the issue at https://github.com/candrews/jumpstart/pull/986 The GitHub Actions CI shows the error output. If you clone that project and grab that commit, ./gradlew build with show the problem. Hopefully that's sufficiently minimal and detailed to provide you with the information you need.
The error is:
> Task :sonarlintMain FAILED
Errors occurred while build effective model from /home/runner/.gradle/caches/modules-2/files-2.1/org.eclipse.platform/org.eclipse.equinox.preferences/3.10.100/43fe3c49d5a6a30090b7081015e4a57bd0a6cb98/org.eclipse.equinox.preferences-3.10.100.pom:
Errors occurred while build effective model from /home/runner/.gradle/caches/modules-2/files-2.1/org.eclipse.platform/org.eclipse.core.contenttype/3.8.200/e2fdb068262514474d73f236adaa821d9c861786/org.eclipse.core.contenttype-3.8.200.pom:
https://github.com/candrews/jumpstart/actions/runs/5802462920/job/15728888552?pr=986#step:10:287
@Meijuh, @candrews, fixed in v3.3.6.
@Meijuh, @candrews, fixed in v3.3.6.
I'm not so sure... It appears a dependency is missing:
> Could not resolve all files for configuration ':sonarlintCoreClasspath'.
> Did not resolve 'org.ow2.asm:asm:9.0' which is part of the dependency lock state
https://github.com/candrews/jumpstart/actions/runs/5814995983/job/15765736101?pr=986#step:10:293
Sorry, but I couldn't reproduce it locally. Without local reproduction, I can't debug and fit it. Can you create a minimal reproduction?
@remal for me, the fix is working and the error messages are gone.
@candrews could it be you need to update the lock state first? Something like running ./gradlew buildEnvironment --write-locks before running ./gradlew build?
@Meijuh, thanks for the confirmation. Closing the issue.
@candrews, it's hard to say what exactly went wrong in your repo because I couldn't reproduce it locally. Please open another issue with minimal reproduction.
|
gharchive/issue
| 2023-01-22T09:22:24 |
2025-04-01T04:35:42.495565
|
{
"authors": [
"Meijuh",
"candrews",
"remal"
],
"repo": "remal-gradle-plugins/sonarlint",
"url": "https://github.com/remal-gradle-plugins/sonarlint/issues/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
933010873
|
collapse is not supported
<details> not supported
Using <details> tags in GFM is supposed to create collapsible sections. By default this is not happening in react-markdown when gfm is added as a plugin, and I see no option to enable it.
Your environment
OS: MacOS
Packages: React-markdown
Steps to reproduce
Add this section to markdown
<details>
<summary> Title</summary>
Main Section
</details>
Add remark-gfm as react-markdown plugin
Codesandbox reproduction
Expected behavior
A collapsible should replace the <details> section
Actual behavior
The <details> content is rendered as an escaped string.
The link you provided is to GitLab, not GitHub
Details and collapse are not a part of GFM https://github.github.com/gfm/
They're plain HTML which can be supported with https://github.com/remarkjs/react-markdown#appendix-a-html-in-markdown
https://codesandbox.io/s/react-markdown-debug-forked-72z66
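A minimal sketch of that approach, assuming a react-markdown version with the remarkPlugins/rehypePlugins props and the rehype-raw / rehype-sanitize packages installed (the schema tweak is an assumption, in case details/summary are not already allowed by the default sanitize schema):

```tsx
import React from "react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
import rehypeRaw from "rehype-raw";
import rehypeSanitize, { defaultSchema } from "rehype-sanitize";

const markdown = `
<details>
<summary>Title</summary>

Main Section

</details>
`;

// Extend the default sanitize schema so details/summary survive sanitizing.
const schema = {
  ...defaultSchema,
  tagNames: [...(defaultSchema.tagNames || []), "details", "summary"],
};

export default function Collapsible() {
  return (
    <ReactMarkdown
      remarkPlugins={[remarkGfm]}
      rehypePlugins={[rehypeRaw, [rehypeSanitize, schema]]}
    >
      {markdown}
    </ReactMarkdown>
  );
}
```

Keeping rehype-sanitize after rehype-raw mirrors what GitHub itself does: parse the embedded HTML, then allow only a safe subset of tags through.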
@ChristianMurphy I provided that link because the docs are better, but this is part of GFM. It's even used in the react-markdown readme. It's listed in the GFM spec.
Start condition: line begins the string < or </ followed by one of the strings (case-insensitive) address, article, aside, base, basefont, blockquote, body, caption, center, col, colgroup, dd, details, dialog, dir, div, dl, dt, fieldset, figcaption, figure, footer, form, frame, frameset, h1, h2, h3, h4, h5, h6, head, header, hr, html, iframe, legend, li, link, main, menu, menuitem, nav, noframes, ol, optgroup, option, p, param, section, source, summary, table, tbody, td, tfoot, th, thead, title, tr, track, ul, followed by whitespace, the end of the line, the string >, or the string />.
End condition: line is followed by a blank line.
I'm sorry for the confusing links to gitlab, but this is most certainly part of the GitHub markdown spec.
An HTML block is a group of lines that is treated as raw HTML (and will not be escaped in HTML output).
right, and remark-gfm does does that.
It puts the contents into an HTML block https://astexplorer.net/#/gist/35552bdcd50ad974284c7acd9437afaf/dcb1fc32ab8b2478d866003aefa2942f56f83f22
It is just the parser, not the part that translates to markdown to HTML (that's remark-rehype, and is part of react-markdown by default) or the part that takes raw HTML and parses it into an HTML abstract syntax tree (rehype-raw which can be included in react-markdown but is not by default).
remark-gfm is not producing the same output that github does when it takes the same markdown as input. Github produces a collapsible, remark-gfm produces plain text.
The output of remark-gfm on the README of react-markdown is not the same ouput that GitHub has.
Again, remark-gfm does not produce HTML.
The output of remark-gfm on the README of react-markdown is not the same ouput that GitHub has.
You are welcome to add rehype-raw and rehype-santize to the example if you want.
What does remark-gfm produce? When it sees github tables syntax the final output has an HTML table. remark-gfm might not produce that directly but without remark-gfm the HTML is different, so it produces HTML indirectly.
I'd like to avoid adding those packages if possible. I expected remark-gfm to create an experience that was consistent with GitHub, and this delta seems like a bug to me. Whatever level of the unified pipeline this packages fits into it impacts the final HTML output based in the markdown contents, and it seems like the its trying to do that in a way thats consistent with GitHub.
Perhaps I misunderstand the goal of this package though.
What does remark-gfm produce?
an Abstract Syntax Tree https://unifiedjs.com/learn and https://github.com/syntax-tree/mdast
remark-gfm might not produce that directly but without remark-gfm the HTML is different
Right, remark-gfm adds some additional markdown abstract syntax tree types https://github.com/syntax-tree/mdast#gfm.
Which remark-rehype can interpret.
I expected remark-gfm to create an experience that was consistent with GitHub, and this delta seems like a bug to me.
It is consistent with how it is parsed on GitHub.
Whatever level of the unified pipeline this packages fits into it impacts the final HTML output based in the markdown contents, and it seems like the its trying to do that in a way thats consistent with GitHub.
This handles markdown -> mdast and mdast -> markdown, it does both correctly.
mdast -> hast is handled by remark-rehype.
Gotcha, so any features of GitHub flavored markdown that require handling HTML tags are outside of this package's scope. It might be worth pointing out in the README that this package will not close the gap between CommonMark and the behavior of markdown when viewed on the GitHub site.
@kyeotic the handling of details you see on github or gitlab has nothing to do with those platforms. It's markdown, commonmark, that handles HTML - GFM does nothing with them. You don't need remark-gfm for them.
Actually the more I think about this the less it makes sense. This whole unified pipeline that moves from markdown to AST to HTML is an architecture of remark, not of GitHub or GFM.
When GitHub sees a markdown file with tables and <details> its final result is a page with HTML tables and HTML <details>.
When react-markdown + remark-gfm see the same content they do not produce the same result.
All the intermediate steps (parsing, producing AST, generating HTML) are all ways remark has decided to slice the problem up. However if the final result is different than GitHub then why does it matter that the issue is with HTML or AST or markdown? Those steps are all means to an end, and remark-gfm arrives at a different end than GitHub.
The pipeline is modified by remark-gfm to support tables. Why can't it modify the pipeline to support <details>? Is it impossible to represent <details> with an AST? Is it impossible to parse?
The delta still isn't closed by rehype-raw + rehype-sanitize because GitHub blocks most HTML tags, and rehype-raw + rehype-sanitize lets most of them through. Perhaps another library designed to match GitHub's behavior is needed.
Again, details has nothing to do with GFM or GitHub.
You guys keep saying that, but GitHub produces a collapsible. You can see it in the screenshot I posted above, or by looking at the README for react-markdown. I don't know why you think those experiences on github.com have nothing to do with GitHub
We’re saying it repeatedly because you seem to ignore it.
For an analogy, you’re saying that you see a book by a certain author and concluding from that that all books are by that author.
That is CommonMark. GitHub uses its own version of markdown, called GFM, which is a superset of CommonMark.
Here is how markdown works: https://spec.commonmark.org/dingus/?text=<details>
<summary>Some text<%2Fsummary>
More stuff
<%2Fdetails> (please see the result, but also the AST tab).
You can read the CommonMark spec to figure out why this is the way it is. Why there are two HTML nodes.
React can’t deal with strings of HTML. That is a limitation imposed by React and why react-markdown can’t do the same as CommonMark/GFM by default.
The solution is complex and heavy: include an HTML parser. That’s why it’s not done by default.
Note that GitHub itself makes the exact same decisions on how to handle HTML in markdown: include an HTML parser to parse things like details, and sanitize it (allowing details/summary but disallowing scripts).
Here is how markdown works: https://spec.commonmark.org/dingus/?text=<details>
<summary>Some text<%2Fsummary>
More stuff
<%2Fdetails> (please see the result, but also the AST tab).
Showing me that CommonMark behaves differently than GitHub only proves that this isn't part of CommonMark, which I have never argued. I don't expect this to work with CommonMark, I expect it to work with GFM. Here is how markdown works on GitHub (please see the collapsible).
Note that GitHub itself makes the exact same decisions on how to handle HTML in markdown, include an HTML parser to parse things like details, and how to sanitize it (allowing details/summary but disallowing scripts).
Yes, this is exactly what I am saying. GitHub has chosen to give special behavior to a subset of HTML that includes <details>. That's the delta between GitHub (not CommonMark, GFM) and this package that is at issue.
The solution is complex and heavy: include an HTML parser. That’s why it’s not done by default.
It's not done by this package, but it is done by GitHub. Note you don't need a general HTML parser, you only need to be able to parse two tags (details and summary), which do not support any attributes or other complexities that would make it harder than looking for the literal strings "<details>" and "</details>". This would be much lighter than rehype-raw.
Details is an html element: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/details.
It again does not receive special treatment from github. Hence my example, while showing the source instead of the rendered result, is exactly the same on github as on any commonmark supporting place.
Showing me that CommonMark behaves differently that GitHub only proves that this isn't part of CommonMark
He's showing that it behaves the same as GFM.
GitHub has chosen to give special behavior to a subset of HTML that includes <details>. That's the delta between GitHub (not CommonMark, GFM) and this package that is at issue.
You're missing a key point here.
CommonMark, GFM, and most markdown flavors embed HTML.
HTML are different document(s), included inside the Markdown documents.
Markdown parsers will split the content into embeddings, here as a raw html node.
But how to handle the HTML itself is outside of what Markdown does.
For an analogy, think of HTML and SVG: SVG documents can be put in HTML pages, but they aren't HTML.
The HTML sees something SVG like, and it hands off to the SVG engine to do the rest.
Here the same concept applies, Markdown sees something HTML like, and it can (optionally) hand it off to an HTML engine to do additional processing.
I think I understand the point you are trying to make. It still seems to me like a technical distinction that shouldn't be relevant, but that's because I'm focusing on the end result. This package isn't trying to match GitHub's result, it's just trying to parse some markdown extensions that GitHub is adding to CommonMark.
I do appreciate you both taking the time to explain this from so many angles. Thank you. At least I learned something today :)
It still seems to me like a technical distinction that shouldn't be relevant
It usually isn't, in most cases plain text strings could be passed through just fine, but
React can’t deal with strings of HTML. That is a limitation imposed by React and why react-markdown can’t do the same as CommonMark/GFM by default.
it's just trying to parse some markdown extensions that GFM is adding to CommonMark
Right
|
gharchive/issue
| 2021-06-29T18:48:15 |
2025-04-01T04:35:42.531170
|
{
"authors": [
"ChristianMurphy",
"kyeotic",
"wooorm"
],
"repo": "remarkjs/remark-gfm",
"url": "https://github.com/remarkjs/remark-gfm/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
98459672
|
Presence of /var/run/docker.sock prevents docker daemon from starting
When launching an instance, the presence of /var/run/docker.sock/ is preventing docker from starting. As such, instances aren't getting registered with ECS. Forcefully removing this directory and restarting docker seems to solve the problem.
FWIW, using Stacker to spin things up, there's been at least one case where this was a problem on one instance and not the other, i.e., one (of two) instance has the /var/run/docker.sock/ directory preventing docker from starting, while the other instance didn't have that directory and Docker started up just fine.
I'm going to close this, since I believe #2 should fix it. If you see it again, let me know!
|
gharchive/issue
| 2015-07-31T19:50:25 |
2025-04-01T04:35:42.549973
|
{
"authors": [
"brianz",
"phobologic"
],
"repo": "remind101/empire_ami",
"url": "https://github.com/remind101/empire_ami/issues/1",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1805128869
|
remix dev v2 unexpectedly clears the console
What version of Remix are you using?
1.18.1
Are all your remix dependencies & dev-dependencies using the same version?
[X] Yes
Steps to Reproduce
Running the following command
remix dev --no-restart -c "./src/server/index.js"
will clear the console right before it outputs
💿 remix dev
Expected Behavior
The console should not be cleared.
Actual Behavior
The console is cleared, also removing output from other running tasks.
Duplicate of #6709
|
gharchive/issue
| 2023-07-14T16:13:58 |
2025-04-01T04:35:42.558267
|
{
"authors": [
"garth",
"pcattori"
],
"repo": "remix-run/remix",
"url": "https://github.com/remix-run/remix/issues/6838",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2216045292
|
Vite fails running react-dnd only in production builld
Reproduction
Add the DndProvider somewhere on the page. Below is an example where I added it to our layout:
<DndProvider backend={HTML5Backend}>{children}</DndProvider>
I have been able to get around this error for now by adding the following to our root file:
// @ts-expect-error hack to get around react-dnd + vite issue
globalThis.global = typeof window !== "undefined" ? window : {};
System Info
System:
OS: Linux 5.15 Ubuntu 20.04.5 LTS (Focal Fossa)
CPU: (24) x64 AMD Ryzen 9 3900XT 12-Core Processor
Memory: 29.05 GB / 31.31 GB
Container: Yes
Shell: 5.8 - /usr/bin/zsh
Binaries:
Node: 20.9.0 - ~/.nvm/versions/node/v20.9.0/bin/node
Yarn: 1.22.5 - ~/.yarn/bin/yarn
npm: 10.1.0 - ~/.nvm/versions/node/v20.9.0/bin/npm
npmPackages:
@remix-run/cloudflare: 2.8.1 => 2.8.1
@remix-run/cloudflare-pages: 2.8.1 => 2.8.1
@remix-run/dev: 2.8.1 => 2.8.1
@remix-run/react: 2.8.1 => 2.8.1
vite: ^5.2.7 => 5.2.7
Used Package Manager
npm
Expected Behavior
For it to work. The frustrating part is that it works fine in dev, but is broken in the prod build.
Actual Behavior
✘ [ERROR] ReferenceError: window is not defined
at getGlobalContext
(file:///home/inssein/projects/utiliti.dev/node_modules/react-dnd/src/core/DndProvider.tsx:92:51)
at createSingletonDndContext
(file:///home/inssein/projects/utiliti.dev/node_modules/react-dnd/src/core/DndProvider.tsx:72:28)
at getDndContextValue
(file:///home/inssein/projects/utiliti.dev/node_modules/react-dnd/src/core/DndProvider.tsx:59:18)
at Component
(file:///home/inssein/projects/utiliti.dev/node_modules/react-dnd/src/core/DndProvider.tsx:29:39)
at renderWithHooks
(file:///home/inssein/projects/utiliti.dev/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5658:16)
at renderIndeterminateComponent
(file:///home/inssein/projects/utiliti.dev/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5731:15)
at renderElement
(file:///home/inssein/projects/utiliti.dev/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5946:7)
at renderMemo
(file:///home/inssein/projects/utiliti.dev/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5868:3)
at renderElement
(file:///home/inssein/projects/utiliti.dev/node_modules/react-dom/cjs/react-dom-server.browser.development.js:6011:11)
at window
(file:///home/inssein/projects/utiliti.dev/node_modules/react-dom/cjs/react-dom-server.browser.development.js:6104:11)
Have you tried wrapping the DndProvider in ClientOnly from
https://github.com/sergiodxa/remix-utils/tree/main?tab=readme-ov-file#clientonly
I have to use this for @hello-pangea/dnd which is an updated fork of react-beautiful-dnd
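A minimal sketch of what that wrapping could look like (the remix-utils import path depends on the installed version, and the fallback prop here is an assumption):

import { ClientOnly } from "remix-utils/client-only";
import { DndProvider } from "react-dnd";
import { HTML5Backend } from "react-dnd-html5-backend";

export function DndLayout({ children }: { children: React.ReactNode }) {
  return (
    // Render the provider only in the browser; during SSR just render the children.
    <ClientOnly fallback={children}>
      {() => <DndProvider backend={HTML5Backend}>{children}</DndProvider>}
    </ClientOnly>
  );
}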
It would probably work, but it would defeat the purpose of using Remix at this point because it's part of my root layout.
Not sure how else to get around it, remix uses SSR and there is no window object on the server, only the client?
I feel like this is a bug I will possibly run into in the future. For your builds, have you tried having the provider at a component level as opposed to at the page level, and ensuring that that component is client-only?
|
gharchive/issue
| 2024-03-29T22:12:01 |
2025-04-01T04:35:42.562865
|
{
"authors": [
"JasonColeyNZ",
"brianbancroft",
"inssein"
],
"repo": "remix-run/remix",
"url": "https://github.com/remix-run/remix/issues/9165",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1019564019
|
How to write multiple queries?
How do I write multiple queries in query(), or chain method?
A general use case is to get an aggregate count along with the query -- a common use case for pagination:
query MyQuery {
table(limit: 10, offset: 20, where: {field: {_eq: 5}}) {
id
}
tableAggregate(where: {field: {_eq: 5}}) {
aggregate {
count
}
}
}
The problem here is that I can't write where, limit, or offset in the normal query function, and I can't write multiple queries in the chain method where I can write a where condition.
It should be documented again :')
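A hedged sketch of how this can typically be expressed with a genql-generated client, using the field names from the question (the generated module path and client options are assumptions; arguments are passed via the __args key):

import { createClient } from "./generated"; // genql-generated client

const client = createClient({ url: "https://example.com/graphql" });

const result = await client.query({
  table: {
    __args: { limit: 10, offset: 20, where: { field: { _eq: 5 } } },
    id: true,
  },
  tableAggregate: {
    __args: { where: { field: { _eq: 5 } } },
    aggregate: { count: true },
  },
});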
|
gharchive/issue
| 2021-10-07T03:18:27 |
2025-04-01T04:35:42.577602
|
{
"authors": [
"RohitKaushal7",
"sephi-dev"
],
"repo": "remorses/genql",
"url": "https://github.com/remorses/genql/issues/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
122945107
|
Upcoming change in ISO 4217 for Belarusion Ruble (Amendment Number 161)
From 1 July 2016 to 31 December 2016, the banknotes of the 2000 series and the banknotes and coins of the 2009 series will be in parallel circulation and subject to obligatory acceptance without restriction in all kinds of payments to the ratio of 10,000 old for one new ruble (10,000:1).
From 1 January 2017 through 31 December 2021, the currency units of the 2000 series will be exchanged for the currency units of the 2009 series in any sum without restriction and charging commission.
See http://www.currency-iso.org/en/shared/amendments/iso-4217-amendment.html
In July 2016 a new Belarusian ruble will be introduced, at a rate of 1 new ruble = 10,000 old rubles. New and old rubles will circulate in parallel from July 1 to December 31, 2016. Belarus will issue coins for general circulation for the first time. Seven denominations of banknotes (5, 10, 20, 50, 100, 200 and 500 rubles) and eight denominations of coins (1, 2, 5, 10, 20 and 50 kapeykas, and 1 and 2 rubles) will be put into circulation on July 1, 2016.[5][6] The banknotes will have security threads and will show 2009 as an issue date (the date of an unsuccessful attempt at currency reform). Their designs will be similar to those of the euro. The future ISO 4217 code will be BYN.[7]
|
gharchive/issue
| 2015-12-18T13:03:29 |
2025-04-01T04:35:42.608968
|
{
"authors": [
"remyvd"
],
"repo": "remyvd/NodaMoney",
"url": "https://github.com/remyvd/NodaMoney/issues/33",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
220756264
|
APP-13 - Docker development
Docker development
Properly enable development using the environment set up with Docker.
Fix webpack with Docker container
For application development using Docker, the Docker container is not synchronized with the local application source.
Task canceled
|
gharchive/issue
| 2017-04-10T20:10:41 |
2025-04-01T04:35:42.624809
|
{
"authors": [
"renatobenks"
],
"repo": "renatobenks/CodeRockrApplication",
"url": "https://github.com/renatobenks/CodeRockrApplication/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
53913501
|
move the refresh to live in parseOptions to make sure it works after hydration
basically what the title says.
it's a quick update of: https://github.com/rendrjs/rendr/pull/304 - and adds tests around it.
Coverage increased (+0.08%) when pulling 5de08ed9bffaeb6520950adb026df3365c4a19e8 on rendr-on-refresh-hydrate into 0de17f217e97f15d6ecca4b9a400e80cb1a80bcf on master.
Did not even know this option existed. LGTM!
:+1:
thanks :+1:
|
gharchive/pull-request
| 2015-01-09T20:52:03 |
2025-04-01T04:35:42.637042
|
{
"authors": [
"coveralls",
"mattpardee",
"quaelin",
"saponifi3d",
"spikebrehm"
],
"repo": "rendrjs/rendr",
"url": "https://github.com/rendrjs/rendr/pull/420",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1637475740
|
fix invalid empty lists in cog.yaml
resolves #368. pls review it, @bfirsh
Nice, thanks! Would you mind writing a test?
sure, test cog init is enough?
@bfirsh @zeke
Hi @pysqz. Thanks so much for your contribution. I just merged https://github.com/replicate/cog/pull/1073, which specifies an array for type instead of using anyOf.
|
gharchive/pull-request
| 2023-03-23T12:44:58 |
2025-04-01T04:35:42.717062
|
{
"authors": [
"bfirsh",
"ccxx0",
"mattt",
"pysqz"
],
"repo": "replicate/cog",
"url": "https://github.com/replicate/cog/pull/965",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1262607287
|
Retry GET requests
This is primarily for iterating through predict(), where if an
exception is thrown the client has no way of restarting the iterator.
Signed-off-by: Ben Firshman ben@firshman.co.uk
Shipping for user.
|
gharchive/pull-request
| 2022-06-07T01:26:51 |
2025-04-01T04:35:42.718375
|
{
"authors": [
"bfirsh"
],
"repo": "replicate/replicate-python",
"url": "https://github.com/replicate/replicate-python/pull/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1659107554
|
Load Replit DB URL from a file
Why
The JWT used for authenticating into the Replit DB is placed on the Repl as an environment variable on boot and lasts for 30h or so. This is fine for shorter lasting Repls, but if we want to have long-running Repls we need a mechanism to refresh the DB token.
What changed
Let's make it so that we read the DB URL data from a file, which can be refreshed while a program is running. If the file does not exist or we fail to read from it, fall back to the environment variable. We then run this hourly.
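A minimal sketch of the lookup order described above (the file path and function name here are hypothetical, not the actual implementation):

import os
from typing import Optional

# Hypothetical location of the refreshable DB URL file.
DB_URL_FILE = "/tmp/replitdb"


def get_db_url() -> Optional[str]:
    """Prefer the refreshable file; fall back to the boot-time env var."""
    try:
        with open(DB_URL_FILE) as f:
            url = f.read().strip()
            if url:
                return url
    except OSError:
        # File missing or unreadable: fall through to the environment variable.
        pass
    return os.environ.get("REPLIT_DB_URL")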
Testing
Fork this Repl
Change the version of the replit dependency in pyproject.toml to the latest commit in this branch. The resulting [tool.poetry.dependencies] section should look something like the following.
[tool.poetry.dependencies]
python = ">=3.10.0,<3.11"
numpy = "^1.22.2"
Flask = "^2.2.0"
urllib3 = "^1.26.12"
replit = { git = "https://github.com/replit/replit-py.git", rev = "71466f774ef1c0d737259d249085dc42606424cd" }
Run poetry update
Run the Repl and make sure the application works
Deploy the Repl and make sure that it still works. Also check that both the plain Repl and the deployment are using the same database by checking that the values shown on the page match.
Current dependencies on/for this PR:
master
PR #139 👈
This comment was auto-generated by Graphite.
Probably meant poetry update instead of flask update?
@replit/micromanager merge skip ci
@replit/micromanager merge test
Is a timer in a thread really needed? Couldn't the db class just store the time of the last URL refresh? Then every db operation could check whether the last refresh was over an hour ago, and only refresh the URL then.
Is this ever gonna get merged? I've seen a few users complaining about no support for Repl DB in deployed Python Repls.
What is the timeline on this update? This is a high priority for me given always-on is unstable now.
This fix is already in pypi (version 3.3.0 https://pypi.org/project/replit/).
Just wondering, but shouldn't this PR be merged then?
Nice
|
gharchive/pull-request
| 2023-04-07T19:10:46 |
2025-04-01T04:35:42.728914
|
{
"authors": [
"PotentialStyx",
"airportyh",
"brenoafb",
"lafkpages",
"triviatroy"
],
"repo": "replit/replit-py",
"url": "https://github.com/replit/replit-py/pull/139",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
2173143511
|
Groq AI support
https://console.groq.com/docs/openai
+1
You could use Portkey-AI/gateway to solve it. You can find more details in this release.
|
gharchive/issue
| 2024-03-07T07:18:25 |
2025-04-01T04:35:42.730735
|
{
"authors": [
"VectorZhao",
"Whichbfj28",
"reply2future"
],
"repo": "reply2future/xExtension-NewsAssistant",
"url": "https://github.com/reply2future/xExtension-NewsAssistant/issues/25",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1192162940
|
composer/composer update to version 2
Do you plan to upgrade composer/composer package to version 2.x ?
The composer maintainers do not allow new features in composer 1.x anymore. Please check: https://github.com/composer/composer/pull/9134#issuecomment-678655291
and this has not been done? I think we have been supporting version 2.0 for a long time, or is it something else?
I am not talking about supporting composer 2 packages/repositories, but upgrading composer/composer library for repman: https://github.com/repman-io/repman/blob/master/composer.json#L32
ah, ok, this looks to me to be done (i mean doable).
I will try to handle that in current week
I already have an MR for this ready locally @akondas, it's part of my upgrade to PHP 8 PR. Might want to look at #579 first, then I can get the other 2 ready (PHP 8 and SF 6)
|
gharchive/issue
| 2022-04-04T18:32:39 |
2025-04-01T04:35:42.733491
|
{
"authors": [
"akondas",
"beniamin",
"xvilo"
],
"repo": "repman-io/repman",
"url": "https://github.com/repman-io/repman/issues/578",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2493101935
|
Add borderRadius and clipBehavior properties to editor settings
Add borderRadius and clipBehavior to Editor Settings
borderRadius: Customize editor corner radius for better UI design.
clipBehavior: Control how content overflow is handled (e.g., no clipping, hard edge).
These properties enhance styling flexibility and content management. Without clipBehavior, setting borderRadius may lead to content overflowing out of the widget, resulting in a poor visual experience.
Consider adding support for setting a complete BoxDecoration in the future for even greater customization.
👏LGTM, thanks!
|
gharchive/pull-request
| 2024-08-28T22:41:21 |
2025-04-01T04:35:42.761830
|
{
"authors": [
"MegatronKing",
"Vorkytaka"
],
"repo": "reqable/re-editor",
"url": "https://github.com/reqable/re-editor/pull/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
335123340
|
[flower] Add authentication support
Issue Type
Feature request
Current Behavior
The chart only deploys Flower in an unauthenticated fashion.
Expected Behavior
Flower is a great monitoring tool for Celery and could be a useful addition to our production infrastructure. However, this can only happen if we have the ability to set up an authentication mechanism.
Possible Solution
Flower Authentication (official docs): https://flower.readthedocs.io/en/latest/auth.html
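For example, Flower's built-in basic auth could be wired in as an extra container argument; a rough sketch (the env var names here are hypothetical, only the --basic_auth flag comes from Flower's docs):

# one possible invocation the chart could template out
celery flower --broker=$CELERY_BROKER_URL --basic_auth=$FLOWER_USER:$FLOWER_PASSWORD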
GitMate.io thinks possibly related issues are https://github.com/request-yo-racks/charts/issues/5 (Add a Makefile to each chart), https://github.com/request-yo-racks/charts/issues/33 (Create a chart for Celery Flower), and https://github.com/request-yo-racks/charts/issues/38 (Write a README.md for the flower chart).
|
gharchive/issue
| 2018-06-23T19:01:20 |
2025-04-01T04:35:42.771749
|
{
"authors": [
"rgreinho"
],
"repo": "request-yo-racks/charts",
"url": "https://github.com/request-yo-racks/charts/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
65952384
|
request.post fails in 2.54
Actually, this fails in 2.54 version:
var post = require('request').post;
post('http://localhost:3000', {body: 'param=true'});
I get this issue
TypeError: undefined is not a function
at request.(anonymous function) (/Users/jerome/AffilyOne/dev/api/node_modules/request/index.js:64:16)
It appears that this commit fixes the issue https://github.com/request/request/commit/c43477eb0484c328d9734ad1809a4a2ff18615d3.
Is it possible to get it in a 2.54.n version ?
2.55 with the fix is coming very soon
2.55 is published on NPM
|
gharchive/issue
| 2015-04-02T15:31:28 |
2025-04-01T04:35:42.773917
|
{
"authors": [
"rehia",
"simov"
],
"repo": "request/request",
"url": "https://github.com/request/request/issues/1526",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
25373914
|
Reloading Cookies
Is it possible to reload an entire cookie object in the event of the server going down?
I can see the Object { store: { idx: { //Cookie } } }; but if I try to use the object instead of loading it with request.jar(); it gives me: { store: { idx: { [Object] } } }
Hi, I am trying to do this ? Please tell me where am I wrong
j = request.jar();
request.get( jar : j ...); // request call using j as cookie jar
var k = JSON.stringify(j); // storing j for storing in db.
var kJar = JSON.parse(j, function(key, value){
//converting date strings to objects.
});
However when I use kJar now for making a request, my session is lost. What should I do in this situation?
i'm not sure if this is frowned upon but you can do this
var MemoryCookieStore = require('tough-cookie/lib/memstore').MemoryCookieStore;
MemoryCookieStore.prototype.toJSON = function () {
return this.idx;
};
MemoryCookieStore.prototype.fromJSON = function (json) {
return typeof json === 'string' ? JSON.parse(json) : json;
};
Then you can just run these operations directly on the store like this, to deserialize:
var store = new MemoryCookieStore();
store.fromJSON('...');
var jar = request.jar(store);
as you can do this to serialize:
store.toJSON();
The proper way to do this is to interact with the store directly; unfortunately tough-cookie's API is pretty verbose. Here's some pseudo code on what you need to do (putCookie and getAllCookies are async functions):
// restore cookies
var cookies = ['...', '...'].map(Cookie.parse);
Promise.each(cookies, store.putCookie);
// save cookies
var cookies = store.getAllCookies().map(cookie => cookie.toString());
JSON.stringify(cookies);
|
gharchive/issue
| 2014-01-10T03:55:05 |
2025-04-01T04:35:42.777703
|
{
"authors": [
"Ratnesh-Github",
"icodeforlove",
"wishmedia"
],
"repo": "request/request",
"url": "https://github.com/request/request/issues/760",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2130459156
|
Update to Rerun 0.13
The new heuristics doesn't show the logged images:
worse, you can't show them because now the viewer regards everything that is not under the pinhole as a 3D space, thus not allowing any 2D objects in it 🤔
You can if you set the origin to /image0 or /image
let's go straight for 0.14 instead
|
gharchive/pull-request
| 2024-02-12T16:09:59 |
2025-04-01T04:35:42.792478
|
{
"authors": [
"Wumpf",
"emilk",
"jleibs"
],
"repo": "rerun-io/cpp-example-opencv-eigen",
"url": "https://github.com/rerun-io/cpp-example-opencv-eigen/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1846662941
|
Record file:line of Rerun log call
When user logs some data with Rerun we should log the file:line of that log call.
A user could use this to figure out where a log call came from. For instance, they may see an image in Rerun and wonder where in their source code that was logged.
When a user misuses Rerun we could also tell the user which file:line in their code caused the problem.
How to find the file:line
C++: use a RERUN_LOG macro that reads __FILE__ and __LINE__
Python: there is certainly some magic for this
Rust: Use #[track_caller] and Location::caller()
How to store it
We could have a CallLocation component with the file:line in it.
To save store space we could consider hashing that, and only logging the hash for each log call (except the first one).
For C++20 there is std::source_location, and there might be compiler extensions for earlier C++ versions.
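A minimal sketch of the Rust approach, assuming hypothetical names (log_points, CallLocation) rather than the real rerun API:

// Because of #[track_caller], Location::caller() reports the *user's* call site,
// not this function's body.
use std::panic::Location;

#[derive(Debug)]
struct CallLocation {
    file: &'static str,
    line: u32,
}

#[track_caller]
fn log_points(entity_path: &str, points: &[[f32; 3]]) {
    let loc: &'static Location<'static> = Location::caller();
    let call_location = CallLocation { file: loc.file(), line: loc.line() };
    // A real implementation would attach `call_location` (or a hash of it)
    // as a component alongside the logged data.
    println!("{} points logged to {} from {:?}", points.len(), entity_path, call_location);
}

fn main() {
    log_points("points/cloud", &[[0.0, 1.0, 2.0]]);
}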
|
gharchive/issue
| 2023-08-11T11:28:55 |
2025-04-01T04:35:42.795918
|
{
"authors": [
"emilk"
],
"repo": "rerun-io/rerun",
"url": "https://github.com/rerun-io/rerun/issues/2963",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2041490632
|
.ply data loader should support 2D point cloud & meshes
Our current DataLoader supports only 3D point clouds, which is by far the most user requested thing, but plys can do more!
Related: https://github.com/rerun-io/rerun/issues/6808
|
gharchive/issue
| 2023-12-14T11:21:05 |
2025-04-01T04:35:42.797344
|
{
"authors": [
"jleibs",
"teh-cmc"
],
"repo": "rerun-io/rerun",
"url": "https://github.com/rerun-io/rerun/issues/4532",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2750627239
|
Arrow schema and metadata tags are lost too easily
Context
Our plan for Sorbet (and tagged components) hinges on being able to reliably "tag" data with semantic information.
The working theory is that every column receives a set of tags:
path
archetype
field
component
We store these tags in the arrow metadata of the RecordBatch that we send to Rerun, and Rerun uses these tags for everything from driving batching logic to how we visualize the data in the viewer.
Problem
The problem is it is way way too easy to accidentally lose these tags in most of the arrow libraries.
The reason is fundamental to arrow:
Arrow Arrays have Datatypes
Arrow Tables have Fields
A field is a datatype + a name + a set of metadata (our tags)
As soon as you access a table column:
timestamps = dataset.column("log_time")
positions = dataset.column("/points:Position3D")
the metadata is gone.
You now have two bare arrays:
One of type List<TimestampNs>
One of type List<List<F32,3>>
Neither column has tags. No knowledge of the entity path. No knowledge of the components. Not only are they lost, but there's not even a way to store the data on the arrow array.
If you thought you could send this back to arrow with
rr.send_dataframe(pa.Table.from_arrays([timestamps, positions]))
it will not do what you want.
You need to manually re-apply the tags at the time that you create the table or else nothing works.
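A rough sketch of what that manual re-application looks like with pyarrow, reusing the bare timestamps/positions arrays from above (the metadata keys here are illustrative, not the exact tag names):

import pyarrow as pa

# Rebuild the fields with their tags attached before creating the table.
fields = [
    pa.field("log_time", timestamps.type, metadata={"rerun.kind": "index"}),
    pa.field(
        "/points:Position3D",
        positions.type,
        metadata={"rerun.entity_path": "/points", "rerun.component": "Position3D"},
    ),
]
table = pa.Table.from_arrays([timestamps, positions], schema=pa.schema(fields))
rr.send_dataframe(table)  # now the tags travel with the record batch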
One thought is that the proposal to expose archetypes maybe helps a little: https://github.com/rerun-io/rerun/issues/7436
Struct datatypes contain internal Field specifications, and it appears this can propagate data all the way into datafusion and out again:
import pyarrow as pa
from datafusion import SessionContext
ctx = SessionContext()
struct_array = pa.StructArray.from_arrays([pa.array([1,2,3])], fields=[pa.field('key', pa.int64(), metadata={'rerun': 'data'})])
table = pa.Table.from_arrays([struct_array], names=["my_col"])
df = ctx.from_arrow(table)
arrow = df.collect()
print(arrow[0].columns[0].type.field(0).metadata)
# {b'rerun': b'data'}
However, it still suffers from the same fundamental problem in that if you ever pull out the child array from the struct, the array will lose its metadata in the same way.
|
gharchive/issue
| 2024-12-19T15:05:54 |
2025-04-01T04:35:42.803503
|
{
"authors": [
"jleibs"
],
"repo": "rerun-io/rerun",
"url": "https://github.com/rerun-io/rerun/issues/8547",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2341697868
|
Revert "add system requirements section for linux to tools readme"
Reverts rescript-lang/rescript-vscode#1007 because it's obsolete due to #1013 .
related to #1006
Thanks!
|
gharchive/pull-request
| 2024-06-08T15:11:31 |
2025-04-01T04:35:42.804899
|
{
"authors": [
"woeps",
"zth"
],
"repo": "rescript-lang/rescript-vscode",
"url": "https://github.com/rescript-lang/rescript-vscode/pull/1014",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1848309808
|
Reduce files in package extension
master
npx vsce package -o rescript-vscode.vsix --no-git-tag-version
Executing prepublish script 'npm run vscode:prepublish'...
> rescript-vscode@1.18.0 vscode:prepublish
> npm run clean && npm run compile
> rescript-vscode@1.18.0 clean
> rm -rf client/out server/out
> rescript-vscode@1.18.0 compile
> tsc -b
This extension consists of 411 files, out of which 281 are JavaScript files. For performance reasons, you should bundle your extension: https://aka.ms/vscode-bundle-extension
. You should also exclude unnecessary files by adding them to your .vscodeignore: https://aka.ms/vscode-vscodeignore
DONE Packaged: rescript-vscode.vsix (411 files, 631.79KB)
CI
Now (esbuild)
npx vsce package -o rescript-vscode.vsix --no-git-tag-version
Executing prepublish script 'npm run vscode:prepublish'...
> rescript-vscode@1.18.0 vscode:prepublish
> npm run clean && npm run esbuild-server && npm run esbuild-client
> rescript-vscode@1.18.0 clean
> rm -rf client/out server/out
> rescript-vscode@1.18.0 esbuild-server
> npx esbuild server/src/server.ts --bundle --outfile=server/out/server.js --external:vscode --format=cjs --platform=node --minify
server/out/server.js 251.2kb
⚡ Done in 20ms
> rescript-vscode@1.18.0 esbuild-client
> npx esbuild client/src/extension.ts --bundle --outfile=client/out/extension.js --external:vscode --format=cjs --platform=node --minify
client/out/extension.js 355.0kb
⚡ Done in 24ms
DONE Packaged: rescript-vscode.vsix (19 files, 183.35KB)
unzip -l rescript-vscode.vsix
Archive: rescript-vscode.vsix
Length Date Time Name
--------- ---------- ----- ----
2736 2023-08-12 20:49 extension.vsixmanifest
517 2023-08-12 20:49 [Content_Types].xml
389 2022-11-09 20:52 extension/assets/switch-impl-intf-dark.svg
389 2022-11-09 20:52 extension/assets/switch-impl-intf-light.svg
20726 2023-08-12 20:49 extension/CHANGELOG.md
363492 2023-08-12 20:49 extension/client/out/extension.js
7318 2023-08-05 20:41 extension/client/package-lock.json
258 2023-03-29 14:13 extension/client/package.json
1111 2022-12-17 19:13 extension/grammars/rescript.markdown.json
13182 2023-08-05 20:41 extension/grammars/rescript.tmLanguage.json
1022 2022-06-06 19:28 extension/LICENSE.txt
8004 2022-06-06 19:28 extension/logo.png
6543 2023-08-12 20:49 extension/package.json
15550 2023-08-12 20:49 extension/README.md
585 2022-07-22 15:23 extension/rescript.configuration.json
257260 2023-08-12 20:49 extension/server/out/server.js
12737 2023-05-21 16:06 extension/server/package-lock.json
456 2023-05-21 16:06 extension/server/package.json
1335 2023-03-14 02:54 extension/snippets.json
--------- -------
713610 19 files
A small reduction rescript-vscode-1651c53.vsix (23 files, 11.49MB)
|
gharchive/pull-request
| 2023-08-13T00:08:26 |
2025-04-01T04:35:42.807949
|
{
"authors": [
"aspeddro"
],
"repo": "rescript-lang/rescript-vscode",
"url": "https://github.com/rescript-lang/rescript-vscode/pull/802",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
48007824
|
Support creating std::string from slice
If I do std::string str(slice.c_str()) I don't get a proper string for some reason. It seems like the c string is missing NULL termination. I assume that is on purpose?
The only way to create a proper std::string is as shown in this commit. This should be part of e so people don't get it wrong.
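A minimal sketch of the helper in question, assuming the slice exposes data() and size() (the cast is only needed if data() returns const uint8_t* rather than const char*):

#include <string>

// e::slice is not NUL-terminated, so build the string from (pointer, length)
// instead of relying on c_str().
template <typename Slice>
std::string slice_to_string(const Slice& s)
{
    return std::string(reinterpret_cast<const char*>(s.data()), s.size());
}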
I guess we can close this?
|
gharchive/pull-request
| 2014-11-06T20:06:35 |
2025-04-01T04:35:42.809321
|
{
"authors": [
"kaimast"
],
"repo": "rescrv/e",
"url": "https://github.com/rescrv/e/pull/8",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
229028371
|
New public API
A complete implementation of the new API from #58. I added also a couple of fixes I previously did in separate commits in the same PR (ca55a3e, 56e19a5).
@majorz this should be a major change as the new API is incompatible with the old API - http://semver.org/
0.x.y and 0.z.w versions are only considered compatible if x = z, so this minor version bump is fine. Unless you mean the Change-Type thing which I don't quite understand.
One other thing that should be fixed eventually is using String for error. The error types should use https://github.com/brson/error-chain and be enums.
I was thinking about the same earlier on. We need to do that in a next PR.
The PR should be ready now.
Two updates with a security focus were added as part of the review:
Using the AsAsciiStr trait from the ascii crate for passing passwords to our API - either with a str or [u8] reference. The password length should not exceed 64 ASCII chars.
Using an `AsSsidSlice` trait for passing SSIDs - similarly to passwords we can use a `str` or `[u8]` reference. The SSIDs are byte sequences with a maximum length of 32. When using strings we use the internal UTF-8 encoded byte sequence and pass it directly to NM. For getting SSIDs we have an `as_bytes` method which will return a byte sequence directly. We have an additional `as_str` method which uses `str::from_utf8` and may fail with a `str::Utf8Error`. Internally our structs store an owned `Ssid` struct - the API uses `SsidSlice` references for returning SSIDs (`Ssid` implements `deref` to `&SsidSlice`).
There is still a lot missing to cover all the Rust API Guidelines - we can address those at a later point.
|
gharchive/pull-request
| 2017-05-16T13:14:15 |
2025-04-01T04:35:42.820522
|
{
"authors": [
"cmr",
"josephroberts",
"majorz"
],
"repo": "resin-io-modules/network_manager",
"url": "https://github.com/resin-io-modules/network_manager/pull/59",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
195630790
|
Update to support the new device registration flow
This PR fixes #232 by pulling in resin-register-device@3 (see resin-io/resin-register-device#22 and resin-io/resin-register-device#25 for background).
This PR brings the resin-sdk API in line with resin-register-device, and that adds the various changes to the tests required to make this work. Device registration now works in Node and browsers. Also clarifies some tests and reenables some tests that were temporarily disabled 2 months ago, and now pass.
I've favoured internal consistency for this change over backward compatibility here, which is debatable. Breaking changes are:
generateUUID is now generateUniqueKey, and is no longer async
register no longer returns the device, now just { id, uuid, api_key }).
These are more or less the same APIs as resin-register-device. We could transform these here to make this backward compatible (wrapping generateUniqueKey to make it async; doing an extra request after every register call to get the device and then attaching api_key to that result) but both of those changes are a little messy internally and have downsides. If we're happy breaking compatibility, I think this PR is the better end result.
Other minor point is that I've called the new { id, uuid, api_key } return type "device registration info", but I haven't described the format anywhere, so it's not easy for users to know exactly what'll be in that. Can't see any examples of how we document return object types in JSDoc, let me know if there's a clear way I should add more detail.
This PR is on top of #239, because otherwise the tests won't pass, and I've used that as a base for this PR so you can see the changes here independently. I'll can shift it to sdk-browser once the tests PR has been merged.
The tests PR is merged, good to rebase.
We should remember to bring the registration-related changes in the CLI when we upgrade the SDK version there.
I think the CLI doesn't have tests ATM.
Can you please rebase and resolve conflicts as I've squashed the commits in the sdk-browser branch?
I've pulled in all your comments and rebased.
LGTM, please squash before merging.
Oh BTW you need to rebuild the docs with npm run docs
Squashed down to just two commits: the actual changes of this PR, and a separate one for the re-enabling the CORS test, since that's a totally separate change, and it'd be good to be able to follow that back in future.
|
gharchive/pull-request
| 2016-12-14T19:59:37 |
2025-04-01T04:35:42.826518
|
{
"authors": [
"emirotin",
"pimterry"
],
"repo": "resin-io/resin-sdk",
"url": "https://github.com/resin-io/resin-sdk/pull/240",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
226285379
|
WiFi doesn't work on Variscite DART-6UL
Variscite DART-6UL successfully boots resinOS. Both Ethernet ports are recognized by the system. However, WiFi does not work. Here's what's inside the console:
[ 13.831384] brcmfmac: brcmf_sdio_drivestrengthinit: No SDIO Drive strength init done for chip 43430 rev 1 pmurev 24
[ 13.968427] usbcore: registered new interface driver brcmfmac
[ 14.022452] brcmfmac mmc0:0001:1: Direct firmware load for brcm/brcmfmac43430-sdio.bin failed with error -2
[ 14.180557] brcmfmac mmc0:0001:1: Falling back to user helper
[ 15.353737] cfg80211: World regulatory domain updated:
[ 15.357618] cfg80211: DFS Master region: unset
[ 15.380732] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp), (dfs_cac_time)
[ 15.389245] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[ 15.408156] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[ 15.421330] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm), (N/A)
[ 15.428081] cfg80211: (5170000 KHz - 5250000 KHz @ 80000 KHz, 160000 KHz AUTO), (N/A, 2000 mBm), (N/A)
[ 15.458118] cfg80211: (5250000 KHz - 5330000 KHz @ 80000 KHz, 160000 KHz AUTO), (N/A, 2000 mBm), (0 s)
[ 15.467263] cfg80211: (5490000 KHz - 5730000 KHz @ 160000 KHz), (N/A, 2000 mBm), (0 s)
[ 15.478399] cfg80211: (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[ 15.485998] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm), (N/A)
[ 15.661281] FAT-fs (mmcblk1p1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
[ 16.340689] brcmfmac: brcmf_sdio_htclk: HT Avail timeout (1000000): clkctl 0x50
[ 17.366787] brcmfmac: brcmf_sdio_htclk: HT Avail timeout (1000000): clkctl 0x50
Hope this helps fixing it since this board is such a nice and professional / industrial-ready device!
Hi @KemperVreden , at this point we don't have this as a priority but as soon as the time allows it, we shall look into this.
Have you made any progress on your side?
I'm encountering this too-- DART-6UL-Y2-xx-WB.
I notice that the driver initializes the first time on the DART-6UL EVK (maybe some u-boot specific code turns on certain GPIOs when it interrogates the EVK's I2C flash memory identifier or something).
I can eventually get it working on my application board by loading brcmfmac module, then removing it, then re-loading it. Second time seems to take. But this seems like a very sketchy solution for us to go to production with.
Any resolution/help would be appreciated.
From the logs @KemperVreden provided it seems it misses "brcmfmac43430" firmware. Are you using an external dongle?
@dpruessner I a little puzzled why is it working for you, even with that hack. Did you manually add the firmware?
I'm running our own Buildroot base linux image.
I downloaded the Variscite Debian image from wget ftp://ftp.variscite.com/DART-6UL/Software/mx6ul-dart-debian-recovery-sd.v20.img.gz . With the SD card's kernel image, I loaded the zImage and the imx6ull-var-dart-nand_wifi.dtb onto the NAND flash; copied the /lib/firmware/b* directories over onto our image, and the /lib/modules/4.1.15-224502-gcf... modules over to our image.
In other words: the setup I was testing was with the March, 2017 Variscite Debian kernel image, modules, device tree, and firmware binaries, with a buildroot linux base.
Is there a better recommended kernel version commit # for 4.1.x with the WiFi working better than commit cf341517 ?
Since I last posted, I rebuilt the kernel (using git checkout cf341517 in the Variscite kernel repository) on our end to include drivers we need for our application. It seems better behaved with the following messages:
brcmfmac: brcmf_sdio_drivestrengthinit: No SDIO Drive strength init done for chip 43430 rev 1 pmurev 24
brcmfmac: brcmf_sdio_htclk: HT Avail timeout (1000000): clkctl 0x50
brcmfmac: brcmf_sdio_htclk: HT Avail timeout (1000000): clkctl 0x50
brcmfmac: brcmf_sdio_drivestrengthinit: No SDIO Drive strength init done for chip 43430 rev 1 pmurev 24
brcmfmac: brcmf_c_preinit_dcmds: Firmware version = wl0: Nov 14 2016 00:32:49 version 7.45.41.32 (r662793) FWID 01-8b473c3f
Same HT Avail timeout problem, but it displays the firmware version that it actually loads. Funny thing is that we did not change the binary firmware files' location or contents.
@floion did we test resinOS WiFi support on DART board?
At this point wifi is not functional on our OS. As I said, that is not high up on our priority list, @agherzan
Hi @dpruessner how did you get to download the image from Variscite's ftp? It requires a login now. Did it always require that?
Tested a staging OS release (version 2.29.2+rev2) on the official Variscite carrier board + SoM (https://www.variscite.com/products/evaluation-kits/dart-6ul-kits/) and WiFi is working.
So the next OS release (2.30.0 or newer) will contain the changes that make WiFi functional.
|
gharchive/issue
| 2017-05-04T13:44:37 |
2025-04-01T04:35:42.835094
|
{
"authors": [
"KemperVreden",
"agherzan",
"dpruessner",
"floion"
],
"repo": "resin-os/resinos",
"url": "https://github.com/resin-os/resinos/issues/292",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
739388156
|
refactor: move default actions to TILE_DEFAULTS
also introduce the option to disable actions by setting action: false or secondaryAction: false
also show "clickable" tick not for fixed tile types, but for tiles with action defined
Thanks.
|
gharchive/pull-request
| 2020-11-09T21:50:23 |
2025-04-01T04:35:42.836766
|
{
"authors": [
"akloeckner",
"rchl"
],
"repo": "resoai/TileBoard",
"url": "https://github.com/resoai/TileBoard/pull/522",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2238150617
|
Update lock timeout calculation on heartbeat
Use the reqTime to calculate the new lock timeout on heartbeat. This is not perfect as we cannot know the exact tick at which the server updates the database, but the assertion in the release lock validation will be correct because we err on the side of under-calculating the timeout time.
Fixes #282
Fixes #283
I diffed previously successful DST runs with seeds {0, 2, 3} to runs against this change and they produced the same logs. Previously unsuccessful dst run seeds {1, 31010} are fixed with this change.
|
gharchive/pull-request
| 2024-04-11T16:49:09 |
2025-04-01T04:35:42.844413
|
{
"authors": [
"dfarr"
],
"repo": "resonatehq/resonate",
"url": "https://github.com/resonatehq/resonate/pull/284",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
316139515
|
A socket file causes 'restore' to skip all subsequent files in the directory.
Output of restic version
$ ./restic_0.8.3_linux_amd64 version
restic 0.8.3
compiled with go1.10 on linux/amd64
How did you run restic exactly?
$ ./restic_0.8.3_linux_amd64 -r restic-repo/ restore -t restoreDir 965ab679
What backend/server/service did you use to store the repository?
Local directory
Expected behavior
Restore all files and directories.
In particular, I have this original directory:
$ ls -l site/logs/
total 82132
-rwxrwx--- 1 likebike likebike 22713576 Jan 25 2012 access_log
-rwxrwx--- 1 likebike likebike 3877663 Feb 22 2012 access_log_addon
srwxrwx--- 1 likebike likebike 0 Jan 25 2012 cgisock.30489
-rwxrwx--- 1 likebike likebike 86078 Feb 22 2012 cron_log
-rwxrwx--- 1 likebike likebike 11111670 Feb 22 2012 error_log
-rwxrwx--- 1 likebike likebike 651243 Feb 22 2012 error_log_addon
-rw-rwx--- 1 likebike likebike 4 Jan 25 2012 fcgid_shm
drwxrwx--- 2 likebike likebike 4096 Jan 25 2012 fcgidsock
-rw-rwx--- 1 likebike likebike 6 Jan 25 2012 httpd.pid
-rwxrwx--- 1 likebike likebike 45640374 Feb 22 2012 tunnel_log
Notice that it contains a "cgisock.30489" file. I guess it's reasonable to skip that special file, but I'd like all the other normal files to be restored.
Actual behavior
Here is the 'logs' directory after a restore:
$ ls -l restoreDir/site/logs/
total 25972
-rwxrwx--- 1 likebike likebike 22713576 Jan 25 2012 access_log
-rwxrwx--- 1 likebike likebike 3877663 Feb 22 2012 access_log_addon
Notice that the restore process stopped at the socket file, and also skipped all subsequent files and directories.
FORTUNATELY, the subsequent files are located in the backup, they just aren't being restored:
$ ./restic_0.8.3_linux_amd64 -r restic-repo/ ls -l 965ab679 | grep site/logs
enter password for repository:
drwxr-xr-x 1001 1001 0 2012-01-25 13:25:05 /site/logs
-rwxrwx--- 1001 1001 342 2012-02-22 14:04:02 /site/logs/.oldcron
-rwxrwx--- 1001 1001 22713576 2012-01-25 18:14:57 /site/logs/access_log
-rwxrwx--- 1001 1001 3877663 2012-02-22 17:37:20 /site/logs/access_log_addon
Srwxrwx--- 1001 1001 0 2012-01-25 13:25:04 /site/logs/cgisock.30489
-rwxrwx--- 1001 1001 86078 2012-02-22 14:04:00 /site/logs/cron_log
-rwxrwx--- 1001 1001 11111670 2012-02-22 17:32:21 /site/logs/error_log
-rwxrwx--- 1001 1001 651243 2012-02-22 17:37:20 /site/logs/error_log_addon
-rw-rwx--- 1001 1001 4 2012-01-25 13:25:05 /site/logs/fcgid_shm
drwxrwx--- 1001 1001 0 2012-01-25 18:18:29 /site/logs/fcgidsock
-rw-rwx--- 1001 1001 6 2012-01-25 13:25:05 /site/logs/httpd.pid
-rwxrwx--- 1001 1001 45640374 2012-02-22 17:37:29 /site/logs/tunnel_log
...So if this bug is just fixed in the restore process, all data should be recoverable.
Steps to reproduce the behavior
You can easily re-create this kind of directory+socket structure like this:
$ mkdir -p test/x
$ touch test/{a,b,y,z}
$ python -c "import socket as s; sock = s.socket(s.AF_UNIX);
$ ls -l test
total 4
-rw-rw-r-- 1 likebike likebike 0 Apr 20 13:56 a
-rw-rw-r-- 1 likebike likebike 0 Apr 20 13:56 b
srwxrwxr-x 1 likebike likebike 0 Apr 20 13:56 c
drwxrwxr-x 2 likebike likebike 4096 Apr 20 13:56 x
-rw-rw-r-- 1 likebike likebike 0 Apr 20 13:56 y
-rw-rw-r-- 1 likebike likebike 0 Apr 20 13:56 z
Do you have any idea what may have caused this?
Do you have an idea how to solve the issue?
Did restic help you or made you happy in any way?
Yes! Restic is a great solution to a problem we all have. I like its design a lot! Thank you. :)
Related: #82
Thanks for the report! I'll look into it.
|
gharchive/issue
| 2018-04-20T06:08:42 |
2025-04-01T04:35:42.936945
|
{
"authors": [
"fd0",
"likebike"
],
"repo": "restic/restic",
"url": "https://github.com/restic/restic/issues/1730",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
100398247
|
Feature: Add redundancy for data stored in the repository
Restic does its best to reduce the amount of redundant data in a backup, e.g. by saving duplicate chunks of files only once. When the repository files are damaged at the storage location (e.g. by bit rot), this means that a large number of files/metadata/snapshots are affected because of this reduced redundancy.
This issue tracks implementing something that artificially adds some redundancy back. I'm thinking along the lines of a reed-solomon error correcting code, e.g. as implemented by the famous par2 program. This computes for a set of files (e.g. a large RAR archive with 100 individual .rXX files) a few .par2 files which can in the case of data corruption be used to repair the original files to a certain amount. An introduction to the reed-solomon family of codes can be found here: https://www.backblaze.com/blog/reed-solomon/
A Go implementation is available here: https://github.com/klauspost/reedsolomon
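For illustration, a minimal sketch of per-pack parity using that library; the 10+3 shard split is an arbitrary example (~30% overhead), not a proposed design for restic:

// go get github.com/klauspost/reedsolomon
package main

import (
	"fmt"
	"log"

	"github.com/klauspost/reedsolomon"
)

func main() {
	pack := []byte("contents of one pack file ...")

	enc, err := reedsolomon.New(10, 3) // 10 data shards + 3 parity shards
	if err != nil {
		log.Fatal(err)
	}

	shards, err := enc.Split(pack) // split the pack; parity shards are allocated too
	if err != nil {
		log.Fatal(err)
	}
	if err := enc.Encode(shards); err != nil { // compute the parity shards
		log.Fatal(err)
	}

	// Simulate losing up to 3 shards, then rebuild the missing ones.
	shards[0], shards[5], shards[12] = nil, nil, nil
	if err := enc.Reconstruct(shards); err != nil {
		log.Fatal(err)
	}
	fmt.Println("reconstructed", len(shards), "shards")
}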
This would be a highly interesting feature to have if it was possible to distribute the shards to different targets (folders: different disks; hosts: offsite locations), allowing Tahoe-LAFS like RAID modes of efficient redundancy in scenarios where you can assume that ALL of your backup locations won't blow up at the same time, yet don't feel like making full, unneeded 1:1 backups of everything.
@cfcs Yes I agree. Configurable folders where each part should be stored would be totally awesome. That way I could use 5-6 disks without raid, and have the redundancy I need.
@fd0
Having a parity feature for restic would be the killer feature for me. I use attic today, but knowing that it will take care of bit rot would make me switch fast ;)
What about STAIR codes? http://ansrlab.cse.cuhk.edu.hk/software/stair/
Looks like something we could do, although I think it's much more complex than reed-solomon codes. I like the simplicity of the latter :)
i just found out about SeqBox, which is an interesting approach to this:
https://github.com/MarcoPon/SeqBox
... instead of storing multiple redundant blocks, it tags each block with a specific metadata header so that the content can easily be found even after major filesystem or storage medium damage. it has really interesting properties, even though it looks a little like "magic":
the borg project also has an issue tracking various potential solutions for this in https://github.com/borgbackup/borg/issues/225 - and there's an interesting discussion there about the pros and cons of redundancy...
SeqBox doesn't add redundancy; it enables you to carve out the puzzle pieces provided you have a block-aligned file system that lost its inode table but didn't lose any of the actual data. It does not help you recover from damaged storage blocks.
It does this at a massive overhead of ~3% - that's 30GB/TB.
If aligned to extents or to logical blocks (like - 1024) instead of 512 byte blocks the overhead would be smaller, but ... filesystems like ZFS are there to help resolve issues like this.
Given the nature of restic (writing very large volumes of data) I'd intuitively argue that there could be a larger statistical risk of bad blocks being data blocks than the inode index tables.
I think SeqBox is an interesting idea, but perhaps not a perfect match for restic's use-case.
It does not help you recover from damaged storage blocks.
I disagree. I think it can help recover damaged datasets with parts removed. It would allow someone to better recover if pieces of the filesystem went missing. Not entirely recover of course, but recover parts of the repo. How restic handles a partially destroyed repository is another thing, of course... But I think the comparison with PhotoRec is pretty convincing.
I do agree that a 3% overhead is huge, however. But when compared to par2, it's not that bad. :) Also keep in mind that the reed-solomon erasure coding algorithm proposed above talks about having 20-40% overhead, at least in the go library benchmarks. That's much larger, although the gains might be greater.
The question is: once the filesystem starts getting damaged, can you still recover enough of the restic archive blocks? SeqBox helps finding those blocks and reed-solomon helps recovering the entirety of the original archive. I'm not sure you could recover the whole archive with only one of the two, unless reed-solomon is implemented at the filesystem level...
But you are right that, statistically, the odds are larger that data gets damaged than index. It's just that index data then becomes really precious: if any is lost, the result is complete catastrophe.
I appreciate that it can help find blocks when the underlying FS is damaged and restic data has not been damaged.
I disagree that it can help solve the problem described in this issue post which is: Recover the original data put into restic in the scenario where restic data has been damaged:
When the repository files are damaged at the storage location [..]
IMHO comparing to par2 is like comparing apples and oranges, they solve different problems.
I think we fundamentally agree on all the technical points, but disagree about the exact problem we're trying to solve in this issue.
Your third paragraph makes a good point, which I think should be the takeaway from the recent posts about SeqBox in this thread: SeqBox-style block/extent tagging alone does NOT help us recover from bitrot of the actual restic data, but combined with real error-correction it can be useful to help lay the puzzle.
It is however highly FS-dependent (block size could be different, FS could implement compression [like NTFS, ZFS, ...). I would still recommend relying on the FS layer for block indexing, and the main benefit of implementing application-layer parity would IMO be that it can be distributed across several physical storage backends.
That being said I appreciate you raising the point, it's an interesting subject to include in the discussion of an eventual error-correcting design. :-)
Dumping this link here (from Hacker News today) for future reference: https://github.com/Bulat-Ziganshin/FastECC
after some reflection, i have come to believe that implementing error correction necessarily means the admins need to make decisions about where to store redundancy blocks. partial errors on drives are quite uncommon these days: especially with SSDs or SMART-enabled devices, drives often see total unrecoverable failures instead of partial ones.
what are we trying to protect against here? we should more clearly define what kind of failures we want to recover from before trying to find solutions. this is highly medium-dependent: a tape will fail completely differently from a HDD or a SDD, and different future-proof recovery mechanism should be implemented in those scenarios. are we defending against medium loss, physical damage or cosmic rays? those are also all very different...
borg previously pushed this issue aside and argued it is best delegated to the filesystem layer. i'm beginning to see a little more why.
concretely, i think reedsolomon would be interesting in the case where we don't trust the filesystem, e.g. if we don't want to rely on S3's reliability and want to distribute among multiple backends (different S3 endpoints, S3/SSH, multiple drives, or mixing it all)...
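as a concrete illustration of what distributing redundancy across backends could mean, here is a minimal sketch (in Python, purely illustrative: it is not restic code, it uses plain XOR parity rather than real reed-solomon, so it only survives the loss of a single shard, and the backend names in the comment are made up):

def split_with_parity(pack: bytes, n_data: int):
    """Split a pack into n_data equal shards plus one XOR parity shard."""
    shard_len = -(-len(pack) // n_data)               # ceiling division
    padded = pack.ljust(shard_len * n_data, b"\0")    # pad so the shards divide evenly
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(n_data)]
    parity = bytearray(shard_len)
    for shard in shards:
        for i, byte in enumerate(shard):
            parity[i] ^= byte
    return shards, bytes(parity)

def rebuild_missing(shards, parity, missing_index):
    """Recover one lost data shard by XOR-ing the parity shard with the survivors."""
    rebuilt = bytearray(parity)
    for idx, shard in enumerate(shards):
        if idx != missing_index:
            for i, byte in enumerate(shard):
                rebuilt[i] ^= byte
    return bytes(rebuilt)

# e.g. put the shards and the parity on 4 different backends: s3-eu, s3-us, sftp, local disk
shards, parity = split_with_parity(b"example restic pack contents", 3)
assert rebuild_missing(shards, parity, missing_index=1) == shards[1]

the admin-facing decision is exactly the one above: which backend holds which shard, and how many parity shards you are willing to pay for.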
btw ECC was the only reason to use RAR as a backup tool for years, because no other archive tool offered it... it's a good feeling to know that if an archive has bitrot, I can easily fix it with 3% of recovery data in the rar file
but I think it's also better to keep your backup in a minimum of 2 locations.
how does restic handle an error in the repo?
does restic then heal the faulty data from the local source?
and what if it is no longer available locally, because the error was in an older snapshot?
maybe it would also be good if you could use a second repository to 'HEAL' the first repository?
like a fallback-repo :-)
i have another idea/question.
what if you read faulty data at backup time without noticing (SATA error / HDD error / RAM error)?
in that case it would also be good to always make a second backup, in a second run, into another repository without those errors.
-> by the way, that could be a downside if you make a backup to two repos at the same time.
-> reading the data twice would be safer, and hopefully reading from disk and not from the cache buffer ;-)
does restic recognize on the next snapshot that the source data does not match what is in the first repository? because the 'change date' of the file is the same but the data is not?
i think the command 'restic check --read-data' only checks the repository and not against the original source?
+1
if implemented, it should be done so that par2-files are created on the fly while writing the regular pack files. this way there is no 2nd pass needed to read already written data.
I'm watching for this feature myself. Though any kind of ECC should be robust enough to handle a couple different cases, like:
0.1% of all of the bytes in all of the packs is corrupt. (1 par file with 10% redundancy for each pack would easily be able to fix this.)
0.1% of all of the packs have been entirely lost. (1 par file with 10% redundancy for each pack would not fix this.)
Please see my recent comment https://github.com/restic/restic/issues/804#issuecomment-615140267 before trying to come up with yet another flawed FEC (forward error correction).
|
gharchive/issue
| 2015-08-11T20:12:26 |
2025-04-01T04:35:42.955853
|
{
"authors": [
"Intensity",
"anarcat",
"cd1188",
"cfcs",
"cowai",
"dumblob",
"fd0",
"iluvcapra"
],
"repo": "restic/restic",
"url": "https://github.com/restic/restic/issues/256",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1513162434
|
backup: extract StdioWrapper from ProgressPrinters
What does this PR change? What problem does it solve?
Extract the StdioWrapper from ProgressPrinters. The StdioWrapper is not used at all by the ProgressPrinters. It is now called a bit earlier than previously. However, as the password prompt directly accessed stdin/stdout this doesn't cause problems.
Was the change previously discussed in an issue or on the forum?
No.
Checklist
[x] I have read the contribution guidelines.
[x] I have enabled maintainer edits.
[ ] I have added tests for all code changes.
[ ] I have added documentation for relevant changes (in the manual).
[ ] There's a new file in changelog/unreleased/ that describes the changes for our users (see template).
[x] I have run gofmt on the code in all commits.
[x] All commit messages are formatted in the same style as the other commits in the repo.
[x] I'm done! This pull request is ready for review.
LGTM, but don't forget to also remove Stdout/Stderr from ui/backup/progress_test.go.
|
gharchive/pull-request
| 2022-12-28T20:59:47 |
2025-04-01T04:35:42.962302
|
{
"authors": [
"MichaelEischer",
"greatroar"
],
"repo": "restic/restic",
"url": "https://github.com/restic/restic/pull/4111",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
205675055
|
All fields exported in JSON export
The fields to omit from export are not omitted when exporting to Better CSL JSON
That is correct, the skipfields option only works for Better Bib(La)TeX. Which fields are causing trouble?
abstract and source, mostly. I've also tried removing them through a script, but to no avail.
There's no technical barrier to adding postscript support to the CSL exporters; it would be harder for the skipfields, because you can't tell the translator where they're supposed to be active, and they may affect BibTeX processing. Also, the CSL isn't really a flat structure like bib(la)tex.
Why do you want these fields gone? The CSL styles I'm familiar with all ignore those two.
Some json parsers choke up on the weird sh|t that some publishers put on the abstract, and it's a problem before the CSL style.
I have tests running on postscript support for CSL, but the case you describe means one of two things:
BBT outputs out-of-spec JSON
The JSON parser you use is broken
The JSON format is well defined and fairly rigid, including escaping rules and character set (utf-8 only); if you have a sample that breaks a JSON parser you use, I'd love to get my hands on it. There's really nothing I should even be able to put in the reference that would cause out-of-spec JSON. I use JSON.stringify as supplied by Firefox -- I'd suspect the consuming parser before I'd suspect Mozilla would get JSON wrong.
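If you want to sanity-check a sample yourself before posting it, a strict parser points straight at the failure. A minimal sketch (assuming Python 3; the filename is just a placeholder for wherever you saved the export):

import json

# point this at the Better CSL JSON export you suspect is broken (path is a placeholder)
with open("export.json", encoding="utf-8") as fh:
    try:
        items = json.load(fh)   # strict, spec-compliant parsing
        print("valid JSON,", len(items), "items")
    except json.JSONDecodeError as err:
        # reports the position of the first offending character
        print("invalid JSON:", err.msg, "at line", err.lineno, "column", err.colno)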
Here is one such file (exported from BBT): https://gist.github.com/tpoisot/ebf13e9cc07e387d34e94a43e3c0918a
Interestingly, it doesn't render using JSONovich or any other JSON extension in firefox (which makes me suspect that some of the abstract content is breaking json).
According to all tools I tested with (among which jsonlint), that is valid JSON. What parser is choking on it?
How is JSONovich supposed to work? I opened that gist in raw mode but I'm not seeing anything happening.
JSON-dataview and JSON-formatter happily format that JSON file for me.
And for JSONovich I don't see any errors in the web/developer console.
closing for lack of feedback
|
gharchive/issue
| 2017-02-06T18:53:44 |
2025-04-01T04:35:43.032489
|
{
"authors": [
"retorquere",
"tpoisot"
],
"repo": "retorquere/zotero-better-bibtex",
"url": "https://github.com/retorquere/zotero-better-bibtex/issues/641",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
58382982
|
numpy hypergeometric function freezes with certain arguments
This works fine:
import numpy.random as numpy_random
numpy_random.hypergeometric(32767, 2147450880, 1073741824)
But this doesn't:
import numpy.random as numpy_random
numpy_random.hypergeometric(262, 16776954, 8388608)
Commit 536be287531 removes the 'numpy' dependency.
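For reference, a numpy-free sampler has roughly this shape (a minimal sketch only, not the code from that commit; a plain Python loop like this is far too slow for sample sizes in the millions, as in the examples above):

import random

def hypergeometric(ngood, nbad, nsample, rng=random):
    """Draw one hypergeometric variate by simulating draws without replacement."""
    good, bad, hits = ngood, nbad, 0
    for _ in range(nsample):
        # probability that the next draw is a "good" item, given what is left
        if rng.random() * (good + bad) < good:
            good -= 1
            hits += 1
        else:
            bad -= 1
    return hits

# small arguments finish instantly; the values from the report above need a smarter
# algorithm (or the fix from commit 536be287531) rather than this naive loop
print(hypergeometric(32, 2048, 128))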
|
gharchive/issue
| 2015-02-20T17:20:11 |
2025-04-01T04:35:43.051417
|
{
"authors": [
"rev112"
],
"repo": "rev112/pyope",
"url": "https://github.com/rev112/pyope/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
25268379
|
Rename ActionInvoker to be more consistent with filter naming
Perhaps ActionFilter or InvokerFilter. Keep in mind it is currently in invoker.go which should be renamed to reflect whatever new name we choose.
@robfig ping for 2 cents.
ActionInvoker is a special case and there is little value in renaming the func.
|
gharchive/issue
| 2014-01-08T20:15:19 |
2025-04-01T04:35:43.056270
|
{
"authors": [
"brendensoares"
],
"repo": "revel/revel",
"url": "https://github.com/revel/revel/issues/452",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
60917859
|
[RETARGET] Forking clarification
Minor tweak to forking directions, e.g. changed:
$ git checkout -b feature/useful-new-thing develop <--- Git did not accept this
to:
$ git checkout -b feature/useful-new-thing origin/remotes/develop
Also, added reminder to check that you have valid SSH keys.
This needs to target the develop branch per our contribution guidelines.
This needs to be in the "manual" and not here, and wip.. and aware of issue, so adding to my todo and closed here.. ta.. ;-)
|
gharchive/pull-request
| 2015-03-12T22:11:57 |
2025-04-01T04:35:43.058044
|
{
"authors": [
"brendensoares",
"eric-gilbertson",
"pedromorgan"
],
"repo": "revel/revel",
"url": "https://github.com/revel/revel/pull/893",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1574079324
|
Pypi Package supporting Python 3 and up
Hi and thanks for your work on this.
Are there any plans to release the Python >= 3 supporting code of this repository in a PyPI package? Or are there reasons that this hasn't happened? I'd imagine it would be more comfortable for users of this package to be able to include it in a project via an official pip-installable release.
I'm confused, the version that is released support Python 3. In fact, it's currently only tested against Python 3.7 - 3.10 https://github.com/revsys/django-tos/blob/master/.github/workflows/django.yml#L15-L42
Thanks for getting back.
I just installed the django-tos package v1.0.0 via pip to retest. There's an occurrence of unicode() in models.py (line 76), which no longer exists in Python 3. This leads to an error when (e.g.) trying to access the User Agreements model in the Django admin site.
In the master branch of this repository this has been fixed in December 2022. The latest release on pypi seems to be from February 2022.
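For anyone patching a local install until a new release is cut, the change is the usual Python 3 one: replace the unicode() call with str(). A rough sketch of the shape of the fix (the model and field names are hypothetical, not the actual django-tos code):

from django.db import models

class UserAgreement(models.Model):          # hypothetical stand-in for the real model
    username = models.CharField(max_length=150)

    def __str__(self):
        # unicode(self.username) raises NameError on Python 3: the builtin no longer exists.
        # str() behaves the same for this purpose and fixes the admin page crash.
        return str(self.username)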
Ah, makes sense. Probably don't have test coverage hitting that line.
@frankwiles looks like we need a version bump and pypi release of the latest changes on github whenever you have a moment, and we should be good to go
Thanks! Do you have a commit pending with the version bump?
Yes, good catch. I committed it but forgot to push when I got distracted by the next shiny object. It's there now.
|
gharchive/issue
| 2023-02-07T10:33:04 |
2025-04-01T04:35:43.065078
|
{
"authors": [
"floese",
"frankwiles",
"nicholasserra"
],
"repo": "revsys/django-tos",
"url": "https://github.com/revsys/django-tos/issues/67",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2180568989
|
Enabling Multiple Uses of NFC requestTechnology Instead of One-time
Hello. I am an IT developer working for a small company in Korea.
What I want to achieve is the following: regardless of whether an NFC tag has already been tapped or not, pressing a 'Refresh' button should turn off NFC reading and then turn it back on.
However, I encountered an error after implementing this.
Could you please let me know if there is anything unusual in my source code?
const nfcTagLogin = async () => {
try {
NfcManager.start();
await NfcManager.requestTechnology(NfcTech.Ndef);
const tag = await NfcManager.getTag();
// console.log('NFC tagged, ID: ' + tag.id);
webviewRef.current.postMessage(String(tag.id));
} catch (error) {
console.error(error);
} finally {
NfcManager.cancelTechnologyRequest().catch(() => 0);
}
};
const nfcClean = () => {
NfcManager.unregisterTagEvent();
NfcManager.cancelTechnologyRequest();
};
const nfcRefresh =() => {
nfcClean();
nfcTagLogin();
};
I think the problem comes from trying to start the scanning immediately after the cancellation. I had a similar problem and solved it by adding a small delay, which in your case will look like this:
const nfcRefresh = () => {
  NfcManager.cancelTechnologyRequest().then(async () => {
    await new Promise((resolve) => setTimeout(resolve, 3500)).then(() => {
      nfcTagLogin()
    });
  })
}
There might be a better solution, but this is the one that I managed to come up with. Hope it helps you.
|
gharchive/issue
| 2024-03-12T01:58:21 |
2025-04-01T04:35:43.069625
|
{
"authors": [
"divashuthron",
"georgipavlov-7DIGIT"
],
"repo": "revtel/react-native-nfc-manager",
"url": "https://github.com/revtel/react-native-nfc-manager/issues/704",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
779740609
|
Update s3, sqs to 2.15.59
Updates
software.amazon.awssdk:s3
software.amazon.awssdk:sqs
from 2.15.58 to 2.15.59.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "software.amazon.awssdk" } ]
labels: library-update, semver-patch
Superseded by #34.
|
gharchive/pull-request
| 2021-01-05T23:46:04 |
2025-04-01T04:35:43.073370
|
{
"authors": [
"scala-steward"
],
"repo": "rewards-network/pure-aws",
"url": "https://github.com/rewards-network/pure-aws/pull/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1124637309
|
Update s3, sdk-core, sqs to 2.17.124
Updates
software.amazon.awssdk:s3
software.amazon.awssdk:sdk-core
software.amazon.awssdk:sqs
from 2.17.121 to 2.17.124.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "software.amazon.awssdk" } ]
labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1
Superseded by #420.
|
gharchive/pull-request
| 2022-02-04T21:38:03 |
2025-04-01T04:35:43.077099
|
{
"authors": [
"scala-steward"
],
"repo": "rewards-network/pure-aws",
"url": "https://github.com/rewards-network/pure-aws/pull/417",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
230241631
|
Including external .md files
I have tried to include static markdown files, but all this module appears to do is present the file reference rather than the content of the file. For instance,
import React from 'react';
import ReactDOM from 'react-dom';
import ReactMarkdown from 'react-markdown'
// The following line only results in '/static/media/thinstyle.766d8f97.md' appearing on the page
// import content from './content/thinstyle.md'
const content = require('./content/thinstyle.md')
ReactDOM.render(
<ReactMarkdown source={content} />,
document.getElementById('root')
);
Is there some other method in this situation to stream the contents of the file? I'm not aware of any other include methods in JavaScript.
Incidentally, I've discovered the FileReader npm package, but all the examples of using it involve dealing with a file interactively uploaded to a web page (through a form), not loading a file locally.
Weirdly, the fs package doesn't help at all either. When I do this:
import content from './content/thinstyle.md'
fs.readFile(content, (err, data) => {
if (err) throw err;
console.log(data);
});
Webpack throws an error that essentially says that readFile is not a function.
Ok. I see what I'm doing wrong. The markdown doesn't get compiled into the production bundle the way I thought. I thought there was a way to compile the markdown into the JavaScript bundle. Is that possible? If not, I guess this has to be an AJAX call.
You can use raw-loader to convert a markdown file to a string.
module.exports = {
module: {
rules: [
{
test: /\.md$/,
use: 'raw-loader'
}
]
}
}
Hey, thanks! I know that wasn't exactly an issue with react-markdown, so thanks for the hint!
Happy to help ☺️
This probs trouble me a lot, may I know what's your solution eventually? Did you use any other package?
@zhanglinge
I use the 'file-loader' instead.
install the file-loader and config your webpack.config.js file;
import your .md file to your page;now,you can get the .md file path;
fetch your .md file and get the content of your .md file.
const content = require('../assets/doc/agreement.md')

....

componentDidMount() {
  fetch(content).then(response => {
    return response.text()
  }).then(text => {
    console.log(text)
    this.setState({ agreementContent: text })
  }).catch(err => {
    console.log(err)
  })
}
you can also get for help from here:https://stackoverflow.com/questions/42928530/how-do-i-load-a-markdown-file-into-a-react-component
@wellenzhong I see, many thanks.
Is it possible to fetch markdown files from github repo's into my react website?
Is it possible to fetch markdown files from github repo's into my react website?
I successfully got this running in my functional component, hope it helps!
const [mdText, setMdText] = useState('');
useEffect(() => {
fetch("https://raw.githubusercontent.com/Beamanator/fdi/master/README.md")
.then((response) => {
if (response.ok) return response.text();
else return Promise.reject("Didn't fetch text correctly");
})
.then((text) => {
setMdText(text);
})
.catch((error) => console.error(error));
});
|
gharchive/issue
| 2017-05-21T19:18:17 |
2025-04-01T04:35:43.084213
|
{
"authors": [
"Beamanator",
"EAT-CODE-KITE-REPEAT",
"rexxars",
"russellbits",
"wellenzhong",
"zhanglinge"
],
"repo": "rexxars/react-markdown",
"url": "https://github.com/rexxars/react-markdown/issues/76",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
69612353
|
AutoCompleteTextView example
Someone can help with a working AutoComplete example using this library?
I already get one at https://github.com/rey5137/material/blob/ac83d8f68fd8c793dac92163a0afcac9752b03c9/app/src/main/res/layout/fragment_textfield.xml
|
gharchive/issue
| 2015-04-20T15:24:41 |
2025-04-01T04:35:43.085654
|
{
"authors": [
"jlccaires"
],
"repo": "rey5137/material",
"url": "https://github.com/rey5137/material/issues/63",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2183609735
|
missing code for the package
dear @rezeck, we tried your amazing GRF based collective transport algorithm, and we want to inform you that it DOESN'T work... So please send us the updated files. The issue you will find is that as soon as the robots see the target, all of them go to the target without the object.
Finally, we built your hero 2.0v robot, and we would prefer to apply the algorithm on the hero 2.0 bot, please.....<3
Dear @besobees ,
Thank you for trying out our GRF based collective transport algorithm with the Hero robots! We're glad you find it promising.
I'm sorry to hear about the issue you're facing. Could you please provide more details? Specifically, which branch are you using? Also, how are you running the code? Understanding your environment will help me diagnose the problem.
Additionally, could you elaborate on the behavior you're experiencing? It seems all robots are heading towards the target without considering obstacles. Clarifying this will assist me in troubleshooting.
In the meantime, would you mind trying the 'dev' branch? It might have the updates you're looking for.
Looking forward to your response.
Best regards,
Rezeck
I am using the main branch of GRF Transport (grf_transport main), following the steps of execution at the following link: https://rezeck.github.io/grf_transport. In this process, the robots initiate movement of the object towards the designated target. Once the object reaches the target, another robot stationed at the target location pushes it back outside. Moreover, if a robot perceives the target before detecting the object, it prioritizes heading towards the target rather than engaging with the object. Also, this link on the website does not work: https://github.com/verlab/grf_colletive_transport.git
However, when using the GRF Transport development branch (grf_transport dev), the robots cannot identify either the object or the target. Additionally, the robots tend to collide with the edges of the ring during operation; see this picture.
Also, there are two videos of the main branch; I will show them now:
Screencast from 24 آذار, 2024 +03 11 28 47 م.webm
Screencast from 24 آذار, 2024 +03 11 29 23 م.webm
Finally, the results of the trials differ between your paper and my runs, so I expected that you have newer files.
Thank you for replying and for your interest; I hope to hear from you soon.
Hi @besobees ,
Thanks a lot for the detailed comment! I'll try to replicate it today and see if I've missed a step in the tutorial or if there's an issue with the code that's currently running. I'll do my best to come back with a fix in the next few days.
Best regards,
Rezeck
Hi @besobees ,
Can you try running the main branch again? It is working properly now.
Best regards,
Rezeck
Thank you very much for your feedback. I'm delighted to see that everything is working well now and that the issue has been successfully resolved. I look forward to the possibility of collaborating with you in the future.
@rezeck
|
gharchive/issue
| 2024-03-13T10:13:42 |
2025-04-01T04:35:43.094227
|
{
"authors": [
"besobees",
"rezeck"
],
"repo": "rezeck/grf_transport",
"url": "https://github.com/rezeck/grf_transport/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
368463961
|
Cryptark: Crash on collecting artifact (SetAchievement) - Steam version
this function may be missing from the libSteamworksNative.so stub
full backtrace:
276-Central Core Neutralized
TRYING TO GET Revenue - GOT 0
06:55:37.244-CRASH: Update
06:55:37.259-Line: 0 -
06:55:37.265-System.EntryPointNotFoundException: SetAchievement
at (wrapper managed-to-native) Steamworks.Steam+Stats.SetAchievement(string)
at Medusa.ScreenManager.UnlockArtifact (System.String className, System.Boolean ShowDialogEvenIfUn
locked) [0x00087] in <57aa6fc09a4347e685acfaf800260ba0>:0
at Medusa.EnemySelect.StartDebrief () [0x007b4] in <57aa6fc09a4347e685acfaf800260ba0>:0
at Medusa.EnemySelect.LoadContent () [0x00467] in <57aa6fc09a4347e685acfaf800260ba0>:0
at Medusa.ScreenManager.AddScreen (Medusa.GameScreen screen) [0x00019] in <57aa6fc09a4347e685acfaf
800260ba0>:0
at Medusa.CryptarkGame.EndLevel (Medusa.EnemyOption enemy, System.Boolean victory, System.Boolean
abandoned) [0x00407] in <57aa6fc09a4347e685acfaf800260ba0>:0
at Medusa.Nemesis.Update (Medusa.Engine Engine) [0x0006b] in <57aa6fc09a4347e685acfaf800260ba0>:0
at Medusa.CryptarkGame.Update (Microsoft.Xna.Framework.GameTime gameTime, System.Boolean otherScre
enHasFocus, System.Boolean coveredByOtherScreen) [0x00c86] in <57aa6fc09a4347e685acfaf800260ba0>:0
at Medusa.ScreenManager.Update (Microsoft.Xna.Framework.GameTime gameTime) [0x00424] in <57aa6fc09
a4347e685acfaf800260ba0>:0
at Microsoft.Xna.Framework.Game.Update (Microsoft.Xna.Framework.GameTime gameTime) [0x00078] in <4
991dfcf0593486eb48693a1f183704c>:0
at Medusa.Start.Update (Microsoft.Xna.Framework.GameTime gameTime) [0x00005] in <57aa6fc09a4347e68
5acfaf800260ba0>:0
06:55:37.278-Total Memory: 253281KB
06:55:37.278-v1.2 - 09/20/2017 - 20:10:26
06:55:37.284-<--- Log has been written to: /home/thfr/.local/share/Cryptark/Logs/12-ryze-up.domain-4
33827886259375.txt
06:55:38.526-Upload File Complete, status 226-110584 Kbytes used (1%) - authorized: 10240000 Kb
226-File successfully transferred
226 0.088 seconds (measured here), 56.98 Kbytes per second
fixed by stubbing the functions SetAchievement() and CommitStats() in libSteamworksNative.so.
|
gharchive/issue
| 2018-10-10T01:59:18 |
2025-04-01T04:35:43.099010
|
{
"authors": [
"rfht"
],
"repo": "rfht/fnaify",
"url": "https://github.com/rfht/fnaify/issues/28",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
211999865
|
Add support for pivotshare.com websites
[x] I've verified and I assure that I'm running youtube-dl 2017.03.06
[x] At least skimmed through README and most notably FAQ and BUGS sections
[x] Searched the bugtracker for similar issues including closed ones
[x] Site support request (request for adding support for a new site)
Description of your issue, suggested solution and other information
Please add support for pivotshare.com.
Example URLs:
https://www.cultorama.tv/media/brotherhood-of-justice/49480
https://www.adsrcourses.com/media/02-underestimating-acoustical-treatment/50481
https://rpwondemand.pivotshare.com/media/live-in-portsmouth-7/59694
There are a heap of other channels that can be found here:
https://www.pivotshare.com/discover/
Account can be provided if needed.
Duplicate of #11374
|
gharchive/issue
| 2017-03-06T00:46:01 |
2025-04-01T04:35:43.104636
|
{
"authors": [
"jgilf",
"yan12125"
],
"repo": "rg3/youtube-dl",
"url": "https://github.com/rg3/youtube-dl/issues/12374",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|