| id (string, 4 to 10 chars) | text (string, 4 chars to 2.14M) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
|---|---|---|---|---|---|
186112705
|
Immutable API
This is an implementation of moment/moment-rfcs#2, which was written to address #1754. This is obviously a breaking API change that would require a new major version number (e.g. "3.0").
The only major change from the RFC draft's discussion is that moment.updateOffset() and moment.duration()._bubble() will now return moment and duration objects, respectively. Otherwise it's pretty much impossible for external code to customize the behavior of these hooks.
Feedback welcome! :smiley:
That is amazing. I'll review it very thoroughly on Tuesday (UTC). Also we need to make sure to backport whatever is released before 3.0 (if we decide to release). Or we can ship this as 3.0-rc1 and keep the two branches for a few months.
Thanks! I'm adding a few comments now about things that I think are worth extra scrutiny, but I'd love to get a comprehensive review.
Not sure how everyone else feels, but my inclination with this PR is to divide up the tests between me, @icambron, @mj1856, and @ichernev. Each of us can take a set to review. You're then responsible for reviewing the entire code path down from the test. This will cause the most commonly used code paths to be reviewed several times, and the less commonly used ones to be reviewed once.
I think we also need to perf benchmark against the existing build to see if we have introduced any problem areas.
Finally, we should cut a build of this for a few interested individuals to use, before we do any kind of release. Better to catch with some dedicated beta testers.
Impressive as heck @butterflyhug!
Also, we're probably going to have a few merge conflicts with existing PRs. Not sure of the strategy for resolving that, but to me it makes sense to start merging them to develop, even if we aren't planning on a release. I think poor Lucas is going to be stuck with most of the resolving, though we all can help.
So long as we don't do another ES6 rewrite in the meantime, I think merging changes from develop between now and 3.0 should be a doable (albeit large) task. After all, I've done it before.
OK, here are my notes:
should we (do we) freeze moments/durations/locales to make sure they're not changed by something we've missed?
can we add a _clone method that clones instead of passing the type in wrap? It will be for internal use only
in src/lib/create/from-anything.js why do we updateOffset (we weren't before)
in src/lib/duration/constructor.js bubble -- if it's internal and used only in the constructor, why can't it just modify stuff?
in src/lib/duration/prototype.js - why is abs not wrapped?
about updateOffset madness -- we can just implement the new API and remove the old one #3134
in src/lib/moment/start-end-of.js - why is startOf wrapped when its implementation is immutable? Same goes for endOf
in src/lib/utils/day-of-week.js - getSetDayOfWeek is immutable, do we need wrap (others in file too)
in src/lib/units/day-of-year.js - getSetDayOfYear -- no need to wrap
we have a generic get/set for all builtin units (ms,s,h,d,M,y), so I think we can make it immutable, then remove the wrap on all methods (see the sketch after this list). This half-way immutability might give us an immutable API, but it is crazy to develop, never knowing which functions can mutate and which can't
in src/lib/units/quarter.js -- the getSetQuarter can be made immutable (by relying on .month pseudo-immutability)
how do we port the locales and the locales tests? How did you fix all tests now?
I'm a bit lost on something -- how come the plugin worked without changing any code inside moment? Because it had both frozen and mutating APIs, and the internals were always using the mutating API; the frozen stuff was only for the user, using the mutating API + clone. Couldn't we just hide the mutating API from the user and only expose the frozen one, which would be autogenerated with the plugin code? Overall -- I was expecting a change that was basically the plugin code shipped with the core code. If we spend time to make it properly immutable we might as well go all the way and remove wrap from the remaining methods. The core mutating methods are not that many; most rely on other mutating methods.
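For reference, the Frozen Moment-style wrap under discussion looks roughly like this (a minimal sketch with illustrative names, not moment's actual internals):

// Clone first, run the existing mutating implementation on the clone,
// and return the clone -- the caller's moment is never touched.
function wrap(mutatingFn) {
    return function () {
        var copy = this.clone();
        mutatingFn.apply(copy, arguments);
        return copy;
    };
}

// e.g. exposing a mutating `addMutating` as an immutable prototype method:
// Moment.prototype.add = wrap(addMutating);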
Re @ichernev:
I'm not freezing moments here, and I think we probably shouldn't add that. Object.freeze causes silent failures which I expect would be hugely problematic to debug here -- unless you're in strict mode, in which case it raises exceptions, but Moment APIs aren't supposed to raise exceptions. Also, Object.freeze is generally quite slow. If we care about cloning performance, then freezing seems like a non-starter.
I suppose we could only do the freezing in (some or all) tests, but I think maybe we just write a new testcase to prove that everything still works if the user freezes the moment, and call it a day.
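(For anyone unfamiliar with the failure modes being referenced, this is standard JavaScript behavior:)

var frozen = Object.freeze({ hour: 1 });

frozen.hour = 2;          // sloppy mode: the write is silently ignored
console.log(frozen.hour); // still 1 -- no error, just a mystery later

(function () {
    'use strict';
    frozen.hour = 2;      // strict mode: throws a TypeError instead
})();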
Sure, will do.
Because I removed updateOffset from the constructor (so that it doesn't run during internal cloning), and it still needs to happen somewhere when users call moment(). details
You might disagree, but I feel like any prototype methods (even if private) should be immutable. Even if we slap everyone's wrist every time they do it, people will call prototype methods and then come looking to us for support, and I feel like our job is easier if we know that they can't run any of our mutating code.
So, I'd be happy to keep this mutating if we pull it off the prototype; otherwise my instinct is to make it immutable. I suppose that means I should import the raw mutable implementation into the constructor and use that instead of the current property-copying hack, regardless of whether we keep it on the prototype or not.
Oops, will fix.
6-11. Ooh, good point, I should use #3134 as a starting point. Without updateOffset in the mix, I should be able to keep everything internally mutable, more like the plugin. (During this work, I realized that my plugin presumably never worked properly with updateOffset / Moment Timezone.)
12. I did all of these tests by hand, and I don't have a better system in place for porting locales and their tests. I'm tempted to try writing a script to do simple code rewriting (assigning the result of statements that have mutation methods back to the original variable name) and see how far that gets us, but I haven't done that sort of coding in JS before. Suggestions welcome here.
Re: "I'm a bit lost" -- because updateOffset, which shouldn't be an issue if I move this on top of #3134. I assume that consistent internal mutability is preferable to consistent internal immutability for performance reasons, even if it forces contributors to worry about more things (e.g. they can't directly call prototype APIs internally), but let me know if you disagree. I'm willing to push this in whichever direction we feel is best for the library long-term.
One concern about moving this change on top of 3134 - code churn. It feels like that introduces a large number of changes at once. 3134 is a big deal.
If we go down that road, I would get 3134 shipped in moment 3.0, and follow about two or three months later with moment 4.0 being immutable. I'm okay with that if it's the right decision for the code, but I would not send those two things together.
@butterflyhug just verifying -- the whole reason for the internal mutability is updateOffset? Now that I think about it -- it's hard now, because updateOffset is called on every mutator and might mutate again. But I don't buy the performance reason. Right now we clone the whole moment object on every mutation just because we might mutate it with updateOffset. This should be much more inefficient. If updateOffset were made to return a new moment, that would only happen very rarely, and all mutators could just create one copy, correct from the start.
I'd ship the two changes together. I disagree with @maggiepint because the updateOffset behavior needs to match the immutability in 3.0, so we can't ship one without the other. We could ship only the new TZ interface, but I do not consider it a big change, because it is kind of an internal API to moment-timezone. If somebody was using it, they can't in an immutable version, so porting your code to immutability should include that small nuance (updateOffset).
@ichernev Yes, I think we're basically saying the same thing in slightly different words.
With updateOffset, there are two possible end states:
Convert all of our code (including internal APIs) to immutable APIs. This gets us a consistently immutable codebase at the cost of more cloning internally (potentially ~10 clones when calling certain public methods, like startOf).
Keep the existing mutable APIs internally and wrap functions on the prototype to expose an immutable API to users. Whenever we need to call updateOffset, we copy the resulting moment's properties back into the original moment, thus preserving mutability internally while giving users an immutable updateOffset (see the sketch below). This minimizes the algorithmic changes (although there'd still be a bit of code churn to use unwrapped APIs internally instead of calling methods from the prototype), at the cost of forcing contributors to think about and manage the mutable->immutable boundary.
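The copy-back at the updateOffset boundary in option 2 would look something like this (a sketch assuming the hook returns a new moment, as this PR makes it do; property handling is illustrative):

// Call the user-visible (immutable) hook, then fold its result back into
// the internally-mutable instance so internal code can stay mutable.
function applyUpdateOffset(m) {
    var updated = hooks.updateOffset(m); // per this PR, returns a moment
    if (updated !== m) {
        Object.keys(updated).forEach(function (key) {
            m[key] = updated[key];
        });
    }
    return m;
}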
Either way, updateOffset implementations won't be able to rely on adding new properties to the moment being updated anymore (e.g. these tests) -- but that's true for any of the immutable APIs so it's not really a surprise.
The current code in this PR is a middle ground between these two end states, because I was indecisive. You're right to flag the inconsistency as a major issue, so I should ideally bring the code into alignment with whichever path we'd prefer to maintain over the long term.
And if we're willing to remove updateOffset and just ship the TZ interface from #3134, then that makes it a bit cleaner to keep a mutable API internally while wrapping the prototype for immutability.
@butterflyhug why is the new API better for mutable internals and an immutable API?
I think I like the first option more. The problem is some methods are slower but that's what people want.
The problem is, having the mutable methods around is very flexible, allowing us to do a lot of questionable magic, like auto-batching mutations (record mutations and batch-execute them on read) or providing a mutable interface directly to speed things up.
Ideally we'd do one or both (your suggestions) and benchmark with existing code.
@ichernev I guess maybe it isn't any better. I thought #3134 had deprecated updateOffset with the new Timezone stuff, but upon further review I guess I remembered that wrong.
I'll aim to take another pass at this within the next week or two.
@butterflyhug I'm pretty sure if we introduce the new TZ API we need to get rid of updateOffset. Basically on every mutation we'll ask the library, and it will return the offset, which we need to store (though it will probably be the same as the old one). It just has a cleaner API.
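In sketch form, the flow described above might look like this (the zone-hook name is hypothetical -- as noted further down, the moment-timezone side of this interface had not been written yet):

// After every mutation, ask the timezone library for the correct offset
// and store it. `utcOffsetFor` is a made-up name for the proposed hook.
function afterMutation(m) {
    if (m._zone) {
        m._offset = m._zone.utcOffsetFor(m.valueOf()); // usually unchanged
    }
    return m;
}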
If we're to have half our methods mutable internally and half immutable, then we need very good guidelines on what needs to be mutable and where, and we should try to reduce that to a minimum. And possibly a package (directory) inside lib for all the mutable stuff. But again, I think this will cause a lot of pain in the future.
@ichernev Right, okay. That's what I initially thought. The Timezone interface would be better for mutable internals and immutable API. Moment-internal code would ask the Timezone for the correct offset and update the Moment instance when appropriate.
The problem with the updateOffset API is that we have external code (from Moment Timezone) that needs to edit a moment in the middle of the internal mutable code path. That's fundamentally at odds with building a clean internal=mutable / external=immutable boundary. Of course we can hack around the issue, but it's cleaner if we don't need to.
Friendly reminder: the moment-timezone code to make the TZ interface work has not yet been written :-)
@maggiepint it hasn't. But I fail to find a better interface for updating the offset, so more or less it will be what the RFC says. If you have another idea that will magically solve the issue at hand, I'm all ears.
Okay, I've updated this code and it should read a bit better now. For internal consistency, all prototype methods now have immutable implementations instead of using a Frozen Moment-style wrap function in the prototype definition. All existing tests are passing, and I've added a few additional tests for good measure.
I did not remove old deprecated functions here; that will be a quick and easy change for 3.0, but I think it belongs in a separate PR. This is also still using updateOffset instead of the new Timezone API. Otherwise I think I've addressed the bulk of the feedback on this PR -- let me know what you think!
I haven't written any new benchmarks to test things thoroughly, but based on existing resources, the performance impact seems to be relatively small:
Our existing benchmarks seem to be roughly the same speed as before. Obviously this is far from all-inclusive, but it's a promising result. Amusingly, the big outlier is that this PR's no-op clone is dramatically slower than before -- apparently our deprecation warnings are relatively slow.
Anecdotally, running the entire test suite is between 0.5 and 1 seconds slower (~4-8%) on my machine with this code than v2.18.1, although there's a lot of variance from one run to the next so this is a very rough estimate. I suspect this is a rough upper bound on the actual performance changes in our test suite (when running in Node), given that I've added a few more tests and deprecation warnings.
Oh, and this is rebased on top of last night's v2.18.1 release.
One more thing before I forget -- we might need to allow some APIs for plugins to hook into the constructor, because if they need to keep a piece of data with a moment, now that it's immutable it needs to be set in the constructor, but we don't have any hooks for that. Or maybe plugins don't need to keep data in the moment, or maybe the data should only be lazily computed (keeping the moment at least observably immutable).
I think it's probably best for plugins to lazily compute values. It's possible for someone to create a moment instance before loading all their plugins, so ideally plugins will need to handle the case where they didn't exist when the constructor ran anyway. I suspect they're more likely to do it correctly if we make that the normal case. :wink:
Also, I'm sure we'll still be lazily computing a few things in the core library unless we do a rigorous code audit specifically aimed at moving all lazy computations into the constructor. As long as nobody ever updates existing values (and all new values are derived solely from the moment's existing data), I think it'll be fine.
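One way a plugin can stay observably immutable is to cache derived values off to the side rather than on the instance -- a sketch (the plugin function name is made up; requires WeakMap, i.e. ES2015):

// Cache lazily-computed, derived-only values in a WeakMap keyed by the
// moment instance, so the instance itself is never written to.
var cache = new WeakMap();

function isoWeekLabel(m) {
    if (!cache.has(m)) {
        cache.set(m, m.format('GGGG-[W]WW')); // derived solely from m's data
    }
    return cache.get(m);
}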
That said: we/I should probably write a guide to help plugin authors to update their code, and this would be a good issue to highlight in that doc.
Also: the proposal for distinguishing immutable from mutable moments was to add a version property to the instance prototypes, to supplement the existing moment.version global. I'll try to take a pass at adding that and addressing your other comments later today.
Thanks for the thorough review, @ichernev!
I'd rather add a moment.isImmutable property. With version you have to split and compare and so on.
I didn't think about it too much, but we should be able to hack up some code that verifies that no moment properties changed after each function call (for tests only). And we need to account for lazily computed props and not fail if those change.
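Such a test-only check might look like this (a sketch; the lazy-prop whitelist is illustrative):

// Snapshot a moment's own properties, run the call, then assert nothing
// changed -- skipping props we know are lazily computed.
var LAZY_PROPS = ['_isValid']; // illustrative whitelist

function assertUnchanged(m, fn) {
    var before = {};
    Object.keys(m).forEach(function (k) { before[k] = m[k]; });
    fn();
    Object.keys(before).forEach(function (k) {
        if (LAZY_PROPS.indexOf(k) === -1 && m[k] !== before[k]) {
            throw new Error('property ' + k + ' changed');
        }
    });
}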
Just a note for anyone who comes across this PR:
the referenced #1754 was closed in favour of Luxon, which is immutable
Closing this. As others have pointed out, use Luxon if you want a mostly immutable API. Thanks.
Maybe we should add a note in the moment readme pointing to Luxon?
Mutability was the first pain point with moment.js for me, so it would be nice to know that a new, improved tool exists for this.
|
gharchive/pull-request
| 2016-10-30T02:22:06 |
2025-04-01T04:35:06.526425
|
{
"authors": [
"butterflyhug",
"ichernev",
"maggiepint",
"muescha"
],
"repo": "moment/moment",
"url": "https://github.com/moment/moment/pull/3548",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
262276659
|
#4024: Fix for year setter on leap years.
This PR addresses issue #4024.
Setting the year from February 29th of a leap year to a non-leap year returned March 1st in that year.
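In moment terms (the "after" line reflects my reading of the fix, which clamps to the end of February; output not verified against the build):

var m = moment('2016-02-29'); // leap year
m.year(2017);                 // 2017 is not a leap year
m.format('YYYY-MM-DD');
// before the fix: "2017-03-01"
// after the fix:  "2017-02-28" (clamped instead of overflowing)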
Merged in 36f29b3a3746c7ca3668ede7fda6ad1b6005a2e5
|
gharchive/pull-request
| 2017-10-03T01:18:22 |
2025-04-01T04:35:06.529269
|
{
"authors": [
"Joddsson",
"ichernev"
],
"repo": "moment/moment",
"url": "https://github.com/moment/moment/pull/4199",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1216667999
|
chore: Add Dockerfile and publish workflow
The momento-proxy Docker image was tested on an EC2 instance, and I was able to set a key/value pair.
[root@ip-172-31-53-115 bin]# telnet 0.0.0.0 11211
Trying 0.0.0.0...
Connected to 0.0.0.0.
Escape character is '^]'.
set erika 0 0 3
eri
STORED
Every time a PR is merged to main, a new Docker image is built and pushed to the gomomento/momento-proxy repo.
When users run docker run, they need to provide an env variable MOMENTO_AUTHENTICATION (Momento auth token).
Also updated momento-proxy.md to include how to run the momento-proxy Docker image.
@eaddingtonwhite @danielamiao
Updated the Dockerfile a bit so that we can mount a host machine's config directory to a container's config directory to be able to use a custom config file passed by users.
Also updated README to reflect this change.
Tested it on an EC2 instance and was able to use my custom config file to run.
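Putting the pieces above together, an invocation would look roughly like this (the config mount path inside the container is a placeholder -- see momento-proxy.md for the real one; the port matches the telnet session above):

# Hypothetical example combining the auth token env var and the config mount.
docker run \
  -e MOMENTO_AUTHENTICATION=<your-momento-auth-token> \
  -v /path/to/host/config:/path/to/container/config \
  -p 11211:11211 \
  gomomento/momento-proxy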
|
gharchive/pull-request
| 2022-04-27T01:37:33 |
2025-04-01T04:35:06.531540
|
{
"authors": [
"poppoerika"
],
"repo": "momentohq/pelikan",
"url": "https://github.com/momentohq/pelikan/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
663280694
|
split term: permanent neonatal diabetes mellitus into: generic and type 1
Term to split: permanent neonatal diabetes mellitus
Name for new term:
permanent neonatal diabetes mellitus
permanent neonatal diabetes mellitus 1 https://www.omim.org/entry/606176
List properties that should be moved to new term
See docs for splitting OMIMs that migrate generic to specific
generic should have:
name: diabetes mellitus, permanent neonatal
synonym: "PDMI" EXACT []
xref: OMIMPS:606176 {source="MONDO:equivalentTo"} ! diabetes mellitus, permanent neonatal
I checked, this is an actual split of
id: MONDO:0011643
name: permanent neonatal diabetes mellitus
|
gharchive/issue
| 2020-07-21T20:15:50 |
2025-04-01T04:35:06.536815
|
{
"authors": [
"cmungall"
],
"repo": "monarch-initiative/mondo",
"url": "https://github.com/monarch-initiative/mondo/issues/1803",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1232242616
|
Incorrect bbox
Hello and thanks for this nice lib!
I've found a path that gives me different results in the browser and with svg-path-bbox.
svg-path-bbox gives me a width of 302.40999999999997
import svgPathBbox from "svg-path-bbox"
const path =
"M436.79 417.31C435.89 415.01 438.59 410.91 439.69 409.31C439.89 408.91 439.69 408.41 439.39 408.31C438.09 407.41 435.19 404.71 435.09 401.61C434.99 399.71 437.39 396.41 438.49 394.91C439.19 393.71 437.69 393.11 437.09 392.31C435.69 390.81 433.99 388.31 434.09 385.71C434.19 382.71 436.69 379.51 440.79 377.01C440.89 376.91 440.99 376.71 440.89 376.61C439.79 374.91 439.29 374.01 439.19 371.41C438.99 367.91 440.79 364.81 444.19 362.41C446.59 360.81 449.89 359.81 452.79 359.21C452.79 359.01 452.79 349.41 452.79 348.21V348.01V340.21C452.79 340.01 452.69 339.91 452.49 339.91L209.89 340.21C209.69 340.21 209.59 340.31 209.59 340.51L209.69 896.1H452.49C452.69 896.1 452.79 896 452.79 895.8V420.31C449.79 421.41 446.59 422.11 444.19 422.11C440.79 422.01 437.69 419.51 436.79 417.31Z "
const bbox = svgPathBbox(path)
const width = bbox[2] - bbox[0]
console.log(width) // 302.40999999999997
But for the same path, Chrome gives me a width of 243.20001220703125:
document.body.innerHTML = `
<svg width="800" height='800'><path fill='black' d="M436.79 417.31C435.89 415.01 438.59 410.91 439.69 409.31C439.89 408.91 439.69 408.41 439.39 408.31C438.09 407.41 435.19 404.71 435.09 401.61C434.99 399.71 437.39 396.41 438.49 394.91C439.19 393.71 437.69 393.11 437.09 392.31C435.69 390.81 433.99 388.31 434.09 385.71C434.19 382.71 436.69 379.51 440.79 377.01C440.89 376.91 440.99 376.71 440.89 376.61C439.79 374.91 439.29 374.01 439.19 371.41C438.99 367.91 440.79 364.81 444.19 362.41C446.59 360.81 449.89 359.81 452.79 359.21C452.79 359.01 452.79 349.41 452.79 348.21V348.01V340.21C452.79 340.01 452.69 339.91 452.49 339.91L209.89 340.21C209.69 340.21 209.59 340.31 209.59 340.51L209.69 896.1H452.49C452.69 896.1 452.79 896 452.79 895.8V420.31C449.79 421.41 446.59 422.11 444.19 422.11C440.79 422.01 437.69 419.51 436.79 417.31Z"/></svg>
`
const bbox = document.querySelector('path').getBBox()
console.log(bbox.width) // 243.20001220703125
Thanks for the detailed report. It is fixed in v1.2.1.
Thanks for the quick fix @mondeja !!
|
gharchive/issue
| 2022-05-11T08:55:23 |
2025-04-01T04:35:06.539631
|
{
"authors": [
"mondeja",
"testerez"
],
"repo": "mondeja/svg-path-bbox",
"url": "https://github.com/mondeja/svg-path-bbox/issues/91",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2605882966
|
Extension stopped working after last update
Hello, this extension seems to have stopped working (both Win & Mac) after the last (0.2.6) update.
I mentioned this in a commit comment, but just in case you might have not seen it, I'm posting it here.
Thanks for your quick intervention. I've fixed the issue. Check version 0.2.7.
Perfect, works again!
@mondersky Aaah I'm sorry, I was too quick to report :D
It is KINDA working currently - after restarting, I see the colors again, but there's no reactivity. If I set a color it doesn't change until I restart VSCode.
after restarting, does it become reactive?
No, after restarting previous changes are visible. Any new changes require another restart currently.
Strangely I am unable to reproduce the issue on my vscode. Is the issue occurring on both mac & windows ?
Checked on Windows and there it's working fine. So yeah, it seems to be only a Mac issue. However v0.2.5 worked on Mac just fine.
I just uploaded v0.2.8; it contains the same code as v0.2.5. Can you check now?
Alright, I tried and it still keeps happening. I downgraded to version 0.2.5 and it also keeps happening. So the issue must be something else at this point, but the strange thing is that today when I installed the extension for the first time (when it was at version 0.2.5), everything was working fine on Mac.
So 2 options come to mind for me:
since I wasn't using the extension for long, it might've broken on v0.2.5 eventually if I kept using it for a few days, maybe after a relaunch or after the "restart extension host" command, etc. Not sure, can't confirm that
the other option might be that v0.2.6 might have left some "permanent damage" after the update? That's just speculation, since I don't know how this extension works, but the "Your Code installation is corrupt" message tells me it's not the usual approach
That's all I can report you, since I wasn't using the extension for very long. Hope it helps in some way.
Thank you for the detailed report! I'll try to find a Mac where the issue occurs and debug it there. In the meantime, you can run the command remove patch to clear your current patch, which will reset things and might provide us with some clues
It's not a Mac issue. Tabs are not reactive - I have had this problem on Windows for a long time. I noticed it 6-12 months ago and it is present in version 0.2.8 as well. This is one of those problems that has prevented me from using Tabs Color since the day I discovered this plugin. I've been waiting for months for a working version to finally be available. I wondered why no one else was complaining, but finally I came across a message showing I wasn't the only one with this problem.
I recorded a detailed video of this problem in VSCode on Windows.
Thanks for support. Hope this helps.
"remove patch" did not help. Still behaves the same, even after multiple program restarts.
I'm also having this issue after updating my VS Code to the latest version (1.95.0); I have TabsColor v0.2.8.
Same problem as the .gif IcyFoxe posted. I'm on Windows 10 64bit.
@IcyFoxe @GiurlaniDev I have uploaded a potential fix to the issue, can you update to 0.2.9 and tell me if it's still happening?
It works fine now! Thank you @mondersky 🥇
@mondersky
Currently on TabsColor v0.2.11, the problem is reoccurring. Sorry to be a bother; I'm not sure what's wrong with it now. It was previously working when I commented last week, but seemingly broke yet again.
Appreciate all the effort, the extension really is useful and a lifesaver (when it does function hehe).
Refer to the video attached for details.
https://github.com/user-attachments/assets/daac9e70-3d15-4ca5-a2c5-e0facad68e1f
https://github.com/user-attachments/assets/4000f661-76a7-4515-91fa-b6d3afa80f3a
@GiurlaniDev thanks for reporting this, and no problem -- quite the opposite, you are helping me make this extension more stable. Based on what I saw in the video, the problem seems to be related to unsaved file tabs. Can you confirm this?
Yes! It doesn't work on unsaved file tabs
@GiurlaniDev This issue doesn't seem to be related to the last updates, it looks like it has always been there. Follow this thread here #48
|
gharchive/issue
| 2024-10-22T16:03:26 |
2025-04-01T04:35:06.555591
|
{
"authors": [
"GiurlaniDev",
"IcyFoxe",
"mondersky",
"xxxxxxbox"
],
"repo": "mondersky/tabscolor-vscode",
"url": "https://github.com/mondersky/tabscolor-vscode/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1620292984
|
Play when downloading
https://user-images.githubusercontent.com/51048550/224535799-7ed96b0b-0a48-47c7-adda-363824c7ec5a.mp4
A torrent doesn't fill data from start to finish, so you can't play from start to finish until the download has completed
ExoPlayer does not handle incomplete or invalid media.
Sad
|
gharchive/issue
| 2023-03-12T09:22:39 |
2025-04-01T04:35:06.643530
|
{
"authors": [
"Archeix7",
"brentonv",
"moneytoo"
],
"repo": "moneytoo/Player",
"url": "https://github.com/moneytoo/Player/issues/447",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
391981537
|
Kill MongoDB instances before running tests and after each test
The tests in this PR will fail until #13 is merged. This PR will be updated to trigger a new build when the tests are able to be passed.
If an existing MongoDB server is running when the tests are run, then a bunch of errors will occur. Similarly, if a test fails without closing its server, then all of the subsequent tests will fail because the address is in use.
Using kill-mongodb, all MongoDB instances are killed before running any of the tests and after running each individual test. In many cases this should be a no-op, because the servers should already be closed by the tests themselves.
There are a few Travis CI builds where the mongos server fails to start up. After further debugging, the reason the server won't start is that the address is already in use. To the best of my knowledge, this is due to a mongos instance running before the tests were started and conflicting with the tests.
My best guess is that this test fails, the mongos server continues to run, the tests are rerun on the same server, and the address is already in use, causing more problems. After applying this fix, I was unable to reproduce the "address in use" error (which is fairly random), so I'm assuming this fixed it.
Example build demonstrating issue here
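The shape of the hooks being described, roughly (this sketch shells out to pkill as a stand-in and assumes a mocha-style runner; it is not kill-mongodb's actual API):

// Kill any mongod/mongos before the suite and after each test, mirroring
// the cleanup described above.
const { execSync } = require('child_process');

function killMongoInstances() {
    try {
        execSync('pkill -f "mongod|mongos"');
    } catch (e) {
        // pkill exits non-zero when nothing matched -- that's fine here
    }
}

before(killMongoInstances);    // once, before any tests run
afterEach(killMongoInstances); // after each individual test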
Thanks for the feedback!
The entire purpose of the kill-mongodb package is to kill MongoDB instances. This package is used in runner in the same manner here. I think it's a nice package that provides an easy way to kill off MongoDB instances, particularly for the case of tests.
Frequently while running the tests or with them running on Travis CI, I run into the problem of tests failing because of the address being in use (see here). The only way I can think of to solve this is by killing all MongoDB instances before running any tests.
I understand your concern about killing all MongoDB instances system-wide, but I don't think it will be an issue. In order for the tests to run successfully, the ports used in the tests must be open. In the majority of cases, I don't think the developer is going to have an existing MongoDB instance running when they are running the tests.
@mbroadst When you get some time, this might be the logical best next PR to review.
@daprahamian Brought up some good points and I wanted to hear your input.
|
gharchive/pull-request
| 2018-12-18T03:26:18 |
2025-04-01T04:35:06.654555
|
{
"authors": [
"addisonElliott"
],
"repo": "mongodb-js/mongodb-topology-manager",
"url": "https://github.com/mongodb-js/mongodb-topology-manager/pull/15",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
231294266
|
Add Projection $elemMatch support.
Projection's $elemMatch:
https://docs.mongodb.com/manual/reference/operator/projection/elemMatch/
isn't supported by Morphia 1.3.2 version.
Yeeah! We need this!
This method already exists in 1.3: org.mongodb.morphia.query.FieldEnd#elemMatch
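Usage looks roughly like this in 1.3 (a sketch based on the FieldEnd#elemMatch signature named above; the entity and field names are illustrative):

// Query-side elemMatch via FieldEnd#elemMatch in Morphia 1.3 -- matches
// documents whose "grades" array has an element satisfying the sub-query.
Query<Student> query = datastore.createQuery(Student.class);
query.field("grades").elemMatch(
        datastore.createQuery(Grade.class).filter("mean >", 70));
List<Student> matching = query.asList();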
|
gharchive/issue
| 2017-05-25T09:54:08 |
2025-04-01T04:35:06.678868
|
{
"authors": [
"evanchooly",
"sergey-alekseichenko-axon",
"vknysh-axon"
],
"repo": "mongodb/morphia",
"url": "https://github.com/mongodb/morphia/issues/1171",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
61947784
|
Does Morphia support mongo 3.0?
Hi all,
I want to use Morphia to convert a Java object to a Document (a new class in the mongo Java driver 3.0), or to convert a Document to a Java object. How can I do that?
I know Morphia can do this to/from DBObject, but how can I do it with Document? Thanks.
DBObject toDBObject(Object entity)
T fromDBObject(Class<T> entityClass, DBObject dbObject)
Morphia works with the 3.0 driver but not the new APIs (Document, MongoCollection, etc.). If you'd like to discuss this further, please use the mailing list.
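Until Morphia supports Document directly, one workable bridge is to convert at the boundary (a sketch relying on Document implementing Map and on the toDBObject/fromDBObject methods quoted above; entity and collection names are illustrative):

// Convert between org.bson.Document and the DBObject-based Morphia API.
import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;
import org.bson.Document;

Document doc = collection.find().first();
DBObject dbObject = new BasicDBObject(doc);               // Document is a Map
MyEntity entity = morphia.fromDBObject(MyEntity.class, dbObject);

DBObject back = morphia.toDBObject(entity);
Document asDocument = new Document(back.toMap());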
|
gharchive/issue
| 2015-03-16T02:35:53 |
2025-04-01T04:35:06.680551
|
{
"authors": [
"evanchooly",
"zhaozhiming"
],
"repo": "mongodb/morphia",
"url": "https://github.com/mongodb/morphia/issues/723",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
38562323
|
NoMethodError - undefined method `each' for "53d00529486a1bcd28000075":Moped::BSON::ObjectId:
Hi guys, I already checked my logic and I don't see anything wrong, so I believe it could be a bug.
My models are:
class Payment
  include Mongoid::Document
  include Mongoid::Timestamps
  include Mongoid::Paranoia

  belongs_to :account, index: true
  field :deletion_reason
end

class Account
  include Mongoid::Document
  include Mongoid::Timestamps
  include ActionView::Helpers::NumberHelper
  include PublicActivity::Common
  include ActionView::Helpers::DateHelper
  include Mongoid::Paranoia

  embeds_many :account_notes
  has_many :payments
  accepts_nested_attributes_for :payments
  accepts_nested_attributes_for :account_notes
end

class AccountNote < Note
  embedded_in :account

  TOPICS = ["billing", "client services issues", "class management info"]

  validates_presence_of :account
end

class Note
  include Mongoid::Document
  include Mongoid::Timestamps

  belongs_to :user

  field :content, type: String
  field :topic, type: String
  field :date, type: Date, default: Date.current

  validates_presence_of :user, :topic, :content
end
The error occurs when executing the following code:
def destroy
  @payment = @account.payments.find(params[:id])
  @payment.update_attributes(deletion_reason: params[:payment][:deletion_reason])
  @payment.create_activity key: 'payment.deleted', owner: current_user, recipient: @account, params: {deletion_reason: @payment.deletion_reason}
  if @payment.destroy
    # THE EXCEPTION IS THROWN HERE ################
    @account.account_notes.create(topic: "billing", content: "Payment Deleted due to:\n#{@payment.deletion_reason}", account: @account, user_id: current_user.id, date: @payment.deleted_at)
    #################################################
    @account.update_balances_on(@payment.deleted_at.to_date)
  else
    @payment.activities.last.destroy
  end
  respond_to do |format|
    format.js
  end
end
This is the stack trace:
NoMethodError - undefined method `each' for "53d00529486a1bcd28000075":Moped::BSON::ObjectId:
mongoid (3.1.6) lib/mongoid/atomic/modifiers.rb:121:in `add_operation'
mongoid (3.1.6) lib/mongoid/atomic/modifiers.rb:87:in `block in set'
mongoid (3.1.6) lib/mongoid/atomic/modifiers.rb:84:in `set'
mongoid (3.1.6) lib/mongoid/atomic.rb:364:in `generate_atomic_updates'
mongoid (3.1.6) lib/mongoid/atomic.rb:134:in `block in atomic_updates'
mongoid (3.1.6) lib/mongoid/atomic.rb:132:in `atomic_updates'
mongoid (3.1.6) lib/mongoid/persistence/operations.rb:145:in `init_updates'
mongoid (3.1.6) lib/mongoid/persistence/operations.rb:118:in `updates'
mongoid (3.1.6) lib/mongoid/persistence/operations/update.rb:46:in `block in persist'
mongoid (3.1.6) lib/mongoid/persistence/modification.rb:26:in `block (2 levels) in prepare'
activesupport (3.2.16) lib/active_support/callbacks.rb:414:in `_run__1888542753040091012__update__2932193531082085051__callbacks'
activesupport (3.2.16) lib/active_support/callbacks.rb:405:in `__run_callback'
activesupport (3.2.16) lib/active_support/callbacks.rb:385:in `_run_update_callbacks'
activesupport (3.2.16) lib/active_support/callbacks.rb:81:in `run_callbacks'
mongoid (3.1.6) lib/mongoid/callbacks.rb:130:in `run_callbacks'
mongoid (3.1.6) lib/mongoid/persistence/modification.rb:25:in `block in prepare'
activesupport (3.2.16) lib/active_support/callbacks.rb:414:in `_run__1888542753040091012__save__2932193531082085051__callbacks'
activesupport (3.2.16) lib/active_support/callbacks.rb:405:in `__run_callback'
activesupport (3.2.16) lib/active_support/callbacks.rb:385:in `_run_save_callbacks'
activesupport (3.2.16) lib/active_support/callbacks.rb:81:in `run_callbacks'
mongoid (3.1.6) lib/mongoid/callbacks.rb:130:in `run_callbacks'
mongoid (3.1.6) lib/mongoid/persistence/modification.rb:24:in `prepare'
mongoid (3.1.6) lib/mongoid/persistence/operations/update.rb:45:in `persist'
mongoid (3.1.6) lib/mongoid/persistence.rb:150:in `update'
mongoid (3.1.6) lib/mongoid/persistence.rb:87:in `save'
mongoid (3.1.6) lib/mongoid/relations/proxy.rb:143:in `method_missing'
mongoid (3.1.6) lib/mongoid/relations/auto_save.rb:82:in `block (3 levels) in autosave'
mongoid (3.1.6) lib/mongoid/relations/auto_save.rb:81:in `block (2 levels) in autosave'
mongoid (3.1.6) lib/mongoid/relations/auto_save.rb:35:in `__autosaving__'
mongoid (3.1.6) lib/mongoid/relations/auto_save.rb:78:in `block in autosave'
activesupport (3.2.16) lib/active_support/callbacks.rb:405:in `_run__2983067559894985491__save__2932193531082085051__callbacks'
activesupport (3.2.16) lib/active_support/callbacks.rb:405:in `__run_callback'
activesupport (3.2.16) lib/active_support/callbacks.rb:385:in `_run_save_callbacks'
activesupport (3.2.16) lib/active_support/callbacks.rb:81:in `run_callbacks'
mongoid (3.1.6) lib/mongoid/callbacks.rb:130:in `run_callbacks'
mongoid (3.1.6) lib/mongoid/persistence/insertion.rb:23:in `prepare'
mongoid (3.1.6) lib/mongoid/persistence/operations/embedded/insert.rb:32:in `persist'
mongoid (3.1.6) lib/mongoid/persistence.rb:56:in `insert'
mongoid (3.1.6) lib/mongoid/persistence.rb:85:in `save'
mongoid (3.1.6) lib/mongoid/relations/many.rb:44:in `create'
app/controllers/accounts/payments_controller.rb:56:in `destroy'
actionpack (3.2.16) lib/action_controller/metal/implicit_render.rb:4:in `send_action'
actionpack (3.2.16) lib/abstract_controller/base.rb:167:in `process_action'
actionpack (3.2.16) lib/action_controller/metal/rendering.rb:10:in `process_action'
actionpack (3.2.16) lib/abstract_controller/callbacks.rb:18:in `block in process_action'
activesupport (3.2.16) lib/active_support/callbacks.rb:469:in `_run__2531881480130901124__process_action__1173113247620882588__callbacks'
activesupport (3.2.16) lib/active_support/callbacks.rb:405:in `__run_callback'
activesupport (3.2.16) lib/active_support/callbacks.rb:385:in `_run_process_action_callbacks'
activesupport (3.2.16) lib/active_support/callbacks.rb:81:in `run_callbacks'
actionpack (3.2.16) lib/abstract_controller/callbacks.rb:17:in `process_action'
actionpack (3.2.16) lib/action_controller/metal/rescue.rb:29:in `process_action'
actionpack (3.2.16) lib/action_controller/metal/instrumentation.rb:30:in `block in process_action'
activesupport (3.2.16) lib/active_support/notifications.rb:123:in `block in instrument'
activesupport (3.2.16) lib/active_support/notifications/instrumenter.rb:20:in `instrument'
activesupport (3.2.16) lib/active_support/notifications.rb:123:in `instrument'
actionpack (3.2.16) lib/action_controller/metal/instrumentation.rb:29:in `process_action'
actionpack (3.2.16) lib/action_controller/metal/params_wrapper.rb:207:in `process_action'
newrelic_rpm (3.7.2.192) lib/new_relic/agent/instrumentation/rails3/action_controller.rb:38:in `block in process_action'
newrelic_rpm (3.7.2.192) lib/new_relic/agent/instrumentation/controller_instrumentation.rb:339:in `perform_action_with_newrelic_trace'
newrelic_rpm (3.7.2.192) lib/new_relic/agent/instrumentation/rails3/action_controller.rb:37:in `process_action'
actionpack (3.2.16) lib/abstract_controller/base.rb:121:in `process'
actionpack (3.2.16) lib/abstract_controller/rendering.rb:45:in `process'
actionpack (3.2.16) lib/action_controller/metal.rb:203:in `dispatch'
actionpack (3.2.16) lib/action_controller/metal/rack_delegation.rb:14:in `dispatch'
actionpack (3.2.16) lib/action_controller/metal.rb:246:in `block in action'
actionpack (3.2.16) lib/action_dispatch/routing/route_set.rb:73:in `dispatch'
actionpack (3.2.16) lib/action_dispatch/routing/route_set.rb:36:in `call'
journey (1.0.4) lib/journey/router.rb:68:in `block in call'
journey (1.0.4) lib/journey/router.rb:56:in `call'
actionpack (3.2.16) lib/action_dispatch/routing/route_set.rb:608:in `call'
newrelic_rpm (3.7.2.192) lib/new_relic/rack/error_collector.rb:55:in `call'
newrelic_rpm (3.7.2.192) lib/new_relic/rack/agent_hooks.rb:32:in `call'
newrelic_rpm (3.7.2.192) lib/new_relic/rack/browser_monitoring.rb:27:in `call'
newrelic_rpm (3.7.2.192) lib/new_relic/rack/developer_mode.rb:45:in `call'
mongoid (3.1.6) lib/rack/mongoid/middleware/identity_map.rb:34:in `block in call'
mongoid (3.1.6) lib/mongoid/unit_of_work.rb:39:in `unit_of_work'
mongoid (3.1.6) lib/rack/mongoid/middleware/identity_map.rb:34:in `call'
warden (1.2.1) lib/warden/manager.rb:35:in `block in call'
warden (1.2.1) lib/warden/manager.rb:34:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/best_standards_support.rb:17:in `call'
rack (1.4.5) lib/rack/etag.rb:23:in `call'
rack (1.4.5) lib/rack/conditionalget.rb:35:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/head.rb:14:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/params_parser.rb:21:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/flash.rb:242:in `call'
rack (1.4.5) lib/rack/session/abstract/id.rb:210:in `context'
rack (1.4.5) lib/rack/session/abstract/id.rb:205:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/cookies.rb:341:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/callbacks.rb:28:in `block in call'
activesupport (3.2.16) lib/active_support/callbacks.rb:405:in `_run__2010804792476588076__call__2932193531082085051__callbacks'
activesupport (3.2.16) lib/active_support/callbacks.rb:405:in `__run_callback'
activesupport (3.2.16) lib/active_support/callbacks.rb:385:in `_run_call_callbacks'
activesupport (3.2.16) lib/active_support/callbacks.rb:81:in `run_callbacks'
actionpack (3.2.16) lib/action_dispatch/middleware/callbacks.rb:27:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/reloader.rb:65:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/remote_ip.rb:31:in `call'
better_errors (0.9.0) lib/better_errors/middleware.rb:84:in `protected_app_call'
better_errors (0.9.0) lib/better_errors/middleware.rb:79:in `better_errors_call'
better_errors (0.9.0) lib/better_errors/middleware.rb:56:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/debug_exceptions.rb:16:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/show_exceptions.rb:56:in `call'
railties (3.2.16) lib/rails/rack/logger.rb:32:in `call_app'
railties (3.2.16) lib/rails/rack/logger.rb:16:in `block in call'
activesupport (3.2.16) lib/active_support/tagged_logging.rb:22:in `tagged'
railties (3.2.16) lib/rails/rack/logger.rb:16:in `call'
quiet_assets (1.0.2) lib/quiet_assets.rb:18:in `call_with_quiet_assets'
actionpack (3.2.16) lib/action_dispatch/middleware/request_id.rb:22:in `call'
rack (1.4.5) lib/rack/methodoverride.rb:21:in `call'
rack (1.4.5) lib/rack/runtime.rb:17:in `call'
activesupport (3.2.16) lib/active_support/cache/strategy/local_cache.rb:72:in `call'
rack (1.4.5) lib/rack/lock.rb:15:in `call'
actionpack (3.2.16) lib/action_dispatch/middleware/static.rb:63:in `call'
railties (3.2.16) lib/rails/engine.rb:484:in `call'
railties (3.2.16) lib/rails/application.rb:231:in `call'
rack (1.4.5) lib/rack/content_length.rb:14:in `call'
railties (3.2.16) lib/rails/rack/log_tailer.rb:17:in `call'
thin (1.5.1) lib/thin/connection.rb:81:in `block in pre_process'
thin (1.5.1) lib/thin/connection.rb:79:in `pre_process'
thin (1.5.1) lib/thin/connection.rb:54:in `process'
thin (1.5.1) lib/thin/connection.rb:39:in `receive_data'
eventmachine (1.0.3) lib/eventmachine.rb:187:in `run'
thin (1.5.1) lib/thin/backends/base.rb:63:in `start'
thin (1.5.1) lib/thin/server.rb:159:in `start'
rack (1.4.5) lib/rack/handler/thin.rb:13:in `run'
rack (1.4.5) lib/rack/server.rb:268:in `start'
railties (3.2.16) lib/rails/commands/server.rb:70:in `start'
railties (3.2.16) lib/rails/commands.rb:55:in `block in <top (required)>'
railties (3.2.16) lib/rails/commands.rb:50:in `<top (required)>'
script/rails:6:in `<main>'
script/rails:0:in `<main>'
By the way, the id in the exception message, "53d00529486a1bcd28000075", is the id of the AccountNote object that got created.
Please let me know if I can provide more information.
I also had this problem; a more specific warning would have been useful. If not for this issue, god knows how long I would have been going around in circles.
I see similar errors, though I see them with embedded models and timestamps.
I was able to fix mine by removing one of my relations.
|
gharchive/issue
| 2014-07-23T19:22:41 |
2025-04-01T04:35:06.699277
|
{
"authors": [
"ajsharp",
"andresilveira",
"msaspence"
],
"repo": "mongoid/moped",
"url": "https://github.com/mongoid/moped/issues/301",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
99209391
|
fix collection method_missing delegation
I stumbled on this bug by accident when I attempted to call #all on a Pagination::Collection object and got:
undefined method `all' for nil:NilClass
I am not entirely sure this is what you'd want to do, but delegating missing methods to an unset instance variable is probably even less desirable/expected than delegating to the collection's paginator.
Another possibility would be to remove the method_missing message-passing altogether, as you already have a pretty complete enumeration of methods being forwarded to the paginator in your def_delegators line.
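For comparison, a guarded delegation would look something like this (a sketch, not plucky's actual code):

# method_missing delegation guarded against an unset @paginator,
# with the matching respond_to_missing?.
def method_missing(name, *args, &block)
  if @paginator && @paginator.respond_to?(name)
    @paginator.send(name, *args, &block)
  else
    super
  end
end

def respond_to_missing?(name, include_private = false)
  (@paginator && @paginator.respond_to?(name)) || super
end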
Looks like a REALLY OLD COPY/PASTE error. Haha.
|
gharchive/pull-request
| 2015-08-05T13:53:34 |
2025-04-01T04:35:06.704669
|
{
"authors": [
"iande",
"jnunemaker"
],
"repo": "mongomapper/plucky",
"url": "https://github.com/mongomapper/plucky/pull/42",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
181474471
|
collection: name func params same as pymongo3
Rename update_one, update_many, and replace_one function parameters to be consistent with pymongo3
Update tests to use keyword args when calling these functions
thanks
|
gharchive/pull-request
| 2016-10-06T17:16:08 |
2025-04-01T04:35:06.705938
|
{
"authors": [
"jordan-heemskerk",
"srinivasreddy"
],
"repo": "mongomock/mongomock",
"url": "https://github.com/mongomock/mongomock/pull/271",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
1348145904
|
⚠️ Quirk Club Argentina (API endpoint) has degraded performance
In 45424fd, Quirk Club Argentina (API endpoint) (https://us-central1-quirkclub-dev.cloudfunctions.net/api/check/api) experienced degraded performance:
HTTP code: 200
Response time: 8402 ms
Resolved: Quirk Club Argentina (API endpoint) performance has improved in 03d8317.
|
gharchive/issue
| 2022-08-23T15:32:47 |
2025-04-01T04:35:06.735556
|
{
"authors": [
"monitoring-apps"
],
"repo": "monitoring-apps/qc.app",
"url": "https://github.com/monitoring-apps/qc.app/issues/204",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
516440778
|
Follow-up to the SOS thread; I closed the previous one by clicking the wrong thing!
If just the first approach can be implemented, that's enough. I'd still have to trouble you to write me a code snippet and tell me which class to change; I'm not familiar with SSL certificates or netty's logic, so I don't know how to modify it. Thanks a lot!
@monkeyWie Please advise, I really don't know where to make the change!
I tried it and found it's not that easy to implement; it still has to be done together with a certificate.
I have the certificate and private key that need changing. How do I make the change? Please walk me through it, this is urgent!
@wayddmldbzz This requirement is really odd. Why not do it by installing a root certificate?
One more problem: if the target server checks the Host request header, this can't be implemented either.
That's fine; the company's internal system won't validate the request header, and the calls are purely on the internal network.
@monkeyWie The main path is still installing a root certificate, but some phones on newer OS versions won't accept it, so to avoid blocking usage we also have to support this; otherwise we'll definitely get flak.
@wayddmldbzz One question: say you rewrite aaa.test.ooo.com to bbb.test.ooo.com -- does your bbb.test.ooo.com certificate also cover the aaa.test.ooo.com domain?
It does; aaa.test.ooo.com and bbb.test.ooo.com share the same certificate.
I've added an interceptor, give it a try:
new HttpProxyServer()
    .tunnelIntercept(requestProto -> {
        if (requestProto.getHost().equals("aaa.test.ooo.com")) {
            requestProto.setHost("bbb.test.ooo.com");
        }
    })
    .start(9999);
Thank you so much, I'll try it now. Does this interceptor require importing a certificate?
No certificate installation is needed.
Buddy, although the host was changed, the call still goes to the pre-change service.
Er, are you using nginx as a reverse proxy?
The target server is an nginx forward proxy.
At the breakpoint the host has already been replaced. Does the data going into the tunnel interceptor below look normal? The R:127.0.0.1:58068 in the channel is my local Postman, i.e. the client. Does this data look right to you?
The data is fine. It's probably still an issue on the nginx side: nginx routes to the corresponding service based on the Host request header, so even though the connection goes to bbb.test.ooo.com, the Host header nginx receives is still aaa.test.ooo.com.
Then can this interceptor modify the request header?
It can't. Without installing a root certificate it's not possible.
So the client must install the certificate?
Yes. The constraints of your scenario are just too strict.
Could nginx be changed to forward based on the connection address? If it can, I'll push for the nginx change, and combined with your change that should do it, right?
Buddy, there's hope: I just looked into it, and supposedly the request headers can be modified without plaintext, since HTTPS only encrypts the request body.
@wayddmldbzz HTTPS encrypts all of the traffic.
@monkeyWie I was misled; changing that didn't work either. In the end I used your new forwarding change to route directly to my own nginx, where I configured the certificate and set the request header, then forwarded on to the target nginx. That solved it. Anyway, thanks for saving my life... much appreciated!
Glad it's working 😃
Can the tunnelIntercept interceptor handle HTTPS?
Yes, but it can only change the target address.
|
gharchive/issue
| 2019-11-02T02:21:59 |
2025-04-01T04:35:06.752354
|
{
"authors": [
"liming1985",
"monkeyWie",
"wayddmldbzz"
],
"repo": "monkeyWie/proxyee",
"url": "https://github.com/monkeyWie/proxyee/issues/70",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1662382468
|
[BUG] SkiaSharp 2.88.0 PackageReference breaks csproj pack logic
Description
When referencing the SkiaSharp package (2.88.0) in a csproj where NuGet pack is enabled using
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
The following errors are generated
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\skiasharp.nativeassets.win32\2.88.0\runtimes\win-x86\native\libSkiaSharp.dll' is not added because the package already contains file 'content\libSkiaSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\skiasharp.nativeassets.win32\2.88.0\runtimes\win-x86\native\libSkiaSharp.dll' is not added because the package already contains file 'contentFiles\any\net472\libSkiaSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\skiasharp.nativeassets.win32\2.88.0\runtimes\win-x64\native\libSkiaSharp.dll' is not added because the package already contains file 'content\libSkiaSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\skiasharp.nativeassets.win32\2.88.0\runtimes\win-x64\native\libSkiaSharp.dll' is not added because the package already contains file 'contentFiles\any\net472\libSkiaSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\skiasharp.nativeassets.win32\2.88.0\runtimes\win-x64\native\libSkiaSharp.dll' is not added because the package already contains file 'content\libSkiaSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\skiasharp.nativeassets.win32\2.88.0\runtimes\win-x64\native\libSkiaSharp.dll' is not added because the package already contains file 'contentFiles\any\net472\libSkiaSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\skiasharp.nativeassets.win32\2.88.0\runtimes\win-arm64\native\libSkiaSharp.dll' is not added because the package already contains file 'content\libSkiaSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\skiasharp.nativeassets.win32\2.88.0\runtimes\win-arm64\native\libSkiaSharp.dll' is not added because the package already contains file 'contentFiles\any\net472\libSkiaSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\skiasharp.nativeassets.win32\2.88.0\runtimes\win-arm64\native\libSkiaSharp.dll' is not added because the package already contains file 'content\libSkiaSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\skiasharp.nativeassets.win32\2.88.0\runtimes\win-arm64\native\libSkiaSharp.dll' is not added because the package already contains file 'contentFiles\any\net472\libSkiaSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\harfbuzzsharp.nativeassets.win32\2.8.2.3\runtimes\win-x86\native\libHarfBuzzSharp.dll' is not added because the package already contains file 'content\libHarfBuzzSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\harfbuzzsharp.nativeassets.win32\2.8.2.3\runtimes\win-x86\native\libHarfBuzzSharp.dll' is not added because the package already contains file 'contentFiles\any\net472\libHarfBuzzSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\harfbuzzsharp.nativeassets.win32\2.8.2.3\runtimes\win-x64\native\libHarfBuzzSharp.dll' is not added because the package already contains file 'content\libHarfBuzzSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\harfbuzzsharp.nativeassets.win32\2.8.2.3\runtimes\win-x64\native\libHarfBuzzSharp.dll' is not added because the package already contains file 'contentFiles\any\net472\libHarfBuzzSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\harfbuzzsharp.nativeassets.win32\2.8.2.3\runtimes\win-x64\native\libHarfBuzzSharp.dll' is not added because the package already contains file 'content\libHarfBuzzSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\harfbuzzsharp.nativeassets.win32\2.8.2.3\runtimes\win-x64\native\libHarfBuzzSharp.dll' is not added because the package already contains file 'contentFiles\any\net472\libHarfBuzzSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\harfbuzzsharp.nativeassets.win32\2.8.2.3\runtimes\win-arm64\native\libHarfBuzzSharp.dll' is not added because the package already contains file 'content\libHarfBuzzSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\harfbuzzsharp.nativeassets.win32\2.8.2.3\runtimes\win-arm64\native\libHarfBuzzSharp.dll' is not added because the package already contains file 'contentFiles\any\net472\libHarfBuzzSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\harfbuzzsharp.nativeassets.win32\2.8.2.3\runtimes\win-arm64\native\libHarfBuzzSharp.dll' is not added because the package already contains file 'content\libHarfBuzzSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5118: File 'C:\Users\username\.nuget\packages\harfbuzzsharp.nativeassets.win32\2.8.2.3\runtimes\win-arm64\native\libHarfBuzzSharp.dll' is not added because the package already contains file 'contentFiles\any\net472\libHarfBuzzSharp.dll'
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5100: The assembly 'content\libSkiaSharp.dll' is not inside the 'lib' folder and hence it won't be added as a reference when the package is installed into a project. Move it into the 'lib' folder if it needs to be referenced.
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5100: The assembly 'contentFiles\any\net472\libSkiaSharp.dll' is not inside the 'lib' folder and hence it won't be added as a reference when the package is installed into a project. Move it into the 'lib' folder if it needs to be referenced.
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5100: The assembly 'content\libHarfBuzzSharp.dll' is not inside the 'lib' folder and hence it won't be added as a reference when the package is installed into a project. Move it into the 'lib' folder if it needs to be referenced.
C:\Program Files\dotnet\sdk\6.0.407\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5100: The assembly 'contentFiles\any\net472\libHarfBuzzSharp.dll' is not inside the 'lib' folder and hence it won't be added as a reference when the package is installed into a project. Move it into the 'lib' folder if it needs to be referenced.
0 Warning(s)
24 Error(s)
Code
<ItemGroup>
  <PackageReference Include="HarfBuzzSharp" />
  <PackageReference Include="ReactiveUI" />
  <PackageReference Include="SkiaSharp" />
  <PackageReference Include="Topten.RichTextKit" />
</ItemGroup>
<PropertyGroup>
  <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
</PropertyGroup>
Workaround
Adding these properties to my csproj ensures the native files are not added as content:
<PropertyGroup>
  <!-- Workaround to ensure the native files of SkiaSharp and HarfBuzz are not included in the package -->
  <ShouldIncludeNativeSkiaSharp>False</ShouldIncludeNativeSkiaSharp>
  <ShouldIncludeNativeHarfBuzzSharp>False</ShouldIncludeNativeHarfBuzzSharp>
</PropertyGroup>
Expected Behavior
I expect a NuGet package to be generated containing my assembly and a dependency on the SkiaSharp package.
The assemblies that are part of the SkiaSharp and HarfBuzz packages should not be considered input files to pack.
Actual Behavior
MSBuild fails with the errors shown above. The SkiaSharp (and HarfBuzz) win32 package is injecting the native assemblies as content, which is picked up by MSBuild.
Possible solution
The build targets included in the package contain logic to include the native libraries as content.
One solution could be to use a None item instead of a Content item.
Another solution could be to also set the Pack attribute, to ensure the content files are not considered files to pack:
<!-- include everything -->
<Content Include="@(_NativeSkiaSharpFile)">
  <Link>%(Dir)%(Filename)%(Extension)</Link>
  <Visible>False</Visible>
  <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  <Pack>False</Pack>
</Content>
Basic information
Using Visual Studio 2022 17.4 and .NET 6 SDK version 6.0.407
I see similar behavior, with the native SkiaSharp libraries being included throughout all transitive dependencies.
Ex:
AppA -> LibA -> LibB -> SkiaSharp
AppA, LibA and LibB will all embed the native dependencies in the nupkg, even when AppA and LibA are pulling it in transitively
I also just stumbled upon this issue.
I have added the following code to Directory.Build.targets, but would prefer to have the Pack property set in the nuget's .props file, as is outlined in the bug report as a possible solution.
<ItemGroup>
  <Content Update="@(_NativeSkiaSharpFile)">
    <Pack>False</Pack>
  </Content>
</ItemGroup>
Setting ShouldIncludeNativeSkiaSharp to False is not an option in my case because I need to run my application from the bin folder.
This problem still occurs when building NuGet packages. I'm using version 2.88.6.
Sometimes, but not always, an error occurs during compilation when the compiler tries to copy the DLLs into the bin directory.
We also have to start our application from the bin folder, but we also pack NuGet packages for some assemblies.
Is there a satisfying solution for this?
|
gharchive/issue
| 2023-04-11T12:33:33 |
2025-04-01T04:35:06.778337
|
{
"authors": [
"Nico-1987",
"SebastianSchumann",
"SimonWeinbergerEnscape",
"asidorowicz"
],
"repo": "mono/SkiaSharp",
"url": "https://github.com/mono/SkiaSharp/issues/2439",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
937147410
|
Implement cluster lifecycle management
(originally created by @q3k in T933)
Implementation of the cluster lifecycle part of our lifecycle design doc:
node/cluster state machine
multi-node clusters (replace golden ticket mechanism)
Single-node code is this stack: https://review.monogon.dev/197
The stack is still being reviewed (long tail), should land today.
Multi-node Register code is tracked in https://github.com/monogon-dev/monogon/issues/74
After this, we have to implement the Join flow and this issue is effectively resolved.
We have registration for multiple nodes. But they can't yet start any services.
That's currently blocked by a refactor of the cluster roleserver to do startups in an on-demand fashion, basically relying on Event Value statuses/watchers from all involved components. This is in my local tree, and I'm finishing that off this week.
Roleserver implementation is making progress. Currently working on a simplistic implementation based around bare channels and perhaps Event Values.
This should be the last thing required for multi-node clusters and effectively the lifecycle part of the design doc.
Update: https://review.monogon.dev/c/monogon/+/522
^ The above is the first change required to continue the roleserver refactor and address the startup issues.
Notably, it creates a ConsensusMember node role that nodes carry to be able to start etcd/consensus. This makes newly Registered nodes aware of the fact that they have to run consensus (or not).
However, they still don't run it successfully, etcd gets stuck on new nodes not being able to resolve existing nodes, and the first node cannot resolve new nodes. This is due to startup sequencing of hostsfile/updater that we aim to fix with roleserver refactoring.
https://review.monogon.dev/c/monogon/+/532
It's merged. Follow up for one last implementation detail: https://github.com/monogon-dev/monogon/issues/112
|
gharchive/issue
| 2021-07-05T14:27:50 |
2025-04-01T04:35:06.841271
|
{
"authors": [
"leoluk",
"msgctl",
"q3k"
],
"repo": "monogon-dev/monogon",
"url": "https://github.com/monogon-dev/monogon/issues/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
177654405
|
Broken: get_flash_videos --update
sudo /usr/bin/get_flash_videos --update
Unable to retrieve version data: 404 Not Found
Is this a bug?
The Google Code site has been removed. The --update feature should be removed; updates should be done via package management or by downloading and installing. The --update command will not install additional dependencies that are needed, whereas package management does.
@njtaylor
The google code site has been removed. The --update feature should be removed
Who can fix that?
As I stated in #201:
the documentation needs to be updated accordingly. Since we are all volunteering our work on get_flash_videos, someone needs to volunteer to do these things. If you would like to help improve this site's wiki or documentation, please give it a go.
@pcwalden This is on code basis.
The --update option has been removed. It's not complete, as some code is left behind: there was the Hulu plugin and the plugin download, so some work is left. The Hulu plugin site has also been removed.
@njtaylor Thank you
@njtaylor Are you working on this?
No, not at this time; I've just done enough to remove the --update option.
|
gharchive/issue
| 2016-09-18T14:18:51 |
2025-04-01T04:35:06.846268
|
{
"authors": [
"flyingzebras",
"karjonas",
"njtaylor",
"pcwalden"
],
"repo": "monsieurvideo/get-flash-videos",
"url": "https://github.com/monsieurvideo/get-flash-videos/issues/204",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1511522564
|
[-100] Login expired
Miyoushe (miHoYo BBS) recently changed its cookie format, and the original login method no longer seems to work. Using a cookie generated from my own login, the bot shows "[-100] login expired".
How I obtain the cookie: entering "document.cookie" in the browser console.
A sample of the JSON file is as follows:
{
  "account_id": "22******41",
  "cookie_token_v2": "v2_LMu2G8p0cvEA8dy-ZNOwu1te******yjZzBniIFH1Di",
  "ltmid_v2": "v2_WCmvGJFxwusarpbY2jT*******YE896FLaGX3VxRAw1uzahjN3A",
  "mid": "03tso*****_mhy"
}
I also tried replacing "ltmid_v2" with "stoken" or "stoken_v2", but still could not log in.
So I'd like to ask how you fill in the relevant field values, or whether Miyoushe simply can't be logged into with a cookie anymore?
ltmid_v2 is not stoken, nor is it a field the plugin requires, so it can be removed; as written, your configuration is missing stoken.
On my side I'm still using the v1 cookie fields for now. There may be a problem in my code; if your cookie_token_v2 hasn't expired, it should work as well.
Try these steps and see whether you can get a cookie containing login_ticket:
Open a new incognito tab; perform all of the following steps inside this incognito tab
Open https://www.miyoushe.com/ys/ and log in
Open http://user.mihoyo.com/ and log in
Enter document.cookie in the console
If you can get a login_ticket, add it to the cookie.json file and restart to try again
Try these steps and see whether you can get a cookie containing login_ticket:
Open a new incognito tab; perform all of the following steps inside this incognito tab
Open https://www.miyoushe.com/ys/ and log in
Open http://user.mihoyo.com/ and log in
Enter document.cookie in the console
If you can get a login_ticket, add it to the cookie.json file and restart to try again
The bot returns "Missing stoken, cannot automatically refresh the expired cookie!"
My steps were as follows.
1. Using the Edge browser, I opened a new InPrivate tab
2. I opened https://www.miyoushe.com/ys/ and logged in; entering document.cookie in the console returned no login_ticket field
3. After also logging in at http://user.mihoyo.com/, entering document.cookie in the console returned
UM_distinctid=1******7; _ga=G******3; _MHYUUID=******; DEVICEFP_SEED_ID=*******; DEVICEFP_SEED_TIME=******; DEVICEFP=*******; login_uid=2***1; login_ticket=EyT6y*******x5aeYcm
Then I copied the login_ticket value into the cookie.json file, whose content is as follows
{
  "cookie_token": "EyT6y*******x5aeYcm"
}
4. After saving the file and restarting the bot,
I entered
/原神计算 香菱
and the bot returned
"Missing stoken, cannot automatically refresh the expired cookie!"
😥 So it still failed
Uh, how did your cookie.json end up with that content after you added login_ticket?
I meant for you to add it in; the file content should look like this instead:
{
  "account_id": "22******41",
  "cookie_token_v2": "v2_LMu2G8p0cvEA8dy-ZNOwu1te******yjZzBniIFH1Di",
  "ltmid_v2": "v2_WCmvGJFxwusarpbY2jT*******YE896FLaGX3VxRAw1uzahjN3A",
  "mid": "03tso*****_mhy",
  "login_ticket": "xxx"
}
This login_ticket expires fairly quickly. Re-obtain it, fill it in the way I described, save, and restart; as long as you can still get a login_ticket it should be fine.
Delete the cookie_token_v2 and ltmid_v2 from your previous file; keeping just the Miyoushe account ID, the mid, and the login_ticket is enough.
I went through the whole process again using your method and replaced the cookie.json content with the following
{
  "account_id": "2******1",
  "mid": "03t*****_mhy",
  "login_ticket": "iMgEK*******RTohBSM0pOyZvBi"
}
After saving and restarting, the bot returned
[-100] Please log in first
So I tried changing the source code as follows (around line 60):
# read
if not cookie:
    if not cookie_cfg:
        return {"error": "养成计算器需要米游社 Cookie!"}
    else:
        check_res = await query_mys("校验", cookie_cfg, {"game_biz": "hk4e_cn"})
        if not check_res.get("error"):
            # only return when the check succeeds; otherwise try to refresh
            logger.info("检验成功,开始绘制")
            # return cookie_cfg
The change is small: I just commented out the return after
if not check_res.get("error"):
and replaced it with a logger.info call.
After that the bot could return images normally. The bot used to return the original image, though, and this time it seems to return a thumbnail that looks smaller than before (probably a Tencent issue?). Still, being able to return images at all means the problem is basically solved!
Thank you very much for your recent troubleshooting guidance (❤ ω ❤)
In the old version, the API used to check whether a cookie had expired probably had a problem, so expired cookies couldn't refresh themselves. Your modification runs the cookie-refresh flow on every request, which I don't really recommend.
Updating to version 0.2.2 should resolve these problems now; feel free to reopen this issue if anything comes up.
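For reference, a minimal sketch of the check-then-refresh flow described above, which only walks the refresh path when validation actually fails (query_mys is the helper from the snippet above; refresh_cookie is a hypothetical stand-in for the plugin's refresh logic):
async def get_valid_cookie(cookie_cfg: dict) -> dict:
    # Validate the stored cookie first.
    check_res = await query_mys("校验", cookie_cfg, {"game_biz": "hk4e_cn"})
    if not check_res.get("error"):
        # Still valid: return as-is, no refresh round-trip on every request.
        return cookie_cfg
    # Expired: use stoken / login_ticket once to obtain a fresh cookie_token.
    return await refresh_cookie(cookie_cfg)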
|
gharchive/issue
| 2022-12-27T07:32:48 |
2025-04-01T04:35:06.859812
|
{
"authors": [
"KishibeRohan1979",
"monsterxcn"
],
"repo": "monsterxcn/nonebot-plugin-gsmaterial",
"url": "https://github.com/monsterxcn/nonebot-plugin-gsmaterial/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1973543011
|
Write a script to automatically update miao-plugin's rating rules
Feature request
Reason for the request: manual updates may not be timely
Expected behavior: keep the miao rating rules up to date
Implementation plan: I wrote some Python code
The implementation code is as follows
import asyncio
import aiohttp
import re
import json
from collections import OrderedDict

miao_artis_mark_url = "https://gitee.com/yoimiya-kokomi/miao-plugin/raw/master/resources/meta-gs/artifact/artis-mark.js"
github_url = "https://api.github.com/repos/yoimiya-kokomi/miao-plugin/contents/resources/meta-gs/character/"
gitee_url = "https://gitee.com/yoimiya-kokomi/miao-plugin/raw/master/resources/meta-gs/character/"

trans_table = {
    "hp": "生命值百分比",
    "atk": "攻击力百分比",
    "def": "防御力百分比",
    "cpct": "暴击率",
    "cdmg": "暴击伤害",
    "mastery": "元素精通",
    "dmg": "元素伤害加成",
    "phy": "物理伤害加成",
    "heal": "治疗加成",
    "recharge": "元素充能效率",
}

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ",
    "Authorization": "Bearer your own fine-grained personal access token",
}
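# Merge special-build rules (keys containing "-", i.e. character-build pairs)
# into the ordered rule dict right after their base character's key.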
def insert_keys(original_data, new_data):
    ordered_data = OrderedDict(original_data)
    for key, value in new_data.items():
        if '-' in key:
            main_key, _ = key.split('-')
            keys = list(ordered_data.keys())
            if main_key in keys:
                position = keys.index(main_key)
                keys.insert(position + 1, key)
                ordered_data = OrderedDict((k, ordered_data.get(k, value)) for k in keys)
    return ordered_data
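# Download a rules file and parse lines matching `pattern` into
# {character: {attribute: weight}}, translating attribute keys via
# trans_table and dropping zero weights.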
async def get_data(session, url, pattern, headers=None):
    async with session.get(url, headers=headers) as response:
        text = await response.text()
        data = {}
        for line in text.split("\n"):
            match = re.match(pattern, line.strip())
            if match:
                character = match.group(1).strip()
                attributes_str = match.group(2).strip()
                attributes = {}
                for attr_str in attributes_str.split(","):
                    key, value = attr_str.split(":")
                    key = key.strip()
                    value = int(value.strip())
                    if key in trans_table:
                        key = trans_table[key]
                    if value != 0:
                        attributes[key] = value
                data[character] = attributes
        return data
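# List a character's directory via the GitHub contents API; return the
# character name if a per-character artis.js override exists.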
async def get_special_characters(session, url, headers, name):
    async with session.get(url, headers=headers) as response:
        items = await response.json()
        list = []
        for item in items:
            if "artis.js" == item["name"]:
                list.append(name)
        return list
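# Fetch the global rules, find characters with per-character overrides,
# merge those in, and write the combined result to artis-mark.json.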
async def main():
    async with aiohttp.ClientSession() as session:
        all_data = await get_data(session, miao_artis_mark_url, r"(.+?)\s*:\s*\{(.+?)\},")
        special_characters_tasks = [
            get_special_characters(session, github_url + name, headers, name)
            for name in all_data.keys()
        ]
        special_characters_nested = await asyncio.gather(*special_characters_tasks)
        special_characters = [
            item for sublist in special_characters_nested for item in sublist
        ]
        special_data_tasks = [
            get_data(
                session,
                gitee_url + f"{name}/artis.js",
                r"return rule\('(.+?)', \{(.+?)\}\)",
            )
            for name in special_characters
        ]
        special_data = await asyncio.gather(*special_data_tasks)
        for data in special_data:
            all_data = insert_keys(all_data, data)
        all_data.update(all_data)
        with open("artis-mark.json", "w", encoding="utf-8") as f:
            f.write(json.dumps(all_data, indent=2, ensure_ascii=False))

if __name__ == "__main__":
    asyncio.run(main())
The generated result is as follows
artis-mark.json
The code can still be improved, but this is a first working version. I also noticed that this repo's rules aren't fully consistent with miao's, so you might consider adopting miao's new rules.
Improvement: drop the GitHub API and instead request the file directly and check the status_code
- github_url = "https://api.github.com/repos/yoimiya-kokomi/miao-plugin/contents/resources/meta-gs/character/"
- headers = {
-     "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ",
-     "Authorization": "Bearer your own fine-grained personal access token",
- }
- get_special_characters(session, github_url + name, headers, name)
+ get_special_characters(session, gitee_url + f"{name}/artis.js", name)
-async def get_special_characters(session, url, headers, name):
-    async with session.get(url, headers=headers) as response:
-        items = await response.json()
-        list = []
-        for item in items:
-            if "artis.js" == item["name"]:
-                list.append(name)
-        return list
+async def get_special_characters(session, url, name):
+    special_name = []
+    async with session.get(url) as response:
+        if response.status == 200:
+            special_name.append(name)
+    return special_name
In theory the aliases could be handled the same way, but I saw that the aliases here differ from miao's, so I left them alone.
Thanks.
I previously wrote a GitHub Actions workflow to update the rating rules; that part isn't much trouble. But later I found that the miao repo's artis-mark.js was incomplete; at that time miao-plugin had just started doing multi-build ratings, and per-character rating rules were moved into separate .js files. I haven't followed panels and ratings for a long time now, so I don't know whether artis-mark.js can still be counted on to be the most suitable set of rating rules?
I took a look, and they're basically consistent. Some special builds are placed in character/${name}/artis.js, and here they're all collected together as well. To fully match miao, though, you might need to tweak the build recognition for some of them; for example, Yoimiya defaults to the generic build, with the vaporize and mono-pyro builds added on top.
Thanks for the idea about extracting the special rating rules; even though those rules aren't of much use right now, they're bound to be useful for something.
Whether auto-updating will ultimately work still depends on miao's whims. Thanks, meow
Given that version updates come at irregular times, the old resource-update approach is kept for now: manually trigger the GitHub Action to update the JSON resources on the CDN, and the local side automatically updates the JSON resources after NoneBot2 restarts.
I hereby cordially invite you to become a collaborator on this repository (mainly to manually trigger the GitHub Action promptly after version updates), so that I can officially go into hibernation. Thanks!
|
gharchive/issue
| 2023-11-02T05:37:07 |
2025-04-01T04:35:06.867142
|
{
"authors": [
"forchannot",
"monsterxcn"
],
"repo": "monsterxcn/nonebot-plugin-gspanel",
"url": "https://github.com/monsterxcn/nonebot-plugin-gspanel/issues/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
87433405
|
Could you merge upstream v1.4.4
Hi @chytreg
As mentioned on recent calls, we are spinning a v1.1.2 release to add support for SLES12 for a particular customer.
I see that crohr/pkgr is now at v1.4.4 and has added support for both RedHat 7 and SLES12.
Would it be possible to update your fork with those changes?
I have been able to successfully rebuild capture_solarsystem-ui-1.1.1.17 with pkgr-1.4.4 on SLES12, but it doesn't have Monterail's changes.
thanks
Mark
Sure I will take care of this.
@sf-mep I have merged upstream here: #3
To test the new pkgr version, follow these instructions:
git clone git@github.com:monterail/pkgr.git
cd pkgr; git checkout feature/update-from-crohr
gem build pkgr.gemspec # this will create pkgr-1.5.1.gem
gem uninstall pkgr
gem install pkgr-1.5.1.gem
Next, try to build for SLES12
thanks, will test it
There is a problem with quoting in the fpm command, giving this error:
STDERR: ERROR: Unrecognised option '--exclude '**/.git**''
Older version used this command:
[2015-06-12T14:04:57+01:00] DEBUG: sh(fpm -s dir --verbose --force --exclude \*\*/.git\*\* -C /tmp/d20150612-6424-wtwvqu -n capture_solarsystem-ui --version 1.1.1.17 --iteration 8d89b0305294 --url http://www.solarflare.com --provides capture_solarsystem-ui --license Proprietary -a x86_64 --description GUI\ for\ Capture\ SolarSystem\ network\ capture\ and\ monitor\ appliance --maintainer Solarflare\ Communications --vendor Solarflare\ Communications --template-scripts --before-install /tmp/postinstall20150612-6424-mynjc7 --after-install /tmp/postinstall20150612-6424-1iqbdq4 --before-remove /tmp/postinstall20150612-6424-pndla8 --after-remove /tmp/postinstall20150612-6424-15o0tb0 -d openssl -d readline -d libxml2 -d libxslt -d libevent -d postgresql-libs -d mysql-libs -d sqlite -t rpm .)
v1.5.1 produces:
[2015-06-12T14:20:38+01:00] DEBUG: sh(fpm -s\ dir --verbose --force --exclude\ \'\*\*/.git\*\*\' -C\ \"/tmp/d20150612-6975-2a3uvy\" -n\ \"capture_solarsystem-ui\" --version\ \"1.1.1.17\" --iteration\ \"8d89b0305294\" --url\ \"http://www.solarflare.com\" --provides\ \"capture_solarsystem-ui\" --license\ \"Proprietary\" -a\ \"x86_64\" --description\ \"GUI\ for\ Capture\ SolarSystem\ network\ capture\ and\ monitor\ appliance\" --maintainer\ \"Solarflare\ Communications\" --vendor\ \"Solarflare\ Communications\" --template-scripts --before-install\ \"/tmp/postinstall20150612-6975-38koh9\" --after-install\ \"/tmp/postinstall20150612-6975-1n7wsu2\" --before-remove\ \"/tmp/postinstall20150612-6975-ml2gk4\" --after-remove\ \"/tmp/postinstall20150612-6975-1oxxedl\" --deb-user\ root --deb-group\ root -d\ \"openssl\" -d\ \"readline\" -d\ \"libxml2\" -d\ \"libxslt\" -d\ \"libevent\" -d\ \"postgresql-libs\" -d\ \"mysql-libs\" -d\ \"sqlite\" -t\ rpm .)
As can be seen, the space after an option is now quoted, which is wrong.
I have introduced changes to feature/update-from-crohr. The last commit should fix the whitespace issue.
You have to rebuild and install the gem once again.
Just as we did for Fedora (https://github.com/monterail/pkgr-data/commit/0208cf1a40a0e7f35045fd671203d4f5e283283a), I think we have to update the templates for SLES12. The build script fetches pkgr-data on every build, so any change in the master branch will affect builds.
This version of pkgr works for me on both RHEL7 and SLES12 using https://github.com/monterail/solarcapture-vm/tree/feature/sles12-1.1.1
This patch will make the necessary changes in pkgr-data:
diff --git a/dependencies/centos.yml b/dependencies/centos.yml
index a1ea99f..9f27af2 100644
--- a/dependencies/centos.yml
+++ b/dependencies/centos.yml
@@ -1,9 +1,3 @@
default:
- openssl
- - readline
- - libxml2
- - libxslt
- - libevent
- postgresql-libs
- - mysql-libs
- - sqlite
diff --git a/dependencies/sles.yml b/dependencies/sles.yml
index 3159b04..02bfd3f 100644
--- a/dependencies/sles.yml
+++ b/dependencies/sles.yml
@@ -1,10 +1,3 @@
default:
- openssl
- - readline
- - libxml2
- - libxslt
- - libevent
- postgresql93
- - libmysqlclient18
- - sqlite3
- - shadow
@sf-mep I have applied your changes
|
gharchive/issue
| 2015-06-11T18:07:17 |
2025-04-01T04:35:06.876737
|
{
"authors": [
"chytreg",
"sf-mep"
],
"repo": "monterail/pkgr",
"url": "https://github.com/monterail/pkgr/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1899808257
|
🛑 THORN Data Sync Service (China) is down
In 5ae4fd9, THORN Data Sync Service (China) (https://syncpoint.thorn.red/latency) was down:
HTTP code: 0
Response time: 0 ms
Resolved: THORN Data Sync Service (China) is back up in 781fa4e after 8 minutes.
|
gharchive/issue
| 2023-09-17T14:47:37 |
2025-04-01T04:35:06.912713
|
{
"authors": [
"Alecyrus"
],
"repo": "mooncyan/Status",
"url": "https://github.com/mooncyan/Status/issues/227",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
906893214
|
different attributes in GRCh37 vs. GRCh38 catalogs
A few more details that might need some clarification:
These 4 loci have different NormalMax values in 37 vs. 38 (and ATN1 also has different PathologicMin):
ATN1 35 34
DMPK 34 37
FMR1 55 65
TBP 40 44
These loci have differing LocusStructure (even though DisplayRU is the same):
TBP (CAN)* (GCA)*
NIPA1 (NGC)* (GCG)*
TBP locus GRCh38 coordinates match the EHv4 catalog coordinates (https://github.com/Illumina/ExpansionHunter/blob/master/variant_catalog/hg38/variant_catalog.json#L499)
but the GRCh37 coordinates don't match the EHv4 catalog (https://github.com/Illumina/ExpansionHunter/blob/master/variant_catalog/hg19/variant_catalog.json#L260)
The situation is the same for NIPA1.
Thank you kindly! Most of this is just lack of proofreading, but TBP and NIPA1, for example, appear connected to incomplete merging-in of the loci from ExpansionHunter as they are tested and added. As a general rule we would always try to keep the ExpansionHunter ones if they exist, for best compatibility and most testing. Note, e.g., our tendency to be biased by the established clinical repeat unit when delimiting the units, and the lack of a scripted comparison test for liftOver operations. 😬
Again, thank you for the feedback. I have added a couple of validation scripts that will help keep the number of variant-catalog genome-build out-of-sync mistakes down. The off-by-one 0-based start appears to be a common theme as well, so I will have to go on a separate ob1-hunt with REViewer. The usual effect is just an ugly gap in the alignment between the last anchor base and the first graph model one, but why not get it right. I will call it a night now before I make more mistakes. Cheers!
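For illustration, a minimal sketch of what such a validation script might check (file names here are placeholders and the keys follow the ExpansionHunter-style catalog entries discussed above; stranger's actual layout may differ):
import json

# Attributes that should normally agree between genome builds.
SHARED_KEYS = ["NormalMax", "PathologicMin", "LocusStructure", "DisplayRU"]

def compare_catalogs(path_37, path_38):
    with open(path_37) as f37, open(path_38) as f38:
        cat37 = {e["LocusId"]: e for e in json.load(f37)}
        cat38 = {e["LocusId"]: e for e in json.load(f38)}
    # Only compare loci present in both catalogs.
    for locus in sorted(cat37.keys() & cat38.keys()):
        for key in SHARED_KEYS:
            v37, v38 = cat37[locus].get(key), cat38[locus].get(key)
            if v37 != v38:
                print(f"{locus}: {key} differs (GRCh37={v37!r}, GRCh38={v38!r})")

compare_catalogs("variant_catalog_grch37.json", "variant_catalog_grch38.json")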
|
gharchive/issue
| 2021-05-30T21:57:43 |
2025-04-01T04:35:06.920467
|
{
"authors": [
"bw2",
"dnil"
],
"repo": "moonso/stranger",
"url": "https://github.com/moonso/stranger/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2013578155
|
🛑 plex server is down
In 16e0ddd, plex server ($PLEX_URL/identity) was down:
HTTP code: 0
Response time: 0 ms
Resolved: plex server is back up in d3a1081 after 4 hours, 3 minutes.
|
gharchive/issue
| 2023-11-28T04:08:49 |
2025-04-01T04:35:06.922701
|
{
"authors": [
"mooseburgr"
],
"repo": "mooseburgr/kmj-wtf-upptime",
"url": "https://github.com/mooseburgr/kmj-wtf-upptime/issues/300",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
102435947
|
Please make .NET 4 compatible distribution (or allow silent modifications)
The background should be well known.
When including today's single MoreLINQ distribution in a project, you cannot invoke the Zip operator in a code block that imports both the System.Linq and MoreLinq namespaces.
You can work around this problem today by either not using them in the same code block or by using the "per operator" NuGet packages.
Adding all the NuGet packages for the individual operators is a hassle when using ad hoc projects or LINQPad. I also think that it is somewhat excessive engineering to achieve a general solution for what will in all likelihood be very few cases.
Today's solution also sets it apart from regular LINQ operators, as MoreLINQ operators will not be lying around in IntelliSense waiting to be used/investigated. If IntelliSense does not show "ToDelimitedString", would you really browse NuGet for it? How would you know the exact spelling without the partial matching of IntelliSense? Do you expect people to frequent the project web site in search of new operators? I have little doubt that this slows down the adoption of new operators. A project like this is also about educating people into functional thinking.
I suggest a breaking change: Zip should be removed from the main branch and made available as a separate NuGet package.
Alternatively, a separate .NET 4 distribution. It's not like .NET 4 is a novelty anymore.
Alternatively (2), the license should allow removal of source code (commenting out the Zip operator) without all the end-user notification requirements kicking in.
I make DLLs for tooling (mostly internal use), and the license terms that apply to modified source code (distribution notice, source code availability, Help->About info) and that would have to accompany every tiny .exe file are not trivial to satisfy in a large company.
Thank you. I love morelinq.
Originally reported on Google Code with ID 88
Reported by tormod.steinsholt on 2014-01-30 09:07:26
I don't see Zip anywhere, and my projects can use Zip (with 2.0-alpha1). Is this still valid?
This was addressed in 88c573f7bbcd15d0cd09f22e9b73b62dce9b66ed, which closed #60, but that was slated for 2.0. I think that issue was about release a 1.0 targeting 4.0 without Zip. This can be closed once 2.0 is out with the breaking change or closed now as wont-fix.
I think marking it as closed is correct then (on the tag I don't have an opinion). Unfortunately, the GitHub tracker is not particularly good at handling multiple (release) branches.
The usual way of dealing with issues in the GitHub tracker is to close them when the code is imported into mainline (i.e., the master branch). For example, GitHub will automatically close issues when so marked in the commit or pull request message. The alternative is to manually close all issues at release time, which would be very error-prone.
(As you can see I'm walking through the issues to triage them. If I'm
creating too much noise please tell me so)
This will be addressed by 2.0 only.
As you can see I'm walking through the issues to triage them. If I'm creating too much noise please tell me so
Appreciate your help so please carry on. I will try & keep up. If I go silent at times, don't lose hope; :) it's just one of the many projects I'm working on at the same time.
|
gharchive/issue
| 2015-08-21T18:14:25 |
2025-04-01T04:35:06.951978
|
{
"authors": [
"atifaziz",
"fsateler"
],
"repo": "morelinq/MoreLINQ",
"url": "https://github.com/morelinq/MoreLINQ/issues/88",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
507034225
|
fetch sets variables to [] if no arguments are present
Testing the Morpheus client against my API, it appears that if no arguments are passed to fetch, then variables in the request body is set to [] instead of {}. This causes issues on some servers.
Tested against: hackage: morpheus-graphql-0.4.0@sha256:73420c02c2807c9861d2fc52ed31691cae68a11e40051a1db99071e68507e625,21405
Let me know if you need additional info for the bug report or if this is out of date!
I believe that this is being caused by () being accepted as a placeholder for empty arguments:
*Main> A.encode ()
"[]"
Can be seen in src/Data/Morpheus/Execution/Client/Build.hs.
Could possibly expose a different unit type that encodes to {}.
Tested locally and it works.
src/Data/Morpheus/Execution/Client/Build.hs:
queryArgumentType :: Maybe TypeD -> (Type, Q [Dec])
queryArgumentType Nothing = (ConT $ mkName "NoArgs", pure [])
queryArgumentType (Just rootType@TypeD {tName}) = (ConT $ mkName tName, declareInputType rootType)
And at the use site:
instance A.ToJSON NoArgs where
    toJSON NoArgs = A.object []
Could include in Types?
I guess this would be a bad idea though since it might break backwards compatibility. Maybe the best approach is to make a custom instance for ToJSON for GQLRequest.
Thank you for your issue, I will check it out. I think that () is correct for no arguments. I don't want to create a new type for an empty object.
The spec seems to indicate that enumeration is the only required property of the variables set:
https://graphql.github.io/graphql-spec/June2018/#sec-Coercing-Variable-Values
This means that the issue could actually be a server-side bug, still trying to dig in deeper to confirm.
FYI the server implementation that breaks on [] is https://github.com/rmosolgo/graphql-ruby
On closer inspection of the spec it does seem like variables should be an object:
I think this slightly dodgy workaround, or something similar, should allow for preserving backwards compatibility while meeting the spec for empty arguments:
± git df
diff --git a/src/Data/Morpheus/Execution/Client/Fetch.hs b/src/Data/Morpheus/Execution/Client/Fetch.hs
index b4e4410..d0fee38 100644
--- a/src/Data/Morpheus/Execution/Client/Fetch.hs
+++ b/src/Data/Morpheus/Execution/Client/Fetch.hs
@@ -16,11 +16,18 @@ import Data.ByteString.Lazy (ByteString)
import Data.Text (pack)
import Language.Haskell.TH
{+import qualified Data.Aeson as A+}
{+import qualified Data.Aeson.Types as A+}
--
-- MORPHEUS
import Data.Morpheus.Types.Internal.TH (instanceHeadT)
import Data.Morpheus.Types.IO (GQLRequest (..), JSONResponse (..))
{+fixVars :: A.Value -> A.Value+}
{+fixVars x | x == A.emptyArray = A.emptyObject+}
{+fixVars x = x+}
class Fetch a where
type Args a :: *
__fetch ::
@@ -32,7 +39,7 @@ class Fetch a where
-> m (Either String a)
__fetch strQuery opName trans vars = (eitherDecode >=> processResponse) <$> trans (encode gqlReq)
where
gqlReq = GQLRequest {operationName = Just (pack opName), query = pack strQuery, variables = Just {+(fixVars+} (toJSON [-vars)}-]{+vars))}+}
-------------------------------------------------------------
processResponse JSONResponse {responseData = Just x} = pure x
processResponse invalidResponse = fail $ show invalidResponse
The field 'variables' is optional. I would prefer
fixVars :: A.Value -> Maybe A.Value
fixVars x
    | x == A.emptyArray = Nothing
    | otherwise = Just x
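-- at the use site this then becomes: variables = fixVars (toJSON vars)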
@nalchevanidze ah that's much cleaner!
Would you like a PR?
sure, would be great
Solved. thanks to @sordina
|
gharchive/issue
| 2019-10-15T06:28:43 |
2025-04-01T04:35:06.966752
|
{
"authors": [
"nalchevanidze",
"sordina"
],
"repo": "morpheusgraphql/morpheus-graphql",
"url": "https://github.com/morpheusgraphql/morpheus-graphql/issues/272",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2116903228
|
🛑 Bama is down
In 7cd69ce, Bama (https://bama.design) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Bama is back up in 9da4b70 after 13 minutes.
|
gharchive/issue
| 2024-02-04T06:12:14 |
2025-04-01T04:35:07.006712
|
{
"authors": [
"mortonpepper"
],
"repo": "mortonpepper/upptime-upptime",
"url": "https://github.com/mortonpepper/upptime-upptime/issues/1298",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2061340498
|
🛑 Bama is down
In efb4f9e, Bama (https://bama.design) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Bama is back up in 5b760bb after 6 minutes.
|
gharchive/issue
| 2024-01-01T05:50:26 |
2025-04-01T04:35:07.009340
|
{
"authors": [
"mortonpepper"
],
"repo": "mortonpepper/upptime-upptime",
"url": "https://github.com/mortonpepper/upptime-upptime/issues/381",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1164710264
|
Reorganize algorithm docs
Our docs currently have 1) a lot of redundancy, 2) a difficult interface for finding much or most of our API surface area. This creates both confusion for readers and engineering burden for maintainers.
Specific problems/challenges:
We write independent overviews of each algorithm in three different places. This increases the barrier to contributing new algorithms and makes maintenance more difficult. E.g., for BlurPool:
Place 1: composer.algorithms.blurpool docstring
Place 2: BlurPool method card
Place 3: composer.algorithms.blurpool.BlurPool algorithm class (for most algos)
Several algorithms have different interfaces. These interfaces show up in different places in the docs, and only some interfaces are available without clicking through several levels of API reference. As an example, blurpool has:
BlurPool the Algorithm, documented in:
composer.algorithms API reference
composer.algorithms.blurpool API reference
composer.algorithms.blurpool.blurpool API reference
apply_blurpool the all-in-one model surgery function
composer.algorithms.blurpool API reference
composer.algorithms.blurpool.blurpool API reference
BlurConv2d and company, documented in
composer.algorithms.blurpool API reference
composer.algorithms.blurpool.blurpool_layers API reference
blur_pool2d and other standalone functions for the core logic, documented in
composer.algorithms.blurpool API reference
composer.algorithms.blurpool.blurpool API reference
Because we take up so much sidebar real estate with method cards and model cards, any API not documented in one of these pages is really hard to find; I have to click on "API reference" at the very bottom (not even visible unless I scroll down a lot), and then repeatedly expand submodules and scroll down to find stuff.
E.g., it's super annoying to find our functional API right now, even if you know where to look.
Searching, especially for algorithms, is hard because we have so many pages with similar names and content. Which makes the recursive "API reference" clicking more mandatory.
Proposal
Algorithms
Prefix every algorithm directory with an underscore. E.g., composer.algorithms._blurpool
This will eliminate redundant documentation of algorithm contents. Everything that should be visible under composer.algorithms will now only be visible under composer.algorithms. Same for composer.functional.
Make a top level composer.layers in which nn.modules like BlurConv2d can be discovered without digging several layers deep in the API reference.
Add standalone logic like blur_pool2d to composer.functional (or maybe elsewhere?)
Make method cards README.rst files instead of README.md, and have algorithm module-level docstrings just .. include:: these files (see the sketch after this list).
Eliminates duplicate descriptions
Makes it much easier for other algorithm code to reference the method card (instead of hardcoding a URL in our last-released docs that may have out of sync content and/or break at any time). This also allows eliminating duplicate descriptions in Algorithm class docstrings.
Have method cards link to, but not include, docstrings for all interfaces for a given algorithm.
If I want to use BlurPool but am not sure of the best way to use it in my codebase, I should be able to click on the big "BlurPool" heading and see my options.
But if we add the full docstrings to the method card, it's a giant wall of mostly-redundant text (and it duplicates the docstrings being shown elsewhere, like in composer.functional).
Right now there no links to the actual code, so you're stuck using the search bar if you want to deviate from the simple examples shown.
Eliminate the "Methods Overview" page. It looks cool but is redundant with the sidebar. Or maybe add its content to the bottom of one of the quickstart/overview pages.
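As a rough illustration of the .. include:: idea above, an algorithm module's __init__.py could look like this (a sketch only; the exact paths and exports are hypothetical):
# composer/algorithms/_blurpool/__init__.py (hypothetical layout)
"""
.. include:: README.rst
"""
from composer.algorithms._blurpool.blurpool import BlurPool, apply_blurpool

__all__ = ['BlurPool', 'apply_blurpool']
The method card is then written once in README.rst and rendered anywhere the module docstring is documented, instead of being duplicated in three places.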
Models
Similarly make model cards be the module-level docstrings under composer.modules
Sphinx
Minor: Make our current toctree collapsible. I should be able to minimize the Method Cards, Trainer section, etc, like in the PyTorch and PyTorch Lightning docs.
Minor bugfix: make expanding sections in the API reference not reload the page. Right now I have to scroll to bottom from very top, click "API reference", scroll to bottom from very top again, click on submodule, repeat as needed.
Either eliminate the API reference section or move almost all of our docs there. Right now:
Some APIs are easy to find on the left, but some require digging through several levels of API reference, and
the API reference and our other docs are often redundant.
Since AFAIK most public APIs are addressable as composer.one_level_deep.ThingToUse, I propose eliminating the hierarchical API reference and just having a page for every one_level_deep module, which would include both tutorial content as needed and the full API for that module (including child modules) at the bottom. Mirrors what HF transformers docs do, and combined tutorial + full docs in each module docstring mirrors what torch does.
This is done!
|
gharchive/issue
| 2022-03-10T03:34:58 |
2025-04-01T04:35:07.027667
|
{
"authors": [
"dblalock",
"mvpatel2000"
],
"repo": "mosaicml/composer",
"url": "https://github.com/mosaicml/composer/issues/713",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2170604611
|
No API mode with askgd?
The runapi mode mentioned in the README doesn't seem to be available with the version of askgd that gets installed via go install github.com/mosajjal/askg/cmd/askgd@latest:
jmac@idun:~$ go/bin/askgd runapi
Error: unknown command "runapi" for "askgd"
Run 'askgd --help' for usage.
2024-03-05T23:16:44-05:00 ERR failed to execute command: unknown command "runapi" for "askgd"
jmac@idun:~$ go/bin/askgd
Use Google's AI model. This is a reverse engineered API of Gemini Web.
in order to use this, you first need to run the browser command to get the cookies from the browser.
note that all cookies will be stored in PLAINTEXT on this machine in the ~/.askg.yaml file.
make sure that file is only readable by the user running this daemon.
Usage:
askgd [command]
Available Commands:
browser Get cookies from browser
completion Generate the autocompletion script for the specified shell
help Help about any command
run Run the askgd daemon
Flags:
-c, --config string path to YAML configuration file (default "$HOME/.askg.yaml")
-h, --help help for askgd
-v, --version show version info and exit
Use "askgd [command] --help" for more information about a command.
jmac@idun:~$
Just pushed the 0.4.0 release, which hopefully helps with this. It still shows as 0.3.7 on my box, but that could very well be caching.
|
gharchive/issue
| 2024-03-06T04:18:40 |
2025-04-01T04:35:07.030619
|
{
"authors": [
"jmacdotorg",
"mosajjal"
],
"repo": "mosajjal/askg",
"url": "https://github.com/mosajjal/askg/issues/47",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
87045687
|
Fixing build break caused by RepeatingSchedulableJob API change
Doing the needful
I was just getting ready to start the needful
lgtm
|
gharchive/pull-request
| 2015-06-10T17:29:55 |
2025-04-01T04:35:07.084017
|
{
"authors": [
"frankhuster",
"larubbio",
"llavoie"
],
"repo": "motech-implementations/mim",
"url": "https://github.com/motech-implementations/mim/pull/371",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
278491144
|
chore(package): update ava to version 0.21.0
Closes #9
Coverage remained the same at 100.0% when pulling d75691f5f72811b45794be9bf157f7944c31d82a on greenkeeper/ava-0.21.0 into 13f2e1ec48e07b46d18686895af3d4a9d31a6fbf on master.
|
gharchive/pull-request
| 2017-12-01T14:56:55 |
2025-04-01T04:35:07.095931
|
{
"authors": [
"coveralls",
"motss"
],
"repo": "motss/deep.clone",
"url": "https://github.com/motss/deep.clone/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1839372598
|
Reto #31 - Perl
Proposal for Perl.
Checks
Make sure you meet the following points before opening the "Pull Request":
[X] The title of my Pull Request follows this format: "Reto #[number] - [language_used]". (E.g.: "Reto #0 - Kotlin")
[X] The file name matches my GitHub username plus the language's extension. (E.g.: mouredev.kt)
[X] The solution file is inside the exercise directory, in a folder named after the programming language used, in lowercase. (E.g.: Reto #0/kotlin/mouredev.kt)
[X] I have checked that the language directory name is not conflicting:
c#, not csharp
c++, not cplusplus
go, not golang
javascript, not js
[X] I have only included exercise files. Pull Requests containing additional files associated with code editors or the like will not be accepted.
Added the Raku version.
|
gharchive/pull-request
| 2023-08-07T12:40:37 |
2025-04-01T04:35:07.100221
|
{
"authors": [
"joaquinferrero"
],
"repo": "mouredev/retos-programacion-2023",
"url": "https://github.com/mouredev/retos-programacion-2023/pull/4515",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2102005291
|
Reto #3 - Swift
Describe your changes
(Optional) Especially advisable if the "Pull Request" corresponds to an additional correction rather than the submission of an exercise.
Checks
Make sure you meet the following points before opening the "Pull Request":
[ ] The title of my Pull Request follows this format: "Reto #[number] - [language_used]". (E.g.: "Reto #0 - Kotlin")
[ ] The file name matches my GitHub username plus the language's extension. (E.g.: mouredev.kt)
[ ] The solution file is inside the exercise directory, in a folder named after the programming language used, in lowercase. (E.g.: Reto #0/kotlin/mouredev.kt)
[ ] I have checked that the language directory name is not conflicting:
c#, not csharp
c++, not cplusplus
go, not golang
javascript, not js
[ ] I have only included exercise files. Pull Requests containing additional files associated with code editors or the like will not be accepted.
Information
You can find all the information about the weekly challenges at retosdeprogramacion.com/semanales2023.
Each week the review is done live and a new challenge is published at twitch.tv/mouredev.
Remember that you have a support group called "reto-semanal" on Discord.
Hi. I'm closing this pull request since I can't delete that file from the pull request itself.
|
gharchive/pull-request
| 2024-01-26T10:51:29 |
2025-04-01T04:35:07.106701
|
{
"authors": [
"franmu94"
],
"repo": "mouredev/retos-programacion-2023",
"url": "https://github.com/mouredev/retos-programacion-2023/pull/6274",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2501126089
|
#12 - Python
Describe your changes
(Optional) Especially advisable if the "Pull Request" corresponds to an additional correction rather than the submission of an exercise.
Checks
Make sure you meet the following points before opening the "Pull Request":
[x] The title of my Pull Request follows this format: "#[number] - [language_used]". (E.g.: "#00 - Python")
[x] The file name matches my GitHub username plus the language's extension. (E.g.: mouredev.py)
[x] The solution file is inside the exercise directory, in a folder named after the programming language used, in lowercase. (E.g.: 00/python/mouredev.py)
[x] I have checked that the language directory name is not conflicting:
c#, not csharp
c++, not cplusplus
go, not golang
javascript, not js
[x] I have only included exercise files. Pull Requests containing additional files associated with code editors or the like will not be accepted.
Information
You can find all the information about the weekly challenges at retosdeprogramacion.com/roadmap.
Each week the review is done live and a new challenge is published at twitch.tv/mouredev.
Remember that you have a support group called "reto-semanal" on Discord.
Everything done without using libraries, which is why it ended up a bit long.
|
gharchive/pull-request
| 2024-09-02T14:37:04 |
2025-04-01T04:35:07.113095
|
{
"authors": [
"JuanDAW37"
],
"repo": "mouredev/roadmap-retos-programacion",
"url": "https://github.com/mouredev/roadmap-retos-programacion/pull/5900",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1838034516
|
huff WETH
Hello,
I would like to ask for some guidance on which Constants should I care about when I am creating the contract?
https://github.com/mouseless-eth/rusty-sando/blob/master/contract/src/sando.huff#L17C67-L17C67
I understand that the SEARCHER is the same key from SEARCHER_PRIVATE_KEY https://github.com/mouseless-eth/rusty-sando/blob/master/bot/.env.example#L2
What about the WETH? Is this the WETH address that I'm using in the smart contract?
@imcodingideas sorry, I'm not helpful here; however, I have a few questions regarding SEARCHER_PRIVATE_KEY and FLASHBOTS_AUTH_KEY.
is FLASHBOTS_AUTH_KEY a wallet address?
is SEARCHER_PRIVATE_KEY the private key of that wallet?
I had the same questions. An awesome person on this repo helped me privately, and another publicly here https://github.com/mouseless-eth/rusty-sando/issues/39#issuecomment-1666423362
I tried to add this info to the README, but @mouseless-eth didn't want to recreate the internet, which is fair. We all start somewhere.
What is the FLASHBOTS_AUTH_KEY?
The FLASHBOTS_AUTH_KEY is used to track your reputation with Flashbots. If you are setting up for the first time, refer to MetaMask's guide on creating an additional account in your wallet. The FLASHBOTS_AUTH_KEY can be any private key; it's recommended to generate a new one from MetaMask or a similar service and to use an unfunded one. This key is used to establish your reputation on Flashbots.
What is the SEARCHER_PRIVATE_KEY?
The SEARCHER_PRIVATE_KEY corresponds to your private key. This key should be updated in your environment variables. It's this key that will be responsible for paying fees and utilizing the contract. Remember to update it here.
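Putting the two together, a .env along the lines of the repo's .env.example might look like this (both values are placeholders):
# Any private key; used only to sign Flashbots bundles and build reputation.
# Recommended: freshly generated and unfunded.
FLASHBOTS_AUTH_KEY=<unfunded private key>
# Your funded private key; pays the fees and operates the sando contract.
SEARCHER_PRIVATE_KEY=<funded private key>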
|
gharchive/issue
| 2023-08-06T04:01:07 |
2025-04-01T04:35:07.120478
|
{
"authors": [
"PraveenAlexis",
"imcodingideas"
],
"repo": "mouseless-eth/rusty-sando",
"url": "https://github.com/mouseless-eth/rusty-sando/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2646345509
|
Fixes humble backport issues (#3070)
Description
Backport fixes for #3070
@sjahr I'm unsure why a planning scene monitor test failed as I've only touched pilz. Would it be possible to rerun the pipeline in case it's sporadic?
Looks good. Seems to also be needed for the Iron backport PR, by the way!
|
gharchive/pull-request
| 2024-11-09T17:28:47 |
2025-04-01T04:35:07.126832
|
{
"authors": [
"TSNoble",
"sea-bass"
],
"repo": "moveit/moveit2",
"url": "https://github.com/moveit/moveit2/pull/3076",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
198044895
|
Stable release
v0.3.1 does not work in Node or a Firefox add-on; that error seems to be the same as for https://github.com/mozilla/datomish/issues/122
With master there also seem to be some issues.
Which version should be used for Firefox add-on development?
Will report the exact error from Firefox tomorrow morning; I need to get the logs from my office machine.
There is no tag v0.3.5 on GitHub; which commit was used to build that version on Clojars?
https://github.com/mozilla/datomish/commit/7cf67474a8a774b37fd734a961f47a92a7ca38c6
the error I'm getting on transact on newly created database:
NS_ERROR_UNEXPECTED: Component returned failure code: 0x8000ffff (NS_ERROR_UNEXPECTED) [mozIStorageAsyncStatement.bindByIndex] Sqlite.jsm:700
Filed #146.
|
gharchive/issue
| 2016-12-29T17:55:29 |
2025-04-01T04:35:07.494314
|
{
"authors": [
"chrmod",
"rnewman"
],
"repo": "mozilla/datomish",
"url": "https://github.com/mozilla/datomish/issues/145",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
122781118
|
Implement mailinglist system
The mailing list system lets us associate names with lists of
recipients. A list of recipients is stored in a text field that allows
for arbitrary mailing list sizes, but also comments to make
managing the list easier.
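For context, a minimal sketch of what such a model could look like (field names here are hypothetical, not necessarily those used in the PR):
from django.db import models

class MailingList(models.Model):
    # Short name used by code to look up a recipient list.
    name = models.CharField(max_length=100, unique=True)
    # One address per line; lines starting with "#" are comments.
    members = models.TextField(blank=True)

    def recipients(self):
        lines = (line.strip() for line in self.members.splitlines())
        return [line for line in lines if line and not line.startswith('#')]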
@mythmon @Osmose This is a very light-weight mailing list thing. It's not used, yet, but I'm going to switch the gengo account balance notification system to use it and I'm going to use it for the heartbeat health check emails.
Does this look "good enough" for now?
Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.
-Jamie Zawinski
@mythmon said he'd look at this some time today. I'll land it tomorrow regardless. I'm pretty sure it's simple and not wildly exciting. But I'm also not going to maintain it, so it's worth getting external input.
I talked with @mythmon and he's swamped. There's one model in this and I'm planning to use it, but not heavily and the API points are really clear, so even if this is a terrible thing, it'll be easy to fix in the future.
Given that and Travis' lovely green evening gown, I'm going to land this.
Out of context, this looks fine to me! Nice work!
|
gharchive/pull-request
| 2015-12-17T17:16:57 |
2025-04-01T04:35:07.534638
|
{
"authors": [
"Osmose",
"mythmon",
"willkg"
],
"repo": "mozilla/fjord",
"url": "https://github.com/mozilla/fjord/pull/721",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
153701730
|
[Bug Fix 1269126] Correcting typo: changing seach_fields to search_fields
Made changes to correct typo.
Coverage remained the same at 92.558% when pulling e696bb3acd2b19c669b424a990cc1df1a667d4dd on priyankt68:typo_correction_bug into da827c1199868cf2bd1c5f30ea597366c450cefe on mozilla:master.
Closing this one after #1450
Thank you for the contribution @priyankt68
Please let me know if you are interested in working on another mentored bug.
:rocket:
Sure @johngian, I would love to.
|
gharchive/pull-request
| 2016-05-09T04:33:08 |
2025-04-01T04:35:07.679673
|
{
"authors": [
"coveralls",
"johngian",
"priyankt68"
],
"repo": "mozilla/mozillians",
"url": "https://github.com/mozilla/mozillians/pull/1449",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
103112869
|
Cleanup olympia.py initialization module, and remove all noqa tags.
This is on top of #678. I just didn't want to include changes this noisy there if I didn't have to.
This needs a rebase, and #678 to land first, but it's very well documented, awesome!
I'm just not sure why we would need all this code to be in functions (as we're not re-using them anywhere, are we?)
For two reasons:
It makes the # noqa tags on the imports unnecessary, and I prefer to avoid having to use them when at all possible.
It makes it much easier to see what's going on in that file. In the previous version, for instance, the Jinja2 monkey patch operation had a completely unrelated import and initialization right in the middle of it, which was not easy to notice. This way, each individual operation is very clearly an individual operation, and looking at the end of the file, you have a clear summary of everything that's being done, and in what order.
I guess it doesn't harm anyway ;)
rebase and r+!
|
gharchive/pull-request
| 2015-08-25T20:44:42 |
2025-04-01T04:35:07.694320
|
{
"authors": [
"kmaglione",
"magopian"
],
"repo": "mozilla/olympia",
"url": "https://github.com/mozilla/olympia/pull/681",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1051063838
|
pdf.js rename master to main branch
Since some months, GitHub put out a guidance for changing master to main branches. Here you have the detailed overview of the new guidance's from the GitHub team.
Could it be possible to implement this guidance's to the pdf.js repository?
Yes, it could be done, but there are certain things which should be considered as per the guidance. It says:
Renaming a branch will:
-Re-target any open pull requests
-Update any draft releases based on the branch
-Move any branch protection rules that explicitly reference the old name
-Update the branch used to build GitHub Pages, if applicable
-Show a notice to repository contributors, maintainers, and admins on the repository homepage with instructions to update local copies of the repository
-Show a notice to contributors who git push to the old branch
-Redirect web requests for the old branch name to the new branch name
-Return a "Moved Permanently" response in API requests for the old branch name
We could use git branch -m to move the branch locally instead of creating a new one, and then push the renamed branch to GitHub.
@tiekom
|
gharchive/issue
| 2021-11-11T14:55:39 |
2025-04-01T04:35:07.697572
|
{
"authors": [
"anantraghuvanshi",
"tiekom"
],
"repo": "mozilla/pdf.js",
"url": "https://github.com/mozilla/pdf.js/issues/14263",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1631943050
|
Only warn about missing --scale-factor CSS-variable for visible textLayers (PR 16162 follow-up)
This is something that I completely overlooked in PR #16162, which in some cases cause the default viewer to incorrectly print warnings.
This can be reproduced with the PAGE scrolling-mode and/or PresentationMode, and this patch simply works around it by checking the visibility as well (since the warning is a best-effort solution anyway).
/botio-linux preview
|
gharchive/pull-request
| 2023-03-20T11:47:28 |
2025-04-01T04:35:07.698916
|
{
"authors": [
"Snuffleupagus"
],
"repo": "mozilla/pdf.js",
"url": "https://github.com/mozilla/pdf.js/pull/16181",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1919362991
|
[Editor] Make a deleted (when it was invisible) editor undoable
When the editor is invisible (because it is on a non-rendered page), its parent is null. But when we undo its deletion, we need a parent to attach it to.
/botio integrationtest
|
gharchive/pull-request
| 2023-09-29T14:21:24 |
2025-04-01T04:35:07.700141
|
{
"authors": [
"calixteman"
],
"repo": "mozilla/pdf.js",
"url": "https://github.com/mozilla/pdf.js/pull/17050",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
64512710
|
Don't map glyphs to Unicode position 0x0E33, i.e. Thai character SARA AM (bug1046314)
A similar approach as in PR #5705.
Fixes https://bugzilla.mozilla.org/show_bug.cgi?id=1046314.
According to https://dxr.mozilla.org/mozilla-central/source/gfx/harfbuzz/src/hb-ot-shape-complex-thai.cc#270-365, 0x0E33 is treated as a special case (by the font shaping code in Firefox). Hence it seems reasonable to skip it when adjusting the font mapping.
Edit: Fixes one of the bugs in #5647.
/botio test
/botio-windows test
/botio makeref
|
gharchive/pull-request
| 2015-03-26T12:24:12 |
2025-04-01T04:35:07.702927
|
{
"authors": [
"Snuffleupagus",
"brendandahl"
],
"repo": "mozilla/pdf.js",
"url": "https://github.com/mozilla/pdf.js/pull/5882",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
130171699
|
Add support for PageLabels in the viewer (issue 6902, bug 793632)
Please refer to the individual commit messages.
Can be tested using e.g. the following files: http://www.adobe.com/content/dam/Adobe/en/devnet/acrobat/pdfs/PDF32000_2008.pdf, or http://mirrors.ctan.org/info/lshort/english/lshort.pdf.
Fixes issue #6902 and bug 793632.
TODO:
[x] Localize the new strings, and cleanup the now obsolete ones.
/botio-linux preview
Fixes #3683 too.
Because we do not have the "Page" label anymore, we can adjust the CSS a bit to make sure nothing overlaps anymore during resize:
Change
html[dir='ltr'] .outerCenter {
  float: left;
  left: 205px;
}
html[dir='rtl'] .outerCenter {
  float: right;
  right: 205px;
}
to
html[dir='ltr'] .outerCenter {
  float: left;
  left: 160px;
}
html[dir='rtl'] .outerCenter {
  float: right;
  right: 160px;
}
(i.e., 205 to 160 pixels) so that when you resize the window, the zoom controls don't move anymore and the icons do not overlap the select box anymore.
I think overall this is a great solution and most in line with how other viewers (especially Adobe Reader) handle it.
@yurydelendik @brendandahl What do you think about how the page labels are handled in the UI with this patch? If you also think that this is the way to go, I will review this patch in more detail.
Can we block this by extracting a toolbar control from app.js?
@yurydelendik Sounds good to me. Perhaps the patch will be slightly easier too when the toolbar code is more centralized.
Can we block this by extracting a toolbar control from app.js?
I've now rebased/rewritten this PR on top of PR #7458 (which at least improved the current situation wrt updating the toolbar); is this enough that we can now consider moving forward with this PR?
I would say so, yes. I see no more reason to block this work. This patch actually turned out to have less code than I initially thought we would need, so that is nice too!
/botio-linux preview
@Snuffleupagus Do you want to address https://github.com/mozilla/pdf.js/pull/6945#issuecomment-177595111 in a separate commit too? It would be nice to fix the responsiveness issues as well, which this patch already partially fixes.
I would say so, yes. I see no more reason to block this work. This patch actually turned out to have less code than I initially thought we would need, so that is nice too!
Thanks; I've actually tried to reduced the size of the total diff as best as I could.
I suppose that one of the remaining things we need to do, is to get consensus that it's actually OK to remove the "Page:" label from the toolbar.
@Snuffleupagus Do you want to address #6945 (comment) in a separate commit too? It would be nice to fix the responsiveness issues as well, which this patch already partially fixes.
I think I'd prefer if that was done in a follow-up instead, since it might not be as easy as it first seems.
Keep in mind that even without the "Page:" label in the toolbar, for documents that use Page Labels, some (or even all) of the space saving could actually be offset by the longer numPages label.
This is obviously dependent on the number of pages in the document (e.g. consider cases where numPages > 1000), and also on what locale is being used (since the length of the "of" label differs between languages).
Note: Even in English, the space saving isn't necessarily that great; this is how it looks with lshort.pdf:
/botio-windows preview
/botio lint
Can we try and land this feature, obviously after it has been properly reviewed, while PDF.js is still shipping in Firefox? ;-)
(Considering that PDFium, at least as it appears in Chrome, doesn't seem to support this feature, it'd be nice if we added it to PDF.js IMHO.)
Can we try and land this feature?
Yes, let's try to do it this cycle.
Per IRC: @timvandermeij, I only need to know if we have agreement on removing the Page: label and replacing it with a tooltip. Personally I think this change is fine because it makes things more in line with other viewers and the extra space is required for the page labels.
Sounds good: we can remove it.
/botio-linux preview
Amazing work, and the implementation is really clean!
|
gharchive/pull-request
| 2016-01-31T19:23:02 |
2025-04-01T04:35:07.715022
|
{
"authors": [
"Snuffleupagus",
"timvandermeij",
"yurydelendik"
],
"repo": "mozilla/pdf.js",
"url": "https://github.com/mozilla/pdf.js/pull/6945",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
206545607
|
Replaces RequireJS with SystemJS.
Removes the RequireJS dependency since it will be hard to adapt it to ES6 module transpilers; this will be easier with SystemJS. Questions to resolve:
[x] Central configuration (vs SystemJS.config in multiple places)
[x] Dependency on Promise for legacy browsers (increase bar or move Promise polyfill to compatibility.js), e.g. we use that in some examples.
Dependency on Promise for legacy browsers (increase bar or move Promise polyfill to compatibility.js), e.g. we use that in some examples.
Personally I feel that support for older browsers is now seriously holding us back from using many of the really nice new features in ES6, and I would wholeheartedly support increasing the minimum requirements for using PDF.js.
My suggestion is that version 1.7.225 (the current pre-release) should be the last version with support for pre-ES6 browsers, and that going forward only ES6 compatible browsers/environments will be supported.
Such a requirement would mean that most, if not all, compatibility code could be removed; furthermore it'd also reduce the support burden since we'd no longer need to add any new compatibility hacks (which would be nice, given the limited number of regular PDF.js contributors).
Obviously such a decision would need to be communicated publicly, not just in the Wiki/README but also on the mailing list/Twitter first. However I really think that this is the way forward here!
Personally I feel that support for older browsers is now seriously holding us back from using many of the really nice new features in ES6, and I would wholeheartedly support increasing the minimum requirements for using PDF.js.
We can use a transpiler (like babel) to support old browsers and still have all the shiny new stuff.
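For reference, a minimal Babel setup along those lines might look like the following sketch (a hypothetical babel.config.js assuming Babel 7's @babel/preset-env, not the actual PDF.js build configuration):
// babel.config.js - hypothetical sketch, not the actual PDF.js setup
module.exports = {
  presets: [
    ['@babel/preset-env', {
      // Transpile down to browsers without native ES6 support.
      targets: { ie: '11' },
      // Only pull in the core-js polyfills the code actually uses.
      useBuiltIns: 'usage',
      corejs: 3,
    }],
  ],
};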
/botio test
/botio-linux preview
/botio test
Thank you for working on this!
|
gharchive/pull-request
| 2017-02-09T15:58:25 |
2025-04-01T04:35:07.720277
|
{
"authors": [
"Snuffleupagus",
"brendandahl",
"timvandermeij",
"yurydelendik"
],
"repo": "mozilla/pdf.js",
"url": "https://github.com/mozilla/pdf.js/pull/8050",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
135763686
|
Use 'TabularInline' for event attendees.
@johngian really quick r?
r+
|
gharchive/pull-request
| 2016-02-23T15:16:11 |
2025-04-01T04:35:07.735261
|
{
"authors": [
"akatsoulas",
"johngian"
],
"repo": "mozilla/remo",
"url": "https://github.com/mozilla/remo/pull/1115",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
185682729
|
Content prefetching recipe
It would be nice to have a functioning recipe for content prefetching like this demo from the Chrome docs (https://googlechrome.github.io/samples/service-worker/prefetch/index.html).
Incidentally, this example seems to work in Chrome, but not Firefox (as of FF 52), but a lot of the Chrome demos seem to be slightly broken on Firefox.
@jxn What doesn't work in firefox? The only difference I see here is that the non-cached resource gives the "corrupted content" for a bad interception in firefox when offline. This is because the service worker is rejecting the promise passed to the respondWith().
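In other words, if the fetch handler resolves with a fallback Response instead of letting the promise reject, the "corrupted content" page is avoided. A minimal sketch of that pattern (the fallback message is illustrative):
self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(cached) {
      // Fall back to the network for anything not pre-fetched.
      return cached || fetch(event.request);
    }).catch(function() {
      // Resolve with a real Response rather than rejecting, which
      // would otherwise surface as a bad interception.
      return new Response('Offline and not cached.', {
        status: 503,
        headers: { 'Content-Type': 'text/plain' }
      });
    })
  );
});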
@wanderview that is the same thing I see, but I get the corrupted content response for both of the pre_fetched resources as well as the non-pre-fetched one. Do you see something different? I'm testing w/ FF 51.0a2 at the moment.
Yea, I'm testing on dev-edition as well. Works for me here.
Are you by any chance sharing your profile between different versions of firefox?
Are you by any chance sharing your profile between different versions of firefox?
I am. Is that a potential for problems? I don't actually need Release and Nightly FF to use my profile (those are only for testing), but I didn't think it would cause problems.
Yes, we do not support running a profile in a newer version and then taking it back to an older version. It mostly works, but when disk formats change it can break. We recently changed the disk format of the Cache API in FF52, so that is likely what you are hitting.
I've been trying to get our UX team to do a warning of some kind if we detect this, but we're not there yet.
@wanderview ok, thanks. I just created a new profile and was about to post that it works. Bah! Apologies for the false report. I'd still love some examples from serviceworke.rs on this, though.
FYI, I think chrome has some bug here in that it's treating a thrown exception like a normal network error. I filed this:
https://bugs.chromium.org/p/chromium/issues/detail?id=660377
We have this recipe about pre-caching all the resources from an external JSON file but it seems not to be simple enough.
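For reference, the core of such a recipe is an install handler that pre-caches a fixed list of URLs (a minimal sketch; the cache name and URL list are illustrative), combined with a fetch handler like the one sketched earlier in this thread:
var CACHE_NAME = 'prefetch-v1';
var PREFETCH_URLS = [
  './static/pre_fetched.txt',
  './static/pre_fetched.html'
];

self.addEventListener('install', function(event) {
  // Download and cache the listed resources before the worker activates.
  event.waitUntil(
    caches.open(CACHE_NAME).then(function(cache) {
      return cache.addAll(PREFETCH_URLS);
    })
  );
});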
|
gharchive/issue
| 2016-10-27T14:22:01 |
2025-04-01T04:35:07.743771
|
{
"authors": [
"delapuente",
"jxn",
"wanderview"
],
"repo": "mozilla/serviceworker-cookbook",
"url": "https://github.com/mozilla/serviceworker-cookbook/issues/260",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
103047489
|
Add API to view FxOS Add-ons in the review queue (bug 1196225)
https://bugzilla.mozilla.org/show_bug.cgi?id=1196234
Bonus: also start implementing publish/reject APIs (bug 1196234)
Thanks, docs are fixed now.
r+wc/t
Added CORS and CORS tests
|
gharchive/pull-request
| 2015-08-25T15:24:12 |
2025-04-01T04:35:07.856638
|
{
"authors": [
"diox",
"ngokevin"
],
"repo": "mozilla/zamboni",
"url": "https://github.com/mozilla/zamboni/pull/3298",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
197497366
|
Roadmap 2017
This a general roadmap issue for Weave to discuss further plans.
I see at least 2 important missing features:
[x] Inline code #77, #80
[x] Pass variables to documents #78
[ ] Improved examples. Go through the examples and leave only sensible ones. Add a script to run all examples, maybe an example gallery similar to Pweave. http://mpastell.com/pweave/examples/index.html
There is no timetable as development takes place in my limited spare time, but I hope to get these implemented around the time of Julia 1.0.
Contributions and new ideas are very welcome!
Functionality akin to R’s flexdashboard would be nice.
It should probably be a separate package but I wanted to throw the idea out there and this seemed like an appropriate place.
|
gharchive/issue
| 2016-12-25T11:31:02 |
2025-04-01T04:35:07.919459
|
{
"authors": [
"ValdarT",
"mpastell"
],
"repo": "mpastell/Weave.jl",
"url": "https://github.com/mpastell/Weave.jl/issues/79",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
347406717
|
Add filter for companies and contacts, create filter trait
Hi, Thank you for this lib !
I found there are some missing endpoints to filter:
https://developers.freshdesk.com/api/#filter_contacts
https://developers.freshdesk.com/api/#filter_companies
https://developers.freshdesk.com/api/#filter_tickets (was under search)
Let me know if anything is wrong :)
(I didn't generate the doc)
Please merge this into master.
Please merge
Please merge this!
|
gharchive/pull-request
| 2018-08-03T13:55:53 |
2025-04-01T04:35:07.925800
|
{
"authors": [
"Jewhurst",
"OneHatRepo",
"guyoron",
"homersimpsons"
],
"repo": "mpclarkson/freshdesk-php-sdk",
"url": "https://github.com/mpclarkson/freshdesk-php-sdk/pull/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
602802603
|
Some Latex problems still persist
While the latest commit fixed most LaTeX problems, there are still some issues when LaTeX is embedded in comments, particularly after Markdown is used to delineate titles etc. Please see the same repository, level 1 in Sup/Inf, for an example. The first line after the subtitle is displayed as Let $X$ be a set of real numbers.
I couldn't replicate the issue. Could you check it again? Does the issue persist after reloading the page, or going to another level and coming back? Could you provide more details?
This must have happened when re-making and re-loading the game several times. I couldn't replicate it either after I restarted server and re-compiled everything. Closing it for now.
I have eventually been able to reproduce it. Latex math display works well if loading from scratch. Issues arise if one uses the "Restart" button. You should be able to replicate the problem by loading the game, going through at least levels 1 and 2 in Sup/Inf (third world currently) then hit "Restart", go to Main menu and reload level 1. See attached snapshot.
That's strange. I still can't replicate it. When this happens, could you go to the browser console (Tools > Web Developer > Web Console in Firefox) and type MathJax? Let me know if is says undefined or not. If it's not undefined, then try typing MathJax.Hub.Queue(["Typeset",MathJax.Hub]); and let me know if the LaTeX works.
Also, when this happens, if you go to another level and come back, will it load the LaTeX properly?
MathJax works asynchronously. When I open a level, sometimes it takes a fraction of a second before it starts working. Could it be the case that it takes longer in your browser?
OK, it took me some time to track this. It appears it is browser-dependent. If I make the game from scratch, start the local server then load the game in Opera, go through a few levels then reset, the problem arises. The exact same process in Firefox leads to no problem. Didn't try in other browsers. Let me know if you think we should pursue this further.
Thank you! I was able to replicate the issue in Opera. Apparently the relative speed of loading different modules could cause problems in rendering LaTeX. I made some changes which solved the issue for me. If you have time to check, let me know if the issue persists.
It does persist on my end. I did rebuild the game maker and I have the latest version of Opera, running on a Linux machine. It may not be that important, because Opera is not very widely used.
Could it be related to the following warning messages I get when building the game maker (and if yes, how do I fix those?):
npm WARN terser-webpack-plugin-legacy@1.2.3 requires a peer of webpack@^3.0.0 but none is installed. You must install peer dependencies yourself.
npm WARN make-lean-game@1.0.0 No repository field.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.12 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.12: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
audited 6463 packages in 14.844s
23 packages are looking for funding
run `npm fund` for details
found 1 moderate severity vulnerability
run `npm audit fix` to fix them, or `npm audit` for details
Just to double check, did you clean the cache before running the game again? Browsers will cache to optimise loading time. If you clear the recent history (including site data), then you can be sure that the browser isn't running the old version from cache.
As for the warnings you mentioned, I get the same warnings. They're harmless and not related to LaTeX rendering.
The issue I found, and fixed in the last commit, was that if the MathJax module was downloaded and run by the browser faster than some other components of the game, then LaTeX may not be rendered. Apparently this could depend on the browser, on whether the game is being run locally or not, the relative speed of the connection of the user to the website hosting the game in comparison with the website hosting the MathJax library, etc.
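One defensive pattern is to queue the typeset call only once MathJax is actually available, retrying briefly otherwise. A minimal sketch using the MathJax v2 API quoted above (the retry count and interval are arbitrary):
function typesetWhenReady(retriesLeft) {
  if (window.MathJax && MathJax.Hub) {
    // MathJax v2 API: queue a full typeset pass.
    MathJax.Hub.Queue(['Typeset', MathJax.Hub]);
  } else if (retriesLeft > 0) {
    // MathJax hasn't finished loading yet; try again shortly.
    setTimeout(function() { typesetWhenReady(retriesLeft - 1); }, 100);
  }
}
typesetWhenReady(50);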
I also thought about all these variables you mention. Nevertheless, clearing the cache did fix the issue in Opera. In my experience, Firefox did best when rendering this game (better than Chrome, Safari and Opera). If that is also your experience, maybe it should be mentioned somewhere.
Thank you for this great contribution!
This code is a wrapper around Lean and I believe Lean works best in Chrome. Although I've rarely seen any issues in Firefox. I'll mention this in the README file.
Thank you for using this library to make more games!
|
gharchive/issue
| 2020-04-19T19:39:38 |
2025-04-01T04:35:07.946253
|
{
"authors": [
"mpedramfar",
"stanescuUW"
],
"repo": "mpedramfar/Lean-game-maker",
"url": "https://github.com/mpedramfar/Lean-game-maker/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
394516720
|
Customize the progress bar
Is there a way to customize the progress bar which appears while the panorama is loading?
I know that I can do that by changing the pannellum.css file. But it would be great if there is a built-in method to do that.
Altering the CSS is the only way to do it. However, instead of modifying the pannellum.css file, I'd recommend using an additional CSS file to override it (using !important if needed), since doing so makes it easier to keep track of the modifications.
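For example, an override stylesheet loaded after pannellum.css could look like the sketch below; the class names are assumptions on my part and should be verified against your copy of pannellum.css:
/* overrides.css - loaded after pannellum.css */
.pnlm-load-box {
  background-color: rgba(0, 0, 0, 0.7) !important;
  border-radius: 8px !important;
}
.pnlm-lbar-fill {
  background-color: #ff6600 !important; /* progress bar fill color */
}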
|
gharchive/issue
| 2018-12-28T00:28:15 |
2025-04-01T04:35:07.955839
|
{
"authors": [
"DevelopSmith",
"mpetroff"
],
"repo": "mpetroff/pannellum",
"url": "https://github.com/mpetroff/pannellum/issues/700",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
849700244
|
Pause/resume and stop downloads
Add option to pause or stop current downloads
stop ongoing downloads is done
|
gharchive/issue
| 2021-04-03T17:12:26 |
2025-04-01T04:35:07.956631
|
{
"authors": [
"mpirescarvalho"
],
"repo": "mpirescarvalho/youtube-downloader",
"url": "https://github.com/mpirescarvalho/youtube-downloader/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1193164716
|
Delombok hangs if a nested class carries @Builder(toBuilder = true)
Short description
When using the delombok action on a nested class with @Builder(toBuilder = true) annotation, IntelliJ hangs (see below for example project)
Expected behavior
No hangup, delombok should produce code as expected
Version information
IDEA Version: IntelliJ IDEA 2021.3.2 (Ultimate Edition), Build #IU-213.6777.52
JDK Version: 11.0.9.open-adpt
OS Type & Version: Ubuntu 20.04.4
Lombok Plugin Version: bundled 213.6777.52
Lombok Dependency Version: 1.18.22 (same results with 1.18.20 though)
Steps to reproduce
Use delombok on @Builder in the sample project below
Sample project
I created a fresh gradle project; I believe only the following two files will be relevant, though I am happy to provide the rest if required:
OuterClass.java
import lombok.Builder;
public class OuterClass {
@Builder(toBuilder = true)
class InnerClass extends OuterClass {
}
}
build.gradle
plugins {
id 'java'
}
group 'org.example'
version '1.0-SNAPSHOT'
repositories {
mavenCentral()
}
dependencies {
implementation 'org.projectlombok:lombok:1.18.22'
annotationProcessor 'org.projectlombok:lombok:1.18.22'
}
Additional information
The same problem happens if the outer class is an interface instead (this is the setup we had in our project).
Either of the following resolve the hangup, though delombok produces compile errors in the generated code:
Remove toBuilder = true
Remove extends OuterClass
Stacktrace
The following thread dump was produced by IntelliJ; unfortunately, I cannot make much sense of it (attached as a file since it's quite long)
threadDump-20220405-152256.txt
I can confirm the same issue.
IDEA Version: Build #IU-222.3345.118, built on July 26, 2022
JDK Version: Corretto-11.0.12.7.1
OS Type & Version: Ubuntu 20.04.4 LTS
Lombok Plugin Version: bundled 222.3345.118
Lombok Dependency Version: 1.18.24 (same results with 1.18.20)
|
gharchive/issue
| 2022-04-05T13:38:06 |
2025-04-01T04:35:07.964648
|
{
"authors": [
"nicoweidner",
"topr"
],
"repo": "mplushnikov/lombok-intellij-plugin",
"url": "https://github.com/mplushnikov/lombok-intellij-plugin/issues/1122",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
449066214
|
Would it be possible to use docker ports instead...
I like your solution, but I was wondering if it would be possible to use docker ports instead of relying on your dockerfile, and staying up to date...
e.g.
docker run -p 8080:8080 -p 9007:9007 --rm -v $PWD/logs:/logs -v $PWD/notebook:/notebook -e ZEPPELIN_LOG_DIR='/logs' -e ZEPPELIN_NOTEBOOK_DIR='/notebook' --name zeppelin apache/zeppelin:0.8.1
And then create the SSH tunnel on the host machine, instead of in the container:
ssh -i private-key.pem -vnNT -L localhost:9007:169.254.76.1:9007 glue@ec2-13-211-47-17.ap-southeast-2.compute.amazonaws.com
What do you think? My knowledge of docker and zeppelin is limited, and I haven't been able to get it working... Zeppelin says Connection Refused.
Hey, thanks for responding. I eventually did get this working using Docker ports, and I think the issue was the Zeppelin version, as you guessed.
In case anyone else stumbles on this, after getting Zeppelin running, I had to manually edit the ZEPPELIN_HOME/conf/interpreter.json file:
"master": "yarn-client",
"option": {
"remote": true,
"port": 9007,
"host": "localhost",
Zeppelin 0.7.3 interpreter settings page seemed to be broken, and wouldn't allow editing of the variables... for me at least.
|
gharchive/issue
| 2019-05-28T05:07:44 |
2025-04-01T04:35:07.967669
|
{
"authors": [
"comfytoday"
],
"repo": "mporium/GlueZeppelin",
"url": "https://github.com/mporium/GlueZeppelin/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
235752235
|
Localstorage Support
Follow-up issues for IndexedDB and other types of storage when we get there.
Future Solutions
Proper API for deleting Localstorage by hostname (both of these bugs seem to want to extend the browsingData API)
Pros
Clear localstorage by site rather than all or nothing
Should be easily added to the extension code since it seems like the extended API uses a list of hostnames which Cookie Cleanup conveniently returns
Cons
No support yet
Unless there's a way to enumerate all localstorage, this extension can miss some localstorage
Ex. A site that uses no cookies and only localstorage (as cookie cleanup outputs site names for cookies that have been deleted to be sent into the localstorage cleanup)
I find this very unlikely since almost all sites have cookies
Ex. A current Cookie AutoDelete user who upgrades to a version that supports deleting localstorage will have their existing localstorage still there
The user would have to clear all their localstorage after upgrade to get a "clean slate"
Firefox Blockers
[ ] Extend browsingData to support removing localStorage by host
Chrome Blockers
[ ] 78093
Current solutions
browsingData API
Pros
Actual API for removing localstorage
Cons
Not viable without the above bugs
Current removalOptions only include the timeframe
Firefox Blockers
[x] Implement clearing of LocalStorage in browsingData API (Firefox 57+)
Inject a content script that clears localstorage on page load (courtesy of reddit)
Code:
window.localStorage.clear();
window.sessionStorage.clear();
Pros
Works in all browsers
Better than the current browsingData API
Will work with the whitelists
Cons
Seems very hackish and it's only a workaround
Localstorage stays until the user visits the website again
Probably have to come up with an algorithm to find the diff of the current tabs (between page loads) and only inject if there is a change (otherwise the injection would happen every time on site navigation)
Because of this, this workaround will be on the backburner until I have some strategies about the best way to only inject the content script when it is needed
I'm thinking of doing the content script route as an experimental feature until the browsingData gets extended. Anyone else is free to give their input on how best to implement this.
The workaround you found seems quite OK, at least until Mozilla sorts out the API.
Do you have any ETA for the workaround to be implemented?
@crssi Probably once #20 is done and stable so hopefully by the end of summer. Things are subject to change like the workaround might not work as well as I thought but we'll see.
The tricky part here will be figuring out when to clear local storage. The content script will have to ask the backend if/when it should clear localStorage before it lets the rest of the page load. Unfortunately, you can't use a port as ports are asynchronous.
The best method I can think of is to inject a "clear-localStorage" cookie with onHeadersReceived and then read/delete it from the content script.
@Stebalien I was planning on doing tabs.executeScript() at document_start and see how well that does.
Where will you call that function? Unless I'm mistaken, the only place one can synchronously "catch" a page load (pause it while you do something) is from onHeadersReceived and I'm not sure if you can programmatically inject scripts at that point.
There's the tabs.onUpdated event which I use to update the icon and calculate the number of cookies for that site.
I'm pretty sure that event is asynchronous so there's no way to guarantee that the content script will get injected before the page loads.
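For reference, the injection being discussed would look roughly like this sketch (tabId comes from whatever event triggers the cleanup; as noted above, there is no guarantee it runs before the page's own scripts):
// Run the clearing snippet as early as possible in the page's lifecycle.
browser.tabs.executeScript(tabId, {
  code: 'window.localStorage.clear(); window.sessionStorage.clear();',
  runAt: 'document_start'
});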
Ex. A site that uses no cookies and only localstorage (as cookie cleanup outputs site names for cookies that have been deleted to be sent into the localstorage cleanup)
I find this very unlikely since almost all sites have cookies
Not frequent indeed, yet sites using only localStorage do exist.
I have two in mind because they are bookmarked: the first is modest, the second far more important since it is a search engine: ClockTab and Qwant
Cleaning the localStorage is indeed as imperative as cleaning cookies, for non-whitelisted sites of course.
@zymase The only way I see this happening for those sites without an API to enumerate localstorage is to store the current domains in memory and find the diff between sites on page load. Another option that could work is to artificially set a temporary cookie, so that the cleanup would still get the hostname of the site. Personally I think the 2nd one might be better for performance.
@mrdokenny I know the idea is to be constructive, always. I'm not at all into coding, but from what I read concerning the possibility to read/analyze/edit localStorage with a WebExtension, it seems to be Mission: Impossible if the proper API is not made available. From there on, considering Mozilla's commitment to privacy, and considering the high privacy concerns related to localStorage, I remain reasonably optimistic that Mozilla will work on that API. Should I be wrong, should Firefox 57 consider localStorage an exotic quest not worth taking care of, then I'd turn deaf to the company's credo of users' privacy and definitely consider an alternative to Firefox. Period.
If WebExtension programmers are required to carry extensions as Atlas carried the weight of the planet on his shoulders, then where are we heading? Counter-progress? For the sake of what, "universal" extensions valid on all platforms, built on the cheapest possibilities granted to extensions?
Good luck to all, and to extension programmers in particular; they'll need it.
Is it possible to clear all local storage on startup?
SDC is able to do it even with e10s enabled.
@zeepob , the Firefox SDC (Self-Destructing Cookies) add-on's developer faces the same problems:
Q: Will this add-on ever be multi-process (e10s) compatible?
A: Add-ons can't monitor sites' LocalStorage usage in e10s mode. This functionality will probably never be restored for legacy add-ons such as SDC. This means that the answer is "very likely never". You can still force-enable e10s and SDC should clean your cookies just fine, but it can only clean your LocalStorage when the browser starts.
Q: Will this add-on make the jump to the WebExtension world?
A: I don't have the time for a full rewrite as a WebExtension. Enjoy it while it lasts.
As explained on the add-on's AMO page
The Cookie AutoDelete add-on handles e10s I think, SDC doesn't
Cookie AutoDelete handles localStorage, SDC doesn't
Cookie AutoDelete is a WebExtension, SDC is not and will likely never be.
I'm still running SDC (on Firefox ESR 52.2.0) but one day or another (at the latest at the EOL of FF52ESR) SDC will be obsolete, so the target is Cookie AutoDelete and the wish is to have it handle localStorage.
@zymase I don't understand your answer.
SDC can't clear individual site local storage when e10s is enabled but it can clear local storage globally on startup. My question is if it's technically possible for Cookie Autodelete to do.
That ability is a clear advantage of SDC running on the latest Firefox versions.
@zeepob ,
SDC can't clear individual site local storage when e10s is enabled but it can clear local storage globally on startup.
That is correct. That is why I added "at least not completely.". I block e10s altogether here and now so I don't have to face the e10s restrictions when it comes to e10s incompatibility.
My question is if it's technically possible for Cookie Autodelete to do
SDC's developer says it's not possible. I have no idea, not being a coder myself.
In fact it appears that Cookie AutoDelete is facing two walls when it comes to handling localStorage: the WebExtension format and the e10s implications. Need I say both bother me and many of us?
@zymase
SDC's developer says it's not possible
Well, you just quote him saying that's possible with e10s:
You can still force-enable e10s and SDC should clean your cookies just fine, but it can only clean your LocalStorage when the browser starts.
I wonder if it's possible for webextension too.
@zeepob , I quoted above SDC' developer:
Add-ons can't monitor sites' LocalStorage usage in e10s mode. This functionality will probably never be restored for legacy add-ons such as SDC. This means that the answer is "very likely never". You can still force-enable e10s and SDC should clean your cookies just fine, but it can only clean your LocalStorage when the browser starts.
Seems explicit to me.
It's very explicit and contradicts you saying that's not possible. I understand you have no knowledge to answer my question so please refrain from posting unrelated answers to me. Thanks.
I tried to bring my contribution to your question by referring to what is known. I struggle to find a contradiction when emphasizing what the developer of an add-on similar to Cookie AutoDelete wrote, explicitly. To force-enable e10s, which will clean cookies but will clean localStorage only when the browser starts, is not what I call monitoring localStorage, nor is it SDC's developer's opinion when he states "Add-ons can't monitor sites' LocalStorage usage in e10s mode."
Is it a language problem or one of basic logic and understanding? Good luck.
Maybe it's language. In my every post I was crystal clear that I talk about:
clearing all local storage on startup
Monitoring local storage and clearing it per domain is out of my question. Thank you.
OK, @zeepob, I have to agree that indeed your quest concerned clearing all local storage on startup
I probably missed that because, while it is a point of interest for you, it is so far from what I expect from localStorage monitoring that I unconsciously misunderstood. From there on, arguments mismatched. Neither a language nor a logic problem; obviously a wrong dialog triggered by an initial psychological bias.
I think we got it clear now :)
@zymase
"Add-ons can't monitor sites' LocalStorage usage in e10s mode. "
That's only for legacy extensions because Mozilla broke the XUL API for monitoring localstorage when they were implementing e10s and don't want to fix it because they were moving on to WebExtensions.
See 1130859 and 1043081. It's not that with e10s extensions can't clear localstorage ever again, because there is an API to do so, but it's too general of an approach.
@zeepob
Is it possible to clear all local storage on startup?
Yes this should be possible with the general API (browsingData).
How browsingData works currently is that you pass in which type of storage you want to clean and the removal options. The only problem is that it only has the since property, which means that I could pass 0 in it and it would clear all localstorage (as well as any other data that I specify).
What I want is another property hostname that I can pass in to delete data by site rather than only by time. The proper API is nice but this is probably the best for right now.
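For the curious, such a call would look roughly like the sketch below. (At the time of this comment the hostnames option was still hypothetical; it is essentially what later landed in Firefox 58.)
// Clear only the listed sites' localstorage, leaving everything else alone.
browser.browsingData.removeLocalStorage({
  hostnames: ['example.com', 'sub.example.com']
});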
Just so everyone is on the same page now: what you need is a hostname option to be added to browsingData's removal options, and that's it?
@mrdokenny thanks for the reply.
So what do you think about adding localstorage clearing at startup as an opt-in feature? I think it would be better than the nothing we currently have.
Just so everyone is on the same page now: what you need is a hostname option to be added to browsingData's removal options, and that's it?
@spinda Yes
Also, one interesting thing to note: that Chrome bug is 6 years old by now, so I'm pretty sure Mozilla won't take that long.
So what do you think about adding localstorage clearing at startup as an opt-in feature? I think it would be better than the nothing we currently have.
@zeepob They haven't added localstorage cleanup to browsingData yet. The related bug is under Current Solutions ->browsingData API -> Firefox Blockers and it's getting some activity from what I see. :)
Just following up on @zeepob's request -- I want to transition from SDC to a WebExtension (so thank you for working on this!). What practice would you recommend in the meantime to preserve privacy at a level similar to SDC? If I clear "Offline website data" in the "Clear history" option in Preferences->Privacy, will that clear localStorage and IndexedDB? I could do that by hand periodically (not quite as good as SDC doing it when a tab closes, but close).
Or maybe I should just keep using SDC for now and wait for 1355576 so you can implement @zeepob's suggestion.
Firefox's LocalStorage is all in the user's profile webappsstore.sqlite file. Once Firefox is closed, of course, deleting this webappsstore.sqlite file is radical; a new file will be initiated on the next Firefox run. Beware nevertheless that LocalStorage may include data from sites the user has whitelisted (which then, as is the case with SDC, is not removed, in the same way cookies from whitelisted sites are not removed). Even though I (still) use SDC at this time, I have the CCleaner application include my webappsstore.sqlite file for deletion, just in case SDC has missed a hit ...
Some addons are using webappsstore.sqlite to store preferences so deleting the file can break them.
@willsALMANJ you can stick to SDC for now or use this script with greasemonkey (or both)
Some addons are using webappsstore.sqlite to store preferences so deleting the file can break them.
Indeed, and that's why deleting our webappsstore.sqlite file is to be put on the account of a radical approach when no other way is available.
This is why add-ons such as SDC or Cookie AutoDelete consider that the same logic should apply to LocalStorage than to cookies : if the user whitelists a site then that site is authorized to keep cookies and LocalStorage data -- If not then cookies and LocalStorage data get wiped once the site is closed.
Generally speaking, storage is a plague. Browsers, Firefox included, have tried numerous ways to legitimize sites' right and pertinence to keep an associated user's data in his profile. That is NOT necessary; cookies are more than enough to keep a minimum when required, and when required means when the user has decided to. Hence, LocalStorage is baloney, and sites using it to keep the user's data rather than plain cookies are crooks.
Cookies, LocalStorage, Dom IndexedDB .... insane : INSANE. Cookies and that's all, cookies which are already by themselves totally misused compared to their initial function & meaning. All sites now (almost all) set a cookie, for nothing, just as another tracker item. CRAPS. Same with LocalStorage. CRAPS. Which is why add-ons such as SDC and now Cookie Autodelete are so worthy and an essential component in the light (or shade?!) of privacy.
Keep on the good work, guys.
Thanks for the suggestion of using Greasemonkey (I see those commands were suggested in the opening post here as well). I can try that out with Cookie AutoDelete as a replacement for SDC until the WebExtension APIs are added (since it's not clear that Greasemonkey will become a WebExtension).
What about IndexedDB? Is that cleared by clearing localStorage and sessionStorage? Also, is IndexedDB covered by this ticket, or only localStorage?
@willsALMANJ
(since it's not clear that Greasemonkey will become a WebExtension).
https://addons.mozilla.org/en-US/firefox/addon/violentmonkey/
What about IndexedDB? Is that cleared by clearing localStorage and sessionStorage? Also, is IndexedDB covered by this ticket, or only localStorage?
Assuming Mozilla adds the hostname property, it "should" work for all the DataTypeSets including IndexedDB, which could get a little interesting because SDC doesn't clear most of these (like cache) by hostname. So basically it would be like a "Forget this site" once you close all the tabs of the site.
Even if they can't add the parameter for all of them (like cache), it would be fine to have just the localstorage and indexedDB.
SDC handles cookies, localStorage and caches, not the IndexedDB data
IndexedDB is all in the user's profile 'storage' folder and managed by storage.sqlite. As far as I know nothing cleans the user's IndexedDB, except deleting the storage folder together with the storage.sqlite file (once Firefox is closed, of course).
While many sites require
dom.indexedDB.enabled
to be set to true, which is the default Boolean value, most won't deposit any data, they just need it, why? No idea; some sites will deposit data, like the Webmail site posteo.de, but most don't, yet their pages won't display correctly if dom.indexedDB is set to false (e.g. laposte.fr).
Funny thing:
Replace Firefox's default home page (about:home) by another page (i.e. about:blank)
Close Firefox then in the current Firefox profile, delete the storage folder and the storage.sqlite file
Restart Firefox : in the current profile the storage folder and the storage.sqlite file have not been rebuilt.
Open a site requiring dom.indexedDB = true without actually using it, such as the above-mentioned laposte.net, and the site will display correctly.
Now, if you reset Firefox's default home page to about:home and restart Firefox, both the storage folder and the storage.sqlite file are rebuilt on Firefox start.
One thing is sure, Firefox's home page not only triggers the storage folder/file but uses it as well, and my speculation is that this storage comes in very handy to Firefox's about:home and about:newtab pages when it comes to performing what these pages allow, including a quick look at the user's visited sites.
I avoid about:home and about:newtab. The start page is anything except about:home, and the new tab calls the page set as the start page.
Slightly off-topic, this extends beyond our concern: storage, IndexedDB and mainly LocalStorage.
Moreover, you will have noticed that English is not my mother tongue; thank you for your tolerance.
most won't deposit any data, they just need it, why? No idea;
When I checked a long time ago, jQuery or one of its plugins checks for indexedDB, and crashes if it is not available. The bug was closed as wontfix.
When I checked a long time ago, jQuery or one of its plugins checks for indexedDB, and crashes if it is not available. The bug was closed as wontfix.
Thanks for recalling that.
That's why I keep it enabled. I could/should have mentioned what is mentioned in Pants' Ghacks User.js:
If set as false (disabled) [dom.indexedDB.enabled], this WILL break some [old] add-ons and DOES break a lot of sites' functionality. Applies to websites, add-ons and session data.
Those are the facts, but they don't explain why this fantasy (like all storage) is required by some add-ons and some sites, even sites which don't use it. I'm not a programmer, so I'll put this mystery on the account of my ignorance.
SDC handles cookies, localStorage and caches, not the IndexedDB data
IndexedDB is all in the user's profile 'storage' folder and managed by storage.sqlite. As far as I know nothing cleans the user's IndexedDB, except deleting the storage folder together with the storage.sqlite file (once Firefox is closed, of course).
While many sites require
dom.indexedDB.enabled
to be set to true, which is the default Boolean value, most won't deposit any data, they just need it,
FWIW, there is also the Disable IndexedDB add-on which disables IndexedDB reliably. Unfortunately it doesn't work on a per-site basis, hence it's not possible to whitelist specific sites so you have to manually disable the add-on for them. Its github site is rather dormant - it seems that the add-on is no longer actively maintained.
The Disable IndexedDB Firefox add-on does the job indeed. I had it installed at one time after which I replaced it with another approach, that of the Custom Buttons add-on.
If you happen to be in need of several toolbar buttons performing various tasks you can install the add-on then add various buttons.
Custom Buttons add-on is available on AMO and on a dedicated forum. At one time the version on AMO was outdated so personally I had installed the custom_buttons-0.0.5.8.9-fixed2-signed from the link provided on that forum.
From there on, one of the interesting buttons made available for the add-on is the Preference Switcher (Basic) button. The button, like most others, may be duplicated with different settings, for instance,
Button 1 toggles dom.indexedDB.enabled (true/false)
Button 2 toggles security.mixed_content.block_display_content (true/false)
etc...(these 2 and others used here)
The advantage is that you have one add-on only, with a quasi-infinity of settings possibilities.
Anyway, be it the Disable IndexedDB add-on or the Custom Button approach, both indeed require the user's input as there is no per-site global setting.
If Firefox, or whatever browser, wants to handle storage on a user's computer, OK, but then at least the handling (on/off/per-site) should be correctly provided, as it is with cookies. At this time storage is NOT accompanied by quick switches, and this is not correct.
IMO best way to deal with indexedDB is to set to read-only two folders inside your-firefox-profile/storage directory. Those folders are default and temporary.
Now you can leave dom.indexedDB.enabled set to true and all sites will think that indexedDB is enabled while they can't write anything to it.
@zymase : Thanks for this hint. But since Custom Buttons doesn't offer a per-site setting either, I don't see any advantage over Disable IndexedDB. And unfortunately both add-ons are not WebExtensions.
@zeepob : Interesting approach! I'm gonna try it immediately!
@zeepob : I've tried your suggestion for several sites which require IndexedDB and haven't run into any problems. Thanks again - neat trick, indeed!
To be on the safe side I made these two folders immutable with sudo chattr +i (I'm on Linux).
Getting slightly off topic here 😉
Anyways, I'm going to take a break from developing this extension for about a month, and then come back to see the state of all these issues/Mozilla bug reports/etc. I'll still respond or fix any major breaking issues (for the most part) that anyone has. Feel free to send in a PR (as per contributing code) for any issues.
@zeepob the concept of setting a file or folder to read-only is valuable and most often respected (there are exceptions where Firefox will write/rename an important file which had been set to read-only, but this is seldom). To be noted: a folder may be made totally inaccessible by deleting it and creating a file (file, not folder) with the exact same name then set to read-only...
Anyway, if your tip works, and I have no doubt it does (confirmed by @curiosity-seeker), the problem remains when a site needs not only the setting dom.indexedDB.enabled to be true, but really needs to fill the user's storage folder: in such a case the trick may lead to problems. I'll have to try with posteo.de, which truly uses my storage folder by filling it! At this time I run posteo.de in Private Mode so the local storage is only in RAM ... I still prefer a per-site approach.
@mrdokenny ,
Getting slightly off topic here
I agree and plead guilty. Digging may indeed carry off the path.
I'll stay on the track, sorry for getting off the main topic.
I plead guilty, too :-) I just want to confirm that posteo.de doesn't cause any problems so far!
I just want to confirm that posteo.de doesn't cause any problems so far!
Thanks, @curiosity-seeker ! Good to know!
Back to the topic: Localstorage Support
Free coffee and bagels :)
Alright just got an update for a new bug with a patch already!
Bug 1388428 - Extend browsingData to support removing localStorage by host
Sorry for the blunt question, but did you implement some localstorage cleanup solution?
Thank you in advance
Cheers
@crssi Not yet as the above bug "Extend browsingData to support removing localStorage by host" hasn't been resolved yet.
Thank you
It seems Self-Destructing Cookies doesn't clear localstorage anymore in FF55+ so there is no point using it instead of Cookie-AutoDelete
@curiosity-seeker There's also the Privacy Settings add-on, which has options for enabling/disabling Local Storage and Indexeddb.
@Pentarctagon : I know. But that add-on is not a webextension and, hence, won't work with FF57+.
If you like to play with deeper settings, you can get a lot of info at https://github.com/ghacksuserjs/ghacks-user.js
Cheers
Note: For future testing when local storage by host lands: test it fails with FPI
OT: the same will happen with indexedDB by host if/when that lands, and I hope this WE adds that functionality - stick it on the wish list - 1333050 FYI
Would it also be prudent to add that privacy.firstparty.isolate disables access to cookies (and local storage after testing) on the AMO page?
I might be naive, but have you looked at how uMatrix implements web storage clearing?
Note that Firefox may have a problem with IndexedDB, so this add-on can help here, when it is implemented.
@rugk One thing I did notice about that question is that they mention they are using Firefox 52, which they say has: "cache, cookies, website settings, download history, search history, browser history and active logins".
However, now in Firefox 55, I see another checkbox: Offline Website Data. I also use Tutamail, which stores ~8 MBs total in two entries under Advanced > Network > Offline Web Content and User Data. When this new checkbox is checked and after I close/reopen Firefox, both of Tutamail's entries now show as 0 bytes rather than the ~ 4 MBs each before I closed and reopened Firefox.
@Pentarctagon I'm not sure what exactly Mozilla means by "Offline Website Data," but I don't think it's IndexedDB. For ages now I've had my preferences set to clear OWD when Firefox closes – the checkbox has been there since FF 50 at least – but the storage folder in my FF profile is still full of website data whenever I check it.
@practik OWD does indeed refer to indexedDB, it's just been broken for 3+ years - now fixed
1047098 - fixed 58 and pushed to 56 and 57
1333050 - fixed 58 and pushed to 57
There are a dozen or more similar tickets, most closed as duplicate, to do with clearing "IDB/OWD" when closing and/or when using "clear recent history" with time range everything, and some with time range not everything.
1367607 - reopened to clarify issues with cookies + IDB/OWD
832660 - the UI needs fixing
That last one is 5 years old but sums up the WTF? huh? situation. The UI is being overhauled anyway with a new permissions interface - see 1275599 which was resolved fixed 2 days ago. The new Storage API, new about:permissions, Photon and UI changes = expect a bit of a complete overhaul in wording
@Thorin-Oakenpants That was enlightening, thanks for the links!
Looks like extending browsingData to support removing localStorage by host just landed for Firefox.
^^They are not serious. Not until FF58. :(
It's only about 6 weeks and we've come so far, so let's count them down.. 😉
FF58 is scheduled for January 2018, which is quite more than 6 weeks.
But we might get lucky, since a few resolved bugs planned to land in FF58 were afterwards rescheduled for FF57.
Sorry, I'm on Beta, so I'll receive it way earlier. 😉
Bug reports aren't social clubs.
@pwd-github As the question, so the answer.
From then onwards, I wasn't after chit-chat, but after hinting at using the Beta for earlier usage of Cookie AutoDelete in Firefox.
Thanks for your understanding and proper manners next time.
Looks like extending browsingData to support removing localStorage by host just landed for Firefox.
It seems that this doesn't include IndexedDB, does it?
@curiosity-seeker No, localstorage is already covered by removeLocalStorage()
IDB is covered under 1333050 which just recently landed for 57
not entirely sure if this IDB currently allows by host and by time
not entirely sure if it also covers cache API + SW (service workers) cache and asm.js as discussed in the ticket
@Thorin-Oakenpants
IDB is covered under 1333050 which just recently landed for 57
not entirely sure if this currently allows by host and by time
Yes, that's what I meant. In the patch for 1388428 is no reference to IDB.
Will the Firefox version be able to clear local storage or do we need to wait for the Chrome API to land?
@kah0922 I usually enable/disable certain features (like Containers) based on the browser anyways. Right now I'm just waiting for actual documentation on the API to land as it could change from Nightly to Beta.
@mrdokenny According to 1388428
It should be available at Firefox 58
Hello, @mrdokenny.
You can wait a long time for the desired interface. But you can implement some useful functionality already.
The window.localStorage, window.sessionStorage, and window.caches objects control the entities for a particular domain, not for all domains. You do not have to wait for a special interface for the domain to appear. It already exists.
After closing the last tab for a specific domain, you can create a background tab for that domain and call the JavaScript code
window.localStorage.clear();
window.sessionStorage.clear();
window.caches.keys().then(function(cacheNames) {
  return Promise.all(cacheNames.map(function(cacheName) {
    return caches.delete(cacheName);
  }));
});
Next, you should close this background tab. This will clean the localStorage, sessionStorage, and cache for this domain.
You can also watch how the "Clear Cache" extension works
https://chrome.google.com/webstore/detail/clear-cache/cppjkneekbjaeellbfkmgnhonkkjfpdn?utm_source=chrome-app-launcher-info-dialog
Also you can see here how to work with the cache
https://davidwalsh.name/cache
Please make a full cleanup for the closed tab for us.
Thank you a lot for your extension.
Sorry for my English. It is not my native language.
The API documentation seems to have landed.
What's the status of this issue? Firefox Quantum is out; will the extension clear localstorage when installed?
Thanks.
The APIs for Firefox have landed, but Cookie AutoDelete has not been updated to implement support for Local Storage clearing yet.
Citing it here for convenience.
Excerpt from https://blog.mozilla.org/addons/2017/11/20/extensions-in-firefox-58/:
The browsingData API now supports clearing the indexedDB storage area
The browsingData API supports clearing localStorage by hostname, similar to cookies
I implemented basic localstorage cleaning.
Here are some things I noticed:
C-AD will have to set a cookie for sites that don't set cookies so that the cleanup "knows" that site's cookies and localstorage should be cleaned when the tab closes
Might not be compatible with containers enabled (needs more testing)
It will not clear localstorage that was previously there (Maybe a deep clean option that uses your history to clear cookies and localstorage?)
C-AD will have to set a cookie for sites that don't set cookies so that the cleanup "knows" that site's cookies and localstorage should be cleaned when the tab closes
It might be good to open a bug report to mozilla/chrome. I don't think that it should work that way. Just seems counter-intuitive.
@mrdokenny
C-AD will have to set a cookie for sites that don't set cookies so that the cleanup "knows" that site's cookies and localstorage should be cleaned when the tab closes
I'm not quite sure that I understand. AFAIK, cookies and localstorage are linked together in Firefox (unless this has recently changed): no cookie -> no localstorage. So why would C-AD need/want to clear localstorage on sites where cookies and, hence, localstorage, are blocked?
@mrdokenny
It will not clear localstorage that was previously there (Maybe a deep clean option that uses your history to clear cookies and localstorage?)
For the localstorage entries that were previously there, why not just check them against the white/grey lists?
That way, if they are not in the lists, they will be cleaned.
Also, can you please consider having a beta so we can test it before it's made official.
Thanks
For the localstorage entries that were previously there, why not just check them against the white/grey lists?
That way, if they are not in the lists, they will be cleaned.
Unlike cookies, there's no way to enumerate localstorage. The API requires you to pass in a list of hostnames. So what I am doing is getting that information from the cleanup part of cookies and passing that into the API for localstorage cleaning.
So if a site does not set cookies, then that hostname wouldn't get passed into the API for localstorage cleaning.
https://developer.mozilla.org/en-US/Add-ons/WebExtensions/API/browsingData/RemovalOptions
Also, can you please consider having a beta so we can test it before its made official.
Time is limited for me right now but I hope to get something out before Firefox 58 releases (since the API required for localstorage is in that version).
there's no way to enumerate localstorage. The API requires you to pass in a list of hostnames
Maybe it's worth requesting a better API in the bugtracker, as @publicarray suggested?
https://bugzilla.mozilla.org/show_bug.cgi?id=1329745
Unlike cookies, there's no way to enumerate localstorage.
Can we do a full cleanup from inside Firefox (or maybe from the profile folder)? A delete all is probably an easy way out for most people.
The current title of Bug 1329745 is "WE API to add/change localStorage items on a per-site basis" and the last comment is "Renaming, because removing is possible." So is this the wrong link to describe the possibility of enumerating?
Can we do a full cleanup from inside Firefox (or maybe from the profile folder)? A delete all is probably an easy way out for most people.
I agree. The StoragErazor add-on claims that it "automatically removes data stored in DOM Storage (local storage) and IndexedDB when the browser restarts". This might be a way to go.
automatically removes data stored in DOM Storage (local storage) and IndexedDB when the browser restarts
I think this contradicts the purpose of the extension - delete cookies/data that you don't need. Obviously I don't want to remove data of the sites which I regularly use.
I think this contradicts the purpose of the extension - delete cookies/data that you don't need. Obviously I don't want to remove data of the sites which I regularly use.
Of course, but it is a response to "It will not clear localstorage that was previously there". CAD won't be able to delete old data; a complete delete should give us the option to start with a fresh/clean localstorage and start whitelisting/blacklisting from there.
I uploaded 2.1.0b1 to the AMO beta channel which has localstorage support.
Some notes:
The API for localstorage cleaning doesn't appear to support Containers. So if cookies from one container are cleared, then all of the site's localstorage from all containers is deleted regardless of whitelisting rules.
You are still able to enable both Containers and Localstorage cleaning but a warning will appear.
TBH, I haven't noticed anything major with both of them on, but still better to put a warning there.
Due to the way this extension works, I am placing a temporary cookie on sites that have no cookies if you have the localstorage setting on.
The cookie's name and content make its purpose very obvious.
Fortunately, not many sites only use localstorage, so this shouldn't be a major issue.
This can't clear localstorage that was there previously
I'll eventually add a popup action that clears all of them manually or you can use something like StorageErazor
Some Test sites
A test site that uses only localstorage and no cookies:
https://mdn.github.io/dom-examples/web-storage/
Soundcloud uses localstorage for the volume control.
https://soundcloud.com/
BTW, why not set Path to some special value to avoid sending this CookieAutoDelete cookie on each request to the given host?
For example
Path: /cookie-for-localstorage-cleanup
Due to the way this extension works, I am placing a temporary cookie on sites that have no cookies if you have the localstorage setting on.
How does this affect users who would like to use something like #95?
How does this affect users who would like to use something like #95?
My other option is to store which websites you visited and pass that along to the localstorage API, which might not be ideal privacy-wise.
Using a fixed path name makes it easy to fingerprint Cookie AutoDelete users: https://anewuser.github.io/dom-examples/web-storage/index.html
Can you set it to a different random string every time a new dummy cookie is created?
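A sketch of what that could look like (variable names and the cookie contents are illustrative, not C-AD's actual implementation):
// Give each dummy cookie a random path so a page can't probe a
// well-known path to fingerprint Cookie AutoDelete users.
var randomPath = '/' + Math.random().toString(36).slice(2);
browser.cookies.set({
  url: 'https://' + hostname + randomPath, // hostname of the visited site
  name: 'CookieAutoDelete',
  value: 'Temporary cookie so localstorage cleanup sees this site',
  path: randomPath
});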
Since localstorage "support" is there, FF version 58 is out, and this thread is getting long, I'm going to close this issue. Any future issues with localstorage should be new issue. Thanks for the support.
@mrdokenny
Was this thread just for FF and not Chrome?
@zero77 Technically there is "Clear localstorage for this domain" in the "Clean" dropdown menu of the popup. This does not require an API since CAD injects a content script, so it works in Chrome.
I also doubt that this bug will be resolved anytime soon.
https://bugs.chromium.org/p/chromium/issues/detail?id=78093
I see, ok thanks.
@mrdokenny
Technically there is "Clear localstorage for this domain" in the "Clean" dropdown menu of the popup. This does not require an API since CAD injects a content script, so it works in Chrome.
Does this happen automatically when the tab is closed? If not, is it possible?
Does this happen automatically when the tab is closed? If not, is it possible?
He has said:
I'm going to close this issue. Any future issues with localstorage should be new issue.
And he has also already pointed you to the bug that needs to be solved by Chromium developers for it to be possible automatically: https://bugs.chromium.org/p/chromium/issues/detail?id=78093
Please don't post comments here anymore. Many people had subscribed to this issue and are still receiving notifications about your posts.
@mrdokenny Consider to restrict comments for this issue to contributors-only.
|
gharchive/issue
| 2017-06-14T03:25:30 |
2025-04-01T04:35:08.164499
|
{
"authors": [
"Eagle3386",
"EchoDev",
"HOxQRLGVTr",
"Kagami",
"Pentarctagon",
"Rictusempra",
"Solomon1732",
"Stebalien",
"Thorin-Oakenpants",
"Vincent43",
"anewuser",
"crssi",
"curiosity-seeker",
"grenzor",
"gwarser",
"ihateregs",
"kah0922",
"little-arhat",
"mrdokenny",
"practik",
"publicarray",
"pwd-github",
"rugk",
"ruv",
"spinda",
"willsALMANJ",
"zeepob",
"zer0def",
"zero77",
"zymase"
],
"repo": "mrdokenny/Cookie-AutoDelete",
"url": "https://github.com/mrdokenny/Cookie-AutoDelete/issues/44",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
957040121
|
SmartMatrix style multiplexing of addr and RGB data?
Hi,
I need to find a way to lower the memory footprint of my software, which is, so far, based around the SmartMatrix library. I'm just starting to adapt your code to the SmartMatrix style of latching out RGB and addr lines on the GPIO. (Since I can't easily change the hardware I already have.) It's a bit sticky, to put it mildly. Do you have any suggestions along this route? I'll post in this thread when I finish working my way through the two libraries...
Thanks!
Rudy
This library doesn't support any form of multiplexing / dual use of GPIOs; you would be best suited to sticking with the SmartMatrix library.
I've patched your libraries to work with multiplexed addr and RGB data using the method SmartMatrix uses. The main reason to do this is the memory savings. One 64x64 panel at an lsbMsbTransitionBit of 3 (for speed) uses only 23kB. I got eight 64x64 panels working on the PatternPlasma demo (at a paltry frame rate of 9fps), but I was still only using 133kB.
I'm not sure if this is something you'd like to include in your library or not. And I don't understand Git well enough to know the best way to upload the patch to let you use it. I tried to stick with your coding style where possible.
Let me know if you're interested and how best to get the code to you.
Best,
Rudy
Hi @twinotter,
Raise a pull request and I'll take a look.
If there's a 'clean' way of doing this then I may merge. I want to try to avoid making this library needlessly complex for rare use cases - but given you have made these changes and I am working on support for the 'newer' ESP32's, which have a pitiful amount of RAM (only 320kB vs. 520kB on the 'original'), this could actually be quite handy.
https://github.com/mrfaptastic/ESP32-HUB75-MatrixPanel-I2S-DMA/tree/386SX-33
Attach a .zip file here of your modified library as well if you want, if that's easier.
here's the zip file. I'll see if I can figure out pull requests. I've been meaning to learn git. :)
ESP32_HUB75_LED_MATRIX_PANEL_DMA_Display.zip
|
gharchive/issue
| 2021-07-30T20:27:06 |
2025-04-01T04:35:08.263987
|
{
"authors": [
"mrfaptastic",
"twinotter"
],
"repo": "mrfaptastic/ESP32-HUB75-MatrixPanel-I2S-DMA",
"url": "https://github.com/mrfaptastic/ESP32-HUB75-MatrixPanel-I2S-DMA/issues/159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
270573392
|
Use b-table instead table
There is a b-table component in bootstrap-vue, so it's better to use it instead of the raw HTML table tag.
Thanks for this hint. We are moving to b-table in the next release.
v1.0.6
|
gharchive/issue
| 2017-11-02T08:43:51 |
2025-04-01T04:35:08.288222
|
{
"authors": [
"wxs77577",
"xidedix"
],
"repo": "mrholek/CoreUI-Vue",
"url": "https://github.com/mrholek/CoreUI-Vue/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1027922480
|
Is there a way to show both path and title in nav bar (read/focus mode)?
Currently we are using "view" mode because that is the only view that contains the list of endpoints / description at a glance. However, the read/focus mode is awesome, with features like filtering, and easier navigation in general.
Is there a way that we can display the endpoint on the nav list?
The read/focus mode is awesome, with features like filtering,
Filtering and searching are both available in view mode too.
Is there a way that we can display the endpoint on the nav list?
use-path-in-nav-bar = true | false Example https://mrin9.github.io/RapiDoc/examples/nav-item-as-path.html
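If you prefer to toggle it from script, here's a small sketch (the attribute name comes from the docs above; a single <rapi-doc> element on the page is an assumption, and RapiDoc may need a re-render to pick up the change):
// Enable path display in the nav bar on the <rapi-doc> web component.
const rapidocEl = document.querySelector("rapi-doc"); // assumes one <rapi-doc> element
rapidocEl.setAttribute("use-path-in-nav-bar", "true");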
Feel free to close the issue if it takes care of your use case.
|
gharchive/issue
| 2021-10-16T01:56:28 |
2025-04-01T04:35:08.293898
|
{
"authors": [
"mrin9",
"sammy-tam"
],
"repo": "mrin9/RapiDoc",
"url": "https://github.com/mrin9/RapiDoc/issues/585",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
140535283
|
running as cronjob
Thanks for a great tool. I had 2 questions related to running db-sync as a cronjob via cli:
Is there a way to suppress the output to certain log levels, so that only warnings or errors are printed?
I noticed that when an error occurs, the exit code is still 0. Is it possible to get a non-zero exit code when there is an error?
Thanks! I'll look at fixing the error code and also add some -v levels to the options.
Cheers
Joe Green
On 13 Mar 2016, at 22:14, rbro notifications@github.com wrote:
Thanks for a great tool. I had 2 questions related to running db-sync as a cronjob via cli:
Is there a way to suppress the output to certain log levels, so that only warnings or errors are printed?
I noticed that when an error occurs, the exit code is still 0. Is it possible to get a non-zero exit code when there is an error?
—
Reply to this email directly or view it on GitHub.
I've added quiet options and fixed the error code to return non-zero on failure.
|
gharchive/issue
| 2016-03-13T22:14:48 |
2025-04-01T04:35:08.297539
|
{
"authors": [
"mrjgreen",
"rbro"
],
"repo": "mrjgreen/db-sync",
"url": "https://github.com/mrjgreen/db-sync/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
925897069
|
Getting MQTT Error
HI
I have been using this custom component for a while and haven't had any issues, however lately I have noticed that it stops working
The errors in the event log are
[custom_components.blitzortung.mqtt] Failed to connect to MQTT server due to exception: [Errno -3] Try again
I didn't think this used MQTT, as there isn't an option to point it to a broker, unless I am wrong...
I've just confirmed that the MQTT server is working and a lot of clients are connected now (but it looks like there was some downtime on 20 Jun, ~0:00 UTC). What HA version are you using (maybe you upgraded recently)?
Thanks for the quick reply.. I checked if my Adguard or PiHole were doing any blocking, but I couldn't see any
I did find that blitzortung.ha.sed.pl was being forwarded successfully to the upstream DNS.
However when I tried to ping or use nslookup to resolve it, it doesn't resolve to an IP..
I tried using a few DNS servers and nothing worked. Not sure if that is relevant or not..
I haven't updated HA lately. I am still on core-2021.5.5
However when I tried to ping or use nslookup to resolve it, it doesn't resolve to an IP..
I tried using a few DNS servers and nothing worked. Not sure if that is relevant or not..
Ha - so this is the strangest part: this domain should definitely resolve to an IP. Could you show the output of:
dig blitzortung.ha.sed.pl
OK, Now it is resolving. Below is the dig command
; <<>> DiG 9.16.15 <<>> blitzortung.ha.sed.pl
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25678
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 689d9c47e346e087 (echoed)
;; QUESTION SECTION:
;blitzortung.ha.sed.pl. IN A
;; ANSWER SECTION:
blitzortung.ha.sed.pl. 60 IN CNAME ec2-35-158-104-63.eu-central-1.compute.amazonaws.com.
ec2-35-158-104-63.eu-central-1.compute.amazonaws.com. 60 IN A 35.158.104.63
;; Query time: 4071 msec
;; SERVER: 172.30.32.3#53(172.30.32.3)
;; WHEN: Mon Jun 21 19:34:29 AEST 2021
;; MSG SIZE rcvd: 217
I also was able to resolve it from my windows desktop as well
Default Server: pi-hole
Address: 192.168.1.20
blitzortung.ha.sed.pl
Server: pi-hole
Address: 192.168.1.20
Non-authoritative answer:
Name: ec2-35-158-104-63.eu-central-1.compute.amazonaws.com
Address: 35.158.104.63
Aliases: blitzortung.ha.sed.pl
And it now appears to be working.. Very strange...
Maybe it was some problem with the DNS provider for my domain - please let me know if it happens again.
|
gharchive/issue
| 2021-06-21T07:05:16 |
2025-04-01T04:35:08.308169
|
{
"authors": [
"craigmate",
"mrk-its"
],
"repo": "mrk-its/homeassistant-blitzortung",
"url": "https://github.com/mrk-its/homeassistant-blitzortung/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
136527787
|
Mrknow does not work with Windows 10
Hello,
After installing Windows 10 I have a problem with MRKNOW - it doesn't work, not even the latest version. Specto works without a problem, I don't know what's going on.
I have Kodi version 16
Regards, and I'm waiting for some info
log?
I have Windows 10 and it works for me ...
OK, everything worked for me when I had 8.1, but it stopped after the upgrade
to Windows 10. I don't know whether that's related or a coincidence, so I'm looking
for help. Maybe I need to configure something extra that I don't know about. Why does Specto
work without a problem?
2016-02-26 7:26 GMT+00:00 mrknow notifications@github.com:
I have Windows 10 and it works for me ...
—
Reply to this email directly or view it on GitHub
https://github.com/mrknow/filmkodi/issues/140#issuecomment-189146199.
what user name do you use for Windows?
post the logs please?
chmurarafal1@gmail.com
2016-02-26 11:50 GMT+00:00 xxcriticxx notifications@github.com:
what user name do you use for Windows?
post the logs please?
—
Reply to this email directly or view it on GitHub
https://github.com/mrknow/filmkodi/issues/140#issuecomment-189239453.
What do you need that log for?
2016-02-26 11:58 GMT+00:00 Rafał Chmura chmurarafal1@gmail.com:
chmurarafal1@gmail.com
2016-02-26 11:50 GMT+00:00 xxcriticxx notifications@github.com:
what user name do you use for Windows?
post the logs please?
—
Reply to this email directly or view it on GitHub
https://github.com/mrknow/filmkodi/issues/140#issuecomment-189239453.
it will show the errors Kodi found
is your Windows user name "Rafał" by any chance?
no, ever since I upgraded to Windows 7 I've been using the login I gave you
I just don't understand why Specto works without a problem and this doesn't, but you're probably
more of an expert, so maybe that's how it is
if you think I should change something, the log or anything else, let me know, because this
irritates me terribly
on my other computer I have Windows 7 and there it works without a problem; on the
tablet it works without a problem, but here there's constantly an add-on error
On 26 February 2016 at 14:09, xxcriticxx notifications@github.com
wrote:
it will show the errors Kodi found
is your Windows user name "Rafał" by any chance?
—
Reply to this email directly or view it on GitHub
https://github.com/mrknow/filmkodi/issues/140#issuecomment-189294453.
Python doesn't understand Polish characters, that's why I'm asking: how do you log into Windows?
if there are no Polish characters, the log will show more
waiting for the log
https://github.com/mrknow/filmkodi/wiki/Jak-przesłać-logi-z-kodi
http://mods-kodi.pl/articles.php?article_id=36
http://iptvlive.org/97-pomoc/98-jak-wyslac-log-file
no Polish characters, but I already gave you the "log": chmurarafal1@gmail.com, that's
how I log in
On 26 February 2016 at 15:09, xxcriticxx notifications@github.com
wrote:
Python doesn't understand Polish characters, that's why I'm asking: how do you log into
Windows?
if there are no Polish characters, the log will show more
waiting for the log
https://github.com/mrknow/filmkodi/wiki/Jak-przesłać-logi-z-kodi
http://mods-kodi.pl/articles.php?article_id=36
http://iptvlive.org/97-pomoc/98-jak-wyslac-log-file
—
Reply to this email directly or view it on GitHub
https://github.com/mrknow/filmkodi/issues/140#issuecomment-189317012.
chmurek, not LOGIN but a log - it's a file of events recording everything that has happened since the program started, etc.
Uninstall KODI, remove the repo and install it fresh - everything works on 10. I went from 7 to 10, did exactly that, and it has all been running fine to this day - cheers
So, will anyone help? It still doesn't work and I have no f... idea what's going on...
of course, once you post the log
ok, and how can I find that log? On my end I get an "add-on error" message,
"detailed info in the event log", but when I go to the events the only thing
there is that Kodi started correctly at such-and-such a time
2016-02-29 0:25 GMT+00:00 xxcriticxx notifications@github.com:
of course, once you post the log
—
Reply to this email directly or view it on GitHub
https://github.com/mrknow/filmkodi/issues/140#issuecomment-189979880.
https://github.com/mrknow/filmkodi/wiki/Jak-przesłać-logi-z-kodi
http://mods-kodi.pl/articles.php?article_id=36
http://iptvlive.org/97-pomoc/98-jak-wyslac-log-file
I have a problem with the plugin after a backup. I made a copy of Kodi on Android and restored it on WIN10. All plugins and settings work except MrKnow. Previously mrknow worked correctly on WIN10. Please help, I'm out of ideas. Link to the log: http://xbmclogs.com/ppu2y8vmx
Hello,
When I installed the Kodi Log Uploader it just didn't seem to work.
I added the email and pressed OK; when an error appears I go to the Kodi Log Uploader add-on and click on it, but nothing happens (so there's no window asking whether it should send the email).
Even when I click "run" from the configure screen, nothing happens at all, as if it weren't reacting. A total mess.
I don't know, but this seems to be some bigger issue on my end????
R.
it's surely something simple
have you tried uninstalling the plugin and installing it again?
Yes. First I uninstalled the plugin and the repository, but it didn't help. I uninstalled all of Kodi and then restored the copy from the backup. Same result. Add-on error, while Specto works very well. Does my log show nothing?
I'm looking through it; I'm at work, so it will take a while
20:16:42 T:8852 ERROR: EXCEPTION Thrown (PythonToCppException) : -->Python callback/script returned the following error<--
- NOTE: IGNORING THIS CAN LEAD TO MEMORY LEAKS!
Error Type: <type 'exceptions.ImportError'>
Error Contents: Bad magic number in C:\Users\Mario\AppData\Roaming\Kodi\addons\script.module.xbmcfilm\lib\Parser.pyo
Traceback (most recent call last):
File "C:\Users\Mario\AppData\Roaming\Kodi\addons\plugin.video.mrknow\default.py", line 108, in
import wykop, joemonster, milanos,filmbox,vodpl, filmydokumentalne, zalukaj, efilmyseriale
File "C:\Users\Mario\AppData\Roaming\Kodi\addons\plugin.video.mrknow\host\milanos.py", line 15, in
import mrknow_pLog, settings, Parser
ImportError: Bad magic number in C:\Users\Mario\AppData\Roaming\Kodi\addons\script.module.xbmcfilm\lib\Parser.pyo
-->End of Python script error report<--
mrknow will be needed
OK, problem solved on my end :-)
C:\Users\Mario\AppData\Roaming\Kodi\addons\script.module.xbmcfilm\lib\Parser.pyo
you have to rename it to parser.pyo with a lowercase letter :-) I had been fiddling with those names on Android earlier because it didn't work, and apparently something was left behind
do you have the YouTube add-on installed?
Yes. Everything works. Problem solved.
is YouTube on the latest version?
5.1.17, I don't know if that's the latest version. I rarely use YT.
it's needed for the documentaries
Nothing works for me still; now even Specto has stopped working...
I guess I have to accept that the upgrade to Windows 10 has ended my Kodi era..
well, unless someone finally helps me...
The previous advice isn't doing the trick....
write down what you've done so far
uninstall everything and start from scratch
I've uninstalled both Kodi and the plugins several times, it doesn't work
The Log Uploader doesn't work either, so I can't even display the problem...
I don't know what else I could do,
but I'd rather not uninstall Windows 10 and such
2016-03-01 22:57 GMT+00:00 xxcriticxx notifications@github.com:
write down what you've done so far
uninstall everything and start from scratch
—
Reply to this email directly or view it on GitHub
https://github.com/mrknow/filmkodi/issues/140#issuecomment-190949752.
what user do you have for Windows? I've asked so many times
did it ever work?
give more info
I've already told you several times
Windows user: chmurarafal1@gmail.com
It worked when I had Windows 8.1; when I upgraded to 10 it stopped working, i.e. it has never worked since I've had Windows 10, despite installing and uninstalling Kodi multiple times (versions 15.2 and now 16).
At first Specto worked; now nothing works any more, not a single plugin…
In my opinion something is blocking it, but I don't know what, or where to look…
R.
chmurka, since when is an email address a Windows user name?
of course it isn't, sorry..
I've already solved the problem myself....
how did you solve it?
I see the topic is closed, but I have the same problem.
WIN10 Home Edition.
I changed my account to one without Polish characters.
The Parser.pyo file is named in lowercase.
Mrknow - throws an error
KODI Log.... I can't run it either - it throws an error
But the SPECTO plugin works :)
I've dug through the internet - found nothing. I uninstalled Kodi 3 times, cleaned the registry, etc. Still the same.
KODI - latest version, 16.xxx
mrknow - the latest available plugin.
Any ideas what else I could check? :)
@marqius2, at work they're making me use Windows (what bad luck), version 10 as it happens, so during the week I'll try to check and see whether it works.
|
gharchive/issue
| 2016-02-25T22:12:51 |
2025-04-01T04:35:08.343504
|
{
"authors": [
"chmurek77",
"marqius2",
"mrknow",
"plumers",
"rysson",
"xxcriticxx",
"zlotychlopak"
],
"repo": "mrknow/filmkodi",
"url": "https://github.com/mrknow/filmkodi/issues/140",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
141150874
|
Read Write lock cannot be correctly unlocked
In ReadLock and WriteLock, the unlock method uses the thread id to determine whether the lock is owned by the current thread. It works fine on a single machine.
However, when it is used in a distributed processing framework such as MapReduce, the Read and Write lock cannot be correctly unlocked.
I browsed the source code and found that the UUID is generated in the lock. I think it would be better to use the UUID as the lock id instead of the thread id, or to provide a way to let users set a unique id.
ReadLock and WriteLock should be tied to the thread which invoked the lock method. The same thread should be used for the unlock method; this is a rule for all locks in Java.
I cannot fully understand your comment. Does it mean that Redisson cannot be used across different JVMs or different machines?
I used Redisson in a MapReduce framework and each mapper executes code like this
redission.getReadWriteLock("lock").readLock().lock();
....
redission.getReadWriteLock("lock").readLock().unlock();
however I often got an IllegalMonitorStateException which tells me that the lock is not owned by this thread.
I made sure that there is no error in my code and that the unlock and lock methods are executed in pairs.
I have modified the Redisson code and added a field in RedissonLock, RedissonReadLock and RedissonWriteLock to store my own generated unique id. It works fine for now, but I think it would be better for Redisson to provide a proper mechanism.
I also tried to modify the source code of the JDK Thread class to set my own thread id, but I gave up as there are native methods in the source.
I think it would be an improvement for Redisson to support locks that work well across different machines.
ReadWriteLock does work in a distributed env. But if you lock any lock in one thread, you should use the same thread for unlocking. This is the rule of thumb.
I wonder whether you have tested Redisson locks in a distributed environment.
In a distributed environment, two threads on different machines may have the same thread id. In this case, how does Redisson check which thread is holding the lock?
If Redisson does work in a distributed environment, can you tell me why the code does not work in my case?
I used Redisson in a MapReduce framework and each mapper executes code like this
redission.getReadWriteLock("lock").readLock().lock();
....
redission.getReadWriteLock("lock").readLock().unlock();
however I often got an IllegalMonitorStateException which tells me that the lock is not owned by this thread.
I made sure that there is no error in my code and that the unlock and lock methods are executed in pairs.
I have modified the Redisson code and added a field in RedissonLock, RedissonReadLock and RedissonWriteLock to store my own generated unique id. It works fine.
Lock contains info about RedissonClient (UUID) and thread in format uuid:threadId
Redisson Lock objects are proven by cluster usage by many Redisson customers. It's the first object they used in Redisson.
It's possible that you are using another RedissonClient with the same thread for unlock. Then you will get the same issue.
I used Redisson in a MapReduce framework and each mapper executes code like this
redission.getReadWriteLock("lock").readLock().lock();
....
redission.getReadWriteLock("lock").readLock().unlock();
@milesandnick Just wondering, how many Redisson instances do you have in total in the program? Am I right to assume you have one Redisson instance per mapper?
@jackygurui Yes, I have one Redisson instance per mapper. Does this cause this kind of problem?
@mrniko I use only one RedissonClient per mapper, so there would not be another client with the same thread.
@milesandnick Are you using the same redisson instance to do the unlock?
@milesandnick Also, I think it's easy to see the cause of the problem if you add some logging of thread and UUID info to the lock and unlock methods.
@milesandnick have you resolved it?
I guess it may be that the lock exceeded the expire time and was unlocked automatically.
@mrniko
It could be so
|
gharchive/issue
| 2016-03-16T02:04:03 |
2025-04-01T04:35:08.353615
|
{
"authors": [
"jackygurui",
"milesandnick",
"mrniko"
],
"repo": "mrniko/redisson",
"url": "https://github.com/mrniko/redisson/issues/436",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
162134004
|
RMapCache with timeout leaves redisson__idle__set__ behind
If you create an RMapCache with a TTL entry and then delete the map, the idle set is left behind. This should be deleted too.
E.g. On a clean redis instance, this leaves the ZSET redisson__idle__set__{testRMapCacheValues}
final RMapCache<String, String> map = redis.getMapCache( "testRMapCacheValues" );
try
{
map.put( "1234", "5678", 0, TimeUnit.MINUTES, 60, TimeUnit.MINUTES );
System.out.println( map.values() );
}
finally
{
map.delete();
}
127.0.0.1:6379> scan 0
"0"
"redisson__idle__set__{testRMapCacheValues}"
127.0.0.1:6379> zscan "redisson__idle__set__{testRMapCacheValues}" 1
"0"
""1234""
"1466772780912"
Fixed
|
gharchive/issue
| 2016-06-24T12:10:56 |
2025-04-01T04:35:08.357569
|
{
"authors": [
"mrniko",
"neilwightman"
],
"repo": "mrniko/redisson",
"url": "https://github.com/mrniko/redisson/issues/540",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
910640187
|
CA Detection Issue (Mi 11 Ultra Global/Europe)
Hi,
I have been using the NetMonster app for a long time, but now, since I got my new phone, it does not work properly as it should. My Mi 11 Ultra is on MIUI 12.5.3, Android 11.
At home I have 2CA B1+20 from o2 Germany, but the app does not recognize it properly like my old phone does.
I noticed that the system app phone info has wrong download and upload values, too. (Already told the MIUI Rom Developers)
I suspected that the current ROM of my phone sends wrong readouts, but today I traveled to another city and could read 2CA of B7+20 just fine.
The speeds are as expected and LTE+/4G+ icons are there.
Maybe it's also an issue with Netmonster Core. Idk.
I cross-tested with a SIM of Deutsche Telekom in my Mi 11 Ultra. It seems to detect properly all the time.
O2 - de sadly only on B7+B20.
The readouts look exactly the same between o2 and DT, so I guess it is a problem of NetMonster not interpreting correctly? I don't really know where to start.
To further clarify:
Netmonster shows "LTE-A xxxx", but not the other cells on o2 - de. It just works on 2600+800. Telekom and Vodafone work fine.
So I tested with a Vodafone.de SIM again and it seems the readout issue is also present here.
Probably Telekom, too, but I cannot test that at the moment. I don't have the necessary CA stations here.
So the bug is at least reproducible for B1+B20 on the Mi 11 Ultra. My other phone, the Mi 9T Pro, works fine over all combinations, though.
@mroczis Is there a way to send you debug info from NetMonster that maybe tells you more about the issue?
Detection is based on some officially not published metadata. It works flawlessly on some phones, yet on others it's a bit sketchy. The other factor that comes into the game is carrier-specific configuration. Some manufacturers enable only certain combinations of CA per carrier, behaviour you described could explain the reason why NetMonster works this way.
But according to this website https://cacombos.com/device/M2102K1G your device should be able to connect to such combos.
Detection is based on some officially not published metadata. It works flawlessly on some phones, yet on others it's a bit sketchy. The other factor that comes into the game is carrier-specific configuration. Some manufacturers enable only certain combinations of CA per carrier, behaviour you described could explain the reason why NetMonster works this way.
But according to this website https://cacombos.com/device/M2102K1G your device should be able to connect to such combos.
Yes. It supports it and also connects to both. I verified this by speedtesting and with other phones where I can lock the bands and get the same speeds.
I can't share a video on here, but in the phone info screen, where you can see the Physical Channel Config, the config disappears a lot. I assume it is a ROM bug and I already reported it to MIUI Devs.
Nevertheless Netmonster seems to catch all the other CA Combos just alright for example 900 + 1800 + 2100. I also noticed it seemed to work once or twice with 800 + 2100, too, but only for a few moments, before disappearing again.
I can provide you Diag-Logs if that would help you.
Oh that's quite sad, but still thanks for the info. Maybe with Android 12 it works as expected again.
Detection is based on some officially not published metadata. It works flawlessly on some phones, yet on others it's a bit sketchy. The other factor that comes into the game is carrier-specific configuration. Some manufacturers enable only certain combinations of CA per carrier, behaviour you described could explain the reason why NetMonster works this way.
But according to this website https://cacombos.com/device/M2102K1G your device should be able to connect to such combos.
Yes. It supports it and also connects to both. I verified this by speedtesting and with other phones where I can lock the bands and get the same speeds.
I can't share a video on here, but in the phone info screen, where you can see the Physical Channel Config, the config disappears a lot. I assume it is a ROM bug and I already reported it to MIUI Devs.
Nevertheless Netmonster seems to catch all the other CA Combos just alright for example 900 + 1800 + 2100. I also noticed it seemed to work once or twice with 800 + 2100, too, but only for a few moments, before disappearing again.
I can provide you Diag-Logs if that would help you.
I noticed that on my other phone in LTE Config "mContextIds" and "mPhysicalCellId" aren't *** or [] values. They have meaningful info.
Could that be a ROM Issue?
/**
* Return a copy of this PhysicalChannelConfig object but redact all the location info.
* @hide
*/
public PhysicalChannelConfig createLocationInfoSanitizedCopy() {
return new Builder(this).setPhysicalCellId(PHYSICAL_CELL_ID_UNKNOWN).build();
}
I found this in the AOSP Physical Channel Config Code. Is it possibly related to this?
What could cause the redaction of all cell location info?
Today my other phone - a Mi 9T Pro - got MIUI 12.5 and Android 11. So I tested if it also has this bug now and voila: The same bug is present on there now, too.
Strange enough, one of my friends uses a Samsung Galaxy S21 Ultra with Android 11 and does not have this bug at all.
It seems to lie somewhere in the MIUI 12.5 and Android 11 Combo.
PhysicalChannel is unfortunately not accessible for developers for a while. You can see it in Phone Info as you noted but it's buggy. It's not your ROM issue but generally AOSP issue. There were multiple attempts not only from my side to make it public but Google engineers do not feel like we should have access to it. Related issues are : https://issuetracker.google.com/issues/188565322, https://issuetracker.google.com/issues/182811458#comment8.
Oh no. I understand the real problem now. Android 11+ sucks for API access. I guess Netmonster is half-broken for the time being on phones with 11+. Hopefully proper API access will come back in some future version.
Physical Channel is not accessible since Android 11, hence there's no way to use it to improve cell info.
@mroczis Since the latest NetMonster update a few days ago, the CA detection seems to work again somewhat. Weirdly enough, just for Deutsche Telekom, not for Vodafone or o2-de/Telefonica.
Did you manage to get a work-around or is it something on the network side?
Hello, any news ?
Same issue on Android 12.
I may have a workaround, you can get the Physical Channel configuration by executing this shell command through adb:
logcat -b radio RILQ:S | grep UNSOL_PHYSICAL_CHANNEL_CONFIG
Unfortunately, it may require root access and you would need to embed the shell command directly in your application.
There is a similar issue on my device (Mi11 Venus). The Android version is A13.
|
gharchive/issue
| 2021-06-03T16:02:45 |
2025-04-01T04:35:08.378266
|
{
"authors": [
"Blykam",
"Philippe2705",
"devenhala",
"mroczis"
],
"repo": "mroczis/netmonster-core",
"url": "https://github.com/mroczis/netmonster-core/issues/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1510190196
|
ReentrancyCheck App crash when using dispatch async queue and callback
WHAT
I am trying to add a callback for getting values via the dispatch queue in Objective-C; however, when I write the code as mentioned below I get the following error:
Thread 2 Crashed:: Dispatch queue: com.apple.root.default-qos
0 RnAppTrial 0x10bdab6a2 facebook::react::(anonymous namespace)::ReentrancyCheck::before() + 322 (HermesExecutorFactory.cpp:123)
1 RnAppTrial 0x10bdab555 facebook::jsi::detail::BeforeCaller<facebook::react::(anonymous namespace)::ReentrancyCheck, void>::before(facebook::react::(anonymous namespace)::ReentrancyCheck&) + 21 (decorator.h:415)
2 RnAppTrial 0x10bdab533 facebook::jsi::WithRuntimeDecorator<facebook::react::(anonymous namespace)::ReentrancyCheck, facebook::jsi::Runtime, facebook::jsi::Runtime>::Around::Around(facebook::react::(anonymous namespace)::ReentrancyCheck&) + 35 (decorator.h:740)
3 RnAppTrial 0x10bdab4dd facebook::jsi::WithRuntimeDecorator<facebook::react::(anonymous namespace)::ReentrancyCheck, facebook::jsi::Runtime, facebook::jsi::Runtime>::Around::Around(facebook::react::(anonymous namespace)::ReentrancyCheck&) + 29 (decorator.h:739)
4 RnAppTrial 0x10bda2b75 facebook::jsi::WithRuntimeDecorator<facebook::react::(anonymous namespace)::ReentrancyCheck, facebook::jsi::Runtime, facebook::jsi::Runtime>::isFunction(facebook::jsi::Object const&) const + 37 (decorator.h:636)
5 RnAppTrial 0x10be49fd1 facebook::jsi::Object::isFunction(facebook::jsi::Runtime&) const + 33 (jsi.h:622)
6 RnAppTrial 0x10be4a60c facebook::jsi::Object::asFunction(facebook::jsi::Runtime&) const & + 60 (jsi.cpp:202)
7 RnAppTrial 0x10bf534ae invocation function for block in MmkvHostObject::get(facebook::jsi::Runtime&, facebook::jsi::PropNameID const&)::$_3::operator()(facebook::jsi::Runtime&, facebook::jsi::Value const&, facebook::jsi::Value const*, unsigned long) const + 110 (MmkvHostObject.mm:185)
8 libdispatch.dylib 0x1102838e4 _dispatch_call_block_and_release + 12
9 libdispatch.dylib 0x110284b25 _dispatch_client_callout + 8
10 libdispatch.dylib 0x11028731e _dispatch_queue_override_invoke + 835
11 libdispatch.dylib 0x110295310 _dispatch_root_queue_drain + 424
12 libdispatch.dylib 0x110295c5e _dispatch_worker_thread2 + 155
13 libsystem_pthread.dylib 0x1127c8f8a _pthread_wqthread + 256
14 libsystem_pthread.dylib 0x1127c7f57 start_wqthread + 15
This error gets hit in the HermesExecutorFactory.cpp file. I was wondering if this is probably because the memory reference of arguments / userCallbackRef has changed and hence the app crashes. I'm fairly new to C++ and may not have a lot of knowledge about its internals.
Code
if (propName == "getStringWithCallback") {
// MMKV.getString(key: string)
return jsi::Function::createFromHostFunction(runtime,
jsi::PropNameID::forAscii(runtime, funcName),
2, // key
[this](jsi::Runtime& runtime,
const jsi::Value& thisValue,
const jsi::Value* arguments,
size_t count) -> jsi::Value {
if (!arguments[0].isString()) {
throw jsi::JSError(runtime, "First argument ('key') has to be of type string!");
}
auto userCallbackRef = std::make_shared<jsi::Object>(arguments[1].getObject(runtime));
auto keyName = convertJSIStringToNSString(runtime, arguments[0].getString(runtime));
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void){
auto value = [instance getStringForKey:keyName];
if (value != nil) {
userCallbackRef->asFunction(runtime).call(runtime, convertNSStringToJSIString(runtime, value));
//return convertNSStringToJSIString(runtime, value);
} else {
userCallbackRef->asFunction(runtime).call(runtime, jsi::Value::undefined());
//return jsi::Value::undefined();
}
});
return jsi::Value::undefined();
});
}
Wait what? Why are you adding this to the C++ sources, this is slower, more code, and still synchronous?
Yep, after creating the issue I read about the 2 different threads and how we need to do invokeAsync on the jsCallInvoker. However, when I try to access the jsCallInvoker on the bridge I don't seem to be getting that variable..
I also made a Stack Overflow question about the same:
https://stackoverflow.com/questions/74922372/access-jscall-invoker-via-the-bridge-in-react-native-module
Do I need to add any specific headers, initialize the bridge in a specific way, or enable the new architecture to make the JS call invoker available?
No it doesn't, and yes you have to add the RCTBridge+Private.h import
but that's a hacky solution
|
gharchive/issue
| 2022-12-24T20:58:02 |
2025-04-01T04:35:08.388772
|
{
"authors": [
"mrousavy",
"nitish24p"
],
"repo": "mrousavy/react-native-mmkv",
"url": "https://github.com/mrousavy/react-native-mmkv/issues/492",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1908376210
|
Can not use react-native-vision-camera frameProcessor to scan QR codes 🐛
What's happening?
When using react native vision camera frameProcessor property in order to scan QR codes (combined with vision-camera-code-scanner), I get the following android error when navigating to the screen that renders the camera view (app builds correctly):
java.lang.UnsatisfiedLinkError: No implementation found for void com.mrousavy.camera.CameraView.frameProcessorCallback(androidx.camera.core.ImageProxy) (tried Java_com_mrousavy_camera_CameraView_frameProcessorCallback and Java_com_mrousavy_camera_CameraView_frameProcessorCallback__Landroidx_camera_core_ImageProxy_2)
and then the app dies.
Fun fact is that without using the frameProcessor prop, everything works fine.
This error happens when the library react-native-worklets-core is not installed and ['react-native-worklets-core/plugin'] is not added to babel.config.js.
On the other hand, if I add react-native-worklets-core and its babel config line, I get a different error and I can't even build the app:
What went wrong:
Could not determine the dependencies of task ':react-native-worklets-core:compileDebugAidl'.
> Could not resolve all task dependencies for configuration ':react-native-worklets-core:debugCompileClasspath'.
> Could not find com.facebook.react:react-android:.
Required by:
project :react-native-worklets-core
> Could not find com.facebook.react:hermes-android:.
Required by:
project :react-native-worklets-core
I have looked at these two similar bug reports and tried everything explained there, but the issue persists and I am not finding a solution..
https://github.com/mrousavy/react-native-vision-camera/issues/1463
https://github.com/mrousavy/react-native-vision-camera/issues/1097
Reproduceable Code
import { useIsFocused } from '@react-navigation/native'
import { BarcodeFormat, Barcode, scanBarcodes } from 'vision-camera-code-scanner'
import React, { useEffect, useState } from 'react'
import { StyleSheet } from 'react-native'
import { runOnJS } from 'react-native-reanimated'
import { useCameraDevices, useFrameProcessor } from 'react-native-vision-camera'
import { Camera } from 'react-native-vision-camera'
import { LoaderComponent } from '#components/loader'
export const QRReader = () => {
const [hasPermission, setHasPermission] = React.useState(false)
const devices = useCameraDevices()
const device = devices.back
const [barcodes, setBarcodes] = useState<Barcode[]>([])
const isFocused = useIsFocused()
console.log('hasPermission', hasPermission)
useEffect(() => {
console.log('barcodes', barcodes)
}, [barcodes])
const frameProcessor = useFrameProcessor((frame) => {
'worklet'
const detectedBarcodes = scanBarcodes(frame, [BarcodeFormat.QR_CODE])
runOnJS(setBarcodes)(detectedBarcodes)
}, [])
useEffect(() => {
const setCameraPermission = async () => {
const status = await Camera.requestCameraPermission()
setHasPermission(status === 'authorized')
const cameraPermission = await Camera.getCameraPermissionStatus()
console.log('cameraPermission', cameraPermission)
}
setCameraPermission()
}, [])
if (device == null) {
return <LoaderComponent />
}
return (
<Camera
style={StyleSheet.absoluteFill}
device={device}
isActive={isFocused}
frameProcessor={frameProcessor}
frameProcessorFps={5}
/>
)
}
Relevant log output
java.lang.UnsatisfiedLinkError: No implementation found for void com.mrousavy.camera.CameraView.frameProcessorCallback(androidx.camera.core.ImageProxy) (tried Java_com_mrousavy_camera_CameraView_frameProcessorCallback and Java_com_mrousavy_camera_CameraView_frameProcessorCallback__Landroidx_camera_core_ImageProxy_2)
FATAL EXCEPTION: pool-28-thread-1
Process: myApp.dev, PID: 22643
java.lang.UnsatisfiedLinkError: No implementation found for void com.mrousavy.camera.CameraView.frameProcessorCallback(androidx.camera.core.ImageProxy) (tried Java_com_mrousavy_camera_CameraView_frameProcessorCallback and Java_com_mrousavy_camera_CameraView_frameProcessorCallback__Landroidx_camera_core_ImageProxy_2)
at com.mrousavy.camera.CameraView.frameProcessorCallback(Native Method)
at com.mrousavy.camera.CameraView.configureSession$lambda-7$lambda-6(CameraView.kt:491)
at com.mrousavy.camera.CameraView.$r8$lambda$cqtIchEZdTZaV3R0UUrDpVbB1Es(Unknown Source:0)
at com.mrousavy.camera.CameraView$$ExternalSyntheticLambda1.analyze(Unknown Source:2)
at androidx.camera.core.ImageAnalysis.lambda$setAnalyzer$2(ImageAnalysis.java:476)
at androidx.camera.core.ImageAnalysis$$ExternalSyntheticLambda0.analyze(Unknown Source:2)
at androidx.camera.core.ImageAnalysisAbstractAnalyzer.lambda$analyzeImage$0$androidx-camera-core-ImageAnalysisAbstractAnalyzer(ImageAnalysisAbstractAnalyzer.java:285)
at androidx.camera.core.ImageAnalysisAbstractAnalyzer$$ExternalSyntheticLambda1.run(Unknown Source:14)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1137)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:637)
at java.lang.Thread.run(Thread.java:1012)
Camera Device
{
"formats": [],
"maxZoom": 10,
"minZoom": 1,
"supportsLowLightBoost": false,
"supportsDepthCapture": false,
"neutralZoom": 1,
"supportsFocus": true,
"supportsRawCapture": false,
"hasFlash": false,
"name": "back (0)",
"supportsParallelVideoProcessing": false,
"isMultiCam": false,
"hasTorch": false,
"devices": [
"telephoto-camera"
],
"position": "back",
"id": "0"
}
Package json dependencies:
{
"dependencies": {
"@babel/plugin-proposal-private-methods": "^7.18.6",
"@casl/ability": "^6.1.1",
"@casl/react": "^3.1.0",
"@datadog/mobile-react-native": "^1.0.0",
"@datadog/mobile-react-navigation": "^1.0.0",
"@formatjs/intl-datetimeformat": "^6.2.0",
"@formatjs/intl-getcanonicallocales": "^2.0.4",
"@formatjs/intl-locale": "^3.0.6",
"@formatjs/intl-numberformat": "^8.1.3",
"@formatjs/intl-pluralrules": "^5.1.3",
"@nunogois/proxy-client-react-native": "^0.0.28",
"@react-native-async-storage/async-storage": "^1.17.10",
"@react-native-community/clipboard": "^1.5.1",
"@react-native-community/netinfo": "^9.3.0",
"@react-native-firebase/app": "^14.9.3",
"@react-native-firebase/database": "^14.9.3",
"@react-native-firebase/messaging": "^14.9.3",
"@react-native-masked-view/masked-view": "0.2.0",
"@react-native-picker/picker": "^2.4.4",
"@react-navigation/bottom-tabs": "^6.5.8",
"@react-navigation/drawer": "^6.4.4",
"@react-navigation/native": "^6.0.12",
"@react-navigation/native-stack": "^6.8.0",
"@segment/analytics-react-native": "^2.7.1",
"@segment/sovran-react-native": "^0.4.4",
"@sentry/react-native": "^4.2.4",
"consistencss": "^1.7.0",
"date-fns-tz": "^1.3.6",
"deprecated-react-native-prop-types": "2.2.0",
"formik": "^2.2.9",
"i18next": "^21.9.1",
"lottie-ios": "3.4.0",
"lottie-react-native": "5.1.4",
"patch-package": "^6.4.7",
"postinstall-postinstall": "^2.1.0",
"react": "18.1.0",
"react-i18next": "^11.18.5",
"react-native": "0.70.9",
"react-native-autolink": "^4.0.0",
"react-native-calendars": "^1.1288.2",
"react-native-code-push": "^7.0.5",
"react-native-device-info": "^10.0.2",
"react-native-dotenv": "^3.3.1",
"react-native-email-link": "^1.14.1",
"react-native-error-boundary": "^1.1.15",
"react-native-eva-icons": "^1.3.1",
"react-native-event-listeners": "^1.0.7",
"react-native-fast-image": "^8.6.0",
"react-native-flipper": "^0.162.0",
"react-native-gesture-handler": "^2.6.0",
"react-native-image-picker": "^5.4.2",
"react-native-keyboard-manager": "4.0.13-17",
"react-native-linear-gradient": "^2.6.2",
"react-native-localize": "^2.2.3",
"react-native-mmkv": "2.5.1",
"react-native-mmkv-flipper-plugin": "^1.0.0",
"react-native-modal": "^13.0.1",
"react-native-offline": "^6.0.0",
"react-native-pager-view": "^5.4.25",
"react-native-picker-select": "^8.0.4",
"react-native-reanimated": "^2.13.0",
"react-native-restart": "^0.0.24",
"react-native-safe-area-context": "^4.3.3",
"react-native-screens": "^3.17.0",
"react-native-select-dropdown": "^3.3.3",
"react-native-shadow-2": "6.0.5",
"react-native-share": "^7.9.0",
"react-native-snackbar": "^2.4.0",
"react-native-snap-carousel": "^3.9.1",
"react-native-svg": "^12.3.0",
"react-native-switch-selector-fix": "^2.0.4",
"react-native-tab-view": "^3.1.1",
"react-native-vector-icons": "^9.2.0",
"react-native-vision-camera": "^2.16.1",
"react-native-walkthrough-tooltip": "^1.3.1",
"react-native-webp-format": "^1.1.2",
"react-native-webview": "^11.23.0",
"react-query": "^3.39.2",
"styled-components": "^5.3.6",
"ts-jest": "^27",
"unleash-proxy-client": "^2.3.0",
"vision-camera-code-scanner": "0.2.0",
"yup": "^0.32.11"
},
"devDependencies": {
"@babel/core": "^7.18.13",
"@babel/preset-typescript": "^7.18.6",
"@babel/runtime": "^7.18.9",
"@commitlint/cli": "^17.1.2",
"@commitlint/config-conventional": "^17.1.0",
"@react-native-community/eslint-config": "^3.1.0",
"@testing-library/react-native": "^11.1.0",
"@trivago/prettier-plugin-sort-imports": "^3.3.0",
"@types/jest": "^29.0.0",
"@types/lodash": "^4.14.184",
"@types/react": "^17.0.43",
"@types/react-native": "^0.67.4",
"@types/react-native-auth0": "^2.13.1",
"@types/react-native-calendars": "^1.1264.3",
"@types/react-native-restart": "^0.0.14",
"@types/react-native-share": "^3.3.3",
"@types/react-test-renderer": "^17.0.1",
"@types/styled-components-react-native": "^5.2.1",
"@typescript-eslint/eslint-plugin": "^5.36.0",
"appcenter-cli": "^2.11.0",
"babel-jest": "^27",
"babel-plugin-module-resolver": "^4.1.0",
"eslint": "^7.32.0",
"eslint-import-resolver-babel-module": "^5.3.1",
"eslint-plugin-detox": "^1.0.0",
"eslint-plugin-import": "^2.26.0",
"eslint-plugin-jest": "^27.0.1",
"eslint-plugin-prettier": "^4.2.1",
"husky": "^8.0.1",
"jest": "^27",
"metro-react-native-babel-preset": "0.72.3",
"prettier": "^2.7.1",
"react-native-flipper-performance-plugin": "^0.3.1",
"react-test-renderer": "18.1.0",
"standard-version": "^9.5.0",
"typescript": "^4.8.2"
},
}
Device
Android Pixel 4 api 31
VisionCamera Version
^2.16.1
Can you reproduce this issue in the VisionCamera Example app?
Yes, I can reproduce the same issue in the Example app here
Additional information
[ ] I am using Expo
[X] I have enabled Frame Processors (react-native-worklets-core)
[X] I have read the Troubleshooting Guide
[X] I agree to follow this project's Code of Conduct
[X] I searched for similar issues in this repository and found none.
I have the same issue while using vision-camera-code-scanner. Need help.
Making following changes should fix the issue - https://github.com/mrousavy/react-native-vision-camera/issues/1307#issuecomment-1731248952
Making following changes should fix the issue - #1307 (comment)
The thread you mentioned is for iOS, and mine is an issue with Android devices.
I was facing the exact same problem, curiously enough. I tried with specific package versions and it worked:
"react-native-vision-camera": "2.15.6"
"react-native-reanimated": "^2.14.4"
"vision-camera-code-scanner": "github:jorgebrunetto/vision-camera-code-scanner"
You can try and see if this works for you too
I was facing the exact same problem, curious enough. I tried with specific package versions and it worked:
"react-native-vision-camera": "2.15.6" "react-native-reanimated": "2.17.0" "vision-camera-code-scanner": "github:jorgebrunetto/vision-camera-code-scanner"
You can try and see if this works for you too
Ey Jorge! And what is the "vision-camera-code-scanner” version you are using? That field is empty in your previous response
It is a fork that the team I am working with was using. Probably using the one you have right now won't impact. The breaking change is in "react-native-vision-camera" from 2.15.6 to 2.16.1 because they changed from CameraX to Camera2
It is a fork that the team I am working with was using. Probably using the one you have right now won't impact. The breaking change is in "react-native-vision-camera" from 2.15.6 to 2.16.1 because they changed from CameraX to Camera2
you just rock it man! With those libraries versions everything is working well in my android device!! 😁
Thanks a lot for your help! 🫶
Now I will check if everything works on the iOS side but I am confident that it will!
I was facing the exact same problem, curious enough. I tried with specific package versions and it worked:
"react-native-vision-camera": "2.15.6" "react-native-reanimated": "2.17.0" "vision-camera-code-scanner": "github:jorgebrunetto/vision-camera-code-scanner"
You can try and see if this works for you too
@JorgeQuevedoC what version of React Native are you using?
I was facing the exact same problem, curious enough. I tried with specific package versions and it worked:
"react-native-vision-camera": "2.15.6" "react-native-reanimated": "2.17.0" "vision-camera-code-scanner": "github:jorgebrunetto/vision-camera-code-scanner"
You can try and see if this works for you too
@JorgeQuevedoC what version of React Native are you using?
Ey @drastus! In my case I am using the version 0.70.9 and with the libraries versions that @JorgeQuevedoC suggested everything is working now! 😄
I was facing the exact same problem, curious enough. I tried with specific package versions and it worked:
"react-native-vision-camera": "2.15.6" "react-native-reanimated": "2.17.0" "vision-camera-code-scanner": "github:jorgebrunetto/vision-camera-code-scanner"
You can try and see if this works for you too
@JorgeQuevedoC what version of React Native are you using?
"react-native": "0.69.7",
Hey this should be fixed in V3
Hey!
JFYI; VisionCamera V3 now includes a QR/Barcode Scanner! 😍 Check out the CodeScanner Documentation 🚀
If you appreciate me dedicating my free time to improving VisionCamera and implementing features like the Code Scanner, please consider sponsoring me on GitHub 💖 to show your support.
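For anyone landing here, a minimal sketch of what scanning looks like with the V3 CodeScanner (based on the docs linked above; treat the exact hook and prop names as approximations and check the documentation):
// Sketch: scan QR codes with VisionCamera V3's CodeScanner.
import { Camera, useCameraDevice, useCodeScanner } from 'react-native-vision-camera'

function Scanner() {
  const device = useCameraDevice('back')
  const codeScanner = useCodeScanner({
    codeTypes: ['qr'],
    onCodeScanned: (codes) => console.log(codes), // called with the detected codes
  })
  if (device == null) return null
  return <Camera style={{ flex: 1 }} device={device} isActive={true} codeScanner={codeScanner} />
}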
@mrousavy VisionCamera V3 doesn't work with react-native 0.70.6 as the Android build fails.
okay 👍
I want to perform all of the functions: perform OCR, do the barcode scanning, and take a photo when I open a camera. Is it possible in vision-camera v3?
[V3] What you need can be found here:
https://www.react-native-vision-camera.com/docs/guides/frame-processor-plugin-list
[V3] I resolved the frameProcessor error through this note. Hope someone will need it
https://github.com/mrousavy/react-native-vision-camera/issues/1776#issuecomment-1766163792
vision-camera-code-scanner
Hi @qamarcloud
did you find any solution to perform all of it in one place?
|
gharchive/issue
| 2023-09-22T07:51:19 |
2025-04-01T04:35:08.416974
|
{
"authors": [
"JorgeQuevedoC",
"PranatoshRoy",
"QSuraj",
"arazaq917",
"drastus",
"fvalles",
"hardik-javascript",
"mrousavy",
"nc-hung",
"qamarcloud"
],
"repo": "mrousavy/react-native-vision-camera",
"url": "https://github.com/mrousavy/react-native-vision-camera/issues/1833",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
545279945
|
Add missing current ruby versions to .travis.yml
Thanks for docile, thought it might be nice to have these along :)
Codecov Report
Merging #40 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #40 +/- ##
=====================================
Coverage 100% 100%
=====================================
Files 6 6
Lines 82 82
=====================================
Hits 82 82
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update def1671...731eb26. Read the comment docs.
Thank you @pragtob!
|
gharchive/pull-request
| 2020-01-04T12:32:37 |
2025-04-01T04:35:08.453956
|
{
"authors": [
"PragTob",
"codecov-io",
"ms-ati"
],
"repo": "ms-ati/docile",
"url": "https://github.com/ms-ati/docile/pull/40",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
723826772
|
[Port] Support kuka_experimental on Windows
We have had a commercial request for Kuka ROS node on Windows.
https://answers.ros.org/question/363614/build-kuka_experimental-package-on-windows-10-ros1-noetic/
Edit to add link to repo: http://wiki.ros.org/kuka_eki_hw_interface
This package depends on industrial core:
https://github.com/ros-industrial/industrial_core
This package depends on industrial core:
https://github.com/ros-industrial/industrial_core
Only for an RViz configuration file.
That dependency is not important.
@gavanderhoorn Thanks for that tip - I started porting it - but will look into the Kuka one. Looking into it further, it was last updated for indigo and still marked experimental. Is there a more current Kuka ROS implementation we should be looking into?
Looking into it further, it was last updated for indigo and still marked experimental. Is there a more current Kuka ROS implementation we should be looking into?
We follow a branch-when-needed policy. Afaik, everything in there can be built on Indigo, Kinetic and Melodic. Hence no new branch.
And I expect only the RSI and EKI hardware_interfaces to require any changes. All other packages are robot support packages, which contain no code.
For what regards the KUKA RSI server, we ported it in the past for an internal project on Windows. At the time we did not port the fixes upstream because the Windows interest seemed low, and also because we needed a non-ROS package to consume it as a dependency, so even if the fix was upstreamed we could not use it. However, if you want to take inspiration, the version that we are using is https://gist.github.com/traversaro/93de691ffb1b5344d1b127dca58233d3 (it is a modified version of https://github.com/ros-industrial/kuka_experimental/blob/indigo-devel/kuka_rsi_hw_interface/include/kuka_rsi_hw_interface/udp_server.h). However, note that, at least for what concerns RSI, it is a protocol that is supposed to exchange data with the robot at 250 Hz, so depending on the specific application it may not be trivial to obtain on Windows the same performance that you obtained (for example) on Linux with the PREEMPT RT patch.
@traversaro wrote:
note that, at least for what concern RSI it a protocol that is suppose to exchange data with the robot at 250 Hz, so depending on the specific application it may not be trivial to obtain on Windows the same performance that you obtained (for example) on a Linux with PREEMPT RT patch.
yeah, I've mentioned something similar to the user requesting this.
They were more interested in EKI though, which will probably be doable.
But I don't guarantee anything.
@traversaro Thanks for the info.
250hz is achievable on Windows using system configuration. (We have documentation incoming which will cover sub-millisecond time criticality)
This would be a good test case for it.
Out of curiosity: are there any updates @ooeygui ?
@gavanderhoorn I was able to borrow two kuka arms, and trying to build a workcell in my garage LOL.
I have a plan for the realtime control loop on Windows.
If you have interested customers, I would love to chat about them!
I was able to borrow two kuka arms, and trying to build a workcell in my garage
:) hm, maybe I should start porting stuff too ..
I created a Windows Staging Fork here: https://github.com/ms-iot/kuka_experimental. We will PR up to ROS industrial once we are able to validate on a robot.
builds on windows now...
https://github.com/ms-iot/kuka_experimental/pull/2
Setting up to test it:
Hi @ooeygui, those seem to be KUKA iiwas, and I am afraid that they do not support the RSI protocol. Did you check with KUKA?
@traversaro well, bummer :-(
As far as I know those robots support another protocol called Fast Research Protocol, but I may be wrong.
Well .. it depends a bit on which version of the IIWA that is.
But it's likely they don't support it.
As far as I know those robots support another protocol called Fast Research Interface (FRI), but I may be wrong.
Older IIWAs would use FRI indeed. With newer you'd use SmartServo/DirectServo.
@gavanderhoorn @traversaro thanks for the chat. I haven't reached out to my contacts at Kuka yet.
These are 2017 models. I am missing a few parts, so haven't been able to power them up yet to get more details.
Is your "other project" going to / interested in using ROS 2 with those robots? If so, send me an email.
@ooeygui: what's the status here? You mentioned (here or on ROS Answers) you had some dependencies of the RSI / EKI packages which also needed some patches. Would that be ros_control/hardware_interface by any chance?
It seems we're running into similar issues with abb_robot_driver. I've @-mentioned you there.
@gavanderhoorn Thanks for the ping on this. I have a PR into the ms-iot staging branch, but haven't had an opportunity to test it. https://github.com/ms-iot/kuka_experimental/pull/2. I haven't investigated the ros_control issues, I'm going to work with @lilustga who is the manipulation lead in AERo.
I've left a comment on the PR.
I believe we're running into similar problems with base ros_control and ERROR in https://github.com/ros-industrial/abb_robot_driver/issues/17.
|
gharchive/issue
| 2020-10-17T18:57:33 |
2025-04-01T04:35:08.469494
|
{
"authors": [
"gavanderhoorn",
"ooeygui",
"traversaro"
],
"repo": "ms-iot/ROSOnWindows",
"url": "https://github.com/ms-iot/ROSOnWindows/issues/294",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
425873955
|
Set MAC address for LAN7800 of RPI3B+
In the mailbox, queries and saves the MAC address for the LAN. When the NDIS interface notification is received, sets this MAC address in NetworkAddress in the NDIS software key if the LanPropertyChange registry flag is not set.
During system boot, a service thread RpiLanPropertyChange is started and checks the RpiLanPropertyChange flag. If a new MAC address is to be applied (RpiLanPropertyChange == 1), it restarts the Lan7800 and sets the RpiLanPropertyChange flag to 2.
Test results:
Testcase.xlsx
There are some dummy differences since the last check-in. I will close this and reopen a new one.
|
gharchive/pull-request
| 2019-03-27T10:21:00 |
2025-04-01T04:35:08.471838
|
{
"authors": [
"henie"
],
"repo": "ms-iot/rpi-iotcore",
"url": "https://github.com/ms-iot/rpi-iotcore/pull/50",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
602264158
|
Adds an option for inline parsing of elements
Why:
In some situations we might want to render Surface code inline into
other pieces of Surface code
This change addresses the need by:
Adding an optional i to the H sigil that removes the final newline
Note: Not even sure if we want this, but it came up in #71 and I decided to give it a go.
Hi @zamith!
Is there a similar i modifier in ~L and ~E? If so, I have no problem adding this feature to keep consistency between them. Otherwise, I think we can just use \ at the end which achieves the same goal. Example:
~H"""
<div>
<span>Whatever</span>
</div>\
"""
Doesn't seem like it. I guess that workaround is fine. I haven't actually needed it yet, to be honest.
I was trying to figure out where the \n was coming from which is also a good opportunity for me to learn a bit more about the internals of Surface. I'm more than happy to close this PR.
|
gharchive/pull-request
| 2020-04-17T22:20:23 |
2025-04-01T04:35:08.475453
|
{
"authors": [
"msaraiva",
"zamith"
],
"repo": "msaraiva/surface",
"url": "https://github.com/msaraiva/surface/pull/72",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
489581742
|
error
AAPT: error: resource attr/colorOnSurface not found
Are you using MaterialComponents theme or AppCompat?
|
gharchive/issue
| 2019-09-05T07:38:52 |
2025-04-01T04:35:08.476320
|
{
"authors": [
"ddthanh198",
"msasikanth"
],
"repo": "msasikanth/ColorSheet",
"url": "https://github.com/msasikanth/ColorSheet/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
496129865
|
[DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues
Node issues the warnings listed below.
These are caused by https://github.com/mscdex/node-ftp/blob/master/lib/connection.js#L53
See https://medium.com/@jasnell/node-js-buffer-api-changes-3c21f1048f97 for more information.
(node:19124) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
at showFlaggedDeprecation (buffer.js:159:11)
at new Buffer (buffer.js:174:3)
at Object.<anonymous> (path/to/node_modules/ftp/lib/connection.js:52:17)
at Module._compile (internal/modules/cjs/loader.js:778:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
at Module.load (internal/modules/cjs/loader.js:653:32)
at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
at Function.Module._load (internal/modules/cjs/loader.js:585:3)
at Module.require (internal/modules/cjs/loader.js:692:17)
at require (internal/modules/cjs/helpers.js:25:18)
Same issue as above; data-uri-to-buffer is pinned to version "1":
node_modules/get-uri/package.json
"dependencies": {
"data-uri-to-buffer": "1",
"debug": "2",
"extend": "~3.0.2",
"file-uri-to-path": "1",
"ftp": "~0.3.10",
"readable-stream": "2"
},
I changed it manually to:
bytesNOOP = Buffer.from('NOOP\r\n'); // note: Buffer.from is called without `new`
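For reference, a minimal sketch of the migration (assuming Node >= 4.5, where Buffer.from and Buffer.alloc exist):
// Deprecated (triggers DEP0005):
// var bytesNOOP = new Buffer('NOOP\r\n');
// Replacements: Buffer.from for existing data, Buffer.alloc for zero-filled memory.
var bytesNOOP = Buffer.from('NOOP\r\n'); // copies the string into a fresh Buffer
var scratch = Buffer.alloc(16);          // 16 zero-initialised bytes
console.log(bytesNOOP.toString());       // prints NOOP followed by CRLF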
The same error appears with electron^9 and makes startup slow.
https://github.com/mscdex/node-ftp/pull/230 fixes this
|
gharchive/issue
| 2019-09-20T03:50:52 |
2025-04-01T04:35:08.479814
|
{
"authors": [
"baerrach",
"frosas",
"samjegal",
"tutanra"
],
"repo": "mscdex/node-ftp",
"url": "https://github.com/mscdex/node-ftp/issues/255",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1740438879
|
feat: add spring integration
add spring support.
Any interface annotated with OpenAiInterface will automatically have an OpenAi proxy bean registered.
Required configuration properties (api key, base url, etc) are injected via spring properties.
The api call will be done by OpenAiApiClient, which internally uses Spring Webclient and is configured as a singleton bean
Hey @mscheong01 @davin111 - why use a proxy for OpenAI?
Hi @krrishdholakia 👋
The main idea kf this project is that you could define and use LLM tasks as local components - define an interface with functions that represent tasks that should be done by the LLM, and then you would be able to use(invoke) it without implementing it through our dynamic proxy generation feature.
Let me know if you have other questions 😀
|
gharchive/pull-request
| 2023-06-04T15:40:37 |
2025-04-01T04:35:08.482049
|
{
"authors": [
"krrishdholakia",
"mscheong01"
],
"repo": "mscheong01/interfAIce",
"url": "https://github.com/mscheong01/interfAIce/pull/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2162218889
|
🛑 Do53 Roost IPv6 UDP is down
In aced301, Do53 Roost IPv6 UDP (http://107.189.10.142:9204) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Do53 Roost IPv6 UDP is back up in 60a2013 after 36 minutes.
|
gharchive/issue
| 2024-02-29T22:39:36 |
2025-04-01T04:35:08.485823
|
{
"authors": [
"mschirrmeister"
],
"repo": "mschirrmeister/upptime-loopx",
"url": "https://github.com/mschirrmeister/upptime-loopx/issues/1289",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2214599377
|
🛑 Do53 Roost IPv4 TCP is down
In 4e216b0, Do53 Roost IPv4 TCP (http://107.189.10.142:9203) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Do53 Roost IPv4 TCP is back up in 2ea9e16 after 31 minutes.
|
gharchive/issue
| 2024-03-29T03:51:03 |
2025-04-01T04:35:08.488160
|
{
"authors": [
"mschirrmeister"
],
"repo": "mschirrmeister/upptime-loopx",
"url": "https://github.com/mschirrmeister/upptime-loopx/issues/4380",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
392923948
|
Publish/Subscribe Enquiry
Hello, is it possible for one consumer to listen to more than one queue? If both queues are filled with messages, is there any setting to mark one queue as high priority so that the program always consumes messages from the high-priority queue first?
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 9425297d-8ddb-c092-5c2a-79015f4e8954
Version Independent ID: 95fcd443-8a9f-0838-e73a-c6578f2d57f8
Content: Competing Consumers
Content Source: docs/patterns/competing-consumers.md
Service: guidance
GitHub Login: @dragon119
Microsoft Alias: pnp
In terms of the design pattern, yes a consumer can listen to multiple queues. Having a "priority" queue and "normal" queue is a common approach. The implementation will depend on what queue mechanism you use.
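For illustration, a minimal broker-agnostic sketch of that approach (in-memory arrays standing in for real queues; all names here are made up):
// One consumer draining the high-priority queue before the normal one.
const highPriority = [];
const normal = [];

function consumeNext() {
  // Prefer the high-priority queue whenever it has messages.
  const message = highPriority.length > 0 ? highPriority.shift() : normal.shift();
  if (message !== undefined) {
    console.log('processing', message);
  }
}

highPriority.push('urgent-1');
normal.push('routine-1');
consumeNext(); // processing urgent-1
consumeNext(); // processing routine-1
With a real broker the same shape applies: poll the priority queue first, and fall back to the normal queue only when it is empty.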
|
gharchive/issue
| 2018-12-20T07:33:56 |
2025-04-01T04:35:08.555671
|
{
"authors": [
"CWJie91",
"MikeWasson"
],
"repo": "mspnp/architecture-center",
"url": "https://github.com/mspnp/architecture-center/issues/1112",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
377470048
|
Fix link
Fixes #915
:white_check_mark: Validation status: passed
File: docs/reference-architectures/serverless/web-app.md | Status: :white_check_mark: Succeeded | Preview URL: View
For more details, please refer to the build report.
Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
|
gharchive/pull-request
| 2018-11-05T16:10:48 |
2025-04-01T04:35:08.559224
|
{
"authors": [
"MikeWasson",
"VSC-Service-Account"
],
"repo": "mspnp/architecture-center",
"url": "https://github.com/mspnp/architecture-center/pull/918",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2586714628
|
Load model locally
Hi,
I'm making an app that uses Tauri, and I want to bundle the model with the application (since I want it to be offline). Are there ways to load the model locally?
If I understand correctly, you need to load a local model, right?
Well, Tauri uses the system's browser to render the app, so I guess you can call it a 'web' app.
But it does run WASM.
Here, I put a test to see if WebAssembly works or not:
Here's the result:
So yes, it does support WASM. My question is whether there's a way to load the model instead of letting the user download it. if not, are there some ways to achieve that?
In that case, you can download a model via a call to createModel with your selected URL, path, and id.
This is the norm. I don't recall the user having to download the model themselves. A call to createModel will fetch and cache the model for you.
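Roughly, the call looks like this (a sketch only; the exact argument names and order are assumptions based on the description above, not verified against Vosklet's docs):
// Hypothetical usage; the URL, cache path, and id below are placeholders.
const model = await createModel(
  'https://example.com/vosk-model-small-en-us.tar.gz', // model URL (placeholder)
  'model-cache',                                       // local cache path (assumed)
  'en-us-small'                                        // model id (assumed)
);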
Okay, I did what you said.
But I ran into a problem; I'm getting this error:
10-15 20:31:05.080 7532 7532 E Tauri/Console: File: http://tauri.localhost/js/Vosklet.js - Line 1 - Msg: Uncaught (in promise) DataCloneError: Failed to execute 'postMessage' on 'Worker': SharedArrayBuffer transfer requires self.crossOriginIsolated.
I looked around and found out that the CORS policy is blocking Tauri's localhost. They said I should use its HTTP plugin. How can I edit Vosklet.js to make it use Tauri's HTTP plugin?
Can you somehow use that plugin to inject the COOP and COEP headers into your app using that HTTP client?
Hi @msqr1,
I am trying to add new language models to Vosklet. How can I do that? BTW I am engaged in a linguistic project and need more assistance. Please contact me at
mehryar100@yahoo.de to explain more about my project.
We can talk about that over in #18. It seemed off topic here.
|
gharchive/issue
| 2024-10-14T18:03:15 |
2025-04-01T04:35:08.566951
|
{
"authors": [
"AhmedSawx",
"mehryar100",
"msqr1"
],
"repo": "msqr1/Vosklet",
"url": "https://github.com/msqr1/Vosklet/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
705723667
|
winpthreads install from staging broken
installing mingw-w64-i686-libwinpthread-git (8.0.0.6001.98dad1fe-1) breaks dependency 'mingw-w64-i686-libwinpthread-git=8.0.0.5906.c9a21571' required by mingw-w64-i686-winpthreads-git
I guess we should depend on all the build results of a package instead of the one specified.
fixed via a0c4802fdb3bd75c46c9
|
gharchive/issue
| 2020-09-21T15:55:08 |
2025-04-01T04:35:08.662308
|
{
"authors": [
"lazka"
],
"repo": "msys2/msys2-autobuild",
"url": "https://github.com/msys2/msys2-autobuild/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1708807288
|
🛑 https://detulkarm.edu.ps is down
In 1f03a9f, https://detulkarm.edu.ps (https://detulkarm.edu.ps) was down:
HTTP code: 0
Response time: 0 ms
Resolved: https://detulkarm.edu.ps is back up in a772c17.
|
gharchive/issue
| 2023-05-14T03:52:19 |
2025-04-01T04:35:08.701971
|
{
"authors": [
"mtitservice"
],
"repo": "mtitservice/site",
"url": "https://github.com/mtitservice/site/issues/12215",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1570344826
|
🛑 https://mtit.pna.ps is down
In 8d3adbd, https://mtit.pna.ps (https://mtit.pna.ps) was down:
HTTP code: 0
Response time: 0 ms
Resolved: https://mtit.pna.ps is back up in e7c4b0d.
|
gharchive/issue
| 2023-02-03T19:36:35 |
2025-04-01T04:35:08.704924
|
{
"authors": [
"mtitservice"
],
"repo": "mtitservice/site",
"url": "https://github.com/mtitservice/site/issues/197",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1620165472
|
🛑 https://detulkarm.edu.ps is down
In 170ea1b, https://detulkarm.edu.ps (https://detulkarm.edu.ps) was down:
HTTP code: 0
Response time: 0 ms
Resolved: https://detulkarm.edu.ps is back up in 7331a71.
|
gharchive/issue
| 2023-03-11T22:31:26 |
2025-04-01T04:35:08.708209
|
{
"authors": [
"mtitservice"
],
"repo": "mtitservice/site",
"url": "https://github.com/mtitservice/site/issues/2607",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1630867432
|
🛑 https://palpost.ps is down
In 8cc79f3, https://palpost.ps (https://palpost.ps) was down:
HTTP code: 0
Response time: 0 ms
Resolved: https://palpost.ps is back up in 7c4905b.
|
gharchive/issue
| 2023-03-19T10:05:36 |
2025-04-01T04:35:08.711084
|
{
"authors": [
"mtitservice"
],
"repo": "mtitservice/site",
"url": "https://github.com/mtitservice/site/issues/3619",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1634421003
|
🛑 https://salfeet.plo.ps is down
In 2009d52, https://salfeet.plo.ps (https://salfeet.plo.ps) was down:
HTTP code: 0
Response time: 0 ms
Resolved: https://salfeet.plo.ps is back up in deb9223.
|
gharchive/issue
| 2023-03-21T17:38:33 |
2025-04-01T04:35:08.714006
|
{
"authors": [
"mtitservice"
],
"repo": "mtitservice/site",
"url": "https://github.com/mtitservice/site/issues/3967",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1593643672
|
🛑 https://ejrs.moj.pna.ps is down
In 3ed74d8, https://ejrs.moj.pna.ps (https://ejrs.moj.pna.ps) was down:
HTTP code: 0
Response time: 0 ms
Resolved: https://ejrs.moj.pna.ps is back up in ecb0f71.
|
gharchive/issue
| 2023-02-21T15:06:38 |
2025-04-01T04:35:08.716915
|
{
"authors": [
"mtitservice"
],
"repo": "mtitservice/site",
"url": "https://github.com/mtitservice/site/issues/614",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1689392411
|
🛑 https://reports.mtit.pna.ps is down
In 5b481b2, https://reports.mtit.pna.ps (https://reports.mtit.pna.ps) was down:
HTTP code: 0
Response time: 0 ms
Resolved: https://reports.mtit.pna.ps is back up in c6ac439.
|
gharchive/issue
| 2023-04-29T04:58:09 |
2025-04-01T04:35:08.720068
|
{
"authors": [
"mtitservice"
],
"repo": "mtitservice/site",
"url": "https://github.com/mtitservice/site/issues/9714",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1719709375
|
Stack overflow
I have a relatively small Python program that I tried running pylyzer on. In (and outside) my virtual environment I execute pylyzer /path/to/project/main_of_project.py and I get:
cannot open '/path/to/project/source/utils/dependency1.py': [Errno 2] No such file or directory (os error 2)
cannot open '/path/to/project/source/utils/dependency2.py': [Errno 2] No such file or directory (os error 2)
cannot open '/path/to/project/source/dependency3.py': [Errno 2] No such file or directory (os error 2)
cannot open '/path/to/project/source/dependency4.py': [Errno 2] No such file or directory (os error 2)
cannot open '/path/to/project/source/dependency5.py': [Errno 2] No such file or directory (os error 2)
thread 'main' has overflowed its stack
fatal runtime error: stack overflow
Aborted (core dumped)
All dependency files are of course available.
Python: 3.11
OS: Kubuntu 23.04 x86_64
Kernel: 6.2.0-20-generic
Pylyzer: 0.0.27
Please update pylyzer to the latest version. If that does not solve the problem, please report the smallest code that reproduces the problem.
Fixed, thanks!
(tested with v0.0.29)
|
gharchive/issue
| 2023-05-22T13:34:42 |
2025-04-01T04:35:08.843120
|
{
"authors": [
"jp-berg",
"mtshiba"
],
"repo": "mtshiba/pylyzer",
"url": "https://github.com/mtshiba/pylyzer/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
849578989
|
Extension issue
Issue Type: Bug
Extension Name: sqltools
Extension Version: 0.23.0
OS Version: Windows_NT x64 10.0.19042
VS Code version: 1.55.0
:warning: We have written the needed data into your clipboard. Please paste! :warning:
Looks like there should have been some data associated with the bug that was copied to your clipboard and you need to paste? That info alone doesn't provide enough to troubleshoot the issue.
|
gharchive/issue
| 2021-04-03T05:02:41 |
2025-04-01T04:35:08.845232
|
{
"authors": [
"davidshq",
"samdruster"
],
"repo": "mtxr/vscode-sqltools",
"url": "https://github.com/mtxr/vscode-sqltools/issues/788",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
351448153
|
detect latency
Hi.
I'm sorry to post a question here and not really an issue.
Would it be possible to detect latency, like DataChannel.js's channel.onmessage = function(message, userid, latency) { }?
cheers
Please try this. Then manually call detectLatency for the person who wants to detect latency:
var latency_PING_sentAt;
var latencies = [];
// Send a PING and record when it was sent; samples are collected until we have 60.
function detectLatency(dontReset) {
if (!dontReset) {
latencies = [];
}
latency_PING_sentAt = (new Date).toISOString();
connection.send('PING_For_LATENCY_detection');
}
connection.onmessage = function(e) {
if (e.data === 'PING_For_LATENCY_detection') {
var latency_PONG_sentAt = (new Date).toISOString();
connection.send({
latency_PONG_sentAt: latency_PONG_sentAt
});
}
if (e.data.latency_PONG_sentAt) {
var latency = new Date(e.data.latency_PONG_sentAt) - new Date(latency_PING_sentAt);
if (latencies.length < 60) {
latencies.push(latency);
detectLatency(true); // retry until we get 60 latencies
return;
}
// now we need to detect average latency
var sum = 0;
for (var i = 0; i < latencies.length; i++) {
sum += parseInt(latencies[i], 10);
}
var avg = sum / latencies.length;
alert('Average latency in milliseconds: ' + avg + '\nAll latencies: ' + latencies.join(', '));
}
};
You can add a button:
document.querySelector('#btn-detect-latency').onclick = function() {
this.disabled = true;
detectLatency();
};
Remember, the above code requires connection.session.data = true. You can try the following demo or its console (Chrome dev tools):
https://rtcmulticonnection.herokuapp.com/demos/TextChat+FileSharing.html
|
gharchive/issue
| 2018-08-17T03:52:59 |
2025-04-01T04:35:08.876011
|
{
"authors": [
"gilfuser",
"muaz-khan"
],
"repo": "muaz-khan/RTCMultiConnection",
"url": "https://github.com/muaz-khan/RTCMultiConnection/issues/619",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
278675963
|
Connection to Signalling Server is failing
https://webrtcweb.com:9559
Error in chrome - ERR_CONNECTION_REFUSED.
@muaz-khan Can you please check if the service is running ?
Fixed.
@muaz-khan Thanks for the prompt fix.
Really great work 👍
|
gharchive/issue
| 2017-12-02T11:13:07 |
2025-04-01T04:35:08.877828
|
{
"authors": [
"jssisodiyaPG",
"muaz-khan"
],
"repo": "muaz-khan/WebRTC-Experiment",
"url": "https://github.com/muaz-khan/WebRTC-Experiment/issues/545",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2429086382
|
Removal of data from objects not working as expected
See test 1.3.
https://github.com/muchdogesec/arango_cti_processor/tree/adding-tests/tests#test-13-perform-another-update-to-change-capec-attack-pattern---attck-attack-pattern-relationship-capec-attack
In this test we have
[
{
"_key": "attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a+2024-07-25T05:42:04.725297Z",
"_id": "mitre_capec_vertex_collection/attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a+2024-07-25T05:42:04.725297Z",
"_rev": "_iMfnHdO--B",
"created": "2014-06-23T00:00:00.000Z",
"created_by_ref": "identity--e50ab59c-5c4f-4d40-bf6a-d58418d89bcd",
"description": "In this attack pattern, the adversary monitors network traffic between nodes of a public or multicast network in an attempt to capture sensitive information at the protocol level. Network sniffing applications can reveal TCP/IP, DNS, Ethernet, and other low-level network communication information. The adversary takes a passive role in this attack pattern and simply observes and analyzes the traffic. The adversary may precipitate or indirectly influence the content of the observed transaction, but is never the intended recipient of the target information.",
"external_references": [
{
"external_id": "CAPEC-158",
"source_name": "capec",
"url": "https://capec.mitre.org/data/definitions/158.html"
},
{
"external_id": "CWE-311",
"source_name": "cwe",
"url": "http://cwe.mitre.org/data/definitions/311.html"
},
{
"description": "Network Sniffing",
"external_id": "T1040",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1040"
},
{
"description": "Multi-Factor Authentication Interception",
"external_id": "T1111",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1111"
}
],
"id": "attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a",
"modified": "2024-01-15T00:00:00.000Z",
"name": "UPDATE OBJECT 3RD TIME",
"object_marking_refs": [
"marking-definition--17d82bb2-eeeb-4898-bda5-3ddbcd2b799d"
],
"spec_version": "2.1",
"type": "attack-pattern",
"x_capec_abstraction": "Detailed",
"x_capec_can_follow_refs": [
"attack-pattern--c9b31907-c466-4325-af55-c418aea8b964"
],
"x_capec_child_of_refs": [
"attack-pattern--bdcdc784-d891-4ca8-847b-38ddca37a6ec"
],
"x_capec_consequences": {
"Confidentiality": [
"Read Data"
]
},
"x_capec_domains": [
"Communications",
"Software"
],
"x_capec_prerequisites": [
"The target must be communicating on a network protocol visible by a network sniffing application.",
"The adversary must obtain a logical position on the network from intercepting target network traffic is possible. Depending on the network topology, traffic sniffing may be simple or challenging. If both the target sender and target recipient are members of a single subnet, the adversary must also be on that subnet in order to see their traffic communication."
],
"x_capec_resources_required": [
"A tool with the capability of presenting network communication traffic (e.g., Wireshark, tcpdump, Cain and Abel, etc.)."
],
"x_capec_skills_required": {
"Low": "Adversaries can obtain and set up open-source network sniffing tools easily."
},
"x_capec_status": "Draft",
"x_capec_typical_severity": "Medium",
"x_capec_version": "3.9",
"_bundle_id": "bundle--641bf5e8-d108-40be-9552-802e033aa4ea",
"_file_name": "arango-cti-capec-attack-update-3.json",
"_stix2arango_note": "v3.12",
"_record_md5_hash": "02696fe777f5474565c54925a0accfd8",
"_is_latest": true,
"_record_created": "2024-07-25T05:42:04.725297Z",
"_record_modified": "2024-07-25T05:42:04.725297Z"
}
]
In the previous test it was
[
{
"_key": "attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a+2024-07-25T05:41:48.366933Z",
"_id": "mitre_capec_vertex_collection/attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a+2024-07-25T05:41:48.366933Z",
"_rev": "_iMfnHdO--A",
"created": "2014-06-23T00:00:00.000Z",
"created_by_ref": "identity--e50ab59c-5c4f-4d40-bf6a-d58418d89bcd",
"description": "In this attack pattern, the adversary monitors network traffic between nodes of a public or multicast network in an attempt to capture sensitive information at the protocol level. Network sniffing applications can reveal TCP/IP, DNS, Ethernet, and other low-level network communication information. The adversary takes a passive role in this attack pattern and simply observes and analyzes the traffic. The adversary may precipitate or indirectly influence the content of the observed transaction, but is never the intended recipient of the target information.",
"external_references": [
{
"external_id": "CAPEC-158",
"source_name": "capec",
"url": "https://capec.mitre.org/data/definitions/158.html"
},
{
"external_id": "CWE-311",
"source_name": "cwe",
"url": "http://cwe.mitre.org/data/definitions/311.html"
},
{
"description": "Network Sniffing",
"external_id": "T1040",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1040"
},
{
"description": "Multi-Factor Authentication Interception",
"external_id": "T1111",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1111"
},
{
"description": "Acquire Access",
"external_id": "T1650",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1650"
},
{
"description": "Hijack Execution Flow: ServicesFile Permissions Weakness",
"external_id": "T1574.010",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1574/010"
}
],
"id": "attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a",
"modified": "2024-01-01T00:00:00.000Z",
"name": "UPDATE OBJECT 2ND TIME",
"object_marking_refs": [
"marking-definition--17d82bb2-eeeb-4898-bda5-3ddbcd2b799d"
],
"spec_version": "2.1",
"type": "attack-pattern",
"x_capec_abstraction": "Detailed",
"x_capec_can_follow_refs": [
"attack-pattern--c9b31907-c466-4325-af55-c418aea8b964"
],
"x_capec_child_of_refs": [
"attack-pattern--bdcdc784-d891-4ca8-847b-38ddca37a6ec"
],
"x_capec_consequences": {
"Confidentiality": [
"Read Data"
]
},
"x_capec_domains": [
"Communications",
"Software"
],
"x_capec_prerequisites": [
"The target must be communicating on a network protocol visible by a network sniffing application.",
"The adversary must obtain a logical position on the network from intercepting target network traffic is possible. Depending on the network topology, traffic sniffing may be simple or challenging. If both the target sender and target recipient are members of a single subnet, the adversary must also be on that subnet in order to see their traffic communication."
],
"x_capec_resources_required": [
"A tool with the capability of presenting network communication traffic (e.g., Wireshark, tcpdump, Cain and Abel, etc.)."
],
"x_capec_skills_required": {
"Low": "Adversaries can obtain and set up open-source network sniffing tools easily."
},
"x_capec_status": "Draft",
"x_capec_typical_severity": "Medium",
"x_capec_version": "3.9",
"_bundle_id": "bundle--7222bcf4-2bd1-454e-bc7e-82583f1f7e64",
"_file_name": "arango-cti-capec-attack-update-2.json",
"_stix2arango_note": "v3.11",
"_record_md5_hash": "3c016e853de5e3c7e63547e61c955206",
"_is_latest": false,
"_record_created": "2024-07-25T05:41:48.366933Z",
"_record_modified": "2024-07-25T05:41:48.366933Z"
}
]
"external_id": "T1040", -> in both (creates 2 relationships)
"external_id": "T1111", -> in both (creates 2 relationships)
"external_id": "T1650", -> not in latest (creates 1 relationships)
"external_id": "T1574.010",, -> not in latest (creates 1 relationships)
Thus test 2 should return 15 results: the oldest version (1.0) of CAPEC-158 had 4 ATT&CK references, old version 1.1 had 5, and old version 1.2 had 6 (4 + 5 + 6 = 15; two of those references, T1040 and T1111, still remain in the latest version).
def test_02_updated_capec158_old_relationships(self):
query = """
RETURN COUNT(
FOR doc IN mitre_capec_edge_collection
FILTER doc.source_ref == "attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a"
AND doc._is_latest == false
AND doc._arango_cti_processor_note == "capec-attack"
RETURN doc
)
"""
result_count = self.run_query(query)
self.assertEqual(result_count, [15], f"Expected 15 documents, but found {result_count}.")
Then the new object should have 4 refs
Test 3 should return 4 results, because the new object has both T1040 (1 course-of-action, 1 attack pattern) and T1111 (1 course-of-action, 1 attack pattern)
def test_03_updated_capec158_new_relationships(self):
query = """
RETURN COUNT(
FOR doc IN mitre_capec_edge_collection
FILTER doc.source_ref == "attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a"
AND doc._is_latest == true
AND doc._arango_cti_processor_note == "capec-attack"
RETURN doc
)
"""
result_count = self.run_query(query)
self.assertEqual(result_count, [4], f"Expected 4 documents, but found {result_count}.")
Update behaviour for SROs created by this script described here: https://github.com/muchdogesec/arango_cti_processor/blob/adding-tests/docs/README.md#updating-sros-on-subsequent-runs
Can be tested using: https://github.com/muchdogesec/arango_cti_processor/blob/main/tests/tests.md#test-14-perform-another-update-to-change-capec-attack-pattern---attck-attack-pattern-relationship-capec-attack
@fqrious on Slack you said
https://github.com/muchdogesec/arango_cti_processor/blob/adding-tests/docs/README.md#updating-sros-on-subsequent-runs
"Similarly, when a record is removed from a source object (e.g ATT&CK reference removed from a CAPEC object), the object removed between updates is marked at _is_latest=false, but no new object recreated for it (because it no longer exist in latest version of source object)"
This affects multiple projects, for example ATS is supposed to only return objects where _is_latest==true on manifest/objects, and if this is implemented, old SROs will just be missing from the manifest (unless version is set to all in query)
This is not quite right...
stix2arango logic is fairly simplistic: if the md5 of an object changes, add the new one (as _is_latest=true), and mark _is_latest=false on all old versions.
arango_cti_processor is slightly different...
Here it is only concerned with creating relationships (SROs) between objects being changed (by stix2arango imports)
Let me use an example (test 1.3: https://github.com/muchdogesec/arango_cti_processor/blob/adding-tests/tests/README.md#test-13-perform-another-update-to-change-capec-attack-pattern---attck-attack-pattern-relationship-capec-attack)
1.2
"external_references": [
{
"external_id": "CAPEC-158",
"source_name": "capec",
"url": "https://capec.mitre.org/data/definitions/158.html"
},
{
"external_id": "CWE-311",
"source_name": "cwe",
"url": "http://cwe.mitre.org/data/definitions/311.html"
},
{
"description": "Network Sniffing",
"external_id": "T1040",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1040"
},
{
"description": "Multi-Factor Authentication Interception",
"external_id": "T1111",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1111"
},
{
"description": "Acquire Access",
"external_id": "T1650",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1650"
},
{
"description": "Hijack Execution Flow: ServicesFile Permissions Weakness",
"external_id": "T1574.010",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1574/010"
}
],
"id": "attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a",
"modified": "2024-01-01T00:00:00.000Z",
1.3
"external_references": [
{
"external_id": "CAPEC-158",
"source_name": "capec",
"url": "https://capec.mitre.org/data/definitions/158.html"
},
{
"external_id": "CWE-311",
"source_name": "cwe",
"url": "http://cwe.mitre.org/data/definitions/311.html"
},
{
"description": "Network Sniffing",
"external_id": "T1040",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1040"
},
{
"description": "Multi-Factor Authentication Interception",
"external_id": "T1111",
"source_name": "ATTACK",
"url": "https://attack.mitre.org/wiki/Technique/T1111"
}
],
"id": "attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a",
"modified": "2024-01-15T00:00:00.000Z",
"name": "UPDATE OBJECT 3RD TIME",
Two of the ATT&CK references inside the CAPEC object (attack-pattern--897a5506-45bb-4f6f-96e7-55f4c0b9021a) are removed between 1.2 and 1.3, with two remaining, T1040 and T1111 (4 ATT&CK links in total). T1650 and T1574.010 are removed (2 ATT&CK links in total). This is now the same as the original stix-capec-v3.9.json object
stix2arango handles the logic of updating and ageing out these objects based on the md5 hashes changing for the id. This is the logic ATS uses and it works fine.
Now, in 1.2 arango_cti_processor created 6 SROs to link the CAPEC object to ATT&CK. In 1.2, you will also see arango_cti_processor set doc._is_latest == false for all relationship objects from previous tests
This logic works fine when objects are added.
e.g. in 1.0 4 objects created, in 1.1 5 objects created (4 marked as old from 1.0), in 1.2 6 objects created (5 marked as old from 1.1)
However, when objects are removed from the source object between updates, this logic breaks down.
To solve this, ACTIP's behaviour could be: check for changes to an object id (which would cause a stix2arango change), then check whether the data relevant to the mode has changed (e.g. for capec-attack, a change in ATT&CK refs). If such a change is detected, mark all relationships ACTIP previously created from this source object (for the matching mode) as _is_latest=false, and then recreate the new relationship objects. A sketch of that flow is below.
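A language-agnostic sketch of that flow (plain JavaScript here purely for illustration; the function and field names are hypothetical, not ACTIP's actual API):
// Hypothetical refresh: retire every edge previously created for this
// source object + mode, then recreate one edge per ref in the latest version.
function refreshRelationships(sourceObject, existingEdges, latestTargetRefs) {
  for (const edge of existingEdges) {
    edge._is_latest = false; // age out all old SROs for this source/mode
  }
  return latestTargetRefs.map((targetRef) => ({
    source_ref: sourceObject.id,
    target_ref: targetRef,
    _is_latest: true, // only edges matching the latest refs stay current
  }));
}
This handles both additions and removals uniformly, at the cost of recreating edges that did not change.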
|
gharchive/issue
| 2024-07-25T05:55:32 |
2025-04-01T04:35:08.899520
|
{
"authors": [
"himynamesdave"
],
"repo": "muchdogesec/arango_cti_processor",
"url": "https://github.com/muchdogesec/arango_cti_processor/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1194195325
|
[Feature/element] URL bar completely white
Sanity checks (must complete)
[x] I have read and followed the installation instructions in the README
[x] I have not modified the userChrome.css file
[x] I have tested the latest release for my Firefox version, or commit on master/beta branch (beta is for Firefox Beta only)
Describe the bug
The url bar and box appear fully white, making text unreadable
To Reproduce
Steps to reproduce the behavior:
Use any theme in the latest version, even default ones
Screenshots
System info
OS: [e.g. Windows 10]
Firefox version: 100.0b2
I had the same problem on macOS 12.3.1 with Firefox 100.0b2.
I changed the --toolbar-field-background-color, --toolbar-hover-background-color, and --toolbar-field-focus-background-color value in the variables.css file to something other than a hsl(), and it works. rgb(43, 42, 51) is a close enough color for the Firefox dark theme.
I changed the --toolbar-field-background-color, --toolbar-hover-background-color, and --toolbar-field-focus-background-color value in the variables.css file to something other than a hsl(), and it works. rgb(43, 42, 51) is a close enough color for the Firefox dark theme.
Thanks a lot. I also succeeded.
@missuo thanks, that worked for me.
I changed the --toolbar-field-background-color, --toolbar-hover-background-color, and --toolbar-field-focus-background-color value in the variables.css file to something other than a hsl(), and it works. rgb(43, 42, 51) is a close enough color for the Firefox dark theme.
Worked for me as well, thanks! Just a note that for me it was --toolbar-field-hover-background-color instead of --toolbar-hover-background-color in case anyone runs into this in their variables.css file. The other two variables had the same name as yours though.
This theme is the best style for Firefox, and it is one of the reasons that I still use Firefox, not Chrome.
I changed the --toolbar-field-background-color, --toolbar-hover-background-color, and --toolbar-field-focus-background-color value in the variables.css file to something other than a hsl(), and it works. rgb(43, 42, 51) is a close enough color for the Firefox dark theme.
I use a Mac, so now it works great in dark mode, but when it switches to light mode the bar stays dark, is there any way to make a real fix so that way it switches between the two like it used to
when it switches to light mode the bar stays dark, is there any way to make a real fix so that way it switches between the two like it used to
I added the following around line 170 in variables.css (under :root:not(:-moz-lwtheme):not([privatebrowsingmode=temporary])):
--toolbar-field-background-color: rgb(241, 243, 245) !important;
--toolbar-field-hover-background-color: rgb(232, 235, 236) !important;
--toolbar-field-focus-background-color: rgb(255, 255, 255) !important;
when it switches to light mode the bar stays dark, is there any way to make a real fix so that way it switches between the two like it used to
I added the following around line 170 in variables.css (under :root:not(:-moz-lwtheme):not([privatebrowsingmode=temporary])):
--toolbar-field-background-color: rgb(241, 243, 245) !important;
--toolbar-field-hover-background-color: rgb(232, 235, 236) !important;
--toolbar-field-focus-background-color: rgb(255, 255, 255) !important;
It almost worked: it matches the colour, but now there are weird translucent results
Any plans to fix this in the latest version of this?
Same issue here
I'd argue the colour it's meant to be is rgb(28, 27, 34)
Does anyone actually bother with this?
Guess I have to switch to Chrome since this issue essentially makes this theme unusable
The CSS selector used to detect a dark theme (:-moz-lwtheme-brighttext) seems not to work anymore:
https://github.com/muckSponge/MaterialFox/blob/8b1a7b10da0751a1f63e51eac428ba1eb1094b5f/chrome/global/variables.css#L159-L164
Replacing it with [lwtheme-brighttext] should restore the old behavior.
You Sir are a legend. Thank you very much! Fixed it for me.
You Sir are a legend. Thank you very much! Fixed it for me.
You are GOAT. Works for me.
Just to be sure, you only changed line 159 to: :root:-moz-any([lwtheme-brighttext], [privatebrowsingmode=temporary])?
Just to be sure, you only changed line 159 to: :root:-moz-any([lwtheme-brighttext], [privatebrowsingmode=temporary])?
Someone has a good eye, I accidentally slashed the [privatebrowsingmode=temporary]), so apologies for the mixup, this is what it looks like now.
I think the current correct code should be:
:root:-moz-any([lwtheme-brighttext], [privatebrowsingmode=temporary])
{
--toolbar-field-background-color: #202124 !important;
--toolbar-field-hover-background-color: #292a2d !important;
--toolbar-field-focus-background-color: #202124 !important;
}
It works perfect for me.
Just to be sure, you only changed line 159 to: :root:-moz-any([lwtheme-brighttext], [privatebrowsingmode=temporary])?
Someone has a good eye, I accidentally slashed the [privatebrowsingmode=temporary]), so apologies for the mixup, this is what it looks like now.
I think the current correct code should be:
:root:-moz-any([lwtheme-brighttext], [privatebrowsingmode=temporary])
{
--toolbar-field-background-color: #202124 !important;
--toolbar-field-hover-background-color: #292a2d !important;
--toolbar-field-focus-background-color: #202124 !important;
}
It works perfect for me.
This is almost perfect, but I think the URL text could be a little darker in the light theme.
Thanks for the solutions: just to summarize to anyone reading this thread:
Edit chrome/global/variables.css
Go to line 159
Replace it with :root:-moz-any([lwtheme-brighttext], [privatebrowsingmode=temporary])
|
gharchive/issue
| 2022-04-06T07:41:07 |
2025-04-01T04:35:08.925182
|
{
"authors": [
"DooNotResuscitate",
"Grogs",
"Lord-Lavios",
"Trident6355",
"fmeyertoens",
"juanpabloalfonzo",
"melodicwang",
"missuo",
"mtzfox",
"paralin",
"privacyguy123",
"quarkquartet",
"thebigsmileXD",
"utilisateurdegithub"
],
"repo": "muckSponge/MaterialFox",
"url": "https://github.com/muckSponge/MaterialFox/issues/317",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
259808424
|
Publish MIDI.js as npm module
Wouldn't be nice to be able to do npm install midi-js?
I had a fork of the repo and I will do it myself as I will need it for a project of mine, I can send a pull request afterwards, unless the owners have a different idea about this.
The module name midi-js is already taken by Microsoft, with an empty (or private?) repo :(
I have published as an npm module an ES6 modular version of the system as midicube: https://www.npmjs.com/package/midicube
(as noted, midijs was already taken as a name, so took midicube as an homage to our great mudcube).
This is my first npm module, so happy to take comments/suggestions/improvements at https://github.com/mscuthbert/midicube
This is awesome! Thank you @mscuthbert
This is not solved as far as this repository is concerned so I don't think the issue should be closed.
I suppose I agree: having this open would help others find the https://github.com/mscuthbert/midicube fork — happy to take further PRs and issues there and give committer access to people who have contributed here to ensure the problem doesn’t happen again down the road.
My apologies! I assumed given the previous conversation that there was no intent to take any further action in this repository. Issue re-opened.
|
gharchive/issue
| 2017-09-22T13:04:32 |
2025-04-01T04:35:08.967286
|
{
"authors": [
"amypellegrini",
"hmoffatt",
"mscuthbert"
],
"repo": "mudcube/MIDI.js",
"url": "https://github.com/mudcube/MIDI.js/issues/219",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2401794077
|
Run mail tests in CI
In https://github.com/muety/wakapi/commit/a5565b12ea11f0ef62214eb43a60b0b24c37a864 I added integration tests for SMTP mail sending using smtp4dev. Tests are run as part of the standard test suite, but are skipped silently if no smtp4dev endpoint is available. Running the tests requires to bring up a Docker container for smtp4dev, which is why currently these tests can only be run manually for simplicity. However, SMTP tests should be configured as part of the GHA workflow, too. Should be quite straightforward and similar to how we run the API tests against different DBs (running in Docker). Perhaps something for you, @YC?
Hi @muety, a test seems to be consistently failing.
https://github.com/muety/wakapi/actions/runs/9889197240/job/27314711747
There is possibly a race condition, in that the container may not be ready before the tests are executed.
With sleep 2 locally, I get the same error as in CI.
Without sleep locally:
Running tests ...
ok github.com/muety/wakapi/services/mail 0.340s
Wow, that was quick!
I noticed that the container takes a bit to reload its configuration. Perhaps adding a longer wait time after requesting a settings change (see here) would help?
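More robust than a fixed sleep would be polling until the container answers. A rough sketch of the shape (JavaScript only for illustration; the endpoint and timings are assumptions, and fetch assumes Node 18+):
// Poll an HTTP endpoint until it responds, instead of sleeping a fixed time.
async function waitForReady(url, timeoutMs = 30000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(url); // e.g. smtp4dev's web UI root (assumed)
      if (res.ok) return;           // the container is accepting requests
    } catch (_) {
      // not up yet; fall through and retry
    }
    await new Promise((resolve) => setTimeout(resolve, 500)); // brief back-off
  }
  throw new Error('service did not become ready in time');
}
The same poll-with-deadline pattern also applies after a settings change, which would remove the race rather than just widening the window.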
|
gharchive/issue
| 2024-07-10T21:44:30 |
2025-04-01T04:35:08.978625
|
{
"authors": [
"YC",
"muety"
],
"repo": "muety/wakapi",
"url": "https://github.com/muety/wakapi/issues/658",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
413476775
|
Segmentation Fault error on windows10
When running "python main.py", it produces a segmentation fault error.
Hi, sorry, I didn't test SEAL on a Windows system. Can you successfully run other pytorch programs in Windows 10, or is it only SEAL that cannot be run?
Thanks for your reply. The pure PyTorch program is runnable on my environment.
And I used make as well as g++ in Cygwin to compile the DLL; I was wondering whether Cygwin is to blame.
Furthermore, could you please provide the specific platform (e.g. OS) and toolchain (e.g. g++) versions used to set up the project? Thanks.
Hi, I use
Linux version 3.10.0-514.21.1.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ).
Thanks for your info~
|
gharchive/issue
| 2019-02-22T16:21:32 |
2025-04-01T04:35:08.983140
|
{
"authors": [
"hazelnutsgz",
"muhanzhang"
],
"repo": "muhanzhang/pytorch_DGCNN",
"url": "https://github.com/muhanzhang/pytorch_DGCNN/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
599755189
|
OffTopic - Question.
Hi guys, I have only one question: is it possible to use the Material-UI suite in a React Native project?
Regards, and I'm sorry for the off-topic.
Javier.
Duplicate of #18276
|
gharchive/issue
| 2020-04-14T17:57:06 |
2025-04-01T04:35:08.984277
|
{
"authors": [
"jochercoles",
"oliviertassinari"
],
"repo": "mui-org/material-ui",
"url": "https://github.com/mui-org/material-ui/issues/20558",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
118428181
|
Switches "value" prop
I used "value" on Checkbox, as in the docs, but it was throwing an error. From looking at the code it seems like it should be "checked" instead of "value".
<Checkbox label="Label" value={this.state.thingIsEnabled} />
Warning: Failed propType: Invalid prop `value` of type `boolean` supplied to `EnhancedSwitch`, expected `string`. Check the render method of `Checkbox`.
Seems to me that the docs say to use defaultChecked. Could you be more explicit?
The props list defaultChecked, but the examples use "value", which causes a warning message in the console.
<Checkbox
name="checkboxName1"
value="checkboxValue1"
label="went for a run today"/>
value is the regular HTML property, see http://www.w3schools.com/tags/att_input_checked.asp.
I got confused thinking that value is the checked state. My bad.
Hence, I think that we should add the value property to the doc
Hence, I think that we should add the value property to the doc
I'm utterly confused with value, checked, defaultChecked.
What is the difference?
@pcompassion value and checked are two native properties that can be used to control components.
defaultChecked is specific to the Checkbox.
Just have a look at the documentation. That should be explicit enough. Otherwise, we need to improve it.
Sorry to resurrect such an old post, but can anyone share any links to where these two props are differentiated in the documentation? All I've been able to learn is that:
checked (bool): If true, the component is checked.
value (any): The value of the component. The DOM API casts this to a string.
It's still not clear to me what the value prop would be used for. What am I missing?
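In plain DOM terms (a sketch, not Material-UI-specific): checked is the on/off state, while value is only the string a form submits for the input when it is checked. For example (isChecked/setChecked stand in for whatever state you manage):
// `checked` drives the tick state; `value` is what a form would submit
// (newsletter=weekly) only while the box is checked.
<input
  type="checkbox"
  name="newsletter"
  value="weekly"
  checked={isChecked}
  onChange={(e) => setChecked(e.target.checked)}
/>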
|
gharchive/issue
| 2015-11-23T17:17:18 |
2025-04-01T04:35:08.990616
|
{
"authors": [
"kmckee",
"oliviertassinari",
"pcompassion",
"rhythnic"
],
"repo": "mui-org/material-ui",
"url": "https://github.com/mui-org/material-ui/issues/2250",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|