id (string, 4–10 chars) | text (string, 4–2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
131104529
|
add flatMapLatest to most
Has anyone considered adding flatMapLatest to most?
This is the problem I'm trying to solve:
For each url I get from a stream, create a new EventSource connection. Whenever a new url is emitted I need to create a new connection and close the old one as soon as the first event of the new connection arrives.
flatMap leaves each connection open, as it does not unsubscribe from old streams. Another popular use case for flatMapLatest is autocomplete while typing. Users only care about suggestions for the currently entered text, previous requests can be discarded.
Maybe I’m missing something and this can be easily achieved with the current combinators?
@maxhoffmann
stream.map(someFunctionThatReturnsAStream).switch()
Is the equivalent of flatMapLatest()
Wow that was quick. Thanks!
I tried it and it closes the previous stream as soon as the new one is created. Do you have an idea how I could delay switching over to the new stream until the first event of it has arrived?
@maxhoffmann That's not actually what Rx's flatMapLatest() does either, it operates exactly the same way .map().switch() does in most.
This could be a possible solution, but it has a caveat
const pairwise = (initial, stream) => stream.loop(withPrev, initial);
const withPrev = (prev, x) => ({ seed: x, value: [prev, x] });
pairwise(someInitialValue, stream.map(functionThatReturnsAStream))
.map(([prev, curr]) => prev.takeUntil(curr).concat(curr)).switch()
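The loop/seed trick above is just a pairwise scan; as an illustration, the same idea in plain Python (a sketch only, not the most.js API):

```python
def pairwise(initial, xs):
    """Yield (previous, current) pairs, seeding `previous` with `initial` --
    the same seed/value bookkeeping the loop-based snippet above performs."""
    prev = initial
    for x in xs:
        yield prev, x
        prev = x

pairs = list(pairwise(0, [1, 2, 3]))  # [(0, 1), (1, 2), (2, 3)]
```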
@staltz pointed out this
takeUntil subscribes to curr, so you’d have both prev and curr subscribed
Then when the first curr event happens, you unsubscribe from both curr and prev, THEN you make a new subscription to curr, which starts curr all over again if it’s cold
So depending on your use-case this could work, but if the streams in question are 'cold' you may run into unexpected behavior. If you could explain your use-case further we may be able to help more, but if you are in fact dealing with 'cold' streams this could possibly require a new operator entirely.
Hey @maxhoffmann, this "wait until the first event on the newly switched-to stream" is definitely an interesting situation. Have you implemented it successfully using another library (RxJS)?
Thanks for the code snippet @TylorS. Unfortunately in my use case the streams are cold.
@briancavalier I haven’t found a solution yet. Just came across this problem today.
The specific use case is creating a seamless stream of EventSource data that can change EventSources when it receives new URLs without any timeouts between closing/connecting.
URLs: ----url1------url2------------->
EventSource1: ----connect-------4-1-7-3-|
EventSource2: --------------connect---6-2-5-->
Result: -------------------4-1-7-6-2-5-->
The current EventSource should only close after the first event of the new EventSource has been emitted, so that a seamless stream of data is guaranteed.
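The switching rule described above can be modeled without any stream library. Below is an illustrative Python sketch (names like `Subject` and `lazy_switch` are invented for the sketch and are not most.js API): keep the old connection subscribed until the first event of the new one arrives, then unsubscribe the old connection.

```python
class Subject:
    """Tiny push-based stream: subscribe() returns an unsubscribe callable."""

    def __init__(self):
        self._observers = []

    def subscribe(self, fn):
        self._observers.append(fn)

        def unsubscribe():
            if fn in self._observers:
                self._observers.remove(fn)

        return unsubscribe

    def emit(self, x):
        for fn in list(self._observers):
            fn(x)


def lazy_switch(outer, sink):
    """Forward events from the latest inner stream, but only drop the previous
    inner stream once the newly arrived one has produced its first event."""
    state = {"current": None, "pending": None}

    def on_inner(inner):
        if state["pending"]:
            state["pending"]()  # an even newer stream supersedes the waiting one
        promoted = [False]

        def on_event(x):
            if not promoted[0]:
                promoted[0] = True
                if state["current"]:
                    state["current"]()  # first event arrived: close the old stream
                state["current"] = unsub
                state["pending"] = None
            sink(x)

        unsub = inner.subscribe(on_event)
        state["pending"] = unsub

    return outer.subscribe(on_inner)


out = []
outer = Subject()  # stream of EventSource-like streams (one per URL)
lazy_switch(outer, out.append)

es1, es2 = Subject(), Subject()
outer.emit(es1)
for v in (4, 1, 7):
    es1.emit(v)
outer.emit(es2)  # new connection opened...
es1.emit(3)      # ...but the old one keeps emitting
es2.emit(6)      # first event of es2: es1 is closed from here on
es1.emit(99)     # dropped, es1 was already unsubscribed
es2.emit(2)
es2.emit(5)
# out is now [4, 1, 7, 3, 6, 2, 5]
```

Note that in this model the old stream still emits between the new stream's connect and its first event, which is the seamless handover being asked for.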
Ok, got it. Thanks for clarifying. My intuition is that it is possible, but also very non-obvious right now :) Let's keep brainstorming on it here and see what we can come up with.
Here's another strawman. It seems a little unstable when two events coincide at the instant of switching (possibly in other cases, too). There's also a lot going on ... using two joins is a bit expensive. The multicast is probably necessary to prevent total chaos if outer is cold.
In essence, given a stream of streams, doLazySwitch maps each inner stream to a version of itself that runs until the first event of the next stream to arrive on outer.
import { join, map, until, multicast } from 'most';
const lazySwitch = outer => doLazySwitch(multicast(outer));
const doLazySwitch = outer => join(map(curriedUntil(join(outer)), outer));
const curriedUntil = outer => inner => until(outer, inner);
Whoah O.o
Thanks for help. The problem I face has to be solved on the server for different reasons, but I learned a lot from this. :)
Cool, glad it was helpful!
|
gharchive/issue
| 2016-02-03T17:58:05 |
2025-04-01T04:33:54.579754
|
{
"authors": [
"TylorS",
"briancavalier",
"maxhoffmann"
],
"repo": "cujojs/most",
"url": "https://github.com/cujojs/most/issues/192",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1060617887
|
Structured Communication Plan for Liaisons
Request from Jennifer: "I also mentioned to Carolyn the importance of some kind of structured communication plan for sharing the interface transition with patrons, including timelines, template emails, and talking points for liaisons to be able to communicate with their departments."
Currently have met with Carolyn and @moellerGit and created a rough communication timeline, but this has not been updated.
[x] Work with @moellerGit and @jwfuller to either update or create a new plan.
[x] Possibly combine with training plan #58 ? @moellerGit will decide.
[x] Submit new plan to Carolyn.
[x] Carolyn approves/provides guidance.
[x] Send timeline, with communication milestones, to section leads.
New plan written by @moellerGit
After discussion with Carolyn EDS is going to separate communications/trainings from FOLIO, so maybe we should close this card? I'm going to open a new one for me on the discovery board.
|
gharchive/issue
| 2021-11-22T21:38:33 |
2025-04-01T04:33:54.583681
|
{
"authors": [
"nleboeuftrujillo"
],
"repo": "culibraries/folio",
"url": "https://github.com/culibraries/folio/issues/111",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
425292242
|
[feature-request] expose _mat_ptrs API to users
Hi, I want to use some low-level cuBLAS APIs such as sgemmBatched in my cupy code, and I found a great example in cupy's implementation of matmul:
https://github.com/cupy/cupy/blob/86c22271d0b417e6cfd7b539e160c050752c95f9/cupy/core/core.pyx#L2207-L2218
My question is: the _mat_ptrs API currently seems not to be available to users (please correct me if I'm missing something). Do you plan to expose it in the future?
Thanks!
Could you tell us why you need _mat_ptrs in your use case?
Thanks for your reply. The main reason is that I want to use APIs such as sgemmBatched, and I need _mat_ptrs to prepare the input arguments. A more detailed use case: I want to compute A*B.transpose in pytorch, but I have to write something like:
B_T = B.transpose(-1, -2)
C = torch.bmm(A, B_T)
I believe the explicit transpose is not necessary, because sgemmBatched supports this operation.
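For illustration, the pointer-array preparation that _mat_ptrs performs can be sketched like this. It is shown with NumPy host arrays purely so it runs anywhere; with CuPy, the same stride arithmetic applies to `a.data.ptr` device pointers. This is a sketch of the idea, not CuPy's actual implementation:

```python
import numpy as np

def batched_ptrs(a):
    """Build the array of per-matrix base addresses that cublas<t>gemmBatched
    expects, for a C-contiguous (batch, rows, cols) array."""
    assert a.ndim == 3 and a.flags.c_contiguous
    base = a.ctypes.data            # address of matrix 0
    stride = a.strides[0]           # bytes between consecutive matrices
    return base + stride * np.arange(a.shape[0], dtype=np.uintp)
```

Each entry is simply `base + i * stride`, i.e. the address of the i-th matrix in the batch.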
Thanks for the quick response! Now I understand your use case. As for your first question,
Do you plan to expose it in the future?
We have no plans to expose these internal details. Note that cublas.sgemmBatched itself is not public (it is not documented in our API reference)... although we don't have any plan to change it drastically.
For now, you can copy the following code into your own code (and remove cdef ... to make it work in pure Python, if you need to).
https://github.com/cupy/cupy/blob/86c22271d0b417e6cfd7b539e160c050752c95f9/cupy/core/core.pyx#L1970-L1999
|
gharchive/issue
| 2019-03-26T08:50:51 |
2025-04-01T04:33:54.591107
|
{
"authors": [
"kmaehashi",
"xptree"
],
"repo": "cupy/cupy",
"url": "https://github.com/cupy/cupy/issues/2115",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
684982566
|
Use cupy.core._dtype.to_cuda_dtype whenever possible
Reduce a bit of code duplication.
TODO:
[x] Check if CPU overhead is reduced.
Jenkins, test this please
Jenkins CI test (for commit f38a6bcc7340adf24fc4ce7855237bb2123f5f58, target branch master) succeeded!
Jenkins, test this please
Jenkins CI test (for commit dde5aaffdd6225e5b9795bec6ea8bf66c196cad3, target branch master) succeeded!
CPU overhead remains the same.
Jenkins, test this please
Jenkins CI test (for commit 5f80e5cd92bfd7c0ce4bee8d97f85b09dff142dd, target branch master) succeeded!
@leofang Could you resolve conflicts?
@leofang Could you resolve conflicts?
@asi1024 Thanks for reminder, I just did.
Jenkins, test this please.
Jenkins CI test (for commit 6bc14df7cc1ab30a6a00aed0306acc8604d47c62, target branch master) failed with status FAILURE.
Jenkins, test this please.
Jenkins CI test (for commit e4bede5da2f14cb60ca5b7ac2843d47fcc128912, target branch master) succeeded!
LGTM! Thanks!
Thanks 🙏
|
gharchive/pull-request
| 2020-08-24T21:38:15 |
2025-04-01T04:33:54.595849
|
{
"authors": [
"asi1024",
"chainer-ci",
"leofang"
],
"repo": "cupy/cupy",
"url": "https://github.com/cupy/cupy/pull/3853",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
772998314
|
Old devices don't go away
I recently moved my Amazon devices to a new Amazon account, deleted this integration, and re-enabled it with the new account. Old non-Amazon devices (mainly Sonos speakers) are still there and can't be deleted without deleting the integration. Note that I've triple-checked that Sonos is NOT enabled in my Amazon Alexa profile, so I'm assuming this integration has just "remembered" the old devices (maybe in a cache) and has not deleted them.
Remove unused entities through home assistant per HA standard practice.
|
gharchive/issue
| 2020-12-22T14:43:38 |
2025-04-01T04:33:54.649251
|
{
"authors": [
"alandtse",
"jl0810"
],
"repo": "custom-components/alexa_media_player",
"url": "https://github.com/custom-components/alexa_media_player/issues/1056",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1163675197
|
Enable Refresh Channels to be done during standby as well as active
I'm getting spurious 404 errors from the load_source_list logic in pySonyBraviaPSK.
There seems to have been some change in either HA or Sony firmware which is causing more error conditions (possibly race conditions) from the TV. You will see the proposal I have put in for pySonyBraviaPSK as well.
The challenge here is that channel refresh is a one-time operation, currently performed when the TV is active and HA has been restarted. Quite often this will be the first time the TV has been switched on after an HA restart.
The idea of this change is to reduce the risk of a race condition (where the channel info is requested while the TV is busy switching on), since the channel list info is also available when the TV is in standby (or at least on my 2021 model it is).
Thanks!
|
gharchive/pull-request
| 2022-03-09T09:11:38 |
2025-04-01T04:33:54.653690
|
{
"authors": [
"RogerSelwyn",
"gerard33"
],
"repo": "custom-components/media_player.braviatv_psk",
"url": "https://github.com/custom-components/media_player.braviatv_psk/pull/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2705514980
|
Crash with RuntimeException: Unable to instantiate activity
SDK version: 4.3.0
Environment: Production
Are logs available?
Stacktrace:
Exception java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{fairmoneyPackage/io.customer.messaginginapp.gist.presentation.GistModalActivity}: java.lang.IllegalStateException: ModuleMessagingInApp not initialized
at android.app.ActivityThread.performLaunchActivity (ActivityThread.java:3616)
at android.app.ActivityThread.handleLaunchActivity (ActivityThread.java:3867)
at android.app.servertransaction.LaunchActivityItem.execute (LaunchActivityItem.java:105)
at android.app.servertransaction.TransactionExecutor.executeCallbacks (TransactionExecutor.java:136)
at android.app.servertransaction.TransactionExecutor.execute (TransactionExecutor.java:96)
at android.app.ActivityThread$H.handleMessage (ActivityThread.java:2279)
at android.os.Handler.dispatchMessage (Handler.java:106)
at android.os.Looper.loopOnce (Looper.java:201)
at android.os.Looper.loop (Looper.java:288)
at android.app.ActivityThread.main (ActivityThread.java:7986)
at java.lang.reflect.Method.invoke
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:553)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:1003)
Caused by java.lang.IllegalStateException: ModuleMessagingInApp not initialized
at io.customer.messaginginapp.ModuleMessagingInApp$Companion.instance (ModuleMessagingInApp.kt:104)
at io.customer.messaginginapp.di.DIGraphMessagingInAppKt.getInAppMessaging (DIGraphMessagingInApp.kt:43)
at io.customer.messaginginapp.di.DIGraphMessagingInAppKt.getInAppMessagingManager (DIGraphMessagingInApp.kt:33)
at io.customer.messaginginapp.gist.presentation.GistModalActivity.<init> (GistModalActivity.kt:33)
at java.lang.Class.newInstance
at android.app.AppComponentFactory.instantiateActivity (AppComponentFactory.java:95)
at androidx.core.app.CoreComponentFactory.instantiateActivity (CoreComponentFactory.java:44)
at android.app.Instrumentation.newActivity (Instrumentation.java:1273)
at android.app.ActivityThread.performLaunchActivity (ActivityThread.java:3602)
Describe the bug
Looks like we have an issue in production, probably when a user receives a push notification sent by Customer.io (I will continue to try to reproduce the crash on my side). It's probably caused by the way we initialise the SDK: we need it to be done as late as possible, so it's deferred; if we initialise it synchronously, it leads to a huge number of ANRs, which is an issue for us too.
It's currently our top 1 issue with 28% of crashes for our latest version (668 crashes at the moment)
To Reproduce
We were not able to reproduce this issue
Expected behavior
No crash
Screenshots
Additional context
It happens on all Android version our app targets (Android version 7 and +) on a lot of devices and brands (Huawei, Samsung, Oppo, Tecno, Infinix, Redmi, Honor...)
Hey @remi-ollivier, thank you for reaching out. Your diagnosis of the issue is spot on, it's due to the delayed initialization.
Regarding your ANR concerns, we worked on a total revamp of the SDK and made a lot of improvements, so you should not be getting those ANRs. Did you get any ANR after 4.2.0 release? If so, please share the stack trace because that should not happen.
Additionally, if you are using the new in-app message editor, that makes in-apps even faster and removes all the chances for any delay.
Hi. We didn't try versions 4.2.0+; we rolled back to a sync init on a previous version because async init was causing a lot of crashes, and that led to ANRs instead. We are currently using 4.3.0 with async init, and we submitted a version with Customer.io SDK 4.4.1 with a sync init. I don't know if we plan to revert to a sync version, because ANRs are impacting us badly and we need to avoid them.
Absolutely, I understand where you are coming from. Is there a way to do a controlled release? I believe if you use the latest version with sync initialization there shouldn't be ANRs.
But if you are still facing them afterwards, we would like to get to the bottom of it and resolve it; we don't have any reports of ANRs with the latest release and the new in-app editor.
Ok sure let me discuss it with my manager and I'll get back to you next week, thanks for your help!
|
gharchive/issue
| 2024-11-29T15:45:34 |
2025-04-01T04:33:54.663259
|
{
"authors": [
"Shahroz16",
"remi-ollivier"
],
"repo": "customerio/customerio-android",
"url": "https://github.com/customerio/customerio-android/issues/470",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1313423061
|
refactor: change format rules to make lint happy
Sometimes the current format rules will break lint. This update should fix that.
Complete each step to get your pull request merged in. Learn more about the workflow this project uses.
[ ] Assign members of your team to review the pull request.
[ ] Wait for pull request status checks to complete. If there are problems, fix them until you see that all status checks are passing.
[ ] Wait until the pull request has been reviewed and approved by a teammate
[ ] After the pull request is approved and you determine it's ready, add the label "Ready to merge" to the pull request. A bot will squash and merge the pull request for you after the label is added.
Pull request title looks good 👍. If this pull request gets merged, it will not cause a new release of the software.
Example: if this project's latest release version is 1.0.0 and this pull request gets merged in, the next release of this project will be 1.0.0.
To merge this pull request, add the label Ready to merge to this pull request and I'll merge it for you.
Warnings
:warning:
I noticed file Tests/MessagingPushFCM/APITest.swift was modified. That could mean that this pull request is introducing a breaking change to the SDK.
If this pull request does introduce a breaking change, make sure the pull request title is in the format:
<type>!: description of breaking change
// Example:
refactor!: remove onComplete callback from async functions
:warning:
I noticed file Tests/MessagingPushAPN/APITest.swift was modified. That could mean that this pull request is introducing a breaking change to the SDK.
If this pull request does introduce a breaking change, make sure the pull request title is in the format:
<type>!: description of breaking change
// Example:
refactor!: remove onComplete callback from async functions
Generated by :no_entry_sign: dangerJS against 5b1c27133a49bfd59e23b1bc7a19489553abe38f
Closing in favor of this PR https://github.com/customerio/customerio-ios/pull/187
|
gharchive/pull-request
| 2022-07-21T15:15:28 |
2025-04-01T04:33:54.671569
|
{
"authors": [
"ami-oss-ci",
"levibostian"
],
"repo": "customerio/customerio-ios",
"url": "https://github.com/customerio/customerio-ios/pull/186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
431272073
|
Quick fix for 404 link to Code of Conduct
Hey there, not much of a contribution, but loving what you've done so far. I plan to use this gem in all/most of the new projects I'm working on. Who knows, maybe I'll contribute some actual code next time :D
Hey James! Thanks for contributing. Super excited to see you are adopting the work here too :)
Hopefully in a few weeks I'll have all the docs in GitHub issues moved over to a new Lamby product site that will help outline how to get started better. So far the project's Rack integration is solid and I've even run some real world Rails apps thru some tests, no issues.
About the only thing on my wish list past the product site I'm building is making a bundle exec rake lamby:install task that creates the basic getting started resources with some Rails generator magic.
app.rb
template.yml
bin/build
bin/deploy
Thanks so much for all you've done here. I'm super excited to help as much as I can.
One issue I hit yesterday and not sure if I should open a ticket ... but during my first build I was seeing an error ... "RubyBundlerBuilder:CopySource - [Errno 6] No such device or address /path/to/puma.sock". I put the build in debug mode and then realized puma-dev had dropped the socket file in project/tmp and sam was trying to copy that. I looked around for a way to tell it to skip binary or socket files or even just to ignore a certain path(tmp/, log/), but alas ... I failed. So I rm'd the file and sam was happy again.
Also, any chance we see some bootstrap'd-by-template capabilities?
$ gem install lamby
$ lamby new my-app --template=react:graphql:aws-record:dynamodb
$ bundle exec rake lamby:install
$ gem install lamby-templates
$ bundle install
$ git commit -am "All the Lamby goodness!!"
$ rails g model post title:string body:string
remembering the previously chosen template .... generates all the boilerplate for
- reactjs based spa frontend (next, redux, etc) that talks graphql
- graphql (ruby impl via graphql/search_object/etc)
- placeholder tests for both front and back ends
$ say "OMG! This is amazing!!"
I don't know ... just spitballing :D This is something I may take a look at later this week, possibly as a separate gem (kiss)
[Errno 6] No such device or address /path/to/puma.sock"
I've seen that too! I've just been running pkill -USR1 puma-dev as needed.
I looked around for a way to tell it to skip binary or socket files or even just to ignore a certain path(tmp/, log/), but alas ... I failed.
Thanks, I had not done that yet. Not sure if we should call that out in the docs either? 🤔
Also, any chance we see some bootstrap'd-by-template capabilities?
I'd like that. Just for starters, the bundle exec rake lamby:install task which installs app.rb, the template, and the two bin scripts would suffice.
FYI, got most of the template work done in #35 and the README has an overhauled Getting Started section that leverages this. Cheers!
|
gharchive/pull-request
| 2019-04-10T02:14:42 |
2025-04-01T04:33:54.683099
|
{
"authors": [
"jmitchtx",
"metaskills"
],
"repo": "customink/lamby",
"url": "https://github.com/customink/lamby/pull/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2585911137
|
Refactor Playnite Web MQTT for improved logging and image handling
Summary by Sourcery
Enhance the Playnite Web MQTT component by improving logging, refactoring image compression to use WebP, and optimizing game discovery and cover image handling. Simplify the GitHub Actions workflows for version management.
Enhancements:
Improve logging throughout the Playnite Web MQTT component to provide more detailed information about the state and actions of the system.
Refactor the image compression logic to use WebP format and implement a progressive quality reduction strategy for better efficiency.
Enhance the handling of game discovery and cover image updates by introducing more robust checks and queue management.
CI:
Simplify the GitHub Actions workflow for creating new version branches and releases by removing redundant environment variable usage and streamlining the branch and tag creation process.
@sourcery-ai review
|
gharchive/pull-request
| 2024-10-14T12:33:26 |
2025-04-01T04:33:54.702108
|
{
"authors": [
"cvele"
],
"repo": "cvele/playnite_web_mqtt",
"url": "https://github.com/cvele/playnite_web_mqtt/pull/82",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
90679635
|
Make text selectable
It’s not cool that you can’t select any text inside an event detail, configuration panel etc.
Yes, you can. o.O
Yeah, someone has already fixed it.
|
gharchive/issue
| 2015-06-24T13:44:43 |
2025-04-01T04:33:54.704335
|
{
"authors": [
"MMajko",
"jirutka"
],
"repo": "cvut/fittable-widget",
"url": "https://github.com/cvut/fittable-widget/issues/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
654523905
|
[BUG] On the VSCode update, it just shows blank.
OS INFO :
I'm sorry, I don't have this problem; you need to open the console and give me the error message.
Open Command Palette.
I have the same problem, both on Windows and inside wsl. Here is the error and the stack trace:
Updating to 2.3.2 fixes it.
|
gharchive/issue
| 2020-07-10T05:36:00 |
2025-04-01T04:33:54.741716
|
{
"authors": [
"cweijan",
"jpagarcia",
"xWillQ"
],
"repo": "cweijan/vscode-mysql",
"url": "https://github.com/cweijan/vscode-mysql/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2259892283
|
Support pasting images when using VS Code with WSL
Congratulations for making this extension. This makes Markdown a lot more powerful and productive.
Feature description
The paste-image-from-clipboard feature is amazing; however, it does not work when using VS Code with WSL. The problematic behavior in Office Viewer is similar to that of this other extension, where it has been fixed. Please implement this fix in Office Viewer as well. I am using Windows 10.
Context details
I am using VS Code to update the Git repo behind GitHub Wiki pages. Some document names contain characters that are unsupported in Windows file names, so I had to move to WSL to see all the files. However, this breaks pasting images from the Windows clipboard.
Self assignment
Is there a guideline on how to contribute to this project? I would like to work on this feature.
The extension that solves the problem only works for the Markdown editor. In addition, Office Viewer grabs the system paste event before that extension can. However, that extension also has Ctrl+Alt+V as a pasting command that Office Viewer does not grab, but it can only be used within the built-in Markdown editor.
I second this request, as I'm unable to paste an image into a markdown document when using the WSL Remote Extension.
|
gharchive/issue
| 2024-04-23T22:39:28 |
2025-04-01T04:33:54.744520
|
{
"authors": [
"MallocArray",
"RazvanModex"
],
"repo": "cweijan/vscode-office",
"url": "https://github.com/cweijan/vscode-office/issues/309",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
411879368
|
!fcc /content/tiger/Bali_tigris
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
Hi
Can you tell me a bit about your Python version/ setup? Are you running on a head-less server?
This bug may also occur if no graphical environment is found: inside Jupyter notebooks (likely, given the "!" prepending your command) or from WSL.
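If the Tk error comes from a plotting dependency such as matplotlib defaulting to its TkAgg backend (an assumption, not confirmed for fastclass), the usual headless workaround is to select a non-GUI backend before pyplot is imported:

```python
# Assumption: the _tkinter error is raised by matplotlib's default TkAgg
# backend on a machine without $DISPLAY. Selecting the non-GUI "Agg"
# backend first avoids any Tk initialization.
import matplotlib

matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig = plt.figure()  # succeeds without a display
plt.close(fig)
```

Setting the MPLBACKEND=Agg environment variable achieves the same without code changes.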
|
gharchive/issue
| 2019-02-19T11:34:56 |
2025-04-01T04:33:54.749282
|
{
"authors": [
"H4dr1en",
"ankushbanyal",
"cwerner"
],
"repo": "cwerner/fastclass",
"url": "https://github.com/cwerner/fastclass/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
513301310
|
Convert getter resulting in POST and PUT not working
I have been using your old repo for my app architecture. Recently I updated to 2.2; since then, POST and PUT stopped working.
Later I realised that the Convert getter is actually what is causing this error:
InsufficientExecutionStackException: Insufficient stack to continue executing the program safely. This can happen from having too many functions on the call stack or a function on the stack using too much stack space.
After that I changed the getter to a method, and everything works fine.
Please, is there any harm in using a method for Convert instead of using a getter?
Abdulrahman --
First I want to thank you for trusting my architecture for your work. There is no issue with changing the Getter to a Method. I am looking at it but would love to see your changes. Could you please send me a Pull Request with the code? Thanks!
Have a great day!
Woody
Thank you, @cwoodruff Woody
I have opened a pull request for that #22
|
gharchive/issue
| 2019-10-28T13:25:13 |
2025-04-01T04:33:54.753309
|
{
"authors": [
"amansulaiman",
"cwoodruff"
],
"repo": "cwoodruff/ChinookASPNETCoreAPINTier",
"url": "https://github.com/cwoodruff/ChinookASPNETCoreAPINTier/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
803083852
|
[0348/shortcut-key] Implement per-screen shortcut keys
:hammer: Details of Changes
Implemented per-screen keyboard shortcuts.
Each setting is defined in danoni_constants.js (and can be extended).
When a key combination is involved, specify it in the order (key1)_(key2).
The button action of the first matching key in the list below is executed.
// Per-screen shortcuts
const g_shortcutObj = {
  title: {
    Enter: `btnStart`,
  },
  option: {
    KeyD: `lnkDifficultyR`,
    ShiftLeft_ArrowRight: `lnkSpeedR`,
    ArrowRight: `lnkSpeedRR`,
    ShiftLeft_ArrowLeft: `lnkSpeedL`,
    ArrowLeft: `lnkSpeedLL`,
    ArrowUp: `lnkMotionR`,
    // (omitted)
    ShiftLeft_Semicolon: `lnkAdjustmentR`,
    Semicolon: `lnkAdjustmentRR`,
    ShiftLeft_Minus: `lnkAdjustmentL`,
    Minus: `lnkAdjustmentLL`,
    KeyV: `lnkVolumeR`,
    KeyI: `btnGraph`,
    KeyB: `btnBack`,
    KeyK: `btnKeyConfig`,
    Enter: `btnPlay`,
  },
  // (omitted)
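The first-match rule above (combo entries such as `Key1_Key2` are listed before their bare-key fallbacks, and the first entry whose keys are all held wins) can be sketched like this; the sketch is illustrative Python, with entry names merely mirroring g_shortcutObj:

```python
# Illustrative subset of the shortcut map; combos come before bare keys.
option_shortcuts = {
    "ShiftLeft_ArrowRight": "lnkSpeedR",   # Shift+Right: 1x steps
    "ArrowRight": "lnkSpeedRR",            # Right alone: 0.25x steps
    "Enter": "btnPlay",
}

def resolve(shortcuts, held):
    """Return the button id of the first entry whose key parts are all held."""
    for combo, button in shortcuts.items():  # dicts preserve insertion order
        if all(k in held for k in combo.split("_")):
            return button
    return None
```

With Shift and Right held, the combo entry matches first even though the bare `ArrowRight` entry would also match, which is exactly why combos must be listed first.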
:bookmark: Related Issues, Reason for Changes
To expand the available shortcut keys and improve general usability.
:camera: Screenshot
:pencil: Other Comments
The targets are the title screen, the settings screens (including Display), and the result screen.
There is no indication yet of which key maps to which action, so I plan to add that somewhere.
An error occurs when a setting is disabled (because its button is not defined).
Fix this before merging.
Keyboard shortcuts (proposal)

Title screen

| Key | Action |
|---|---|
| Enter | Go to the next screen |

Settings screen (Settings)

| Key | Action |
|---|---|
| D | Difficulty (cycle forward) |
| Shift+D | Difficulty (cycle backward) |
| → | Speed (up, 0.25x steps) |
| ← | Speed (down, 0.25x steps) |
| Shift+→ | Speed (up, 1x steps) |
| Shift+← | Speed (down, 1x steps) |
| ↑ | Motion (cycle forward) |
| Shift+↑ | Motion (cycle backward) |
| ↓ | Scroll (cycle forward) |
| Shift+↓ | Scroll (cycle backward) |
| R | Reverse |
| S | Shuffle (cycle forward) |
| Shift+S | Shuffle (cycle backward) |
| A | AutoPlay (cycle forward) |
| Shift+A | AutoPlay (cycle backward) |
| G | Gauge (cycle forward) |
| Shift+G | Gauge (cycle backward) |
| + | Adjustment (up, steps of 1) |
| - | Adjustment (down, steps of 1) |
| Shift++ | Adjustment (up, steps of 5) |
| Shift+- | Adjustment (down, steps of 5) |
| V | Volume (up) |
| Shift+V | Volume (down) |
| B | Go to the previous screen |
| K | Go to the key config screen |
| Enter | Go to the next screen |

Settings screen (Display)

| Key | Action |
|---|---|
| A | Appearance (cycle forward) |
| Shift+A | Appearance (cycle backward) |
| O | Opacity (cycle forward) |
| Shift+O | Opacity (cycle backward) |
| B | Go to the previous screen |
| K | Go to the key config screen |
| Enter | Go to the next screen |

Result screen (Result)

| Key | Action |
|---|---|
| B | Go to the previous screen |
| C | Copy the result to the clipboard |
| T | Post the result to Twitter |
| G | Post the result to Gitter (link only) |
| R | Retry |
|
gharchive/pull-request
| 2021-02-07T23:04:27 |
2025-04-01T04:33:54.772727
|
{
"authors": [
"cwtickle"
],
"repo": "cwtickle/danoniplus",
"url": "https://github.com/cwtickle/danoniplus/pull/964",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
158522672
|
Carthage Support
This was a little hacky, but it seems that CocoaPods generates a framework called UIMenuItem_CXAImageSupport.framework, whereas the default scheme only creates ImageMenuItem. I had to create a new header, target and scheme to make the naming match, and tested it in a separate project. Seems to work.
@alexpopov thanks.
Although this works, some projects intermittently complain about a missing umbrella header. I will figure this out and submit another PR 😊
|
gharchive/pull-request
| 2016-06-04T20:31:41 |
2025-04-01T04:33:54.774789
|
{
"authors": [
"alexpopov",
"cxa"
],
"repo": "cxa/UIMenuItem-CXAImageSupport",
"url": "https://github.com/cxa/UIMenuItem-CXAImageSupport/pull/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
565467773
|
Suite release runs daily pipeline builds
Pipeline is set up to run daily builds, so that the maintainers / release manager have insight into whether the e2e suite tests are passing against the latest tags of all suite components.
@doodlesbykumbi this is a follow-on to your current work - is the way that it's defined clear enough?
|
gharchive/issue
| 2020-02-14T17:43:37 |
2025-04-01T04:33:54.817450
|
{
"authors": [
"izgeri"
],
"repo": "cyberark/conjur-oss-suite-release",
"url": "https://github.com/cyberark/conjur-oss-suite-release/issues/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1209962406
|
Adding hello world program using python
I want to work on this. Please assign this issue to me.
/assign
|
gharchive/issue
| 2022-04-20T17:01:45 |
2025-04-01T04:33:54.855571
|
{
"authors": [
"lovishprabhakar"
],
"repo": "cyberbuddy-io/open-source-contribution-for-beginners",
"url": "https://github.com/cyberbuddy-io/open-source-contribution-for-beginners/issues/294",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1071459291
|
compound.py gives UnicodeEncodeError
Python gives the following error when running compound.py
--- Logging error ---
Traceback (most recent call last):
File "C:\Program Files (x86)\Python38-32\lib\logging\__init__.py", line 1084, in emit
stream.write(msg + self.terminator)
File "C:\Program Files (x86)\Python38-32\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u20bf' in position 54: character maps to <undefined>
Call stack:
File "compound.py", line 541, in <module>
compound_bot(botdata)
File "compound.py", line 413, in compound_bot
update_bot(
File "compound.py", line 300, in update_bot
logger.info(
File "compound.py", line 167, in info
self.log(message, "info")
File "compound.py", line 157, in log
self.my_logger.info(message)
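The root cause is easy to reproduce in isolation: the cp1252 code page has no mapping for U+20BF (the Bitcoin sign), so any attempt to encode it fails, which is exactly what the logging stream handler runs into on Windows. A minimal sketch:

```python
# U+20BF (the Bitcoin sign) does not exist in the cp1252 code page,
# so encoding it raises the same UnicodeEncodeError seen in the traceback.
try:
    "\u20bf".encode("cp1252")
except UnicodeEncodeError as e:
    print(e.reason)  # character maps to <undefined>
```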
Hi Bram, what is your Bot called?
It's called: LunarCrush Auto Top 10 Alt Daily v6 - TP1%/2SO
I love short names ;-)
It is related to the Bitcoin symbol.
https://www.charactercodes.net/20BF
You mean I have to adjust the python code...?
That could be a temporary work around :)
@bramjekel can you check latest code of compound.py? (no version update, just pick the one from repo)
unfortunately, i still get the same error message
@bramjekel can you paste the name of the bot?
It works for me with the bitcoin symbol, but that already worked without the fix.
Probably due to using the correct encoding on my system (utf8), what environment are you running it on?
If run on linux what does this give:
$ locale
LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=nl_NL.UTF-8
LC_TIME=nl_NL.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=nl_NL.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=nl_NL.UTF-8
LC_NAME=nl_NL.UTF-8
LC_ADDRESS=nl_NL.UTF-8
LC_TELEPHONE=nl_NL.UTF-8
LC_MEASUREMENT=nl_NL.UTF-8
LC_IDENTIFICATION=nl_NL.UTF-8
LC_ALL=
Ah yeah a win 10 install, I forgot.
Can you give a screenshot of the name, since it doesn't show it contains a bitcoin symbol when pasted.
Can you paste this in a python prompt, or in a file and run it?
import locale
all = locale.locale_alias().items()
utfs = [(k,v) for k,v in all if 'utf' in k.lower() or 'utf' in v.lower()]
# utf settings starting with en
en_utfs = [(k,v) for k,v in utfs if k.lower()[:2].lower() == 'en' or
v.lower()[:2] == 'en'
print en_utfs
first I got this error:
File "encode.py", line 9
print en_utfs
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(en_utfs)?
When I changed the last line of code to print (en_utfs) I got this error:
File "encode.py", line 9
print(en_utfs)
^
SyntaxError: invalid syntax
import locale
print(locale.getlocale())
('Dutch_Netherlands', '1252')
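For anyone else debugging this: the encodings Python will fall back to can be inspected directly. The values in the comments are examples from the affected Windows box; they vary per system:

```python
import locale
import sys

# Locale picked up from the environment
print(locale.getlocale())                  # e.g. ('Dutch_Netherlands', '1252')
# Encoding that open() and the logging file handlers default to when none is given
print(locale.getpreferredencoding(False))  # e.g. 'cp1252' on that box, 'UTF-8' on most Linux systems
# Encoding of stdout itself
print(sys.stdout.encoding)
```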
Can you add this near top of compound.py and see if it fixes anything?
import locale
locale.setlocale(locale.LC_ALL, "en_US.UTF8")
Unfortunately still the same error :-(
@cyberjunky I have been searching and tweaking myself and discovered the following.
if i replace this section:
def __init__(self, filename="", when="midnight", interval=1, backupCount=7):
super().__init__(
filename=filename,
when=when,
interval=int(interval),
backupCount=int(backupCount),
)
with this section
def __init__(self, filename="", when="midnight", interval=1, backupCount=7, encoding='utf-8'):
super().__init__(
filename=filename,
when=when,
interval=int(interval),
backupCount=int(backupCount),
encoding=encoding
)
and add the encoding to this as well, like:
# Log to file and rotate if needed
file_handle = TimedRotatingFileHandler(
filename=f"{datadir}/logs/{program}.log", backupCount=logstokeep, encoding='utf-8'
)
then I do not get an error...
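For reference, here is a self-contained sketch of that fix (standalone, not the exact compound.py code): passing encoding='utf-8' to TimedRotatingFileHandler makes the \u20bf character round-trip regardless of the Windows code page:

```python
import logging
import os
import tempfile
from logging.handlers import TimedRotatingFileHandler

# Without encoding=..., the handler opens the log file with
# locale.getpreferredencoding() (cp1252 on many Windows installs),
# which cannot represent U+20BF and raises UnicodeEncodeError.
logfile = os.path.join(tempfile.mkdtemp(), "compound.log")
handler = TimedRotatingFileHandler(
    filename=logfile, when="midnight", backupCount=7, encoding="utf-8"
)
logger = logging.getLogger("compound-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Bot: LunarCrush Auto Top 10 Alt Daily \u20bf")
handler.close()

with open(logfile, encoding="utf-8") as f:
    print("\u20bf" in f.read())  # True
```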
@bramjekel Is the problem solved with your solution?
I have this error also (win10 install)
@toonvdijck yes it was
I have added it to the code base, thanks!
|
gharchive/issue
| 2021-12-05T13:25:50 |
2025-04-01T04:33:54.879120
|
{
"authors": [
"bramjekel",
"cyberjunky",
"toonvdijck"
],
"repo": "cyberjunky/3commas-cyber-bots",
"url": "https://github.com/cyberjunky/3commas-cyber-bots/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1984022183
|
App example request for income and outgoing calls
i have used your web example and it is working fine on web ...
but it is not working when i run it on the mobile phone i.e. the app
it is requested to provide an example of an android app as you provided the example for web
Have you followed the needed setup and permissions for android?
Also when registering the token, it should indicate which platform along with the client id.
Especially on android, there are a lot of permissions to allow before the native incoming call can be used.
i have added these permissions
my case is that we have created an erp in laravel. i want to recieve call from that side.
when i run my flutter project on web i do recieve call. but when the same code is run on mobile app it does not work,
even that event listners does not work
i just used the example provided in github repo
Hi @Muhammadjaved7209
Hmm, I have considered providing a sample Android & iOS app to the relevant appstores but don't have any plans to do so short-term.
hi @cybex-dev glad to see your response.. could you please tell me the possible reason why your given example is not working for android
I'd require more information from your side, how are you running (native or web or web pwa), OS version, etc.
i just downloaded the zip code and run your example project. i provide the identity and accessToken (provided by my backend developer - laravel). i run your example project on flutter web . After registering for calls it printed on the console "device ready fot voip call " . but i did not see any message like this on my android 12 redme phone. When i try to place call it navigate to the native call log of my device.
I see, there are several permissions you need to grant prior to being able to receive calls. This includes:
android.permission.FOREGROUND_SERVICE
android.permission.RECORD_AUDIO
android.permission.READ_PHONE_STATE
android.permission.READ_PHONE_NUMBERS
android.permission.CALL_PHONE
and finally,
Active the calling account, see this for more info.
You can review these permission in Android Setup with a more detailed discussion in Notes - Android.
These permissions are all visible on the mobile app, see here for code reference.
Thank you for being so cooperative..
i have already added these permissions and also enabled the calling account.
i have added all these permissions and also enabled the calling account
Hmm, my next best guess is Redmi doesn't like Calling Accounts or something in between.
Could you grab the stack trace from when the call is expected to come through and post it here - might hint at what's causing the problem.
ok i will try on some other devices with some more solutions... thank you for beign coporative
@Muhammadjaved7209
I tested this on:
Google API Emulators API 28, 31, 33 and 34
Physical devices:
Various Samsung
Various Google Pixel
several other makes and models
Could we use Emulators as a reference point?
@cybex-dev please take a look at what is missing to receive calls
@Muhammadjaved7209 this is the flutter log
I need the adb log.
sorry to say i dont know what is adb log
Please click on the link I added in "adb log". This is Android system's log. Flutter log comes from print() where Android's log is stdout, stderr merge.
Am sharing some of my code on how I make sure am receiving calls on android, and the logging am doing.
I hope this helps @Muhammadjaved7209
On my Android manifest I do have these permissions
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<uses-permission android:name="android.permission.READ_PHONE_STATE" />
<uses-permission android:name="android.permission.CALL_PHONE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.MANAGE_OWN_CALLS"/> // This help actually to use the native call UI
Then as my twilio side am using getx for the mean time as an easy example
class TwilioController extends GetxController{
final isOnCallNow = false.obs;
// Request permission for audio and phone calling account to access native ui when answering calls
requestAllowPhoneContact()async{
await Permission.phone.request();
if((await Permission.phone.status.isGranted)){
if (Platform.isAndroid) {
await TwilioVoice.instance
.requestReadPhoneNumbersPermission(); // Gives Android permissions to read Phone Accounts
await TwilioVoice.instance.registerPhoneAccount();
bool knowIt = await TwilioVoice.instance.isPhoneAccountEnabled();
if (knowIt == true) {
log("ENABLE isPhoneAccountEnabled");
} else {
await TwilioVoice.instance.openPhoneAccountSettings();
}
}
await TwilioVoice.instance
.requestCallPhonePermission(); // Gives Android permissions to place calls
await TwilioVoice.instance
.requestReadPhoneStatePermission(); // Gives Android permissions to read Phone State
}
}
// Register token
registerToken() async {
await TwilioVoice.instance
.setTokens(
accessToken: accessToken,
deviceToken: Platform.isIOS ? "" : tokenPlatform,
)
.then((value) {
// Log result when registering token to know if it was successful; if not, maybe the token is the problem
log("${Platform.isIOS ? "IOS" : "Android"} Access Token Registered $value", name: "Register Token Log");
});
}
checkActiveCall({CallEvent? event}) async {
final isOnCall = await TwilioVoice.instance.call.isOnCall();
// Log some results
log("Twilio Controller checkActiveCall $isOnCall",
name: "Twilio Action Event Log");
final activeCall = TwilioVoice.instance.call.activeCall;
if (event != CallEvent.log &&
activeCall!.callDirection == CallDirection.incoming) {
switch (event) {
case CallEvent.answer:
isOnCallNow.value = true;
update();
break;
case CallEvent.connected:
isOnCallNow.value = true;
update();
break;
case CallEvent.declined:
isOnCallNow.value = false;
update();
break;
case CallEvent.callEnded:
isOnCallNow.value = false;
Get.back();
update();
break;
default:
}
}
}
waitForCall() {
TwilioVoice.instance.callEventsListener.listen((event) async {
if (event != CallEvent.log) {
checkActiveCall(event: event);
}
switch (event) {
case CallEvent.incoming:
// Do something
break;
case CallEvent.answer:
// Do something
break;
case CallEvent.ringing:
// Do something
break;
case CallEvent.declined:
// Do something
break;
case CallEvent.connected:
// Do something
break;
case CallEvent.callEnded:
// Do something
break;
case CallEvent.missedCall:
// Do something
break;
case CallEvent.returningCall:
// Do something
break;
case CallEvent.speakerOn:
// Do something
break;
case CallEvent.speakerOff:
// Do something
break;
case CallEvent.log:
// Do something
break;
default:
break;
}
});
}
}
then call waitForCall() in initState or onInit
either way maybe we can find something with this.
I hope this helps to find some problem
Note should also consider allowing the TwilioVoice.instance.requestReadPhoneStatePermission();
According to Android docs, the MANAGE_OWN_CALLS permission isn't necessary for system-managed calls.
I'll be adding this nonetheless for Android 13 and lower devices as it seems to be required for at least Android 13.
@cybex-dev how much time i have to wait for your app example ???
thank you for help... i have tried everything but all in vain
Well in my case the error log is showing that it requires the app to have that permission. And yes, it's Android 13 and above, haven't tried on lower ones.
@Erchil66 how can i contact you on any other platform other than git
We can talk it here btw hahaha, It’s better we involve @cybex-dev on the conversation also
and you know am also using getx in my project
@cybex-dev
OS version android 11 API version 30
--- > before i provider id and token : I/TwilioVoicePlugin(18212): Removing event sink I/TwilioVoicePlugin(18212): Setting event sink I/flutter (18212): voip-service registration I/flutter (18212): voip-registering with environment variables I/flutter (18212): Failed to register with environment variables, please provide ID and TOKEN D/TwilioVoicePlugin(18212): logEvent: LOG|Registering client alicesId:Alice D/TwilioVoicePlugin(18212): logEvent: LOG|requestingReadPhoneNumbersPermission D/TwilioVoicePlugin(18212): checkReadPhoneNumbersPermission I/flutter (18212): Registering client alicesId:Alice I/flutter (18212): voip-onCallStateChanged CallEvent.log I/flutter (18212): requestingReadPhoneNumbersPermission I/flutter (18212): voip-onCallStateChanged CallEvent.log D/TwilioVoicePlugin(18212): logEvent: LOG|registerPhoneAccount D/TwilioVoiceConnectionService(18212): getPhoneAccountHandle: Get PhoneAccountHandle with name: twilio_voice_example, componentName: ComponentInfo{com.twilio.twilio_voice_example/com.twilio.twilio_voice.service.TVConnectionService} I/flutter (18212): registerPhoneAccount I/flutter (18212): voip-onCallStateChanged CallEvent.log I/flutter (18212): yahophoneregustrd D/TwilioVoicePlugin(18212): logEvent: LOG|isPhoneAccountEnabled D/TwilioVoiceConnectionService(18212): getPhoneAccountHandle: Get PhoneAccountHandle with name: twilio_voice_example, componentName: ComponentInfo{com.twilio.twilio_voice_example/com.twilio.twilio_voice.service.TVConnectionService} I/flutter (18212): isPhoneAccountEnabled I/flutter (18212): voip-onCallStateChanged CallEvent.log D/TwilioVoicePlugin(18212): logEvent: LOG|requestingReadPhoneStatePermission D/TwilioVoicePlugin(18212): checkReadPhoneStatePermission I/flutter (18212): requestingReadPhoneStatePermission D/TwilioVoicePlugin(18212): logEvent: LOG|requestingCallPhonePermission D/TwilioVoicePlugin(18212): checkCallPhonePermission I/flutter (18212): voip-onCallStateChanged CallEvent.log I/flutter (18212): 
<uses-permission android:name="android.permission.MANAGE_OWN_CALLS"/> // This actually helps to use the native
According to Android docs, the MANAGE_OWN_CALLS permission isn't necessary for system-managed calls.
I'll be adding this nonetheless for Android 13 and lower devices, as it seems to be required for at least Android 13.
@cybex-dev how much time i have to wait for your app example ???
<uses-permission android:name="android.permission.MANAGE_OWN_CALLS"/> // This actually helps to use the native
According to Android docs, the MANAGE_OWN_CALLS permission isn't necessary for system-managed calls.
I'll be adding this nonetheless for Android 13 and lower devices, as it seems to be required for at least Android 13.
@cybex-dev how much time i have to wait for your app example ???
I do not intend to provide a native Android & iOS example app any time soon. With regards to your issue, please see my earlier comment, without which progress on resolving your issue cannot be made.
Ref: https://github.com/cybex-dev/twilio_voice/issues/194#issuecomment-1803825316
TL;DR - I need your ADB log.
@cybex-dev
OS version android 11 API version 30
--- > before i provider id and token : I/TwilioVoicePlugin(18212): Removing event sink I/TwilioVoicePlugin(18212): Setting event sink I/flutter (18212): voip-service registration I/flutter (18212): voip-registering with environment variables I/flutter (18212): Failed to register with environment variables, please provide ID and TOKEN D/TwilioVoicePlugin(18212): logEvent: LOG|Registering client alicesId:Alice D/TwilioVoicePlugin(18212): logEvent: LOG|requestingReadPhoneNumbersPermission D/TwilioVoicePlugin(18212): checkReadPhoneNumbersPermission I/flutter (18212): Registering client alicesId:Alice I/flutter (18212): voip-onCallStateChanged CallEvent.log I/flutter (18212): requestingReadPhoneNumbersPermission I/flutter (18212): voip-onCallStateChanged CallEvent.log D/TwilioVoicePlugin(18212): logEvent: LOG|registerPhoneAccount D/TwilioVoiceConnectionService(18212): getPhoneAccountHandle: Get PhoneAccountHandle with name: twilio_voice_example, componentName: ComponentInfo{com.twilio.twilio_voice_example/com.twilio.twilio_voice.service.TVConnectionService} I/flutter (18212): registerPhoneAccount I/flutter (18212): voip-onCallStateChanged CallEvent.log I/flutter (18212): yahophoneregustrd D/TwilioVoicePlugin(18212): logEvent: LOG|isPhoneAccountEnabled D/TwilioVoiceConnectionService(18212): getPhoneAccountHandle: Get PhoneAccountHandle with name: twilio_voice_example, componentName: ComponentInfo{com.twilio.twilio_voice_example/com.twilio.twilio_voice.service.TVConnectionService} I/flutter (18212): isPhoneAccountEnabled I/flutter (18212): voip-onCallStateChanged CallEvent.log D/TwilioVoicePlugin(18212): logEvent: LOG|requestingReadPhoneStatePermission D/TwilioVoicePlugin(18212): checkReadPhoneStatePermission I/flutter (18212): requestingReadPhoneStatePermission D/TwilioVoicePlugin(18212): logEvent: LOG|requestingCallPhonePermission D/TwilioVoicePlugin(18212): checkCallPhonePermission I/flutter (18212): voip-onCallStateChanged CallEvent.log I/flutter (18212): 
requestingCallPhonePermission I/flutter (18212): voip-onCallStateChanged CallEvent.log
--- > after i provided id and token : I/TwilioVoicePlugin(18212): Removing event sink I/TwilioVoicePlugin(18212): Setting event sink I/flutter (18212): voip-service registration I/flutter (18212): voip-registering with environment variables I/flutter (18212): Failed to register with environment variables, please provide ID and TOKEN D/TwilioVoicePlugin(18212): logEvent: LOG|Registering client alicesId:Alice D/TwilioVoicePlugin(18212): logEvent: LOG|requestingReadPhoneNumbersPermission D/TwilioVoicePlugin(18212): checkReadPhoneNumbersPermission I/flutter (18212): Registering client alicesId:Alice I/flutter (18212): voip-onCallStateChanged CallEvent.log I/flutter (18212): requestingReadPhoneNumbersPermission I/flutter (18212): voip-onCallStateChanged CallEvent.log D/TwilioVoicePlugin(18212): logEvent: LOG|registerPhoneAccount D/TwilioVoiceConnectionService(18212): getPhoneAccountHandle: Get PhoneAccountHandle with name: twilio_voice_example, componentName: ComponentInfo{com.twilio.twilio_voice_example/com.twilio.twilio_voice.service.TVConnectionService} I/flutter (18212): registerPhoneAccount I/flutter (18212): voip-onCallStateChanged CallEvent.log I/flutter (18212): yahophoneregustrd D/TwilioVoicePlugin(18212): logEvent: LOG|isPhoneAccountEnabled D/TwilioVoiceConnectionService(18212): getPhoneAccountHandle: Get PhoneAccountHandle with name: twilio_voice_example, componentName: ComponentInfo{com.twilio.twilio_voice_example/com.twilio.twilio_voice.service.TVConnectionService} I/flutter (18212): isPhoneAccountEnabled I/flutter (18212): voip-onCallStateChanged CallEvent.log D/TwilioVoicePlugin(18212): logEvent: LOG|requestingReadPhoneStatePermission D/TwilioVoicePlugin(18212): checkReadPhoneStatePermission I/flutter (18212): requestingReadPhoneStatePermission D/TwilioVoicePlugin(18212): logEvent: LOG|requestingCallPhonePermission D/TwilioVoicePlugin(18212): checkCallPhonePermission I/flutter (18212): voip-onCallStateChanged CallEvent.log I/flutter (18212): 
requestingCallPhonePermission I/flutter (18212): voip-onCallStateChanged CallEvent.log .
I've read back through this, but I haven't seen an FCM registered successfully in this log. Did you check the token in your logs?
W/ple.twillio_ap(14057): Long monitor contention with owner WebSocketConnectReadThread-161 (14849) at void com.android.org.conscrypt.ConscryptEngineSocket.startHandshake()(ConscryptEngineSocket.java:219) waiters=0 in void com.android.org.conscrypt.ConscryptEngineSocket.startHandshake() for 285ms
I/flutter (14057): Connection: CONNECTED
I/flutter (14057): onSubscriptionSucceeded: twilio-43 data: {}
I/flutter (14057): Me: null
D/TwilioVoicePlugin(14057): onRequestPermissionsResult: 24
D/DecorView: onWindowFocusChanged hasWindowFocus true
D/TwilioVoicePlugin(14057): logEvent: LOG|requestingReadPhoneNumbersPermission
D/TwilioVoicePlugin(14057): checkReadPhoneNumbersPermission
I/flutter (14057): requestingReadPhoneNumbersPermission
I/flutter (14057): CallEvent.log
I/flutter (14057): requestingReadPhoneNumbersPermission
D/TwilioVoicePlugin(14057): logEvent: LOG|registerPhoneAccount
I/flutter (14057): registerPhoneAccount
I/flutter (14057): CallEvent.log
I/flutter (14057): registerPhoneAccount
D/TwilioVoiceConnectionService(14057): getPhoneAccountHandle: Get PhoneAccountHandle with name: twillio_app, componentName: ComponentInfo{com.example.twillio_app/com.twilio.twilio_voice.service.TVConnectionService}
D/TwilioVoicePlugin(14057): logEvent: LOG|isPhoneAccountEnabled
I/flutter (14057): isPhoneAccountEnabled
D/TwilioVoiceConnectionService(14057): getPhoneAccountHandle: Get PhoneAccountHandle with name: twillio_app, componentName: ComponentInfo{com.example.twillio_app/com.twilio.twilio_voice.service.TVConnectionService}
I/flutter (14057): CallEvent.log
I/flutter (14057): isPhoneAccountEnabled
D/TwilioVoicePlugin(14057): Successfully registered FCM filltPLFStWo_boCKUix4o:APA91bGEe-ntmE8Pb4MBY80AXYZzV7LYP4jFjPHUFUtvjg_q9CHF8JsUy7zgXYR1tyr5kjDROfiVKyeKHsPEWmp0MDi5gPWtChuN4bc15bc6m5xgcJF448kyRT5eTXwCythbROclL87r
D/TwilioVoicePlugin(14057): logEvent: LOG|changePhoneAccount
I/flutter (14057): changePhoneAccount
I/flutter (14057): CallEvent.log
I/flutter (14057): changePhoneAccount
--- > after i provided id and token : I/TwilioVoicePlugin(18212): Removing event sink I/TwilioVoicePlugin(18212): Setting event sink I/flutter (18212): voip-service registration I/flutter (18212): voip-registering with environment variables I/flutter (18212): Failed to register with environment variables, please provide ID and TOKEN D/
Did you see my comment: https://github.com/cybex-dev/twilio_voice/issues/194#issuecomment-1802399665?
I referred to this line to configure your application with token & Id, etc.
yes I provided the ID and token... this is my registration function:
void register() async {
  String? deviceToken = LocalStorageMethods.instance.getDeviceToken();
  String? callAccessToken =
      LocalStorageMethods.instance.getIdentityTokenForCallAccess();
  String? identity = await LocalStorageMethods.instance.getIdentity();
  final result = await TwilioVoice.instance.setTokens(
      accessToken: callAccessToken ?? "", deviceToken: deviceToken);
  await TwilioVoice.instance.registerClient(identity, "abc");
  if (result ?? false) {
    twilioInit.value = true;
  } else {
    debugPrint("something went wrong while registering...");
  }
}
I think it's better to provide adb.log or Logcat output from Android Studio, to follow up on the reason, since it prints debug output from the native side showing the errors.
ok, I am providing it
Is this what I have to show you?
Add me on WhatsApp so I can guide you on where to find it.
sure... +923073594209 is my WhatsApp
@Muhammadjaved7209 I'm pasting the link here:
https://www.twilio.com/docs/notify/configure-android-push-notifications
That's the thing lacking from your access token:
the push credential from FCM.
@cybex-dev as I thought, the access token was lacking some info.
I think this should be marked as solved.
@Muhammadjaved7209 @Erchil66 can I mark this issue as resolved?
#198
https://github.com/cybex-dev/twilio_voice/releases/tag/0.1.3
Closing for now.
Hi,
I am facing the same issue.
The example app works fine on web,
but on Android (Infinix Note 7) only outgoing calls are working.
Incoming calls are not working, even though the log says FCM was successfully registered.
I am unable to identify the issue.
|
gharchive/issue
| 2023-11-08T16:52:24 |
2025-04-01T04:33:54.987898
|
{
"authors": [
"Erchil66",
"Muhammadjaved7209",
"cybex-dev",
"mohsinnaqvi606"
],
"repo": "cybex-dev/twilio_voice",
"url": "https://github.com/cybex-dev/twilio_voice/issues/194",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
421844351
|
SSV Normandy bug
#141
Improved SSV Normandy plugin integration.
Colors actually change when switching to themes that are not Monika.
[ ] Works on MacOS
[ ] Works on Windows
[ ] Make darker secondary color.
Go Flight.
|
gharchive/pull-request
| 2019-03-16T19:34:39 |
2025-04-01T04:33:54.995675
|
{
"authors": [
"cyclic-reference"
],
"repo": "cyclic-reference/ddlc-jetbrains-theme",
"url": "https://github.com/cyclic-reference/ddlc-jetbrains-theme/pull/142",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
406012916
|
404 Page not working
https://frugal-aws.acari.io/this/not/real does unexpected things rather than telling you that it does not exist.
frugal-aws-ui 404 Page not working
|
gharchive/issue
| 2019-02-02T19:15:38 |
2025-04-01T04:33:54.997503
|
{
"authors": [
"cyclic-reference"
],
"repo": "cyclic-reference/frugal-aws-ui",
"url": "https://github.com/cyclic-reference/frugal-aws-ui/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
121797205
|
Cyclus
I have made some corrections to the examples (simulation length, and updated the fuel output recipe).
I also added CYCLUS input for examples 2 & 3.
I also added some results in CSV format (probably not standard yet, and it should be updated... but this is still something...).
@scopatz yes, it was: buy one get 1 free!
Because you don't like it, I have just resold them... for twice the price :)
I don't trust camels since they are known deserters :).
Looks good, thanks.
|
gharchive/pull-request
| 2015-12-11T21:43:51 |
2025-04-01T04:33:55.006335
|
{
"authors": [
"Baaaaam",
"scopatz"
],
"repo": "cyclus/benchmarks",
"url": "https://github.com/cyclus/benchmarks/pull/28",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1789307455
|
Setup plugin
Initial setup of a plugin
Automatic detection in ftdetect/cylc.vim
Current syntax file copied to syntax/cylc.vim
New syntax group cylcMultiString for triple-quoted strings, which is linked to String, unlike single-quoted strings which are not highlighted at all
Simple local settings in ftplugin/cylc.vim (for :filetype plugin)
Recommended indentation in indent/cylc.vim (for :filetype indent)
Check List
[x] I have read CONTRIBUTING.md and added my name as a Code Contributor.
[x] Contains logically grouped changes (else tidy your branch by rebase).
Opened a PR to update the docs to point at this plugin: https://github.com/cylc/cylc-doc/pull/624
Sorry I totally left this hanging - just reminded by recent prompts for people to get set up with cylc 8. I didn't realise how long it had been!
I'll address the existing comments shortly, and have a look at whether the original syntax file has moved on at all. (Unless you're already aware of any changes and can easily point them out?)
No probs, thanks for contributing this.
Apologies, my vim script abilities aren't good enough to help out properly.
Not much has happened to the syntax file in the Cylc repo, main changes:
support the ".cylc" file extension
improvements to Jinja2 lexing (see also the PR).
I'll try to do a quick review too once you're done @vsherratt
@vsherratt, sorry this slipped the net, bit manic at the moment, we will get it in soon.
We've now switched from GPL to BSD for text editor extensions to get around licensing constraints.
Can you confirm you're happy with the BSD licence.
|
gharchive/pull-request
| 2023-07-05T10:57:42 |
2025-04-01T04:33:55.032486
|
{
"authors": [
"hjoliver",
"oliver-sanders",
"vsherratt"
],
"repo": "cylc/cylc.vim",
"url": "https://github.com/cylc/cylc.vim/pull/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1177183463
|
Replicasets not in READY state
replicasets.txt
describe-voting.txt
describe-web.txt
Issue is resolved in the latest version.
|
gharchive/issue
| 2022-03-22T18:56:32 |
2025-04-01T04:33:55.051464
|
{
"authors": [
"kurthv"
],
"repo": "cypherfox/cloud-native-demo",
"url": "https://github.com/cypherfox/cloud-native-demo/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1514323116
|
README lists unsupported Node.js versions 10 and 12
Problem description
Examples in the README.md file list old unsupported Node.js versions.
Tag recordings
Custom cache key
Node versions
each specify node: [10, 12].
According to the Node.js release schedule the versions 10 and 12 are already in the category End-of-Life Release and so are not ideal to be contained in good current examples.
Suggested fix
Update the examples in the README.md file to specify node: [14, 16, 18] according to the Release schedule for versions in status "Maintenance" and "LTS".
Note: According to issue https://github.com/cypress-io/github-action/issues/642, the node version is not passed to GHA. This issue is currently awaiting clarification.
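The suggested update can be sketched as a workflow fragment (a hypothetical example following the README's matrix pattern; job names and action version tags here are illustrative, not quoted from the README):

```yaml
# Hedged sketch: a Node.js version matrix updated to supported releases.
jobs:
  test:
    runs-on: ubuntu-22.04
    strategy:
      matrix:
        # Supported versions per the Node.js release schedule at the time.
        node: [14, 16, 18]
    steps:
      - uses: actions/checkout@v3
      # The Cypress action does not select the Node version itself
      # (see issue #642), so setup-node sets it explicitly.
      - uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node }}
      - uses: cypress-io/github-action@v5
```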
PR #692 submitted to resolve this issue
|
gharchive/issue
| 2022-12-30T07:57:08 |
2025-04-01T04:33:55.183602
|
{
"authors": [
"MikeMcC399"
],
"repo": "cypress-io/github-action",
"url": "https://github.com/cypress-io/github-action/issues/691",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
307223006
|
Page.loadEventFired not triggered on redirect
I'm not sure if this is an issue with the dev tools or with this library; but it seems that loadEventFired and other events such as frameNavigated are not triggered when a form is submitted and/or the page is redirected.
I have a page with a login form (https://fabric.io/login); I submit this form, and then the page is redirected to another address. No events are triggered for the "main" frame; I only receive various "frameNavigated" events for iframes etc.
Okay, never mind... It turns out that the website in question is using the pushState API, and there really is NO reload happening :)
=> I'm closing the issue.
|
gharchive/issue
| 2018-03-21T12:23:11 |
2025-04-01T04:33:55.187367
|
{
"authors": [
"Tharit"
],
"repo": "cyrus-and/chrome-remote-interface",
"url": "https://github.com/cyrus-and/chrome-remote-interface/issues/333",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
622694046
|
Module has no attribute '__pyx_capi__'
Hi, I am using Python 3.7 with Cython 0.29.14.
I have been confused about the error in the title for a long time.
Specifically, I have directories like this:
my_pkg/
__init__.py
a/
__init__.py
__init__.pxd
setup.py
cython_ext/
__init__.py
__init__.pxd
basics.pyx
basics.pxd
b/
__init__.py
__init__.pxd
setup.py
cython_ext/
__init__.py
basics.pyx
There is a function defined in a/cython_ext/basics.pyx:
cpdef void f() nogil
and is also declared in a/cython_ext/basics.pxd.
a/setup.py has an Extention entry like this:
Extension(
"basics",
sources=["cython_ext/basics.pyx"],
language="c++",
extra_compile_args=["-std=c++17"],
include_dirs=[numpy.get_include()],
)
I also have options={"build": {"build_lib": "cython_ext"}} so that the .so file is generated under cython_ext.
I then cimport the function f in b/cython_ext/basics.pyx:
from my_pkg.a.cython_ext.basics cimport f
However, when importing the module b.cython_ext.basics, it always complains
AttributeError: module 'my_pkg.a.cython_ext.basics' has no attribute '__pyx_capi__'
Any idea what goes wrong? I used to have basics.pyx and basics.pxd directly under a/ rather than an extra cython_ext and they used to work. Really confused.
I spent the whole weekend trying to figure out this issue, and finally got it ...
First, the setup.py has to be changed to
Extension(
"cython_ext.basics",
sources=["cython_ext/basics.pyx"],
language="c++",
extra_compile_args=["-std=c++17"],
include_dirs=[numpy.get_include()],
)
However, doing this alone won't work, and this is what made me fail again and again... After changing setup.py, one also needs to delete all __pycache__ and build directories in all relevant folders before running python setup.py build_ext (simply adding --force without deleting those intermediate files won't work).
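For reference, a minimal a/setup.py along these lines might look like the following — a hedged sketch assuming the directory layout described above; the package name passed to setup() is illustrative:

```python
# a/setup.py -- sketch; assumes the layout from the question above.
import numpy
from setuptools import setup, Extension
from Cython.Build import cythonize

extensions = [
    Extension(
        # The extension name must be the full dotted path matching the
        # on-disk package layout, not just the module basename.
        "cython_ext.basics",
        sources=["cython_ext/basics.pyx"],
        language="c++",
        extra_compile_args=["-std=c++17"],
        include_dirs=[numpy.get_include()],
    )
]

setup(
    name="my-pkg-a",  # illustrative name
    ext_modules=cythonize(extensions),
)
```

After renaming the extension, remember to delete any stale build and __pycache__ directories before rebuilding, as noted above.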
Ah, yes. This is wrong:
I also have options={"build": {"build_lib": "cython_ext"}} so that the .so file is generated under cython_ext.
This does not make "cython_ext" a package. If you want the extension to be in a package, you have to name it that way. If you have a specific place in the documentation where you looked and did not find that information, I'd be happy to receive a PR that clarifies it.
Ah, yes. This is wrong:
I also have options={"build": {"build_lib": "cython_ext"}} so that the .so file is generated under cython_ext.
This does not make "cython_ext" a package. If you want the extension to be in a package, you have to name it that way. If you have a specific place in the documentation where you looked and did not find that information, I'd be happy to receive a PR that clarifies it.
Thanks!
It would be nice if you could add a simple example illustrating building extensions in subdirectories in https://cython.readthedocs.io/en/latest/src/userguide/source_files_and_compilation.html
|
gharchive/issue
| 2020-05-21T18:30:51 |
2025-04-01T04:33:55.198451
|
{
"authors": [
"scoder",
"xin-jin"
],
"repo": "cython/cython",
"url": "https://github.com/cython/cython/issues/3627",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
449131506
|
Issues accessing Devilbox Intranet
Did a clean install of Ubuntu and downloaded The Devilbox. Everything seems to work and I can access my local projects with virtual hosts. However, I can't access the Devilbox intranet through localhost or 127.0.0.1
Any idea why this happens?
On Linux you have to use 172.16.238.11 instead of 127.0.0.1 for httpd. The problem is that this whole thing is built around the notion of a localhost resolver; for example, in the vhost section the system tells you to create host entries for 127.0.0.1, which won't work, and Devilbox will complain about these not being present in your /etc/hosts file.
Same goes for every possible host that Docker Compose may declare; for example, the host mysql won't work, but 172.16.238.12 will.
https://devilbox.readthedocs.io/en/latest/advanced/connect-to-host-os.html#docker-on-linux
Aha, I see! The funny thing, however, is that I'm getting the same result for localhost, 127.0.0.1 and 172.16.238.11 which is "404 Not found (nginx/1.14.2)". localhost used to work before I installed the latest version.
@zaewin it should definitely be available on 127.0.0.1 or localhost, especially on Linux. Please try to normalize your .env file with env-example, checkout latest stable git tag and if this does not work, you should open a detailed bug report with steps to reproduce.
|
gharchive/issue
| 2019-05-28T08:28:40 |
2025-04-01T04:33:55.202966
|
{
"authors": [
"cytopia",
"rodolfoberrios",
"zaewin"
],
"repo": "cytopia/devilbox",
"url": "https://github.com/cytopia/devilbox/issues/585",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
115053314
|
Support for dynamic context menus
Small addition to support dynamic menus on each element
Thanks; I'll take a look at this when I'm back on the week of 24 Nov
Any updates on this?
It's been merged into 2.3 with fixes to resolve some issues brought on by the changes. It's unnecessary to have two options, so only options.commands is used.
Ah nice! I assumed having one option to support both could be confusing for users.
But this is for sure cleaner.
|
gharchive/pull-request
| 2015-11-04T13:38:57 |
2025-04-01T04:33:55.205085
|
{
"authors": [
"janblok",
"maxkfranz"
],
"repo": "cytoscape/cytoscape.js-cxtmenu",
"url": "https://github.com/cytoscape/cytoscape.js-cxtmenu/pull/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
269304369
|
Necromancer/Skeleton Mage - [Ru]
Breakpoint - bug or error
https://www.d3planner.com/351824991
+Attack speed > DPS of Skeleton Mages. Breakpoint always 24
+Attack speed = DPS of Command Skeletons. Breakpoint always 24
When attack speed increases, the DPS of Skeleton Mages increases, even though the breakpoint does not change. But for regular skeletons it does not increase. Likewise, for the DH it only increases when the breakpoint changes. What is the error?
Testing has shown that the attack speed of Skeleton Mages does not depend on the character's attack speed; instead, their damage changes (though Tasker gloves can change their attack speed).
Similar mechanics are used, for example, for the WD's Fetishes. For other minions, conversely, the attack speed changes. I'm not sure about regular skeletons; it seems no one has run detailed tests for them.
|
gharchive/issue
| 2017-10-28T10:21:48 |
2025-04-01T04:33:55.224383
|
{
"authors": [
"KlipDe",
"d07RiV"
],
"repo": "d07RiV/d3planner",
"url": "https://github.com/d07RiV/d3planner/issues/280",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1605008128
|
🛑 Demo Webhook Public Api is down
In e2edecd, Demo Webhook Public Api ($DEMO_WEBHOOK) was down:
HTTP code: 503
Response time: 87 ms
Resolved: Demo Webhook Public Api is back up in 40c2046.
|
gharchive/issue
| 2023-03-01T13:48:37 |
2025-04-01T04:33:55.226778
|
{
"authors": [
"d0kify"
],
"repo": "d0kify/upptime",
"url": "https://github.com/d0kify/upptime/issues/787",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
757638686
|
Fix spring force in forceLink.
Here is the thing: I think forceLink is meant to implement a spring force, and if so, the current code of the force has an issue.
Proving the implementation is a spring force
I'll just give some reasons to show that this is an implementation of a spring force; you can skip this part if you already believe it.
According to Hooke's law:
F = k * dl // here F is the spring force, k is a positive real number characteristic of the spring, dl is the delta length of the spring.
And according to Newton's laws:
F = m * a // here F is force, m is the mass of object, a is acceleration of object
So we can get a = k * dl / m, we can set strength = k / m, then get this:
a = strength * dl
Then we get delta x and delta y like this:
// if we alread computed delta x as x, delta y as y
l = Math.sqrt(x * x + y * y);
dl = (l - distance);
acceleration = strength * dl; // => strength * (l - distance)
dv = acceleration * dt; // => strength * (l - distance) * dt
// next (x / l) means get delta in x axis
dvx = dv * dt * x / l; // => x * (l - distance) * strength * dt * dt / l
// next (y / l) means get delta in y axis
dvy = dv * dt * y / l; // => y * (l - distance) * strength * dt * dt / l
Here we make some changes to the variables.
l = Math.sqrt(x * x + y * y);
l = (l - distances[i]) / l * strengths[i] * (dt * dt);
x *= l, y *= l;
You can see the only difference from the forceLink code here is dt * dt and alpha.
My problem with computing delta x and delta y
My problem is here:
x = target.x - source.x; // instead of target.x + target.vx - source.x - source.vx
y = target.y - source.y; // instead of target.y + target.vy - source.y - source.vy
If we add target.vx - source.vx (call this dvx) or target.vy - source.vy (call this dvy), imagine that:
the spring distance is getting larger and larger, so dlx = target.x - source.x is getting larger;
the velocity is getting smaller and smaller, so dvx = target.vx - source.vx is getting smaller;
at last, dlx + dvx will become smaller, which will make the force smaller, so dx and dy will change less.
This is addressed in https://github.com/d3/d3-force#forces:
forces may also “peek ahead” to the anticipated next position of the node,
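To illustrate that resolution, here is a small Python sketch of a link force that "peeks ahead" by reading x + vx rather than x. This is not the actual d3-force source — the parameter values, equal endpoint split, and velocity decay factor are illustrative assumptions, chosen only to show the mechanism:

```python
import math

def apply_link_force(source, target, distance=30.0, strength=0.1, alpha=1.0):
    """One tick of a d3-force-style link (spring) force.

    Note the "peek ahead": the stretch is measured from the anticipated
    next positions (x + vx, y + vy), as the d3-force docs describe.
    """
    x = target["x"] + target["vx"] - source["x"] - source["vx"]
    y = target["y"] + target["vy"] - source["y"] - source["vy"]
    l = math.sqrt(x * x + y * y)
    # Scale by how far the spring is from its rest length `distance`.
    l = (l - distance) / l * alpha * strength
    x *= l
    y *= l
    # Split the correction evenly between the endpoints (bias = 0.5);
    # real d3-force weights this split by node degree.
    target["vx"] -= x * 0.5
    target["vy"] -= y * 0.5
    source["vx"] += x * 0.5
    source["vy"] += y * 0.5

# Two nodes that start further apart than the rest length get pulled in.
a = {"x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0}
b = {"x": 100.0, "y": 0.0, "vx": 0.0, "vy": 0.0}
for _ in range(200):
    apply_link_force(a, b)
    for n in (a, b):
        n["x"] += n["vx"]
        n["y"] += n["vy"]
        n["vx"] *= 0.6  # velocity decay, as in a d3-force simulation tick
        n["vy"] *= 0.6
gap = abs(b["x"] - a["x"])  # settles near the rest length of 30
```

Because the stretch is measured from the anticipated positions, the dlx + dvx behavior discussed above is already accounted for without changing the force's formula.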
@Fil Sorry, I ignored this.
|
gharchive/pull-request
| 2020-12-05T09:12:08 |
2025-04-01T04:33:55.235503
|
{
"authors": [
"Fil",
"rogerdehe"
],
"repo": "d3/d3-force",
"url": "https://github.com/d3/d3-force/pull/185",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
}
|
223519082
|
Update README
Align API Documentation with source code.
Add missing methods size and link
Remove inapplicable references to accessor functions
Provide information about nodes and links
Clarify how links can be initialized based on numeric index into nodes array.
For completeness I have added the fix of using HTML syntax highlighting proposed by @curran in PR #20.
@mbostock or @arankek could you kindly review/merge this documentation PR? Let me know if you require any changes.
@mbostock @arankek ping?
I made some tweaks and merged. I think I’m finally motivated to take a pass over the API as well.
@mbostock Thanks, the changes in the underlying API/implementation are great. In particular since the link shape is now used!
|
gharchive/pull-request
| 2017-04-21T23:06:53 |
2025-04-01T04:33:55.238774
|
{
"authors": [
"mbostock",
"tomwanzek"
],
"repo": "d3/d3-sankey",
"url": "https://github.com/d3/d3-sankey/pull/23",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1236821545
|
🛑 KBBZone is down
In eddee27, KBBZone (https://kbbzone.com) was down:
HTTP code: 500
Response time: 1079 ms
Resolved: KBBZone is back up in 3c84f56.
|
gharchive/issue
| 2022-05-16T08:42:46 |
2025-04-01T04:33:55.241416
|
{
"authors": [
"d35k"
],
"repo": "d35k/uptime-bot",
"url": "https://github.com/d35k/uptime-bot/issues/328",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
306397291
|
Demo with changeable parameters?
http://jsfiddle.net/q6reLg0v/
The jsfiddle demo is nice, but not of much use when testing custom options.
If you don't have the time to make a dynamic demo, can you suggest a way to modify the fiddle? I can't find the initialization of Lethargy in the JS code there.
Thanks!
@katerlouis Apologies for the late reply.
That demo is not a demo of lethargy. It's just a way for you to see the delta values of your scroll. This is the demo for lethargy, which you can download and update the parameters passed to the constructor.
|
gharchive/issue
| 2018-03-19T10:01:41 |
2025-04-01T04:33:55.255364
|
{
"authors": [
"d4nyll",
"katerlouis"
],
"repo": "d4nyll/lethargy",
"url": "https://github.com/d4nyll/lethargy/issues/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2436426079
|
create a directory that will point to a specific pool
create per pool a directory, storage class and link that any write to that directory will prefer specified pool, i.e.
write into /data/pool-a will go into pool-a
reviewing
|
gharchive/pull-request
| 2024-07-29T21:38:24 |
2025-04-01T04:33:55.257996
|
{
"authors": [
"khys95",
"kofemann"
],
"repo": "dCache/dcache-helm",
"url": "https://github.com/dCache/dcache-helm/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2725924384
|
update: change python version required in setup.py
Reuse of quote was introduced in Python 3.12, and will not work in versions <= 3.11.
Solved by commit 9a98969
|
gharchive/pull-request
| 2024-12-09T04:04:28 |
2025-04-01T04:33:55.337048
|
{
"authors": [
"daddodev",
"shobu13"
],
"repo": "daddodev/pimpmyrice",
"url": "https://github.com/daddodev/pimpmyrice/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2090851798
|
🛑 minecraft server (6365, 6366) is down
In 0545fea, minecraft server (6365, 6366) (http://sahachai.thddns.net:6366) was down:
HTTP code: 0
Response time: 0 ms
Resolved: minecraft server (6365, 6366) is back up in 55e9e6c after 1 day, 23 hours, 51 minutes.
|
gharchive/issue
| 2024-01-19T16:09:11 |
2025-04-01T04:33:55.339464
|
{
"authors": [
"daddybannk"
],
"repo": "daddybannk/uptime",
"url": "https://github.com/daddybannk/uptime/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
980886336
|
Refactor agreggator
This PR
creates a flag to select the type of aggregation to be performed
separates utility functions of the aggregator for future reuse
separates the responsibility of aggregating an agency's data by year into its own function
checks for the existence of the agency flag only within the context of aggregation by a single agency/year
creates a function to aggregate the data of all agencies by year
deletes the CSV files extracted from each aggregation after they are read
Professor, the thing is that the next PR already resolves most of these issues, so do I still have to make the corrections anyway? @danielfireman
|
gharchive/pull-request
| 2021-08-27T05:25:04 |
2025-04-01T04:33:55.341562
|
{
"authors": [
"Manuel-Antunes"
],
"repo": "dadosjusbr/agregador",
"url": "https://github.com/dadosjusbr/agregador/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1189924151
|
🐞 Dagger: Migrating from the dagger library to core
What is the issue?
When I use dagger.#Writefile to write a file and mount it with docker.#Run.mounts, it says the mounts key is not set, instead of reporting that the dagger.#Writefile package does not exist. Finally, with the help of @grouville, I realized that some packages were migrated from dagger to core.
I think there should be a friendlier alert, such as reporting that a package does not exist, or a way to automatically link dagger to the core package.
Finally, I think this kind of migration should be reserved for a 0.2-to-0.3 migration, because it already affects my usage.
Log output
Steps to reproduce
No response
Dagger version
0.2.4
OS version
macOS12.2.1
Sorry about that. We have plans moving forward to implement deprecation warnings, but in this case it's hard because it creates a cyclic dependency error, so we decided to take the opportunity of the public launch to make this breaking change.
The fact that there was no error is dependent on upstream:
https://github.com/dagger/dagger/issues/493
|
gharchive/issue
| 2022-04-01T15:02:20 |
2025-04-01T04:33:55.350923
|
{
"authors": [
"helderco",
"lyzhang1999"
],
"repo": "dagger/dagger",
"url": "https://github.com/dagger/dagger/issues/1990",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1266684577
|
Guide for showing easiest way to develop a Universe package right in a fork of dagger/dagger
What is the issue?
Since it's complicated to get package management right and we'd love folks to share their creations with the whole Dagger community via Universe, it's easiest if they develop their packages right in a Universe context versus in a disconnected repo.
To that end we'd recommend creating a fork of dagger/dagger, creating a branch for their new package, then putting it under universe.dagger.io/x/<email>/<package> and taking advantage of the cue.mod already governing the universe.dagger.io module namespace. Their testing/develop will go easier and their finished product will be easy to contribute.
Motivation:
The current guides we have give a lot of info, but can be difficult to follow in practice.
@jpadams with upcoming changes, should we close this issue?
closing unless demand brings it back.
|
gharchive/issue
| 2022-06-09T21:36:05 |
2025-04-01T04:33:55.353403
|
{
"authors": [
"jpadams",
"mircubed"
],
"repo": "dagger/dagger",
"url": "https://github.com/dagger/dagger/issues/2609",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1937348217
|
Zenith: host directory access
Today, in zenith we have a small inconsistency in the way that host directory access works:
We can read the host's current directory:
dag.Host().Directory(".")
But we can't export to the host:
// this actually exports to the module container
dir.Export(ctx, "build/")
I think there's two points here:
As we encourage users to modularise existing dagger code, I think this will be a small papercut since this changes from the previous way of doing things. Maybe it's just a docs-level change, or maybe we could consider making it clearer in code that calling Export in a module doesn't export to the host anymore.
It's also probably a security issue that any module (no matter how deep in the chain) can access the host (or perform other privileged operations, like publishing images) - maybe we should instead consider only allowing the top-level module (e.g. dagger query or dagger call) to access the host, and then all modules have to take these as parameters, which would also make them more reusable.
We can read the host's current directory:
That call made in a Module will actually read from the module container, not the caller's host. The workdir of the module container is the directory of the module source code, so if you loaded the module from the host then you will see that directory from your host. However, it's just that directory synced into buildkit, you can't read anything from the host arbitrarily.
@jedevc Does that align with what you're seeing? If you saw behavior where you could read arbitrary files from the caller's host, then something is going wrong and we need to fix.
Yup, this tracks. That makes a lot of sense - the security issue isn't a problem then, I was misunderstanding what's actually going on.
I still think there's a papercut to resolve here at some point though - the Host isn't the host, it's the module. Though I'm not entirely sure if there's a neat fix for that.
Deduped against https://github.com/dagger/dagger/issues/6312, which is clearer.
|
gharchive/issue
| 2023-10-11T09:53:08 |
2025-04-01T04:33:55.358146
|
{
"authors": [
"jedevc",
"sipsma"
],
"repo": "dagger/dagger",
"url": "https://github.com/dagger/dagger/issues/5867",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1204061508
|
universe: Add Terraform package
Signed-off-by: Ezequiel Foncubierta ezequiel@foncubierta.com
I know there is another contribution for a Terraform package (#2110), but this one we are currently using and implementing at work, and supports a lot more features. The approach is similar to Nico's implementation, so feel free to merge it into his package, or the other way around, if necessary.
Implementation details
There is a terraform.#Run action to wrap the Terraform command and run it within the official Terraform container image, from Hashicorp. You can set the Terraform command you want to run (i.e. plan, apply), the arguments (i.e. -var-file, -var), and environment variables. For common environment variables or arguments, there is a field in terraform.#Run you can use. For example, the workspace field will become a TF_WORKSPACE environment variable.
For each particular Terraform command (i.e. plan, apply) there is an action (i.e. terraform.#Plan) that extends terraform.#Run. These actions override the command field and define additional fields when necessary. For example, a planFile field.
For things that are not "yet supported", you can always fall back to the terraform.#Run action.
The terraform.#Run actions have two outputs:
output: A dagger.#FS with the modified Terraform workspace.
outputImage: The docker.#Image that was used to run the command.
Not yet supported
Authentication in Terraform Cloud: https://www.terraform.io/cli/auth
Explicit actions (you can still use terraform.#Run though):
to import infrastructure: https://www.terraform.io/cli/import
to inspect infrastructure: https://www.terraform.io/cli/inspect
to manipulate state: https://www.terraform.io/cli/state
to manage plugins: https://www.terraform.io/cli/plugins
Use a container as input for terraform.#Run so you can chain multiple commands within the same container?
The approach is similar to Nico's implementation, so feel free to merge it into his package, or the other way around, if necessary.
Since packages are experimental, it's okay to have duplication.
We will make a choice when we will port a terraform package in the "official" universe
I amended most of the requested changes, but when using mount instead of docker.#Copy, the /src directory is empty.
I am getting the following error when running dagger do lint. I don't know exactly what's going on here.
1:49PM FTL failed to execute plan: task failed: actions.lint.cue._dag."4"._exec: process "bash --norc -e -o pipefail /bash/scripts/run.sh" did not complete successfully: exit code: 1
Re: squashing commits. I was able to do it only on commits after the rebase from main.
Really looking forward to merge this. What's missing? Anything we can help with?
Fixing lint and another approval from me after a manual test of the package, I'll do it tomorrow
Thanks @TomChv!
I manually tested the package! It's really good :rocket:
We can merge that package just after the lint errors are fixed.
@efoncubierta You can just run make cuefmt and then amend your last commit to apply lint.
We will merge your package just after that
@efoncubierta I fixed your linter error, just squash all your commits (do not forget to pull, I've fixed your lint error) and we will merge that one!
@TomChv I squashed the commits. I can't squash the two commits before the merge from the main branch.
@aluzzardi Ready for merge!
|
gharchive/pull-request
| 2022-04-14T05:37:18 |
2025-04-01T04:33:55.369246
|
{
"authors": [
"TomChv",
"aluzzardi",
"efoncubierta"
],
"repo": "dagger/dagger",
"url": "https://github.com/dagger/dagger/pull/2192",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2352579682
|
chore: add integration tests to other VCS
⚠️ This is an ugly implementation atm (lots of copy-paste), but it is a foundation for the question:
what testing strategy do we want over the modules integration tests, across VCS ?
In the current state of the PR, it basically covers every test relying on the modules test repo with its public GitLab equivalent
This won't work as is for bitbucket (because of a different valid HTMLURL), and tests will probably need to be extended to not dumbly fail over refs with .git (which work), but this is not handled properly at the moment
This relies on a mirror of dagger-module-tests:
gitlab.com/dagger-modules/test/more/dagger-test-modules-public (auto-sync to-be-done)
bitbucket incoming
cc @jedevc, I realized at the same time that the current HTMLURL() implementation does not handle bitbucket's URL format.
We currently build it as such: u := "https://" + src.Root + "/tree/" + src.Commit, but a real bitbucket example would be: https://bitbucket.org/test-travail/test/commits/b17a8e871052f7d14dbb198b7071986f997ab7a8.
I still wonder what is the best implementation: shall we store the corresponding VCS as part of the gitmodulesource ? As the HTMLURL seems resolved from the gitsource, I don't think we will have access to this information ?
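The two URL formats quoted above can be sketched as a small helper. This is a hypothetical Python illustration (the real implementation is Go inside dagger, and `html_url` is a made-up name); it only encodes the GitHub/GitLab `/tree/` form from the current code and the Bitbucket `/commits/` form from the example:

```python
def html_url(root: str, commit: str) -> str:
    """Build a browsable commit URL, guessing the VCS from the host part of root."""
    host = root.split("/")[0]
    if host == "bitbucket.org":
        # Bitbucket browses commits under /commits/<sha>
        return "https://" + root + "/commits/" + commit
    # GitHub (and GitLab) accept /tree/<ref>, matching the current implementation
    return "https://" + root + "/tree/" + commit

print(html_url("bitbucket.org/test-travail/test",
               "b17a8e871052f7d14dbb198b7071986f997ab7a8"))
```

Storing the resolved VCS on the source, as suggested above, would avoid this kind of host-name guessing.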
Both failing tests are due to flakiness
|
gharchive/pull-request
| 2024-06-14T05:59:01 |
2025-04-01T04:33:55.373710
|
{
"authors": [
"grouville"
],
"repo": "dagger/dagger",
"url": "https://github.com/dagger/dagger/pull/7663",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
314368272
|
"(50,8):Command:" Exception triggered on OK for Task Editor
First, thanks for all the work you've done on this very nice library! It allows me to edit and monitor existing tasks very nicely.
I bumped into a problem with creating a new task. A click on the OK button triggers a "(50,8):Command:" exception. See image with more detail here: http://screencast.com/t/k98w6BiKDwz3
editorForm.RegisterTaskOnAccept is set to True.
Any idea what could be causing this exception?
Originally posted: 2015-12-21T17:24:07
That is a Microsoft native library error indicated that there is something wrong with the value supplied for the Path of an ExecAction. Check to make sure you don't have invalid characters or empty string for that value.
Originally posted: 2015-12-21T18:46:38
many Thanks for the fast and accurate answer!
Originally posted: 2015-12-21T21:48:45
|
gharchive/issue
| 2018-04-14T22:45:55 |
2025-04-01T04:33:55.426472
|
{
"authors": [
"dahall"
],
"repo": "dahall/TaskScheduler",
"url": "https://github.com/dahall/TaskScheduler/issues/650",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1498884855
|
[Logic]: entrance rando: bombable walls are one way.
The issue was spotted with Magpie*: there are several bombable doors in LADX, but they only serve as a lock for bombs one-way: the other side is always open, so if you end up on the wrong side, you can pass through. Related area for Martha's Bay is logic/overworld.py, line 186, related area for Multi Chest is line 389. There are probably other instances of this, but I don't know about them yet.
The issue in the tracker here is that raft return exit should flag raft hut as a check, but it is listed under hell logic instead.
Looking into fixing these ones myself, but I need a shove in the right direction for how to fix it: how would I say "connect back to the main section as a one-way UNLESS you have bombs"?
* rando info & tracker export
settings: LhAghsHTWRDKruEIlc
seed: E292F3EC1521B28F40CF5A926A2F9A8D
magpie json export:
{"inventory":{"SHIELD":2,"SWORD":1,"TOADSTOOL":0,"MAGIC_POWDER":0,"MAX_POWDER_UPGRADE":0,"SHOVEL":1,"BOMB":0,"MAX_BOMBS_UPGRADE":0,"BOW":0,"MAX_ARROWS_UPGRADE":0,"FEATHER":1,"ROOSTER":0,"POWER_BRACELET":0,"PEGASUS_BOOTS":1,"FLIPPERS":1,"HOOKSHOT":0,"MAGIC_ROD":0,"BLUE_TUNIC":0,"RED_TUNIC":0,"OCARINA":0,"SONG1":0,"SONG2":0,"SONG3":0,"BOWWOW":0,"BOOMERANG":0,"SEASHELL":3,"TAIL_KEY":0,"ANGLER_KEY":0,"FACE_KEY":0,"BIRD_KEY":0,"SLIME_KEY":0,"GOLD_LEAF":1,"TRADING_ITEM_YOSHI_DOLL":0,"TRADING_ITEM_RIBBON":0,"TRADING_ITEM_DOG_FOOD":0,"TRADING_ITEM_BANANAS":0,"TRADING_ITEM_STICK":0,"TRADING_ITEM_HONEYCOMB":1,"TRADING_ITEM_PINEAPPLE":0,"TRADING_ITEM_HIBISCUS":0,"TRADING_ITEM_LETTER":0,"TRADING_ITEM_BROOM":0,"TRADING_ITEM_FISHING_HOOK":0,"TRADING_ITEM_NECKLACE":1,"TRADING_ITEM_SCALE":0,"TRADING_ITEM_MAGNIFYING_GLASS":0,"INSTRUMENT1":0,"KEY1":2,"NIGHTMARE_KEY1":0,"INSTRUMENT2":0,"KEY2":1,"NIGHTMARE_KEY2":0,"INSTRUMENT3":0,"KEY3":2,"NIGHTMARE_KEY3":0,"INSTRUMENT4":0,"KEY4":1,"NIGHTMARE_KEY4":0,"INSTRUMENT5":0,"KEY5":0,"NIGHTMARE_KEY5":0,"INSTRUMENT6":0,"KEY6":1,"NIGHTMARE_KEY6":0,"INSTRUMENT7":0,"KEY7":0,"NIGHTMARE_KEY7":0,"INSTRUMENT8":0,"KEY8":2,"NIGHTMARE_KEY8":0,"GREAT_FAIRY":0,"KEY9":0,"NIGHTMARE_KEY9":0},"settings":{"py/object":"magpie.LocalSettings","checkSize":"32","mapBrightness":"50","showOutOfLogic":false,"animateChecks":true,"swapMouseButtons":false,"swapItemsAndMap":false,"hideChecked":false,"ignoreHigherLogic":false,"hideVanilla":false,"dungeonItemsTemplate":"compact.html","itemsTemplate":"default.html","customDungeonItems":null,"customItems":null,"showDungeonItemCount":false,"showItemsOnly":false,"highlightItemsOnHover":true,"enableAutotracking":false},"args":{"py/object":"ladxrInterface.getArgs.<locals>.Args","flags":[],"logic":"hard","accessibility":"all","race":false,"heartpiece":true,"seashells":true,"heartcontainers":true,"instruments":false,"tradequest":true,"witch":true,"rooster":true,"boomerang":"gift","dungeon_items":"keysanity","randomstartlocation":true
,"dungeonshuffle":true,"entranceshuffle":"insanity","boss":"default","miniboss":"default","goal":"egg","itempool":"","hpmode":"default","hardmode":"none","steal":"default","bowwow":"normal","overworld":"normal","owlstatues":"","superweapons":false,"nagmessages":true,"multiworld":null},"checkedChecks":{"Mabe Village-Tarin's Gift":{"name":"Tarin's Gift","area":"Mabe Village"},"Southern Face Shrine-Under Armos Cave":{"name":"Under Armos Cave","area":"Southern Face Shrine"},"Mabe Village-Bush Field":{"name":"Bush Field","area":"Mabe Village"},"Mabe Village-Well Heart Piece":{"name":"Well Heart Piece","area":"Mabe Village"},"Toronbo Shores-Sword on the Beach":{"name":"Sword on the Beach","area":"Toronbo Shores"},"Kanalet Castle-Darknut, Zol, Bubble Leaf":{"name":"Darknut, Zol, Bubble Leaf","area":"Kanalet Castle"},"Tal Tal Heights-Damp Cave Heart Piece":{"name":"Damp Cave Heart Piece","area":"Tal Tal Heights"},"Mabe Village-Fishing Game Heart Piece":{"name":"Fishing Game Heart Piece","area":"Mabe Village"},"Ukuku Prairie-East of Seashell Mansion Bush":{"name":"East of Seashell Mansion Bush","area":"Ukuku Prairie"},"Kanalet Castle-Bomberman Meets Whack-a-mole Leaf":{"name":"Bomberman Meets Whack-a-mole Leaf","area":"Kanalet Castle"},"Kanalet Castle-In the Moat Heart Piece":{"name":"In the Moat Heart Piece","area":"Kanalet Castle"},"Tail Cave-Four Zol Chest":{"name":"Four Zol Chest","area":"Tail Cave"},"Tail Cave-Hardhat Beetles Key":{"name":"Hardhat Beetles Key","area":"Tail Cave"},"Tail Cave-Pit Button Chest":{"name":"Pit Button Chest","area":"Tail Cave"},"Tail Cave-Two Stalfos, Two Keese Chest":{"name":"Two Stalfos, Two Keese Chest","area":"Tail Cave"},"Tail Cave-Spark, Mini-Moldorm Chest":{"name":"Spark, Mini-Moldorm Chest","area":"Tail Cave"},"Tail Cave-Mini-Moldorm Spawn Chest":{"name":"Mini-Moldorm Spawn Chest","area":"Tail Cave"},"Tail Cave-Feather Chest":{"name":"Feather Chest","area":"Tail Cave"},"Tal Tal Mountains-Access Tunnel Exterior":{"name":"Access Tunnel 
Exterior","area":"Tal Tal Mountains"},"Turtle Rock-Left of Hinox Zamboni Chest":{"name":"Left of Hinox Zamboni Chest","area":"Turtle Rock"},"Turtle Rock-Vacuum Mouth Chest":{"name":"Vacuum Mouth Chest","area":"Turtle Rock"},"Turtle Rock-Left Vire Key":{"name":"Left Vire Key","area":"Turtle Rock"},"Turtle Rock-Spark, Pit Chest":{"name":"Spark, Pit Chest","area":"Turtle Rock"},"Turtle Rock-Push Block Chest":{"name":"Push Block Chest","area":"Turtle Rock"},"Turtle Rock-Right Lava Chest":{"name":"Right Lava Chest","area":"Turtle Rock"},"Turtle Rock-Zamboni, Two Zol Key":{"name":"Zamboni, Two Zol Key","area":"Turtle Rock"},"Eagle's Tower-Entrance Key":{"name":"Entrance Key","area":"Eagle's Tower"},"Mabe Village-Dog House Dig":{"name":"Dog House Dig","area":"Mabe Village"},"Toronbo Shores-Outside D1 Tree Bonk":{"name":"Outside D1 Tree Bonk","area":"Toronbo Shores"},"Mabe Village-Shop 200 Item":{"name":"Shop 200 Item","area":"Mabe Village"},"Mysterious Woods-Tail Key Chest":{"name":"Tail Key Chest","area":"Mysterious Woods"},"Mysterious Woods-Toadstool":{"name":"Toadstool","area":"Mysterious Woods"},"Tal Tal Mountains-Paphl Cave":{"name":"Paphl Cave","area":"Tal Tal Mountains"},"Tal Tal Mountains-Outside Mad Batter":{"name":"Outside Mad Batter","area":"Tal Tal Mountains"},"Kanalet Castle-Boots Pit":{"name":"Boots Pit","area":"Kanalet Castle"},"Goponga Swamp-Swampy Chest":{"name":"Swampy Chest","area":"Goponga Swamp"},"Koholint Prairie-Heart Piece of Shame":{"name":"Heart Piece of Shame","area":"Koholint 
Prairie"}},"entranceMap":{"madambowwow":"start_house","kennel":"landfill","ulrira":"banana_seller","mabe_phone":"landfill","start_house":"landfill","shop":"armos_maze_cave","papahl_house_right":"richard_house","papahl_house_left":"obstacle_cave_exit","trendy_shop":"landfill","library":"kennel","banana_seller":"shop","toadstool_entrance":"right_taltal_connector2","prairie_right_cave_top":"right_taltal_connector1","prairie_right_cave_bottom":"castle_main_entrance","obstacle_cave_exit":"castle_upper_left","papahl_entrance":"castle_secret_entrance","multichest_left":"right_taltal_connector6","right_taltal_connector3":"right_taltal_connector5","right_taltal_connector2":"raft_return_enter","right_taltal_connector6":"raft_return_exit","dream_hut":"heartpiece_swim_cave","witch":"landfill","prairie_to_animal_connector":"left_to_right_taltalentrance","seashell_mansion":"landfill","castle_secret_exit":"papahl_house_right","d6_connector_entrance":"papahl_house_left","castle_main_entrance":"multichest_top","castle_secret_entrance":"multichest_left","writes_cave_left":"multichest_right","writes_cave_right":"prairie_madbatter_connector_exit","obstacle_cave_outside_chest":"prairie_madbatter_connector_entrance","writes_house":"landfill","castle_phone":"writes_house","animal_house1":"hookshot_cave","animal_house2":"d8","animal_house3":"prairie_madbatter","animal_house4":"ghost_house","animal_house5":"d1","fire_cave_exit":"obstacle_cave_outside_chest","castle_upper_left":"obstacle_cave_entrance","castle_upper_right":"d7","toadstool_exit":"papahl_exit","left_taltal_entrance":"papahl_entrance","writes_phone":"castle_jump_cave","moblin_cave":"landfill","photo_house":"desert_cave","graveyard_cave_right":"animal_to_prairie_connector","right_taltal_connector5":"prairie_to_animal_connector","right_taltal_connector4":"prairie_right_cave_high","raft_return_exit":"prairie_right_cave_bottom","fire_cave_entrance":"prairie_right_cave_top"},"connections":[{"entrances":["toadstool_entrance","prairie
_right_cave_top"],"connector":{"id":"outer_rainbow","name":"Outer Rainbow Cave","entrances":["right_taltal_connector1","right_taltal_connector2"],"obstacleTypes":[],"checks":[]},"label":"A"},{"entrances":["prairie_right_cave_bottom","obstacle_cave_exit"],"connector":{"id":"castle","name":"Kanalet Castle","entrances":["castle_upper_left","castle_main_entrance"],"obstacleTypes":["BOMBS","SWORD"],"checks":["0x2C5","0x2D2","0x2C6"]},"label":"B"},{"entrances":["multichest_left","right_taltal_connector3"],"connector":{"id":"to_d7","name":"Path to D7","entrances":["right_taltal_connector5","right_taltal_connector6"],"obstacleTypes":[],"checks":[]},"label":"C"},{"entrances":["right_taltal_connector2","right_taltal_connector6"],"connector":{"id":"raft_return","name":"Raft Return","entrances":["raft_return_exit","raft_return_enter"],"obstacleTypes":["ONEWAY"],"checks":[]},"label":"D"},{"entrances":["castle_secret_exit","d6_connector_entrance"],"connector":{"id":"quadruplets","name":"Quadruplets' House","entrances":["papahl_house_left","papahl_house_right"],"obstacleTypes":["YOSHI"],"checks":["0x2A6-Trade"]},"label":"E"},{"entrances":["castle_main_entrance","castle_secret_entrance","writes_cave_left"],"connector":{"id":"multichest","name":"To Five Chest Game","entrances":["multichest_left","multichest_right","multichest_top"],"obstacleTypes":["BOMBS"],"checks":[]},"label":"F"},{"entrances":["writes_cave_right","obstacle_cave_outside_chest"],"connector":{"id":"bay_batter","name":"Bay Mad Batter","entrances":["prairie_madbatter_connector_entrance","prairie_madbatter_connector_exit"],"obstacleTypes":["FLIPPERS"],"checks":[]},"label":"G"},{"entrances":["papahl_house_left","fire_cave_exit","castle_upper_left"],"connector":{"id":"mountain_access","name":"Mountain 
Access","entrances":["obstacle_cave_entrance","obstacle_cave_outside_chest","obstacle_cave_exit"],"obstacleTypes":["SWORD","BOOTS","HOOKSHOT"],"checks":["0x2BB"]},"label":"H"},{"entrances":["toadstool_exit","left_taltal_entrance"],"connector":{"id":"papahl","name":"Path to Papahl","entrances":["papahl_entrance","papahl_exit"],"obstacleTypes":[],"checks":[]},"label":"I"},{"entrances":["graveyard_cave_right","right_taltal_connector5"],"connector":{"id":"under_river","name":"Under the River","entrances":["prairie_to_animal_connector","animal_to_prairie_connector"],"obstacleTypes":["BOOTS"],"checks":[]},"label":"J"},{"entrances":["right_taltal_connector4","raft_return_exit","fire_cave_entrance"],"connector":{"id":"bay_cliff","name":"Martha's Bay Cliff","entrances":["prairie_right_cave_top","prairie_right_cave_bottom","prairie_right_cave_high"],"obstacleTypes":["BOMBS","FEATHER"],"checks":[]},"label":"K"}]}
Noticed line 150: I don't think it exactly applies here, but it seems like it might help, assuming it isn't locked to overworld-entrance connections.
are conditional one-ways even possible? can i just throw in a one_way=true, and set it to one_way=false upon collection of something?
Most bomb caves are setup to allow exit without any items, see animal cave as example:
https://github.com/daid/LADXR/blob/master/logic/overworld.py#L288
But this isn't a door, this is a bombable wall. Which haven't been checked in this manner yet, so I think the right fix is:
prairie_cave_secret_exit = Location()
prairie_cave_secret_exit.connect(prairie_cave, OR(FEATHER, ROOSTER), one_way=True)
prairie_cave.connect(prairie_cave_secret_exit, AND(BOMB, OR(FEATHER, ROOSTER)), one_way=True)
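The suggested fix relies on two asymmetric one-way connections. A minimal self-contained model of that pattern (hypothetical `Location`/`reachable` helpers, not the real LADXR classes; the requirement is simplified to a single "BOMB" item instead of the full AND/OR expressions) behaves like this:

```python
class Location:
    def __init__(self, name):
        self.name = name
        self.edges = []  # outgoing (target, requirement) pairs

    def connect(self, other, requirement=None, one_way=False):
        self.edges.append((other, requirement))
        if not one_way:
            other.edges.append((self, requirement))

def reachable(start, items):
    """All locations reachable from start, given a set of owned items."""
    seen, todo = {start}, [start]
    while todo:
        loc = todo.pop()
        for target, req in loc.edges:
            if target not in seen and (req is None or req in items):
                seen.add(target)
                todo.append(target)
    return seen

prairie_cave = Location("prairie_cave")
secret_exit = Location("prairie_cave_secret_exit")
# Exiting through the already-blown-open wall is always possible...
secret_exit.connect(prairie_cave, one_way=True)
# ...but entering from the main side still requires bombs.
prairie_cave.connect(secret_exit, "BOMB", one_way=True)
```

With no items, the secret exit can reach the cave but not the other way around, which is exactly the one-way bombable-wall behavior described in the issue.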
thank you, this helps.
|
gharchive/issue
| 2022-12-15T18:15:41 |
2025-04-01T04:33:55.442299
|
{
"authors": [
"daid",
"foxsouns"
],
"repo": "daid/LADXR",
"url": "https://github.com/daid/LADXR/issues/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2567606868
|
Correct way to use upstream_proxy
Hey this tool is pretty awesome, thank you for sharing this.
I'm having a bit of trouble using the upstream_proxy parameter, both in a Python script and in the command-line implementation. It just seems like the parameter is being ignored entirely.
Also just skimming through the code, I can't see exactly where upstream_proxy is used for connections.
Again, thanks for the tool, and thanks for the help in advance.
It seems that main.go is missing
flag.StringVar(&Flags.UpstreamProxy, "upstream_proxy", "", "Upstream")
in main() when parsing flags
Yes, the parameter upstream_proxy is not effective
|
gharchive/issue
| 2024-10-05T02:38:12 |
2025-04-01T04:33:55.446217
|
{
"authors": [
"B0tton",
"ItsAlwaysBeenWankershim"
],
"repo": "daijro/hazetunnel",
"url": "https://github.com/daijro/hazetunnel/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2031084730
|
feat: allow merging PRs with Github's auto-merge
This adds support for Github's auto-merge feature:
https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/managing-auto-merge-for-pull-requests-in-your-repository
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/automatically-merging-a-pull-request
This can only be done via the GraphQL API, so I've had to vendor another library:
https://docs.github.com/en/graphql/reference/mutations#enablepullrequestautomerge
To maintain backwards compatibility, the existing method of merging a PR is left intact and as the default.
Two new flags are introduced, --pr-merge-auto & --pr-merge-auto-wait.
--pr-merge-auto enables "auto-merge" for the PR (via a GraphQL mutation). This does not wait for the PR to actually be merged. That's left up to Github once all branch protection rules pass.
--pr-merge-auto-wait can optionally wait until the PR is actually merged by Github.
Both of these flags are also exposed on a per repository basis using the mergeauto & mergeautowait options.
Notes
PRs must be in a non "clean" state when auto-merge is enabled.
Practically, this means there must be at least 1 branch protection rule that blocks the merge initially.
Cannot enable for PRs in "draft" state
Snuck in a flake.nix. Makes it easy to install/lock to any commit with Nix. e.g. nix run github:jashandeep-sohi/octopilot/feat/github-auto-merge. Happy to remove it if you like.
When --pr-merge-auto-wait is enabled, and we timeout waiting, should auto-merge for the PR be disabled? The merge could still happen at a later point if we leave it enabled. Which might be surprising.
When --pr-merge-auto-wait is enabled, and we timeout waiting, should auto-merge for the PR be disabled? The merge could still happen at a later point if we leave it enabled. Which might be surprising.
Yes I think we should avoid this kind of surprise, and disable auto-merge if we timeout.
maybe later we can add another option to leave it as-is - but if people want to use an async auto merge, they can just set the merge-auto-wait flag to false
thanks for your contribution!
I'll just let you remove the auto-merge if we time-out waiting, otherwise it's good for me
Just doublechecking: Does this require the auto merge feature to be enabled on the repos getting the PR? If yes, does it also set it? I have thousands of repos where I'd like to automerge, but their configs may differ. Curious about if/how you've solved this?
Just doublechecking: Does this require the auto merge feature to be enabled on the repos getting the PR? If yes, does it also set it? I have thousands of repos where I'd like to automerge, but their configs may differ. Curious about if/how you've solved this?
Yes, it requires the auto merge feature to be enabled on the repo.
This is a one-time step, but you bring up a good point about updating thousands of repos.
If you're managing your repo settings with terraform or the like, maybe you could do it there.
The gh CLI does allow you to do this on a per-repo basis, so you could also do something like:
gh search repos --owner=jashandeep-sohi "octopilot" --json url --jq '.[].url' | xargs -L1 gh repo edit --enable-auto-merge
I think we could attempt to do it within octopilot every time, if not enabled.
I believe this can be done via the REST API by PATCHing allow_auto_merge. But this means needing repo admin permissions, which I was trying to avoid.
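For illustration, here is a hedged Python sketch of building the payload for the enablePullRequestAutoMerge GraphQL mutation this PR uses (the mutation name comes from the GitHub docs linked above; the function name and the PR node ID are made up, and actually sending the HTTP request is omitted so the sketch stays self-contained):

```python
import json

def enable_auto_merge_payload(pull_request_id: str, method: str = "MERGE") -> str:
    """Build the JSON body for GitHub's enablePullRequestAutoMerge mutation."""
    mutation = """
    mutation($prId: ID!, $method: PullRequestMergeMethod!) {
      enablePullRequestAutoMerge(
        input: {pullRequestId: $prId, mergeMethod: $method}
      ) {
        pullRequest { autoMergeRequest { enabledAt } }
      }
    }
    """
    return json.dumps({
        "query": mutation,
        "variables": {"prId": pull_request_id, "method": method},
    })

# "PR_kwDOtest" is a placeholder node ID, not a real pull request
payload = enable_auto_merge_payload("PR_kwDOtest")
```

This is only a sketch of the wire format; octopilot itself vendors a GraphQL library for this, as described above.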
|
gharchive/pull-request
| 2023-12-07T16:13:06 |
2025-04-01T04:33:55.465477
|
{
"authors": [
"MPV",
"jashandeep-sohi",
"vbehar"
],
"repo": "dailymotion-oss/octopilot",
"url": "https://github.com/dailymotion-oss/octopilot/pull/292",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
276482422
|
Timeout value is being ignored - a 75 sec timeout is used instead
Hi,
I'm using version 3.0 with Swift 4.
When connecting to a web socket server, the timeout interval is being ignored, since the "disconnect" callback is only triggered after 75 seconds, instead of the timeout value that I define (I've tried several values like 5 or 10 seconds, but I get the same result).
In order to test the timeout, I'm disabling the WiFi in my iPhone before trying to establish a connection with a web socket server.
Here's the code snippet I'm using when defining the url timeout:
var urlRequest = URLRequest(url: url)
urlRequest.timeoutInterval = TimeInterval(10)
webSocket = WebSocket(request: urlRequest)
Is there anything I could be doing wrong?
Thanks!
I've also tried to implement my own login timeout logic, using a dispatch source timer, and by forcing the web socket to disconnect when the timeout is reached. However, I'm not able to interrupt an on-going connection attempt when forcing the web socket to disconnect -
webSocket.disconnect(forceTimeout: TimeInterval(0)). The connection only times out after 75 seconds.
Does the disconnect function only work after a connection is successfully established?
I'm experiencing the exact same issue on 3.0.4. I have my own timeout which forces the connection to be shut down. Using webSocket.disconnect(forceTimeout: TimeInterval(0)) freezes the UI for more than a minute. Odd thing is, if I use webSocket.disconnect(forceTimeout: nil) I manage to shut down and reestablish the connection.
Do you have any thoughts on what the issue might be? Can you confirm that 3.0.4 includes the fix?
|
gharchive/issue
| 2017-11-23T22:55:12 |
2025-04-01T04:33:55.486047
|
{
"authors": [
"SnitramDev",
"brunomorgado"
],
"repo": "daltoniam/Starscream",
"url": "https://github.com/daltoniam/Starscream/issues/429",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
171031888
|
"Load" and "save" buttons in configuration tab are confusing.
Because it seems like they should load and save everything, not just the tunable parameters.
Added texts: https://github.com/damellis/ESP/commit/b1fcee6d79de4e7c7497512e237113c7b23de33b.
|
gharchive/issue
| 2016-08-13T22:56:37 |
2025-04-01T04:33:55.487487
|
{
"authors": [
"damellis",
"nebgnahz"
],
"repo": "damellis/ESP",
"url": "https://github.com/damellis/ESP/issues/330",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
227097375
|
Improve documentation
Current documentation is very "Akeneo PIM" oriented, but most of it can be applied to any PHP application run with Docker.
The "Compose" section, both Apache and FPM, should be split:
a first part on simple PHP, or even Symfony, applications: using the images with Compose, managing the containers, XDebug configuration, (upcoming) PHPStorm configuration…
a second part with only elements specific to Akeneo: install + configuration, behats…
This could also be the opportunity to put the doc on a true website (dedicated or using GitHub pages?), with an integration of the Changelog, to make it more visible.
FPM section for Akeneo is incomplete, as it needs specific nginx server configurations, one for prod/dev, one for behat, with root at /home/docker/pim (default is /home/docker/application).
Also, the specific configuration for MySQL 5.7 could be added.
|
gharchive/issue
| 2017-05-08T16:13:31 |
2025-04-01T04:33:55.492949
|
{
"authors": [
"damien-carcel"
],
"repo": "damien-carcel/Dockerfiles",
"url": "https://github.com/damien-carcel/Dockerfiles/issues/199",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
83333519
|
Add Tasker APIs
From @GoogleCodeExporter on May 31, 2015 11:28
http://tasker.dinglisch.net/invoketasks.html
Original issue reported on code.google.com by damonkoh...@gmail.com on 8 Nov 2010 at 8:05
Copied from original issue: damonkohler/android-scripting#473
From @GoogleCodeExporter on May 31, 2015 11:28
Issue 464 has been merged into this issue.
Original comment by damonkoh...@gmail.com on 8 Nov 2010 at 1:43
From @GoogleCodeExporter on May 31, 2015 11:28
This is the same as issue 464.
Original comment by dch...@gmail.com on 8 Nov 2010 at 1:35
From @GoogleCodeExporter on May 31, 2015 11:28
See this:
http://groups.google.com/group/taskerpro/browse_thread/thread/5577b2ee589e5d64#
Original comment by dch...@gmail.com on 30 Jun 2011 at 3:16
From @GoogleCodeExporter on May 31, 2015 11:28
Shouldn't this just be the sendBroadcast stuff? Which should really be just a
copy of the startActivity stuff with the optional receiver permissions somehow
added? Or am I missing something important?
Original comment by slisto...@gmail.com on 28 Feb 2011 at 5:09
From @GoogleCodeExporter on May 31, 2015 11:28
Until this is implemented, what kind of workarounds exist for passing values
from sl4a to Tasker? Passing values via text files?
Original comment by tomlo...@gmail.com on 30 Jun 2011 at 2:28
From @GoogleCodeExporter on May 31, 2015 11:28
When I try to use the sendbroadcast published API
http://code.google.com/p/android-scripting/wiki/ApiReference#sendBroadcast I
get :-
V/sl4a.JsonRpcServer$ConnectionThread:89( 1608): Received: {"params": [[1,
null, "com.googlecode.android_scripting.rpc.RpcError: Unknown RPC."]], "id": 2,
"method": "sendBroadcastIntent"}
V/sl4a.JsonRpcServer$ConnectionThread:132( 1608): Sent:
{"error":"com.googlecode.android_scripting.rpc.RpcError: Unknown
RPC.","id":2,"result":null}
in my logcat output.
Am I right in concluding that this RPC isn't implemented yet?
Original comment by warrell....@gmail.com on 1 Apr 2011 at 12:09
|
gharchive/issue
| 2015-06-01T05:58:43 |
2025-04-01T04:33:55.504714
|
{
"authors": [
"damonkohler"
],
"repo": "damonkohler/sl4a",
"url": "https://github.com/damonkohler/sl4a/issues/192",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
739421574
|
Palette colors 16,32,…240?
Officially the palette has 256 colors, but it seems impossible to use some of these colors. The 4 LSB for pixels are taken from the tile graphics, where (local) color 0x0 always means transparent. When the hindmost layer is transparent it will default to (global) color 0x00. Does this mean that other 0x?0 palette color indices (so 16,32,…240) are never used?
These colors usually aren't used. They can be used with the affine layer since it's an 8bit layer. It gets a special opacity check here since only color 0 will make it transparent with the rest being opaque.
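For illustration, the indexing described can be sketched as a quick check. Assumption: a 4-bit layer's global palette index is `(palette_bank << 4) | tile_nibble` — inferred from the description above, not taken from this repo's source.

```shell
# tile_nibble 0 is transparent, so opaque 4-bit pixels can never land on a
# global index whose low nibble is 0 (0x10, 0x20, ..., 0xF0); only the
# backdrop falls through to color 0x00.
count=0
for bank in $(seq 0 15); do
  for nibble in $(seq 1 15); do     # nibble 0 skipped: it means transparent
    idx=$(( (bank << 4) | nibble ))
    if [ $(( idx & 15 )) -eq 0 ]; then count=$(( count + 1 )); fi
  done
done
echo "opaque pixels hitting a 0x?0 index: $count"
```

The loop enumerates every reachable bank/nibble combination and confirms none of them produce an index with a zero low nibble.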
Thanks for the explanation, glad to know my reasoning here was on the right track. I wasn't aware that the affine layer was fully 8-bit! Haven't really done anything with that yet.
So this seems to be by design, and was more of a question in the first place, so closing the issue.
|
gharchive/issue
| 2020-11-09T22:47:23 |
2025-04-01T04:33:55.537337
|
{
"authors": [
"dan-rodrigues",
"vmedea"
],
"repo": "dan-rodrigues/icestation-32",
"url": "https://github.com/dan-rodrigues/icestation-32/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1420830528
|
Torrentio Brazuca
Addon Manifest URL
https://torrentio.strem.fun/brazuca/manifest.json
Addon Description
Preconfigured version of Torrentio addon for Brazilian content. To configure advanced options visit https://torrentio.strem.fun/brazuca
Language of Content
Brazilian Portuguese (pt-BR)
LGTM
Config not working
Some sites I know:
https://bludvfilmes.tv/
https://torrentdosfilmes.site/ = only opens on a smartphone
https://comando.la/
https://thepiratefilmes.vip/
https://cinematico.fun/
https://comandoplay.com/
https://lapumia.net/
https://dfilmestorrent.org/
https://baixotorrent.com/
https://torrentfilmes.fun/
https://hiperfilmes.net/
https://apachetorrent.com/
https://ofilmetorrent.com/
https://torrentfilmes.com.br/
https://tiacidinha.com/
https://thepiratetorrent.tech/
https://filmeshdtorrent.megatorrents.info/
https://wolverdon.net/
https://brtorrents.org/
https://topdezfilmes.de/
https://limontorrents.com/
https://flixtorrentv.com/
What happened:
https://adorocinematorrent.com.atlaq.com/
https://vacatorrent.com.atlaq.com/
It's working just fine for me.
I get dual audio, which didn't happen with the plain 'torrentio'.
There's just one thing: since I have both 'torrentio' and 'torrentio brazuca' installed, it would be nice if there were a distinction in the name within Stremio. When I go to the list, I can choose: 'All, Torrentio, Torrentio.' It would be great if it were possible to change the name to 'TorrentioBR' or 'Torrentio Brazuca' directly in the list.
Anyway, it's just a small detail that doesn't affect the proper functioning of the addon.
Thank you.
|
gharchive/issue
| 2022-10-24T13:26:26 |
2025-04-01T04:33:55.551492
|
{
"authors": [
"Donisajo",
"MullerHub",
"TheBeastLT",
"jfurlan55",
"nowadays666"
],
"repo": "danamag/stremio-addons-list",
"url": "https://github.com/danamag/stremio-addons-list/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2177936853
|
Populates local annex instead of dropbox, although marked "populated".
The idea is that we do not keep data locally but rather copy from web to dropbox. But it seems that we are amassing data locally
(dandisets-2) dandi@drogon:/mnt/backup/dandi/dandisets$ du -scm */.git/annex/objects | grep -v '^0'
40 000025/.git/annex/objects
1383 000026/.git/annex/objects
1579 000035/.git/annex/objects
23502 000037/.git/annex/objects
41886 000039/.git/annex/objects
767 000043/.git/annex/objects
33395 000114/.git/annex/objects
8768 000121/.git/annex/objects
48 000122/.git/annex/objects
3088 000125/.git/annex/objects
1739 000127/.git/annex/objects
1152 000128/.git/annex/objects
49 000129/.git/annex/objects
27 000130/.git/annex/objects
2627 000148/.git/annex/objects
...
Taking a sample dandiset
(dandisets-2) dandi@drogon:/mnt/backup/dandi/dandisets$ ds=000674; ( cd $ds && du -scm .git/annex/objects; git annex find --in here --and --not --in dandi-dandisets-dropbox; )
1469 .git/annex/objects
1469 total
sub-YinMPscMs1mm/ses-20230610h05m38s48/micr/sub-YinMPscMs1mm_ses-20230610h05m38s48_sample-ACEProteinSMI_SMI_stain-SMI.tiff
sub-YinMPscMs1mm/ses-20230611h01m56s56/micr/sub-YinMPscMs1mm_ses-20230611h01m56s56_sample-ACEProteinSox10_SOX10_stain-SOX10.tiff
sub-YinMPscMs1mm/ses-20230611h22m30s24/micr/sub-YinMPscMs1mm_ses-20230611h22m30s24_sample-ACEProteinAQP4_AQP4_stain-AQP4.tiff
so we have 3 files present locally but not in dropbox. But it was marked as populated:
(dandisets-2) dandi@drogon:/mnt/backup/dandi/dandisets/000674$ git show | head -n 1
commit 1585e628468b5970e77e1f621f5c2ada15371934
(dandisets-2) dandi@drogon:/mnt/backup/dandi/dandisets/000674$ git config dandi.populated
1585e628468b5970e77e1f621f5c2ada15371934
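For reference, the populated mark shown above is just a commit hash stored in git config; per the backups2datalad log further below, the check amounts to something like this sketch (the `is_populated` helper name is mine, not from backups2datalad):

```shell
# A dandiset counts as populated iff git config dandi.populated
# equals the current HEAD commit hash.
is_populated() {
  mark="$(git -C "$1" config --get dandi.populated || true)"
  head="$(git -C "$1" show -s --format=%H)"
  [ -n "$mark" ] && [ "$mark" = "$head" ]
}
```

So re-population is triggered whenever new commits land after the mark was written, and unsetting the config value forces a rerun.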
Some unfinished attempt to troubleshoot -- got to the point of hitting a libssl issue.
If I unset that (git config --unset dandi.populated) and rerun the cron script
(dandisets-2) dandi@drogon:/mnt/backup/dandi/dandisets/000674$ flock -E 0 -e -n /home/dandi/.run/backup2datalad-populate-cron.lock /mnt/backup/dandi/dandisets/tools/backups2datalad-populate-assets-cron 000674
> backups2datalad -l DEBUG --backup-root /mnt/backup/dandi --config tools/backups2datalad.cfg.yaml populate 000674
but that one hung there, and in ps auxw -H it didn't show the backups2datalad process under that bash script hierarchy (I found it later detached, so it is odd). I interrupted it
but running it directly until completion led me to
(dandisets-2) dandi@drogon:/mnt/backup/dandi/dandisets$ backups2datalad -l DEBUG --backup-root /mnt/backup/dandi --config tools/backups2datalad.cfg.yaml populate 000674
2024-03-10T18:24:41-0400 [INFO ] backups2datalad: Saving logs to /mnt/backup/dandi/dandisets/.git/dandi/backups2datalad/2024.03.10.22.23.49Z.log
2024-03-10T18:24:41-0400 [INFO ] backups2datalad: COMMAND: /home/dandi/miniconda3/envs/dandisets-2/bin/backups2datalad -l DEBUG --backup-root /mnt/backup/dandi --config tools/backups2datalad.cfg.yaml populate 000674
2024-03-10T18:24:41-0400 [DEBUG ] backups2datalad: Running: git -c receive.autogc=0 -c gc.auto=0 config --get dandi.populated [cwd=/mnt/backup/dandi/dandisets/000674]
2024-03-10T18:24:41-0400 [DEBUG ] backups2datalad: Finished [rc=1]: git -c receive.autogc=0 -c gc.auto=0 config --get dandi.populated [cwd=/mnt/backup/dandi/dandisets/000674]
2024-03-10T18:24:41-0400 [DEBUG ] backups2datalad: Running: git -c receive.autogc=0 -c gc.auto=0 show -s --format=%H [cwd=/mnt/backup/dandi/dandisets/000674]
2024-03-10T18:24:41-0400 [DEBUG ] backups2datalad: Finished [rc=0]: git -c receive.autogc=0 -c gc.auto=0 show -s --format=%H [cwd=/mnt/backup/dandi/dandisets/000674]
2024-03-10T18:24:41-0400 [INFO ] backups2datalad: Copying files for Dandiset 000674 to backup remote
2024-03-10T18:24:41-0400 [DEBUG ] backups2datalad: Opening pipe to `git -c receive.autogc=0 -c gc.auto=0 annex copy -c annex.retry=3 --jobs 3 --fast --to dandi-dandisets-dropbox --exclude '.dandi/*' --json --json-error-messages` [cwd=/mnt/backup/dandi/dandisets/000674]
2024-03-10T18:25:19-0400 [DEBUG ] backups2datalad: Command `git -c receive.autogc=0 -c gc.auto=0 annex copy -c annex.retry=3 --jobs 3 --fast --to dandi-dandisets-dropbox --exclude '.dandi/*' --json --json-error-messages` [cwd=/mnt/backup/dandi/dandisets/000674] exited with return code 0
2024-03-10T18:25:19-0400 [INFO ] backups2datalad: git-annex copy -c annex.retry=3 --jobs 3 --fast --to dandi-dandisets-dropbox --exclude '.dandi/*': 3 files succeeded, 0 files failed
2024-03-10T18:25:19-0400 [DEBUG ] backups2datalad: Opening pipe to `git -c receive.autogc=0 -c gc.auto=0 annex copy -c annex.retry=3 --jobs 3 --fast --from web --to dandi-dandisets-dropbox --exclude '.dandi/*' --json --json-error-messages` [cwd=/mnt/backup/dandi/dandisets/000674]
2024-03-10T18:25:24-0400 [DEBUG ] backups2datalad: Command `git -c receive.autogc=0 -c gc.auto=0 annex copy -c annex.retry=3 --jobs 3 --fast --from web --to dandi-dandisets-dropbox --exclude '.dandi/*' --json --json-error-messages` [cwd=/mnt/backup/dandi/dandisets/000674] exited with return code 0
2024-03-10T18:25:24-0400 [INFO ] backups2datalad: git-annex copy -c annex.retry=3 --jobs 3 --fast --from web --to dandi-dandisets-dropbox --exclude '.dandi/*': 22 files succeeded, 0 files failed
2024-03-10T18:25:24-0400 [DEBUG ] backups2datalad: Running: git -c receive.autogc=0 -c gc.auto=0 push github git-annex [cwd=/mnt/backup/dandi/dandisets/000674]
2024-03-10T18:25:24-0400 [WARNING ] backups2datalad: Failed [rc=128]: git -c receive.autogc=0 -c gc.auto=0 push github git-annex [cwd=/mnt/backup/dandi/dandisets/000674]
Stdout: <empty>
Stderr:
OpenSSL version mismatch. Built against 30000070, you have 30200010
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
2024-03-10T18:25:24-0400 [ERROR ] backups2datalad: Job failed on input PosixPath('/mnt/backup/dandi/dandisets/000674'):
Traceback (most recent call last):
File "/home/dandi/miniconda3/envs/dandisets-2/lib/python3.10/site-packages/backups2datalad/aioutil.py", line 177, in dowork
outp = await func(inp)
File "/home/dandi/miniconda3/envs/dandisets-2/lib/python3.10/site-packages/backups2datalad/__main__.py", line 558, in populate
await ds.call_git("push", "github", "git-annex")
File "/home/dandi/miniconda3/envs/dandisets-2/lib/python3.10/site-packages/backups2datalad/adataset.py", line 219, in call_git
await aruncmd(
File "/home/dandi/miniconda3/envs/dandisets-2/lib/python3.10/site-packages/backups2datalad/aioutil.py", line 224, in aruncmd
raise e
File "/home/dandi/miniconda3/envs/dandisets-2/lib/python3.10/site-packages/backups2datalad/aioutil.py", line 206, in aruncmd
r = await anyio.run_process(argstrs, **kwargs)
File "/home/dandi/miniconda3/envs/dandisets-2/lib/python3.10/site-packages/anyio/_core/_subprocesses.py", line 85, in run_process
raise CalledProcessError(cast(int, process.returncode), command, output, errors)
subprocess.CalledProcessError: Command '['git', '-c', 'receive.autogc=0', '-c', 'gc.auto=0', 'push', 'github', 'git-annex']' returned non-zero exit status 128.
1 populate job failed
so it failed to push, but it did succeed in copying some stuff
$ git-annex copy -c annex.retry=3 --jobs 3 --fast --from web --to dandi-dandisets-dropbox --exclude '.dandi/*': 22 files succeeded, 0 files failed
and here we have only 3 files
(dandisets-2) dandi@drogon:/mnt/backup/dandi/dandisets/000674$ git annex find --in here
sub-YinMPscMs1mm/ses-20230610h05m38s48/micr/sub-YinMPscMs1mm_ses-20230610h05m38s48_sample-ACEProteinSMI_SMI_stain-SMI.tiff
sub-YinMPscMs1mm/ses-20230611h01m56s56/micr/sub-YinMPscMs1mm_ses-20230611h01m56s56_sample-ACEProteinSox10_SOX10_stain-SOX10.tiff
sub-YinMPscMs1mm/ses-20230611h22m30s24/micr/sub-YinMPscMs1mm_ses-20230611h22m30s24_sample-ACEProteinAQP4_AQP4_stain-AQP4.tiff
and there are altogether 25 files on dropbox
(dandisets-2) dandi@drogon:/mnt/backup/dandi/dandisets/000674$ git annex find --in dandi-dandisets-dropbox | wc -l
25
I think because we are doing a generic copy, we might end up with some copies locally if interrupted? Or are they from a previous run?
meanwhile I think we should
add --not --in dandi-dandisets-dropbox to the copy command so we do not even go through the files if we know they are already there
investigate and solve ssh issue
but the mystery might not be solved by that alone.
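The first suggested change could look roughly like this (a sketch assembled from the copy invocation visible in the logs above; untested against the actual repositories):

```shell
# Skip files already known to be on the backup remote instead of
# re-checking every file on each populate run.
git annex copy -c annex.retry=3 --jobs 3 --fast \
    --from web --to dandi-dandisets-dropbox \
    --not --in dandi-dandisets-dropbox \
    --exclude '.dandi/*'
```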
I will run the following in a loop across all dandisets to move & drop everything accumulated locally
(base) dandi@drogon:/mnt/backup/dandi/dandisets$ ds=000674; ( cd $ds && du -scm .git/annex/objects; git annex move -J 3 --to dandi-dandisets-dropbox --fast --in here --not --in dandi-dandisets-dropbox && git annex drop --all && datalad clean; du -scm .git/annex/objects; )
1469 .git/annex/objects
1469 total
drop SHA256E-s513106608--1af40ffeab3eaaadd0da46966a998ba916f13df2143cb0c5da4e8ad77099c80d.tiff ok
drop SHA256E-s513106608--c70bd8ee7b0bb18cf83662bf0ac27bd5a18f329b6d01f65968d951bb39859dad.tiff ok
drop SHA256E-s513106608--96c8c60dc23d42d63ab7859ae6fd02731b4e3499bdf83c1882164306bcdc1958.tiff ok
(recording state in git...)
clean(ok): .git/annex/tmp (directory) [Removed empty temporary annex directory]
clean(ok): .git/annex/transfer (directory) [Removed 3 annex temporary transfer directories: download, failed, upload]
0 .git/annex/objects
0 total
@yarikoptic How were git and the OpenSSL library installed on the system? Could you try updating/reinstalling both of them?
@yarikoptic Running conda update git in the dandisets-2 environment seemed to fix the problem.
FTR, after I did some manual cleaning etc we have
dandi@drogon:/mnt/backup/dandi/dandisets$ du -scm */.git/annex/objects | grep -v '^0'
89737 000559/.git/annex/objects
89737 total
but the population script is running... I will check tomorrow if anything changes (i.e. whether we get this down, so it was only temporary). If not, it would mean this might need further investigation
ATM we have a good number of files not backed up. I have run
for d in 000*; do git -C $d annex find --not --in dandi-dandisets-dropbox | grep . && echo $d; done | tee /tmp/nobackedup.txt
and output is at http://www.oneukrainian.com/tmp/nobackedup.txt
The above is yet to be troubleshot -- maybe our copy or move is somehow not sufficient, which led to the above situation where we update the last-population mark but then end up with assets not in the backup. Please troubleshoot, @jwodder
@yarikoptic Current observations:
The populate cronjobs are currently disabled, with a comment stating that they're currently run "manually in screen." However, based on the logs, it seems that the last time the populate command was run was April 12.
I ran the below script to list all assets that are not currently in Dropbox and that were last modified before the time of the last populate command:
#!/bin/bash
set -ex
cd /mnt/backup/dandi/dandisets
CUTOFF=1712962995 # 2024-04-12T23:03:15Z
for d in 00*
do
git -C "$d" annex find --not --in dandi-dandisets-dropbox \
| while read -r fpath
do
changed="$(git -C "$d" log -1 --format=%cd --date=unix "$fpath")"
if [ "$changed" -lt "$CUTOFF" ]
then echo "$d $fpath $(date -u --date=@"$changed" +%FT%TZ)" | tee -a ~/undropped-added.txt
fi
done
done
This returned 27905 assets in 111 Dandisets. However, all of the Dandisets either are currently embargoed or (in the case of 000408) were only unembargoed after the latest populate run.
The logs for the latest populate run list a number of failed git annex copy commands, all with the error message "no known url". All the ones I've checked have been for embargoed Dandisets, and the assets that failed to copy do have web URLs registered. I think the failure is because backups2datalad isn't passing DATALAD_dandi_token to git-annex copy. For the record, do we want to copy embargoed assets to Dropbox or not?
Thanks for the investigation. I could have sworn that I reran population a few times since then, but "logs do not lie", so it might have been that bad.
dropbox is soon to go away, that is why I am populating (fetching from S3) released versions of dandisets. I guess we could still run the population job to feed them to dropbox as well for the time being, but I think that ample dropbox will cease to be available within a month or so.
I think the failure is because backups2datalad isn't passing DATALAD_dandi_token to git-annex copy. For the record, do we want to copy embargoed assets to Dropbox or not?
I vaguely remember doing something about authentication recently but I think it was about github ... forgot. Since dropbox is soon no more, let's not bother populating for embargoed, but we need to make sure that we are adding urls for embargoed assets correctly, IIRC they are to be handled via datalad special remote from API URLs. Hence we have that
https://github.com/dandi/backups2datalad/issues/35
@yarikoptic I've updated the code to not populate embargoed Dandisets (#47), and #35/#36 is blocked by a need for newer git-annex. Is there anything more to be done for this issue specifically?
@yarikoptic Ping.
I think there is a remaining issue here but as I am actually now fetching content for released dandisets to drogon, it becomes kinda difficult to test/troubleshoot, so let's indeed close for now.
|
gharchive/issue
| 2024-03-10T22:38:33 |
2025-04-01T04:33:55.568917
|
{
"authors": [
"jwodder",
"yarikoptic"
],
"repo": "dandi/backups2datalad",
"url": "https://github.com/dandi/backups2datalad/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
350067562
|
Support installing default pip packages
Subj. Allows you to install default packages on each python install.
Basically same as in nodejs / ruby official plugins:
https://github.com/asdf-vm/asdf-nodejs/blob/master/bin/install#L238-L252
https://github.com/asdf-vm/asdf-ruby/blob/master/bin/install#L52-L71
pkgs should be listed in ~/.default-eggs, one per line.
I'm not sure if "egg" is the correct name for Python packages, though.
any updates on this?
Hi, sorry for the delay and thank you for the PR.
I think this feature is a good idea, but I do not like the name of the file, as egg is a packaging format which is being replaced by wheel. As this is a way to install Python packages, I would rather simply name the file ~/.default-python-packages. What do you think?
Yep, looks better.
Should be good now?
Great, thank you!
Should update the readme with this!
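For anyone landing here, the behavior being added can be sketched like this. This is my own sketch, assuming the agreed ~/.default-python-packages file with one package per line; the function name and the comment/blank-line handling are my additions and may differ from what the plugin actually does:

```shell
install_default_python_packages() {
  packages_file="${1:-$HOME/.default-python-packages}"
  installer="${2:-pip install}"               # overridable, e.g. "echo" for a dry run
  [ -f "$packages_file" ] || return 0         # nothing to do if the file is absent
  while IFS= read -r pkg; do
    case "$pkg" in ''|'#'*) continue ;; esac  # skip blank lines and comments
    $installer "$pkg"
  done < "$packages_file"
}
```

For example, `install_default_python_packages ~/.default-python-packages echo` just prints the package names instead of installing them.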
|
gharchive/pull-request
| 2018-08-13T15:00:12 |
2025-04-01T04:33:55.608976
|
{
"authors": [
"adamyonk",
"danhper",
"rlex",
"tuvistavie"
],
"repo": "danhper/asdf-python",
"url": "https://github.com/danhper/asdf-python/pull/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2512069914
|
🛑 Ramen Katsu is down
In 787a903, Ramen Katsu (https://systemalpha.net/ramenkatsu) was down:
HTTP code: 403
Response time: 456 ms
Resolved: Ramen Katsu is back up in edc2324 after 11 minutes.
|
gharchive/issue
| 2024-09-07T20:57:25 |
2025-04-01T04:33:55.631342
|
{
"authors": [
"danichrisd"
],
"repo": "danichrisd/up",
"url": "https://github.com/danichrisd/up/issues/364",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
124557606
|
[Feature] Is Row dirty
Hello Daniel,
It's super that you finally implemented the Large/Small Dialog for cell editing.
I asked myself how I can use this demo application in real life.
What I want is to enter multiple cell values, THEN click a create/add button positioned on the edited row. This button should only be enabled when the cell data is valid.
Thus I need an md-row isValid() function.
Furthermore, the content of invalid cells should not be rejected when I leave the cell or confirm via the enter/return key.
Without this new "row inline editing" behavior I see little real-world use for the new editing dialogs. Nearly nobody is saving a single cell value to the database. Only a full entity.
I think I know what you mean but if you want to edit multiple cells before saving, I think it would be better to open a custom dialog. You can create a custom dialog using this module's $mdEditDialog service or Angular Material's $mdDialog service.
You can then place an icon in a table cell to launch a dialog for the item you want to edit.
I think I know what you mean but if you want to edit multiple cells before saving, I think it would be better to open a custom dialog. You can create a custom dialog using this module's $mdEditDialog service or Angular Material's $mdDialog service.
You can then place an icon in a table cell to launch a dialog for the item you want to edit.
Some months later I take back what I said. Modifying a single field in the UI ;-)
"Nearly nobody is saving a single cell value to the database. Only a full entity."
|
gharchive/issue
| 2016-01-01T19:07:03 |
2025-04-01T04:33:55.635253
|
{
"authors": [
"bastienJS",
"daniel-nagy"
],
"repo": "daniel-nagy/md-data-table",
"url": "https://github.com/daniel-nagy/md-data-table/issues/226",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
269700950
|
UnknownHostException
Hi,
First of all, thanks for the cool tutorial - but I have some problems setting it all up:
Whitelabel Error Page
This application has no explicit mapping for /error, so you are seeing this as a fallback.
Mon Oct 30 17:41:23 UTC 2017
There was an unexpected error (type=Internal Server Error, status=500).
I/O error on GET request for "http://productcatalogue:8020/products": productcatalogue; nested exception is java.net.UnknownHostException: productcatalogue
I'm getting it after executing:
minikube service shopfront
What's the problem? All containers run correctly. Am I missing something?
Hey @Opalo,
Thanks for the feedback about the tutorial! In regards to your issue, could you let me know what URL you are hitting when you get this issue?
Could you also show me the output of kubectl get svc please? You should get something like:
NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes         10.0.0.1     <none>        443/TCP          31d
productcatalogue   10.0.0.37    <nodes>       8020:31803/TCP   30d
shopfront          10.0.0.216   <nodes>       8010:31208/TCP   30d
stockmanager       10.0.0.149   <nodes>       8030:30723/TCP   30d
What I'm getting after running kubectl get svc is as follows:
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master*] kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4h
productcatalogue NodePort 10.0.0.242 <none> 8020:32010/TCP 4h
shopfront NodePort 10.0.0.193 <none> 8010:30403/TCP 4h
stockmanager NodePort 10.0.0.173 <none> 8030:31954/TCP 4h
Running: minikube service shopfront opens the following URL: http://192.168.99.100:30403/
Thanks for prompt reply!
Here's also an interesting part:
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master*] kubectl get pods
NAME READY STATUS RESTARTS AGE
productcatalogue-gm487 1/1 Running 39 11h
shopfront-hwdlw 1/1 Running 0 11h
stockmanager-clnh9 1/1 Running 0 11h
Why does productcatalogue-gm487 get restarted?
Thanks for this, and a nice piece of debugging on your part with the look at the pods. My guess is that minikube has not been given enough RAM to play with, and so the productcatalogue JVM is getting OOM-killed due to the small amount of RAM given to this container, which causes the pod to be restarted.
I think I should have made it clearer in the article that you really need to give minikube 3GB, and ideally 4GB+. This can be done when starting minikube:
$ minikube start --cpus 2 --memory 4096
If you stop and restart minikube with these flags does this solve your issue?
Thanks! I run minikube as follows:
minikube start --cpus 2 --memory 8192
Then applied all the pods. Same result. See the output below:
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master*] kubectl logs -p productcatalogue-b5tr4
INFO [2017-10-31 08:27:40,366] org.eclipse.jetty.util.log: Logging initialized @1432ms to org.eclipse.jetty.util.log.Slf4jLog
INFO [2017-10-31 08:27:40,443] io.dropwizard.server.DefaultServerFactory: Registering jersey handler with root path prefix: /
INFO [2017-10-31 08:27:40,445] io.dropwizard.server.DefaultServerFactory: Registering admin handler with root path prefix: /
INFO [2017-10-31 08:27:40,779] io.dropwizard.server.DefaultServerFactory: Registering jersey handler with root path prefix: /
INFO [2017-10-31 08:27:40,779] io.dropwizard.server.DefaultServerFactory: Registering admin handler with root path prefix: /
INFO [2017-10-31 08:27:40,780] io.dropwizard.server.ServerFactory: Starting product-list-service
INFO [2017-10-31 08:27:40,980] org.eclipse.jetty.setuid.SetUIDListener: Opened application@6c0e13b7{HTTP/1.1,[http/1.1]}{0.0.0.0:8020}
INFO [2017-10-31 08:27:40,981] org.eclipse.jetty.setuid.SetUIDListener: Opened admin@22eaa86e{HTTP/1.1,[http/1.1]}{0.0.0.0:8025}
INFO [2017-10-31 08:27:40,984] org.eclipse.jetty.server.Server: jetty-9.4.z-SNAPSHOT
INFO [2017-10-31 08:27:41,571] io.dropwizard.jersey.DropwizardResourceConfig: The following paths were found for the configured resources:
GET /products (uk.co.danielbryant.djshopping.productcatalogue.resources.ProductResource)
GET /products/{id} (uk.co.danielbryant.djshopping.productcatalogue.resources.ProductResource)
INFO [2017-10-31 08:27:41,572] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@7a682d35{/,null,AVAILABLE}
INFO [2017-10-31 08:27:41,577] io.dropwizard.setup.AdminEnvironment: tasks =
POST /tasks/log-level (io.dropwizard.servlets.tasks.LogConfigurationTask)
POST /tasks/gc (io.dropwizard.servlets.tasks.GarbageCollectionTask)
INFO [2017-10-31 08:27:41,584] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@b791a81{/,null,AVAILABLE}
INFO [2017-10-31 08:27:41,596] org.eclipse.jetty.server.AbstractConnector: Started application@6c0e13b7{HTTP/1.1,[http/1.1]}{0.0.0.0:8020}
INFO [2017-10-31 08:27:41,596] org.eclipse.jetty.server.AbstractConnector: Started admin@22eaa86e{HTTP/1.1,[http/1.1]}{0.0.0.0:8025}
INFO [2017-10-31 08:27:41,596] org.eclipse.jetty.server.Server: Started @2665ms
172.17.0.1 - - [31/Oct/2017:08:28:08 +0000] "GET /health HTTP/1.1" 404 43 "-" "kube-probe/1.8" 96
172.17.0.1 - - [31/Oct/2017:08:28:18 +0000] "GET /health HTTP/1.1" 404 43 "-" "kube-probe/1.8" 2
INFO [2017-10-31 08:28:18,654] org.eclipse.jetty.server.AbstractConnector: Stopped application@6c0e13b7{HTTP/1.1,[http/1.1]}{0.0.0.0:8020}
INFO [2017-10-31 08:28:18,656] org.eclipse.jetty.server.AbstractConnector: Stopped admin@22eaa86e{HTTP/1.1,[http/1.1]}{0.0.0.0:8025}
INFO [2017-10-31 08:28:18,657] org.eclipse.jetty.server.handler.ContextHandler: Stopped i.d.j.MutableServletContextHandler@b791a81{/,null,UNAVAILABLE}
INFO [2017-10-31 08:28:18,667] org.eclipse.jetty.server.handler.ContextHandler: Stopped i.d.j.MutableServletContextHandler@7a682d35{/,null,UNAVAILABLE}
Any ideas how I can investigate it further?
I think you might have to delete and re-start your minikube for this config change to take effect? (https://github.com/kubernetes/minikube/issues/567)
You can get cluster info by looking at kubectl describe nodes; also, if you have jq installed, you can use jq '.Items[0].Status.Capacity'
I did delete the minikube several times. It for sure has 8 GB of memory. It still does not work, no idea why.
Running:
kubectl cluster-info dump | jq '.Items[0].Status.Capacity'
gives:
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master*] kubectl cluster-info dump | jq '.Items[0].Status.Capacity'
{
"cpu": "2",
"memory": "8175252Ki",
"pods": "110"
}
null
null
null
null
null
null
null
parse error: Invalid numeric literal at line 627, column 5
Thank you for your time, once again!
I think I've got it @Opalo - it looks like I configured the healthcheck of the productcatalogue incorrectly. The productcatalogue service is different than the other two, in that it is a Dropwizard-based service. This means that the health check endpoint is exposed on a different port and endpoint (productcatalogue:8025/healthcheck and not productcatalogue:8020/health)
I've now updated the Kubernetes yaml files, and so if you git pull and re-"kubectl apply -f X" all of the services you should be good to go!
I would like to say a massive thanks for reporting this, and apologies for any confusion caused! I'm slightly puzzled how this ever worked, although I'm sure it did, as I took the screenshot of the shopfront UI for the article when I was running this in Kubernetes. The only thing I can think of is that I initially built the app in Kubernetes 1.7 (minikube 0.22). I saw in your debug info that you were running 1.8, and so I upgraded my local minikube this afternoon before testing everything again.
I've asked around to see if anyone else has this issue with healthcheck, and will update this issue if I find anything.
Thanks again - I'm sure others must have experienced this issue, but no-one else reported it!
As an FYI for anyone interested, I debugged this issue by using Datawire's Telepresence to proxy to the cluster, and then curled all of the endpoints as if it was the shopfront calling the endpoints.
I got a 404 when curling the productcatalogue health check (curl productcatalogue:8020/health), and so then looked through the logs of the productcatalogue (kubectl logs productcatalogue-4ltp9), and then I saw the mention of the 'admin' endpoint being active.
I then ran the productcatalogue locally via Docker (mapping the app and admin ports) and after a few curls I realised that the health check endpoint is exposed only on the admin port and under 'healthcheck' (not 'health').
After this I fixed the Kubernetes productcatalogue yaml, and tested in minikube - everything looked good :-)
Thanks @danielbryantuk. It's stopped restarting all the time, but when I hit shopfront in the browser I'm still getting this Whitelabel Error Page as in the first post here. I've recreated minikube and applied all yaml files again; before that, all docker images were rebuilt and pushed. /healthcheck gives me 404:
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master] curl $(minikube service productcatalogue --url)/healthcheck
{"code":404,"message":"HTTP 404 Not Found"}%
Of course I've synchronized the repo.
Hey @Opalo, I can't seem to recreate the issue? Are you using your own Docker Hub account and pushing your own builds of the containers? If so, I'm assuming that you've updated the k8s yaml files to use your containers?
As an FYI, I started with a clean K8s cluster, and then kubectl apply -f the three yaml files. I then do minikube service shopfront, and after a minute of so (when the containers have downloaded into minikube/k8s and the apps have initialised), I refresh the browser and get the expected UI.
You won't be able to curl the healthcheck endpoint on the productcatalogue, because of the port issue I mentioned in my earlier comment (i.e. we aren't exposing the admin port used by healthcheck in the k8s Service yaml). You can exec (e.g. kubectl exec -it <<pod>> -- /bin/bash) into the container and curl the admin port endpoint via localhost e.g.
(master) kubernetes $ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 443/TCP 22h
productcatalogue 10.0.0.115 8020:32608/TCP 8s
shopfront 10.0.0.54 8010:32734/TCP 15s
stockmanager 10.0.0.207 8030:32558/TCP 1s
(master) kubernetes $ minikube service shopfront
Opening kubernetes service default/shopfront in default browser...
(master) kubernetes $ kubectl get pods
NAME READY STATUS RESTARTS AGE
productcatalogue-brdvk 1/1 Running 0 37s
shopfront-jvlsm 1/1 Running 0 44s
stockmanager-m9tcp 1/1 Running 0 30s
(master) kubernetes $ kubectl exec -it productcatalogue-brdvk -- /bin/bash
root@productcatalogue-brdvk:/# curl localhost:8025/healthcheck
{"deadlocks":{"healthy":true},"template":{"healthy":true,"message":"Ok with version: 1.0-SNAPSHOT"}}root@productcatalogue-brdvk:/#
At the very beginning I created my own docker images, pushed them to Docker Hub and altered all yaml files. But now I just use the project as it was provided. I still have this issue:
I/O error on GET request for "http://productcatalogue:8020/products": productcatalogue; nested exception is java.net.UnknownHostException: productcatalogue
It looks like shopfront could not connect to productcatalogue, as if they were not on the same network. Does it make any difference that I'm using macOS? I've uninstalled minikube and kubectl before trying again. productcatalogue behaves much better now; it does not restart over and over anymore.
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master] kubectl get pods
NAME READY STATUS RESTARTS AGE
productcatalogue-vq2hj 1/1 Running 0 17m
shopfront-ff7x9 1/1 Running 0 17m
stockmanager-nmwds 1/1 Running 0 17m
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master] kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 23m
productcatalogue NodePort 10.0.0.181 <none> 8020:31731/TCP 17m
shopfront NodePort 10.0.0.130 <none> 8010:31271/TCP 17m
stockmanager NodePort 10.0.0.254 <none> 8030:32640/TCP 17m
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master] kubectl exec -it productcatalogue-vq2hj -- /bin/bash
root@productcatalogue-vq2hj:/# curl localhost:8025/healthcheck
{"deadlocks":{"healthy":true},"template":{"healthy":true,"message":"Ok with version: 1.0-SNAPSHOT"}}root@productcatalogue-vq2hj:/#
root@productcatalogue-vq2hj:/# exit
exit
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master]
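The UnknownHostException above is a DNS failure: the shopfront pod cannot resolve the service name productcatalogue. From inside a pod, the lookup can be checked with a few lines of Python (the service names are the ones from this project; the helper is only a diagnostic sketch):

```python
import socket

def check_dns(names):
    """Return a dict mapping each hostname to its resolved IP,
    or None if the lookup fails (the equivalent of Java's
    UnknownHostException)."""
    out = {}
    for name in names:
        try:
            out[name] = socket.gethostbyname(name)
        except socket.gaierror:
            out[name] = None
    return out

# Run inside the shopfront pod, e.g.:
# check_dns(["productcatalogue", "stockmanager"])
```

If a service name maps to None here while kubectl get svc shows the Service, the problem is cluster DNS rather than the applications themselves.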
It seems the shopfront pod can't resolve the IP address for productcatalogue. As a workaround,
execute:
kubectl exec -it shopfront-ID /bin/bash (find the pod ID using kubectl get pods)
echo "<ip-address> productcatalogue" >> /etc/hosts
|
gharchive/issue
| 2017-10-30T18:11:15 |
2025-04-01T04:33:55.661927
|
{
"authors": [
"Opalo",
"danielbryantuk",
"miledhileli"
],
"repo": "danielbryantuk/oreilly-docker-java-shopping",
"url": "https://github.com/danielbryantuk/oreilly-docker-java-shopping/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
304363628
|
Settings button over the osx buttons
The settings button of the site sits over the three window buttons.
Duplicate to #409
https://github.com/danielbuechele/goofy/pull/411
Created a pull request at #427 which should fix this
|
gharchive/issue
| 2018-03-12T13:06:42 |
2025-04-01T04:33:55.665505
|
{
"authors": [
"jklp",
"killbom",
"petr-ujezdsky",
"pintiliedragosgeorge"
],
"repo": "danielbuechele/goofy",
"url": "https://github.com/danielbuechele/goofy/issues/410",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
908122622
|
Prepare a presentation at the Uni 10/06
A 20 minutes presentation about artificial intelligence.
The main focus is how to bring in volunteers to help with the parallel corpus.
ALPP - 08-06-21.pptx
|
gharchive/issue
| 2021-06-01T10:00:02 |
2025-04-01T04:33:55.704079
|
{
"authors": [
"danielinux7"
],
"repo": "danielinux7/Multilingual-Parallel-Corpus",
"url": "https://github.com/danielinux7/Multilingual-Parallel-Corpus/issues/51",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2178892912
|
Config-flow could not be loaded: 500 Internal server error
Hi Daniel,
I wanted to install the integration on my HA hosted on a docker container but I got the following issue.
Logger: homeassistant.util.package
Source: util/package.py:108
First occurred: 12:16:57 (3 occurrences)
Last logged: 12:17:55
Unable to install package pyairstage>=1.1.1: ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/.local' Check the permissions.
[notice] A new release of pip is available: 23.2.1 -> 24.0
[notice] To update, run: pip install --upgrade pip
Any idea how I can solve it? To which location is this '/.local' reference pointing?
I have the Airstage components available here: /home/pi/.../homeassistant/config/custom_components/fujitsu_airstage/
Can I also do a 'manual' installation adapting the .yaml files?
Tnx, Steven
Found the solution by commenting out the User in the docker compose file in the home assistant section...Still wondering why this was required for the integration to work as other integrations worked perfectly without commenting out the user.
Would you care to share your docker compose file @StevenHermans?
sent it via mail...
|
gharchive/issue
| 2024-03-11T11:25:59 |
2025-04-01T04:33:55.708203
|
{
"authors": [
"StevenHermans",
"danielkaldheim"
],
"repo": "danielkaldheim/ha_airstage",
"url": "https://github.com/danielkaldheim/ha_airstage/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
662727946
|
Update sap.txt
Added URLs related to [CVE-2020-6287].
Thank you!
|
gharchive/pull-request
| 2020-07-21T08:11:18 |
2025-04-01T04:33:55.709453
|
{
"authors": [
"g0tmi1k",
"joegoerlich"
],
"repo": "danielmiessler/SecLists",
"url": "https://github.com/danielmiessler/SecLists/pull/475",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
729600010
|
HmIP-MP3P
As described in #28, I created the necessary files for the homematic ip wireless MP3 door bell.
Here is the gist:
https://gist.github.com/regnets/9c7913758ea889caf2d2e8e59f5a0134
I also now have this doorbell. What help is required for the implementation in Home Assistant? Anything I can do to get this working?
what help is required for the implementation in home assistant
We need somebody who understands how pyhomematic works, understands the information contained in the posted JSON files, and knows what a sensible integration should look like. This isn't a simple device like a switch or sensor. Some features might not even be possible to integrate into the Home Assistant UI at all. But that really depends on what the device can do and how it's represented in the CCU.
Hmmm... Sounds like a bit over my head at the moment. I will need to read and dig a bit more for this. I have seen there is already another Homematic MP3 device, which is supported as far as I can tell.
I will just dig around and see if I can find anything. Until then I'll use the device via environment variables and programs on the CCU. That way I should at least be able to get it working for now and use the fancy colours and sounds.
But I am also not entirely sure yet what I want to use it for, besides the simple door bell. There are probably tons of ideas for colours and sounds based on the states of devices, I just need to find the use case for it... ;)
In general you should already be able to control the device by using the set_device_value or put_paramset_value services. In the JSON you'll find multiple VALUES sections. These are the parameters that can be controlled. How they can be controlled depends on the OPERATIONS and Type attributes. It's a bitmask, which is explained in the XML-RPC API documentation.
So as an example on channel 3 there's the parameter LEVEL of type FLOAT and an OPERATIONS value of 7. 7 is 1 + 2 + 4, so it's read, write and event. Because it's writable, you can send a float value (0.0 - 1.01) to it to make it do something.
Manually in Python 3 you would control it like this:
from xmlrpc.client import ServerProxy
p = ServerProxy("http://1.2.3.4:2010") # IP address of the CCU
p.setValue("aabbccdd:3", "LEVEL", 0.5) # aabbccdd is the device ID
current_value = p.getValue("aabbccdd:3", "LEVEL")
The set_device_value service mentioned above really is just a wrapper to call the setValue method which you see in the Python code. So everything you manage to do in Python can be done via manual service calls. For the parameters of type ENUM you'll also see a list of values in the JSON. For those you use the setValue method and a value from 0 to N. The elements in those lists are counted from 0 upwards. So it looks like sending a number (0, 1, 2...) to the parameter SHORT_OUTPUT_BEHAVIOUR could make the device play a sound.
Hope this helps.
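The OPERATIONS decoding described above (7 = 1 + 2 + 4, i.e. read + write + event) can be written as a small helper. The flag values follow the explanation in this thread; treat this as a sketch rather than pyhomematic code:

```python
# Bit flags of the OPERATIONS attribute as described above:
# 1 = read, 2 = write, 4 = event.
OPERATION_FLAGS = {1: "read", 2: "write", 4: "event"}

def decode_operations(value):
    """Return the set of operations encoded in an OPERATIONS bitmask."""
    return {name for bit, name in OPERATION_FLAGS.items() if value & bit}

# The LEVEL parameter on channel 3 has OPERATIONS = 7:
# decode_operations(7) -> {'read', 'write', 'event'}
```

A parameter is controllable via setValue only when "write" appears in the decoded set.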
Hey, I have been reading along here for a bit. Recently I also acquired this device and would be eager to use it in home assistant. Unfortunately I am only just getting a glimpse of home assistant / pyhomematic development but I am trying to figure it out. This is of course a more complex device and not the best for easing into... I think I will try a few things and see where I can get. Probably I will start by trying to get the light to work. Wish me luck.
Any updates on the support of the HmIP-MP3P?
This device should now be supported by the new integration.
|
gharchive/issue
| 2020-10-26T14:13:44 |
2025-04-01T04:33:55.723008
|
{
"authors": [
"Plaethe",
"Remko76",
"danielperna84",
"regnets"
],
"repo": "danielperna84/pyhomematic",
"url": "https://github.com/danielperna84/pyhomematic/issues/345",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
235411685
|
Finalize Localization of all strings
Finish localizing all strings in the editor.
All hardcoded strings that render onto the UI should be within the resources file.
|
gharchive/issue
| 2017-06-13T01:37:04 |
2025-04-01T04:33:55.724273
|
{
"authors": [
"danielricci"
],
"repo": "danielricci/einstein-assets-editor",
"url": "https://github.com/danielricci/einstein-assets-editor/issues/65",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
91894760
|
Slow read performance
I have just been checking out yas3fs as I thought it might be useful in my application.
FYI: S3 and EC2 where I ran my tests are both in the same region.
The first test I did was to mount one of my buckets and then attempt to do an operation (sha1 in this case) on that file. The file was ~850MB in size and I was expecting there to be a delay on the first run because the file would need to be downloaded. That turned out to be the case, but wildly more than I was expecting.
$ time sha1sum 03302014-r1.nd.ome.tif
efebdfcd4241fb53fc8f04d31897970db21bdca1 03302014-r1.nd.ome.tif
real 2m9.312s
user 0m2.448s
sys 0m1.318s
I thought that was rather a long time as I can download this file from s3 to this EC2 instance in ~15s usually. Anyway, I thought I'd then repeat the experiment to see how it performed once the file has been cached locally. I was surprised that the result was not a lot better which I guess points to the overhead of yas3fs/FUSE as the problem.
$ time sha1sum 03302014-r1.nd.ome.tif
efebdfcd4241fb53fc8f04d31897970db21bdca1 03302014-r1.nd.ome.tif
real 1m40.551s
user 0m2.568s
sys 0m1.245s
Just in case something weird was happening in the case of calculating the sha1, I also tried a basic copy operation (to a non s3 location):
$ time cp 03302014-r1.nd.ome.tif ~/
real 1m37.469s
user 0m0.002s
sys 0m1.280s
Also for reference, this is how long it takes to calculate the sha1 from the cached file:
$ time sha1sum /tmp/yas3fs/dpwr/files/s3test/03302014-r1.nd.ome.tif
efebdfcd4241fb53fc8f04d31897970db21bdca1 /tmp/yas3fs/dpwr/files/s3test/03302014-r1.nd.ome.tif
real 0m2.890s
user 0m2.793s
sys 0m0.099s
Is this likely to be a bug or is that the expected performance of FUSE and/or S3 based filesystems? I don't have any previous experience with either and I'm trying to figure out the best avenue to explore for my application.
Thanks a lot,
Douglas
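One way to quantify the FUSE overhead in question is to measure raw sequential-read throughput with the same code on both the mounted path and the local cache path. A minimal sketch (the 1 MiB chunk size is an arbitrary choice):

```python
import time

def read_throughput(path, chunk_size=1 << 20):
    """Sequentially read a file; return (bytes_read, MB/s)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    mbps = (total / (1024 * 1024)) / elapsed if elapsed > 0 else float("inf")
    return total, mbps

# Compare the FUSE mount against the local cache, e.g.:
# read_throughput("/mnt/yas3fs/03302014-r1.nd.ome.tif")
# read_throughput("/tmp/yas3fs/dpwr/files/s3test/03302014-r1.nd.ome.tif")
```

The ratio of the two numbers isolates the per-read cost of going through FUSE from the cost of hashing or copying.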
see #44, #64, #43
Ah, that's too bad. Thanks for the response.
|
gharchive/issue
| 2015-06-29T20:00:15 |
2025-04-01T04:33:55.729863
|
{
"authors": [
"bitsofinfo",
"dpwrussell"
],
"repo": "danilop/yas3fs",
"url": "https://github.com/danilop/yas3fs/issues/98",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2161894519
|
Black screen in v162
I just updated from r161 to r162, and now the canvas is black, and there are no error messages in the javascript console.
I will try to find out what causes this; for now I just wanted to report it. So something is different in r162.
Can be tested here: https://www.fciv.net/?webgpu=true
This is for this project: https://github.com/fciv-net/fciv-net
It seems to be fine in the demo. How do you set it up — the node system or WGSL?
https://danrossi.github.io/three-webgpu-renderer/tests/webgpu_video_panorama_equirectangular.html
They have made a module build for this now, I believe. I haven't had the time to check. If it needs further customisation for IIFE packages I'll look into it. It won't work by default in IIFE packages; I have a special build for that.
|
gharchive/issue
| 2024-02-29T18:50:25 |
2025-04-01T04:33:55.813478
|
{
"authors": [
"andreasrosdal",
"danrossi"
],
"repo": "danrossi/three-webgpu-renderer",
"url": "https://github.com/danrossi/three-webgpu-renderer/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
490044002
|
Confirmation issue in production with version 1.0.13
I haven't been receiving any confirmation emails on user creation since I upgraded from 1.0.12 to 1.0.13.
Confirmation through user update works just fine, all other emails work just fine, version 1.0.12 is completely fine, and even using Bamboo.SendGridAdapter on my local env with the production API key gets my emails delivered just fine as well. It's just user creation on prod that isn't working.
If I debug production I receive the message "Sending email with Bamboo.SendGridAdapter", but no mail is actually sent out.
Does anyone else have this issue?
Sounds like an issue in sendgrid, since it’s pushed successfully to the sendgrid API. You should check the statistics dashboard in your sendgrid account to see why the e-mail doesn’t get delivered: https://sendgrid.com/docs/glossary/drops/
@danschultzer The problem is there were no drops; although I get the debug message, the email isn't on SendGrid. It's really strange because I'm not getting anything logged after that message, even with def process(email), do: deliver_now(email, response: true) |> inspect() |> Logger.info().
Just looked it up, and the debug message happens regardless of whether the email could be delivered or not: https://github.com/thoughtbot/bamboo/blob/214f5f3e3be51e15d7c2d4c7449fe1d48a6aa472/lib/bamboo/mailer.ex#L130
Looking at the sendgrid api call it may ignore some errors:
https://github.com/thoughtbot/bamboo/blob/214f5f3e3be51e15d7c2d4c7449fe1d48a6aa472/lib/bamboo/adapters/send_grid_adapter.ex#L38-L54
No idea why this only occurs after you upgraded, and only happens in production. I suspect it’s something unrelated to Pow, but will look into the 1.0.13 changes just to be sure.
I suspected it was unrelated to Pow as well, so after two hours of changing/testing options I ended up changing back to 1.0.12 and the problem was fixed. It could still just be an issue on my end though.
That’s frustrating. The major change in 1.0.13 is in the e-mail change logic. These are the relevant changes from the changelog:
Updated PowEmailConfirmation.Ecto.Schema.changeset/3 so; (#259)
when :email is identical to :unconfirmed_email it won't generate new :email_confirmation_token
when :email is identical to the persisted :email value both :email_confirmation_token and :unconfirmed_email will be set to nil
when there is no :email value in the params nothing happens
Fixed bug in PowEmailConfirmation.Phoenix.ControllerCallbacks.send_confirmation_email/2 where the confirmation e-mail wasn't send to the updated e-mail address (#256)
If this is Pow, then it may be the last change, which fixed a bug where the confirmation e-mail was sent to the current e-mail rather than the updated one.
I think that the same thing is happening with Swoosh and AmazonSES, for me. Which suggests it's a Pow issue. (Reset email works fine.)
Pow.Mailer.cast is being called.
But Pow.Mailer.process/1 is not being called.
I'll look into it today.
Do you still get redirected and get the "You'll need to confirm your e-mail before you can sign in. An e-mail confirmation link has been sent to you." flash message?
Yes, v1.0.13 will redirect with flash saying "... confirmation link has been sent..."; and, mine worked locally, too.
I have made a prod release using Pow v1.0.13, Swoosh and SendGrid. It works as expected, and IO.inspect/0 in both cast/1 and process/1 works.
I wonder if this could be a build issue; perhaps we should compare how @danschultzer, @albertoramires and I are doing the release:
I (@sensiblearts) am not using a vm for building; I use mix release on my laptop in Windows WSL, Ubuntu 18.04; then I push it out to DigitalOcean Ubuntu 18.04; hence, both x86_64. Specifically:
mix clean
source .env
MIX_ENV=prod mix release.init
MIX_ENV=prod mix release beta1
cd /tmp
tar -czvf oiv_beta1.tar.gz -C /mnt/c/Users/...../beta1 .
And then Ansible script to untar the results on the server and restart.
It would be best if you can make it reproducible (e.g. create a demo repo on GH).
I've been setting up a brand new Phoenix app with Pow and Swoosh enabling email confirmation. Then I built a production release MIX_ENV=prod mix release and ran it locally to test it out _build/prod/rel/pow_demo/bin/pow_demo start.
@sensiblearts if you start the release locally do you still experience the same issue?
@danschultzer , you make me feel old when you suggest the obvious that I'm overlooking :-)
When I run the release locally with start it crashes because of Que, even for v1.0.12:
12:46:18.021 [info] Application que exited: Que.start(:normal, []) returned an error: shutdown: failed to start child: Que.ServerSupervisor
** (EXIT) an exception was raised:
** (Memento.Error) Transaction Failed with: {:no_exists, Que.Persistence.Mnesia.DB.Jobs}
...
(I had some code in for starting Que and Memento etc. manually and initializing its file, but I removed that and went back to letting the application start it when I added the new MnesiaClusterSupervisor code. I could put that manual code back, but it would be a waste of time. See below.)
I've not yet tested v.1.0.13 locally, but I think I need to abandon using Que first.
So, rather than pursue this, I will switch to a different job queue, perhaps Honeydew, because Honeydew currently works with multi-node Mnesia, whereas Que does not yet support multi-node.
I was planning to move away from Que after I got multi-node Pow working, but now I realize Que may prevent that!
@danschultzer : Do you think it is worth time studying the Que issue as it relates to Pow, or should we just find a good queue that works with Pow and recommend that one?
Also, if you were designing a system, would you try to use Mnesia for both queue and Pow cache, or is that asking for headaches? (I could switch to postgres or add redis.)
Thanks so much.
@albertoramires , have you run your production release locally yet? My guess is that, what you and I have in common is we're both using some other library that uses Mnesia, and it's interfering with Pow's cache..?
@sensiblearts We are using :libcluster, Honeydew and Pow in production with :mnesia for queue and session cache. We don't have any issues.
@Schultzer Cool, thank you.
You guys related?
@sensiblearts Having both Pow and a queue system running on Mnesia is no issue, but if you need distribution for both then I would probably separate the two into individual local Mnesia nodes rather than running both in the same node. Try start the Pow MnesiaCache before Que.
Also it may be better to centralize the job queue (postgres/redis/mnesia) since job queue processing should be handled by node(s) separated from the web endpoints. Not sure if distributed job queue makes sense here.
I don't believe that Mnesia is related to this issue. There is something that halts email delivery for whatever reason. I've been helping a user on the elixir slack channel pretty much the same issue, email confirmation is the only email not working and it only happens in production. In their case they couldn't even get Logger to print debug messages after a certain point (e.g. Logger doesn't output message when added to process/1 or when added to certain spots inside Pow code).
It's beyond me how this can happen. I feel I'm missing something obvious, but without a reproducible codebase I can't figure it out.
@danschultzer Thanks, that manual code I mentioned was starting Que after Pow. And you're right, I don't really need a distributed queue. (You mentioned it, and I thought this through before but didn't write it down. I can't work at your speed; I'm 59 years old.)
@albertoramires , please ignore my comment about mnesia in your case.
Anyway, I'll create a fresh project and try to replicate the problem, and let you know.
Thought: Could a transient process be shutting down faster in production, and not have time to finish the callback? Or maybe DNS redirects in production (www.my vs. my) load new controllers and kill something? Speculating.
I made a simple project without any mailer (just log to debug).
It does work for:
mix phx.server and MIX_ENV=prod mix phx.server
but it does not work for:
MIX_ENV=prod mix release beta1
_build/prod/rel/powprod/bin/beta1 start
behavior: It just hangs, no output at all, no log file created. Won't boot.
(Note that I named the release inside mix.exs. I tried the default, too; same behavior.)
I'll try again tomorrow.
@sensiblearts Can you please share the project?
@danschultzer , @Schultzer , @albertoramires : The simple project (above) was not working because of that silly config server: true -- which I've forgotten twice now.
Results: The bug does not occur in a simple project. @Schultzer , no point in sending that project, since it works.
Below, "success" means that an IO.puts to the log file shows that the process/1 callback is being called in the Mailer.
In a FRESH PROJECT with a dummy (log only) Mailer:
mix phx.server
"success"
MIX_ENV=prod mix phx.server
"success"
_build/prod/rel/beta1/bin/beta1 start
"success"
Now, switch to my REAL PRODUCTION APP (running localhost):
tldr: All "success", too; hence, my next step is to diff the current state of my production app and try to narrow it down. I'll share another update Monday.
Details of what I did with my production app:
replace mailer with one that only logs
remove Swoosh from mix.exs
leave Que in mix.exs {:que, "~> 0.10.1", runtime: false } but don't start it in application
Then:
mix phx.server
"success"
MIX_ENV=prod mix phx.server
"success"
_build/prod/rel/beta1/bin/beta1 start
"success"
add Swoosh back to mix.exs, and add it to :extra_applications
_build/prod/rel/beta1/bin/beta1 start
"success"
bring back the original mailer that uses Swoosh/sAmazonSES
_build/prod/rel/beta1/bin/beta1 start
"success" and actual email received from AmazonSES
start Que again (manually, in application.ex)
_build/prod/rel/beta1/bin/beta1 start
"success" and actual email received from AmazonSES
let the app start Que (remove manual code, and in mix, remove runtime: false)
_build/prod/rel/beta1/bin/beta1 start
"success" and actual email received from AmazonSES
So, although I still have issues with Que, it's apparently not contributing to this problem, as @danschultzer said.
Again, my next step is to diff the current state of my production app and try to narrow it down.
If I can get the bug to reproduce in a localhost prod release, then I will have found the problem.
If I cannot then I will push it out to DigitalOcean and see if it happens live.
More later.
Update: It seems to be an environment issue.
All of the following refer to a mix release build of my actual app running in MIX_ENV=prod mode.
"Success" means that actual emails were sent and delivered, for both Registration, and for attempt to sign in with email unconfirmed. Results:
Run on localhost, connected to local DEV database, and behind HAProxy and (self signed) SSL cert:
"success"
As in (1) but connected remotely to production database on DigitalOcean:
"success"
Exact same binaries as in (2), pushed out to DigitalOcean:
"failure"
As we found in the beginning:
A production release running on DigitalOcean (domain GardenJournal.App), with Pow v1.0.13 behaves as follows:
def deliver(conn, email) do
config = Plug.fetch_config(conn)
mailer = Config.get(config, :mailer_backend) || raise_no_mailer_backend_set()
email
|> mailer.cast() # succeeds
|> mailer.process() # fails to reach the callback target
end
However, the exact same binaries running in a localhost environment, behind HAProxy and self signed SSL, and connected to remote (DigitalOcean) database, succeeds at both cast/1 and process/1.
My interpretation of this is that it could be a redirect / routing issue that kills a process and affects the callback. Not sure if this helps, but here is my endpoint in production:
config :gjwapp, GjwappWeb.Endpoint,
http: [port: System.get_env("PORT") || xxxx], # xxxx to hide here
url: [host: "gardenjournal.app", scheme: "https", port: 443],
cache_static_manifest: "priv/static/cache_manifest.json",
check_origin: false,
root: "."
vs. the one in localhost:
config :gjwapp, GjwappWeb.Endpoint,
url: [host: "127.0.0.1", scheme: "http"],
http: [port: System.get_env("PORT") || 4000],
https: [
port: 4001,
cipher_suite: :strong,
certfile: "priv/cert/selfsigned.pem",
keyfile: "priv/cert/selfsigned_key.pem"
],
debug_errors: true,
code_reloader: true,
check_origin: false,
watchers: [
node: [
"node_modules/webpack/bin/webpack.js",
"--mode",
"development",
"--watch-stdin",
cd: Path.expand("../assets", __DIR__)
]
]
@danschultzer , @Schultzer , I made some progress: Test project works.
I added to the new phoenix app so that it is only:
{:pow, "~> 1.0.13"},
{:swoosh, "~> 0.23"},
{:gen_smtp, "~> 0.13"},
{:ex_aws, "~> 2.1"}
Also, it connects to my production database and uses my SES keys.
I pushed this out on an Ubuntu server identical to the one that is failing, but this one succeeds! to send email confirmations.
So @danschultzer and @Schultzer can forget about it for now -- it does not appear to be Pow problem.
As for @albertoramires , I will go through the projects and see what differences I can find on Tuesday.
Actually I may test the firewall idea today.
@danschultzer , @Schultzer , @albertoramires , what do you make of this?:
Again, I'm testing whether a message in log file shows that process/1 is being called, and if the email is actually sent.
If I direct my browser to:
gardenjournal.app, then it is broken; process/1 is not called
the IP for the HAProxy load balancer, then it is partly broken: process/1 is called because I see the debug message in the log. However, the email is not sent after a new registration is created.
the IP for the backend webserver, then it works fine; email arrives.
So it seems like the more layers it has to go through (DNS, proxy, Phoenix routing), the longer it takes, and the less of the sequence gets completed.
Again, to me this looks like a process dies early or otherwise cannot complete. Speculation: Is the session process supervising the Mailer, and when an unconfirmed email is detected (or registration attempted), could the old session be discarded as a new one is created?
Oh, that's curious. I wonder, is the e-mail delivery async? I have no idea what actually happens here. If you get redirected then the whole chain is definitely called, with the e-mail being sent, but I wonder if the mailer (and logger) are async processes that gets terminated before they are done.
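The async hypothesis can be illustrated outside Elixir with a small Python analogy (deliver and send_async are stand-in names, not Pow or Bamboo APIs): whether the "email" goes out depends entirely on whether anything keeps the background task alive until it finishes.

```python
import threading
import time

sent = []

def deliver(email):
    """Stand-in for a mailer backend: simulated network latency, then record the send."""
    time.sleep(0.05)
    sent.append(email)

def send_async(email, wait):
    """Fire the delivery on a background thread; optionally wait for it to finish."""
    worker = threading.Thread(target=deliver, args=(email,), daemon=True)
    worker.start()
    if wait:
        worker.join()  # the caller survives long enough, so the email is sent
    return worker

# If the request process waits for (or outlives) the task, delivery completes.
# If it is torn down right after start(), a daemon worker dies with it and the
# email silently never goes out -- with no error logged anywhere.
```

This matches the reported symptoms: the controller chain runs, the flash message appears, the "Sending email" debug line may print, yet nothing reaches the mail provider.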
I just tried on my server to log in an unconfirmed user via http://localhost:4000 (lynx, directly on the server) and the confirmation email was sent. However, if I access it via my Nginx fronted domain the email never gets sent.
I tried using Caddy v2 as the reverse proxy instead of Nginx and the email was sent.
I have the same issue in production (nginx). When I create a user, I get 'email is sent' success flash.
No email is really sent though.
Then, I tried to sign in as the same user. Since email is unconfirmed, it warns me that I need to confirm email first.
I repeat this process of signing-in about 3 times and each time I get a little bit further in my "IO.inspect" debug logs. So to me it also looks like for some reason confirmation process dies early. But since each time it goes slightly further, it seems that something is giving it more time on each subsequent try. Up to 3 times or so.
@danschultzer , Looking at the code again today. I misspoke in earlier posts; I said "session" when I meant connection. Also, when I "log" debug info I meant just IO.puts:
def process(email) do
IO.puts "==DELIVERING EMAIL==" # sometimes see, sometimes not
deliver(email) # sometimes receive email, sometimes not
end
I don't know about the internals of IO.puts, but it seems like not seeing that output would be a Pow issue.
And as @Abat says above, it seems to get further into the function call if there are fewer layers/obstacles, i.e., if it's faster.
Looking through PowEmailConfirmation.Phoenix.ControllerCallbacks , I have no ideas.
Speculating: is it possible that PowEmailConfirmation.Plug.maybe_renew_conn is broken and renewing the connection when it should not?
@danschultzer : Do you think it's worth my time to fork Pow and step through, try to find a Pow issue? Or do you still think it's with the mailers?
Thanks.
def create(params) do
Multi.new
|> Multi.run(:user, fn Platform.Repo, _ ->
IO.inspect "1"
pow_create(params)
end)
|> Multi.run(:profile, fn _repo, %{user: user} ->
IO.inspect "2"
PlatformWeb.Email.welcome_email |> PlatformWeb.PowMailer.deliver_now
Platform.Profiles.create_profile(user.id)
end)
|> Repo.transaction
|> case do
{:ok, %{user: user, profile: profile}} ->
# Yay, success!
IO.inspect "3"
{:ok, user}
{:error, _op, res, _others} ->
{:error, res}
end
end
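Ecto.Multi here runs named steps inside one transaction, threading earlier results into later steps and aborting on the first error. A toy Python model of that control flow (illustrative only, not Ecto):

```python
# Toy model of Ecto.Multi-style control flow: run named steps in
# order, pass earlier results into later steps, and short-circuit on
# the first error (a real Multi also rolls the DB transaction back).
def run_multi(steps):
    results = {}
    for name, step in steps:
        ok, value = step(results)
        if not ok:
            return ("error", name, value, results)
        results[name] = value
    return ("ok", results)

# Mirrors the :user then :profile ordering from the Elixir code above.
steps = [
    ("user", lambda res: (True, {"id": 1})),
    ("profile", lambda res: (True, {"user_id": res["user"]["id"]})),
]
```

Because the email is delivered inside the `:profile` step, it only goes out when every preceding step succeeded, which is why this version works regardless of what happens after the transaction.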
In my Users context, I tested sending an email as can be seen above. With nginx and in fully production mode. It works.
This seems to prove to me that the issue is with Pow's EmailConfirmation. Sorry if the assumption is incorrect and I missed something.
Really want to figure it out!
in defmodule PowEmailConfirmation.Phoenix.ControllerCallbacks do
in defp halt_and_send_confirmation_email(conn, return_path) do
If I comment out Phoenix.Controller.redirect(to: return_path) in
conn =
conn
|> Phoenix.Controller.put_flash(:error, error)
# |> Phoenix.Controller.redirect(to: return_path)
Email is sent! Could someone please have a look if I'm on to something?
@Abat I think you're onto something! That's one of the changes in v1.0.13.
send_confirmation_email/2 is called after Phoenix.Controller.redirect/2. Redirect uses Plug.Conn.send_resp/1 which may cause email delivery to be blocked.
Try changing the order:
defp halt_and_send_confirmation_email(conn, return_path) do
user = Plug.current_user(conn)
{:ok, conn} = Plug.clear_authenticated_user(conn)
error = extension_messages(conn).email_confirmation_required(conn)
- conn =
+
+ send_confirmation_email(user, conn)
+
+ conn =
conn
|> Phoenix.Controller.put_flash(:error, error)
|> Phoenix.Controller.redirect(to: return_path)
- send_confirmation_email(user, conn)
-
{:halt, conn}
end
I'll update the master branch to call redirect last.
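The root cause generalizes beyond Phoenix: once the response has gone out, the connection is considered finished and work scheduled afterwards may never complete. A toy Python sketch of why moving the side effect before the redirect fixes it (purely illustrative, not Phoenix or Plug):

```python
# Toy model of the ordering bug: after send_resp() the "connection"
# is finished, so later side effects are dropped -- which is what
# happened to the confirmation e-mail here.
class Conn:
    def __init__(self):
        self.sent = False
        self.delivered = []

    def send_resp(self):
        self.sent = True
        return self

    def deliver_email(self, email):
        if not self.sent:  # only runs while the request is still live
            self.delivered.append(email)
        return self

def buggy_handler(conn):
    # redirect first, then try to send the e-mail (pre-fix order)
    return conn.send_resp().deliver_email("confirmation")

def fixed_handler(conn):
    # send the e-mail first, then redirect (the fixed order)
    return conn.deliver_email("confirmation").send_resp()
```

With the buggy ordering nothing is delivered; with the fixed ordering the email is queued before the response is sent.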
I did just that :) Was about to post. Thanks a lot guys! Thanks for Pow!
Opened #309. Be warned that the current master has a lot of refactoring that could break your cache store if you downgrade after so I recommend first testing this out by just updating Pow locally with the above fix to confirm it works. I plan to release v1.0.14 this week.
@sensiblearts @jmn can you confirm?
@danschultzer , Yes! that fixed it for me. Beers for @Abat and @danschultzer .
Confirmed working. Well done guys!
|
gharchive/issue
| 2019-09-05T22:45:56 |
2025-04-01T04:33:55.869644
|
{
"authors": [
"Abat",
"Schultzer",
"albertoramires",
"danschultzer",
"jmn",
"sensiblearts"
],
"repo": "danschultzer/pow",
"url": "https://github.com/danschultzer/pow/issues/266",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1975368757
|
remove parent ns from model proxy serializer
fixes issue #137
@psongers Hi
That is the intended behavior that a sub-model inherits the namespace from its parent.
But there is indeed a bug: empty entity namespace and empty model namespace are ignored since bool('') is False
the fix should be like this:
if ctx.entity_ns is not None:
    ns = ctx.entity_ns
elif model_cls.__xml_ns__ is not None:
    ns = model_cls.__xml_ns__
else:
    ns = ctx.parent_ns
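The distinction matters because an empty string is a legitimate namespace override. A plain-Python sketch (hypothetical helper name, not the library's API) of why `is not None` checks are needed instead of truthiness:

```python
def resolve_ns(entity_ns, model_ns, parent_ns):
    # Truthiness-based chaining (entity_ns or model_ns or parent_ns)
    # would skip '' because bool('') is False, but '' must win here:
    # it explicitly clears the namespace inherited from the parent.
    if entity_ns is not None:
        return entity_ns
    if model_ns is not None:
        return model_ns
    return parent_ns
```

With this check, `resolve_ns('', 'urn:model', 'urn:parent')` returns `''` rather than falling through to the parent namespace.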
gotchya. that makes sense. I'll update this PR
Codecov Report
Attention: 1 lines in your changes are missing coverage. Please review.
Comparison is base (9e2b126) 91.65% compared to head (2137120) 91.63%.
Additional details and impacted files
@@ Coverage Diff @@
## dev #138 +/- ##
==========================================
- Coverage 91.65% 91.63% -0.02%
==========================================
Files 25 25
Lines 1366 1375 +9
==========================================
+ Hits 1252 1260 +8
- Misses 114 115 +1
| Flag | Coverage Δ |
|------|------------|
| unittests | 91.63% <95.45%> (-0.02%) :arrow_down: |

Flags with carried forward coverage won't be shown.
| Files | Coverage Δ |
|-------|------------|
| pydantic_xml/serializers/factories/mapping.py | 88.75% <100.00%> (ø) |
| pydantic_xml/serializers/factories/model.py | 95.16% <100.00%> (ø) |
| pydantic_xml/serializers/factories/raw.py | 80.00% <100.00%> (ø) |
| pydantic_xml/serializers/factories/wrapper.py | 95.12% <100.00%> (ø) |
| pydantic_xml/serializers/serializer.py | 96.17% <100.00%> (+0.04%) :arrow_up: |
| pydantic_xml/utils.py | 94.87% <100.00%> (+0.75%) :arrow_up: |
| pydantic_xml/serializers/factories/primitive.py | 94.73% <80.00%> (-0.97%) :arrow_down: |
|
gharchive/pull-request
| 2023-11-03T02:34:09 |
2025-04-01T04:33:55.967364
|
{
"authors": [
"codecov-commenter",
"dapper91",
"psongers"
],
"repo": "dapper91/pydantic-xml",
"url": "https://github.com/dapper91/pydantic-xml/pull/138",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1728378024
|
feat: use app-channel-address
Fixes: #14
This pull request adds --app-channel-address instead of redirecting requests from dapr-ambient manually.
It is important to note that this feature will be released in version 1.11.0.
Now the main goal of dapr-ambient is to configure daprd by fetching credentials.
How-to test
Build dapr-ambient-proxy image:
go build main.go
Build the image:
docker build -t <username>/<tag> .
Build the daprd image:
cd daprd
docker build -t <username>/<tag> .
Update the image in the chart's values.yaml file and follow these steps:
Install redis:
kind create cluster --name dapr-ambient
helm install redis bitnami/redis --set image.tag=6.2 --set architecture=standalone
Install dapr:
helm upgrade --install dapr dapr/dapr \
--version=1.10.4 \
--namespace dapr-system \
--create-namespace \
--wait
Install all components:
kubectl apply -f - <<EOF
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.redis
version: v1
metadata:
- name: keyPrefix
value: name
- name: redisHost
value: redis-master:6379
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
auth:
secretStore: kubernetes
EOF
kubectl apply -f - <<EOF
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: notifications-pubsub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: redis-master:6379
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
auth:
secretStore: kubernetes
EOF
kubectl apply -f - <<EOF
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
name: notifications-subscription
spec:
topic: notifications
route: /notifications
pubsubname: notifications-pubsub
EOF
Create all apps:
kubectl apply -f https://raw.githubusercontent.com/salaboy/dapr-ambient-examples/main/apps.yaml
Build and run dapr-ambient:
helm package chart/dapr-ambient
helm install my-ambient-dapr-ambient dapr-ambient-1.9.5.tgz --set ambient.appId=my-dapr-app --set ambient.proxy.remoteURL=subscriber-svc
Note that some code will be replaced by #17, so we need to merge this PR first!
|
gharchive/pull-request
| 2023-05-27T00:42:32 |
2025-04-01T04:33:55.975302
|
{
"authors": [
"mcruzdev"
],
"repo": "dapr-sandbox/dapr-ambient",
"url": "https://github.com/dapr-sandbox/dapr-ambient/pull/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1649666918
|
Initialize dapr dashboard from separate helm chart
Describe the proposal
based on this PR https://github.com/dapr/dashboard/pull/250
initialize dapr dashboard on dapr init -k
Release Note
RELEASE NOTE:
Corresponding PR- https://github.com/dapr/cli/pull/1272
@artursouza @mukundansundar
|
gharchive/issue
| 2023-03-31T16:22:45 |
2025-04-01T04:33:55.977725
|
{
"authors": [
"mukundansundar",
"pravinpushkar"
],
"repo": "dapr/cli",
"url": "https://github.com/dapr/cli/issues/1266",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
683592983
|
[Proposal] Database component based on bindings
Goal
Implement a Dapr component to interact with an external database using output/input bindings allowing users to have the freedom to interact with a database in a more rich way.
Context
The current implementation of Dapr allows users to save information into the database and to get state back given a key, or a group of keys in the case of bulk operations. This works well for keeping state between service calls, or for applications that store data following a key-value model.
The limitations of the current implementation arise if the user application needs to perform any kind of query beyond getting data by id. In that case, users have to run queries using database-specific connections outside the Dapr ecosystem, losing all the isolation and encapsulation features provided by Dapr.
Why Bindings?
According to the Dapr definition, bindings are the way to connect to external resources. In this proposal we are not going to change the current state store implementation because, as noted in the previous section, it works smoothly for specific use cases.
With this proposal, we want to fill the gap and give an option for other applications to have more freedom when they are accessing a database from a Dapr application without losing any of the Dapr advantages.
The Foundations
The main idea is to create a generic database component on top of a binding component. We can have two types of bindings related to databases:
Output binding:
create: to save the information inside the database.
query: to perform a query against the database.
get: this can be used to fetch just one element from the database using its id.
delete: to remove specific data from the database.
Input binding:
to subscribe to a possible event change stream provided by the database.
The Query Language
To perform queries we can use a syntax similar to MongoDB based on JSON:
{
"collection": "collectionName",
"query" : {
"field" : "fieldValue"
}
}
This query is equivalent to the SQL:
SELECT * FROM collectionName WHERE field = "fieldValue"
In order to specify AND conditions:
{
"collection": "collectionName",
"query":{
"field" : "fieldValue",
"field2" : { "$lt": 20}
}
}
The equivalent sql is the following:
SELECT * FROM collectionName WHERE field = "fieldValue" AND field2 < 20
To include operators, we have introduced them as JSON objects with the structure {$operatorName: operatorValue}. The OR condition can be supported in the following way:
{
"collection": "collectionName",
"query":{
"field" : "fieldValue",
"$or" : [{"field2": {"$lt": 20}}, {"field3": {"$gt": 30}}]
}
}
The equivalent sql statement is:
SELECT * FROM collectionName WHERE field = "fieldValue" AND (field2 < 20 OR field3 > 30)
Regarding the number of operators, we can implement a subset of operators from MongoDB: https://docs.mongodb.com/manual/reference/operator/query/.
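To make the mapping concrete, here is a hypothetical translator from the Mongo-like JSON query above to the equivalent SQL shown in the examples. This is an illustration only; a real component would use parameterized queries rather than string interpolation:

```python
# Supported comparison operators, a subset of MongoDB's.
OPS = {"$lt": "<", "$lte": "<=", "$gt": ">", "$gte": ">="}

def literal(value):
    # Quote strings, leave numbers bare (matching the SQL examples above).
    return f'"{value}"' if isinstance(value, str) else str(value)

def condition(field, value):
    if isinstance(value, dict):  # e.g. {"$lt": 20} -> field < 20
        (op, operand), = value.items()
        return f"{field} {OPS[op]} {literal(operand)}"
    return f"{field} = {literal(value)}"  # plain equality

def to_sql(spec):
    clauses = []
    for field, value in spec["query"].items():
        if field == "$or":
            branches = [condition(f, v) for b in value for f, v in b.items()]
            clauses.append("(" + " OR ".join(branches) + ")")
        else:
            clauses.append(condition(field, value))
    return f'SELECT * FROM {spec["collection"]} WHERE ' + " AND ".join(clauses)
```

Feeding it the OR example above yields exactly the SQL statement shown, with top-level fields joined by AND and the `$or` branch parenthesized.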
Usage
Output Binding
As with a normal binding, we need to define the component spec in a yaml file:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mydatabase
namespace: <NAMESPACE>
spec:
type: bindings.database.mongodb
metadata:
- name: host
value: us-west-2
- name: username
value: *****************
- name: password
value: *****************
After applying the yaml file to the cluster, we can start calling the database using the different operations:
curl -X POST http://localhost:3500/v1.0/bindings/mydatabase -d
'{ "data": {
"collectionName": "collectionName",
"data": {
"field1": "value1",
"field2":"value2"
}
},
"operation": "create"
}'
In the case of queries we can have this:
curl -X POST http://localhost:3500/v1.0/bindings/mydatabase -d
'{ "data": {
"collectionName": "collectionName",
"data": {
"field1": "value1",
"field2":{"$gt": 20}
}
},
"operation": "query"
}'
Input Binding
The input binding can be used to subscribe to the database change stream. We need to provide the event configuration using the yaml file:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mydatabase
namespace: <NAMESPACE>
spec:
type: bindings.database.mongodb
metadata:
- name: host
value: us-west-2
- name: username
value: *****************
- name: password
value: *****************
- name: collection
value: collectionName
- name: subscriptionType
value: all #one of [all, save, delete or update]
Then, as with other input bindings, the user needs to provide the listening method for the event. More information about the listener implementation here: https://github.com/dapr/docs/tree/v0.10.0/howto/trigger-app-with-input-binding
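A minimal sketch of such a listener loop, filtering events by the configured subscriptionType from the yaml above (hypothetical names, not a real Dapr API):

```python
def read_change_stream(events, subscription_type, handler):
    """Toy input-binding loop: forward only the change-stream events
    matching the configured subscriptionType
    ('all', 'save', 'delete' or 'update')."""
    for event in events:
        if subscription_type == "all" or event["type"] == subscription_type:
            handler(event)
```

With subscriptionType set to "all" every event reaches the handler; with "save" only insert/update-style events do.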
Implementation
The database binding component can be implemented using the current binding component interface, with the only addition being a database-specific query parsing method that is called from the Invoke method.
type InputBinding interface {
Init(metadata Metadata) error
Read(handler func(*ReadResponse) error) error
}
type OutputBinding interface {
Init(metadata Metadata) error
Invoke(req *InvokeRequest) (*InvokeResponse, error)
Operations() []OperationKind
}
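To make the shape concrete, a toy in-memory Python sketch (hypothetical, not the Go implementation) of how an output binding could dispatch the request's operation field:

```python
class InMemoryDatabaseBinding:
    """Toy stand-in for an OutputBinding: Init/Invoke/Operations."""

    def __init__(self, metadata=None):
        self.metadata = metadata or {}  # host, username, ... from the yaml
        self._store = {}                # collection -> {key: record}

    def operations(self):
        return ["create", "get", "delete"]

    def invoke(self, req):
        # Route the "operation" field to the matching handler,
        # as the runtime does through Invoke/Operations.
        op = req["operation"]
        if op not in self.operations():
            raise ValueError(f"unsupported operation: {op}")
        return getattr(self, "_" + op)(req["data"])

    def _create(self, data):
        coll = self._store.setdefault(data["collectionName"], {})
        key = len(coll) + 1
        coll[key] = data["data"]
        return {"id": key}

    def _get(self, data):
        return self._store.get(data["collectionName"], {}).get(data["id"])

    def _delete(self, data):
        self._store.get(data["collectionName"], {}).pop(data["id"], None)
        return {}
```

A query operation would slot in the same way, delegating to the query-language translation described earlier.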
What do you think about this implementation? I think it could help solve the problem that other people have raised about querying the state store, without changing Dapr core, and at the same time in a flexible and extensible way.
Sorry for the long pause on this one, we (Dapr maintainers) are currently focused on the 1.0 release.
This issue was raised multiple times by many users and community members, and I think the Dapr project should take this on. This probably won't make it in time into 1.0 but it can very land after.
Regarding the design:
I think we can actually add this to the State Store API instead of using bindings, especially if we're going the route of providing a single query language that gets translated into each DB's native query language.
Bi-directional bindings are more for a use case of sending binding specific data to the component rather than enforcing a standardized API.
You might ask how this works if not all state stores can support this: and the answer is that Dapr already has a Transactions API that works for a subset of supported state stores.
When a call is made into Dapr, the Dapr runtime checks if the state store implements the Transactions interface. If it doesn't, an error is returned, and if it does the call is allowed to go through.
We can do the same thing with queries: introduce a new QueryStore (just an example) interface and then let every component construct its own native SQL from the selected query language, which in my eyes can be OData, ANSI-SQL or anything else we come up with.
What do you think?
Any update on this component, especially on it being added to the base of Dapr?
IMO, it's a critical missing piece, and I would hope it would allow ad-hoc queries as well.
If we want to query data from two tables, it becomes complex.
So I think we should support SQL92 solutions. https://github.com/dapr/dapr/issues/3354
@cvictory, this is a GREAT idea!
My preference would be to stick with SQL92, but the direction is good.
|
gharchive/issue
| 2020-08-21T14:15:01 |
2025-04-01T04:33:55.993378
|
{
"authors": [
"cvictory",
"rdiaz82",
"samcov",
"yaron2"
],
"repo": "dapr/components-contrib",
"url": "https://github.com/dapr/components-contrib/issues/441",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1519201938
|
Add bulkSubscribe request params to SubscribeRequest
Signed-off-by: Deepanshu Agarwal deepanshu.agarwal1984@gmail.com
Description
This change adds BulkSubscribeRequest option to SubscribeRequest. If a User specified certain bulkSubscribe related options, they will be passed on via BulkSubscribeRequest in SubscribeRequest, instead of through metadata.
Also, it renames the parameter MaxBulkSubCount to MaxMessagesCount, and similarly for AwaitMs.
Issue reference
We strive to have all PR being opened based on an issue, where the problem or feature have been discussed prior to implementation.
Please reference the issue this PR will close: #2404
Checklist
Please make sure you've completed the relevant tasks for this PR, out of the following list:
[x] Code compiles correctly
[x] Created/updated tests
[ ] Extended the documentation / Created issue in the https://github.com/dapr/docs/ repo: dapr/docs#[issue number]
@DeepanshuA this is puzzling me too. Wonder if there's something wrong with the agents. Or maybe something got into your fork perhaps?
I have updated go.mod with my dapr runtime fork to pick up the latest changes, but cert tests are still failing due to https://github.com/DeepanshuA/components-contrib/actions/runs/3840216913/jobs/6539005648#step:21:183. I am trying to understand the cause but am not able to pin down the reason; any suggestions @berndverst @ItalyPaleAle ?
If you look at the logs every test is failing with:
time="2023-01-04T17:34:21.862174703Z" level=fatal msg="failed to start HTTP server: could not listen on any endpoint for profiling API" instance=fv-az449-122 scope=dapr.runtime type=log ver=unknown
So something about the profiling API seems broken. Was something introduced in dapr/dapr recently that caused this? Is your fork up to date with dapr/dapr@master + your commit?
@ItalyPaleAle @DeepanshuA I checked out this PR locally and tried running one of the cert tests and I'm getting the same error. This is not workflow / Github actions related.
It seems the dapr/dapr fork used here is bad (which could be new commits created by @DeepanshuA or an issue that exists in dapr/dapr master)
I checked - the dapr/dapr fork itself doesn't seem to be the problem. Perhaps the test framework needs to be updated because some runtime signature changed.
Apparently importing latest dapr/dapr@master into components-contrib@master cert tests also causes these failures -- so it is unrelated to this PR here.
I tried with
export DAPR_PACKAGE=github.com/dapr/dapr@v1.9.4-0.20230104234828-82b6903bf38f
make replaceruntime-all
make modtidy-all
@DeepanshuA the culprit is https://github.com/dapr/dapr/pull/5648/files
For a quick fix, you could check out dapr/dapr@v1.9.4-0.20230103184645-d72583982ef2 and add your commit on top of that -- then make that your fork which you import here.
Otherwise you will have to wait for @ItalyPaleAle to fix dapr/dapr@master and then update your dapr/dapr fork, and pin the newest fork version.
Here's the fix, you can merge that into your branch to get the tests to work again. Sorry about that
https://github.com/dapr/dapr/pull/5707
@DeepanshuA I verified that dapr/dapr@v1.9.4-0.20230105041431-785d20140ec4 will address this. So for your fork, please base your commit on top of that and then pin your updated fork here. It should work then.
Thanks @berndverst for investigating this issue, really appreciate it.
I had started to investigate but it was getting quite late here, so I had thought to continue today, but with your investigation, it really helps.
Thanks @ItalyPaleAle for providing a quick fix.
@dapr/approvers-components-contrib @dapr/maintainers-components-contrib Please re-review.
3 cert tests (bindings.kafka, bindings.cron and bindings.rabbitmq) are failing, which is NOT due to any changes in this PR. They are failing when referring to latest dapr master.
https://github.com/dapr/components-contrib/actions/runs/3872016087/jobs/6600380515#step:23:174
https://github.com/dapr/components-contrib/actions/runs/3872016087/jobs/6600381140#step:23:184
https://github.com/dapr/components-contrib/actions/runs/3872016087/jobs/6600380245#step:23:318
When I took latest dapr/dapr and contrib, without my changes, even then it fails:
https://github.com/DeepanshuA/components-contrib/actions/runs/3873108358/jobs/6602751037
It seems that policyDef is nil, policyDef.Hasretries() is failing: https://github.com/DeepanshuA/dapr/blob/ce6dbf12fab1bc69c328899aad5baebaf8e3f51d/pkg/runtime/runtime.go#LL1298C14-L1298C14
@dapr/approvers-components-contrib @dapr/maintainers-components-contrib Please review this PR. This is important to go in asap, as only after this is merged can https://github.com/dapr/dapr/pull/5662 go in. That would then need to be followed by SDK PRs updating the way bulkSubscribe options are passed on, and the Bulk Subscribe perf test would also need to be changed.
This broke several certification tests.
/ok-to-test
|
gharchive/pull-request
| 2023-01-04T15:51:19 |
2025-04-01T04:33:56.008211
|
{
"authors": [
"DeepanshuA",
"ItalyPaleAle",
"berndverst"
],
"repo": "dapr/components-contrib",
"url": "https://github.com/dapr/components-contrib/pull/2405",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1279505777
|
v1.8 Endgame
Release Coordinator: @pravinpushkar
Release Buddy: @amulyavarote
Build and Test Champion: @pravinpushkar
Preflight check: 06/14/2022
Code freeze: 06/21/2022
Endgame starts: 06/22/2022
RC1 date: 06/22/2022 (earliest)
Release date: 06/30/2022
Release Tasks:
[ ] No P0 issues before code freeze.
[ ] Code freeze at 06/21/2022
[ ] Check all new components are registered in dapr/dapr
[ ] E2E, certification and perf tests are passing
[ ] Update components-contrib certification tests go.mods with dapr/dapr 1.8.0-rc.1 commit hash
[ ] Create release-1.8 branch and cut RC 1.8.0-rc.1 for component-contrib
[ ] Update dapr/component-contrib pkg to 1.8.0-rc.1 in dapr/dapr master branch
[ ] Create release-1.8 branch for Dapr runtime
[ ] Verify helm chart and cut the 1.8.0-rc.1 release for Dapr runtime
[ ] Create release-1.8 branch for CLI
[ ] Update dapr/dapr pkg to 1.8.0-rc.1 in dapr/cli release-1.8 branch
[ ] Cut the 1.8.0-rc.1 release for CLI
[ ] Notify users about dapr/dapr 1.8.0-rc.1 via Discord, ML
[ ] Validate the upgrade path from the previous release -- CRDs updated, no deletion of components/configs etc. (This can partially be accomplished by updating the CLI upgrade_test.go)
[ ] Validate dapr init in self hosted mode for RC from GHCR and default registry
[ ] Validate dapr init in k8s mode for RC from GHCR and default registry
[ ] Validate renew-certificate command with a cluster with a Dapr enabled app
[ ] Create rc for installer-bundle
[ ] Update proto and cut the rc release for Java SDK
[ ] Update proto and cut the rc release for Python SDK
[ ] Update proto and cut the rc release for Go SDK
[ ] Update proto and cut the rc release for dontnet SDK
[ ] Update proto and cut the rc release for cpp SDK
[ ] Update proto and cut the rc release for js SDK
[ ] Update proto and cut the rc release for rust SDK
[ ] Send dapr runtime/cli to vscode dev containers team
[ ] Update longhaul tests dotnet SDK version
[ ] Update longhaul tests Dapr runtime version
[ ] Update tutorials automated validation and check it works - covers Linux Standalone/ K8s / Darwin MacOS. Workflow pulls the latest dapr version before the validations begin
[ ] Test and validate tutorials in Linux ARM64 k8s and self-hosted
[ ] Test and validate tutorials in windows
[ ] Validate Dapr CLI in windows
[ ] Validate longhaul metrics
[ ] Test and validate the .NET SDK - run through samples
[ ] Test and validate the java SDK - run through samples
[ ] Test and validate the python SDK - run through samples
[ ] Test and validate the Go SDK - run through samples
[ ] Update certification tests in components-contrib release branch (prior to merge back into master) to use the RC and Go SDK release.
[ ] Create release notes - CLI, Dapr, .NET SDK, Java SDK, Python SDK, Go SDK. Check them in release branch before cutting the RC release.
[ ] Edit and complete release-notes on HackMD
[ ] Verify no breaking changes section in release notes
[ ] Merge release notes into release branch
[ ] Release CLI 1.8
[ ] Release Installer Bundle 1.8
[ ] Create release-1.8 branch for tutorials, update version tag in the images, create v1.8.0 tag and update the readme in master
[ ] For all SDKs, update dapr version and SDK rc version, and check e2e tests
[ ] Release cpp-sdk
[ ] Release js-sdk
[ ] Release go-sdk
[ ] Release java-sdk
[ ] Release python-sdk
[ ] Release rust-sdk
[ ] Release dotnet-sdk and release nuget packages
[ ] Make sure the release notes are copied into the GitHub release description at https://github.com/dapr/dapr/releases
[ ] Create components-contrib release from Tag. Copy components release notes section into the description of the release.
[ ] In the dapr/dapr release branch, update DEV_CONTAINER_VERSION_TAG in docker/docker.mk to the next release increment. Update the DEV_CONTAINER_CLI_TAG to the latest CLI release. In the same PR also update .devcontainer/devcontainer.json to refer to the new Dev Container version.
[ ] Manually trigger the dapr-dev-container workflow on the release branch to publish the new dev container (daprio/dapr-dev).
[ ] In the dapr/components-contrib release branch, update dapr/components-contrib/.devcontainer/Dockerfile to reference the updated dapr-dev image tag for the new release.
[ ] In the dapr/cli release branch, update dapr/cli/.devcontainer/Dockerfile to reference the updated dapr-dev image tag for the new release.
[ ] Update certification tests in components-contrib release branch (prior to merge back into master) to use latest runtime and Go SDK release.
[ ] Generate new Java docs based on the release version of the Java SDK
[ ] Update supported versions table in docs
[ ] Release blog
[ ] Tweet release announcement
[ ] Discord release announcement
[ ] Update versions in the E2E tests in CLI to the latest runtime and dashboard versions
[ ] Merge (NOT SQUASH MERGE/AUTO MERGE) back release-1.8 branch into master for required repos
[ ] Celebrate 🎈
Kindly mention release blockers here in the comments (if there are any)
Java SDK testing and validation ... https://github.com/dapr/java-sdk/pull/756
Completed and released
|
gharchive/issue
| 2022-06-22T04:36:13 |
2025-04-01T04:33:56.027428
|
{
"authors": [
"msfussell",
"mukundansundar",
"pravinpushkar"
],
"repo": "dapr/dapr",
"url": "https://github.com/dapr/dapr/issues/4814",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1755069059
|
Duplicate PosgreSQL entry on menu
Describe the bug
Duplicate the PostgreSQL entry on Menu
Steps to reproduce
Steps to reproduce the behavior:
Go to https://docs.dapr.io/reference/components-reference/supported-configuration-stores
Expected behavior
Single entry of PostgreSQL
Screenshots
Desktop (please complete the following information):
OS: Debian 11
Browser Firefox
Version 102.12.0 ESR
opened PR that includes this change: #3532
resolved - closing
|
gharchive/issue
| 2023-06-13T14:48:09 |
2025-04-01T04:33:56.032165
|
{
"authors": [
"hhunter-ms",
"sang-au"
],
"repo": "dapr/docs",
"url": "https://github.com/dapr/docs/issues/3533",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1910420882
|
Add socket wait for ToxiProxy client.
Description
Add socket wait for ToxiProxy client. It should reduce flakiness on SdkResiliencyIT
Issue reference
We strive to have all PR being opened based on an issue, where the problem or feature have been discussed prior to implementation.
Please reference the issue this PR will close: N/A
Checklist
Please make sure you've completed the relevant tasks for this PR, out of the following list:
[ ] Code compiles correctly
[ ] Created/updated tests
[ ] Extended the documentation
No change with upstream anymore.
|
gharchive/pull-request
| 2023-09-24T23:37:41 |
2025-04-01T04:33:56.034447
|
{
"authors": [
"artursouza"
],
"repo": "dapr/java-sdk",
"url": "https://github.com/dapr/java-sdk/pull/929",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1865034231
|
SCP/SFTP Operations Timeout
I have a bit of a weird issue. I can set up new SSH sessions fine but trying SCP or SFTP fails with operation timeout. I upgraded my cmdlets to the newest version and same issue.
PS C:\> Set-SCPItem -ComputerName $computer -Credential $cred -Path $fileName -Destination /tmp -AcceptKey:$true -Verbose
VERBOSE: Using SSH Username and Password authentication for connection.
VERBOSE: ssh-ed25519 Fingerprint for server1: xx.xx.xx.xx.xx
VERBOSE: Fingerprint matched trusted ssh-ed25519 fingerprint for host server1
Set-SCPItem : Session operation has timed out
At line:1 char:1
Set-SCPItem -ComputerName $computer -Credential $cred -Path $fileName ...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationTimeout: (Renci.SshNet.ScpClient:ScpClient) [Set-SCPItem], SshOperationTimeoutException
+ FullyQualifiedErrorId : SSH.SetScpItem
I've tried extending the timeout to no avail and tried various servers. I verified the remote hosts were operational for SCP/SFTP with WinSCP which works fine.
And like I said... it's weird because I can throw a new SSH session up and that piece works, just not the scp/sftp portions. SCP fails as above as does trying to create a new SFTP session. I turned on verbose but not really giving me anything useful. I also cleared out the keys for those hosts to see if that helped but no dice.
nvm - figured it out
|
gharchive/issue
| 2023-08-24T12:17:08 |
2025-04-01T04:33:56.061365
|
{
"authors": [
"jdixon12"
],
"repo": "darkoperator/Posh-SSH",
"url": "https://github.com/darkoperator/Posh-SSH/issues/543",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
53513365
|
Allow enums in the input.
Enum declarations do not contain the async modifier, so they are copied
unchanged into the output. Uses of enum values of the form E.id will parse
as PrefixedIdentifiers, in which case the translation already handles them
correctly.
Enabling support simply requires setting the analyzer's flag to enable them.
@floitschG
|
gharchive/pull-request
| 2015-01-06T13:13:07 |
2025-04-01T04:33:56.156770
|
{
"authors": [
"kmillikin"
],
"repo": "dart-lang/async_await",
"url": "https://github.com/dart-lang/async_await/pull/76",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
92138032
|
mobile UI punch list
Punch list to re-release the mobile UI:
we need to restore animations (transitions) between pages - when transitioning to the execution results page and back again
in the execution results page, we need more padding on the left hand side of the toolbar
the foreground color of the FAB button can regress in some circumstances
Vulcanization seems to affect how certain buttons interact. The menu button in the top right doesn't function correctly in chrome anymore.
Can you screenshot what you're seeing with the execution results page needing more padding?
The back button on the far left. But I think the toolbar on both screens needs left and right padding - the other buttons look too close to the edges once you click on them (and see the ink press).
#561 #564
|
gharchive/issue
| 2015-06-30T16:31:49 |
2025-04-01T04:33:56.167023
|
{
"authors": [
"Georgehe4",
"devoncarew"
],
"repo": "dart-lang/dart-pad",
"url": "https://github.com/dart-lang/dart-pad/issues/558",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
99695811
|
Consume the save event
@devoncarew @lukechurch
This change consumes the save event. It also changes the event names sent to GA so we can differentiate between index and embed run requests.
lgtm!
#630
Merging.
|
gharchive/pull-request
| 2015-08-07T17:47:03 |
2025-04-01T04:33:56.168463
|
{
"authors": [
"Georgehe4",
"devoncarew"
],
"repo": "dart-lang/dart-pad",
"url": "https://github.com/dart-lang/dart-pad/pull/642",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
103322047
|
Add a diff tool
The dump info visualizer has a pretty useful tab to see the diff between two different dump info files. It would be great to get a similar thing for this package.
great idea. It should be fairly easy to add this.
I'm thinking the format could be something like this:
> pub global run dart2js_info:library_size_diff json1 json2
...
package     size (json1)      size (json2)      diff
csslib      195826   6.35%    195876   6.39%       50  +0.04%
angular2    704387  22.83%    664387  20.83%    -4000  -2%
We would sort on the absolute diff and use red/green terminal colors to highlight the numbers in the diff column. Otherwise, we'd use the same clustering that library_size_split does.
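The core computation could be sketched like this in Dart (the package names and sizes below are the sample numbers from the proposed output above, not real dump-info data, and the function name is my own):

```dart
// Sketch: for each package, pair the sizes from both dump-info files
// and compute the difference; sort output rows by absolute diff.
Map<String, List<int>> sizeDiff(
    Map<String, int> before, Map<String, int> after) {
  final rows = <String, List<int>>{};
  for (final package in {...before.keys, ...after.keys}) {
    final a = before[package] ?? 0;
    final b = after[package] ?? 0;
    rows[package] = [a, b, b - a];
  }
  return rows;
}

void main() {
  final rows = sizeDiff(
      {'csslib': 195826, 'angular2': 704387},
      {'csslib': 195876, 'angular2': 664387});
  // Largest absolute change first, as proposed above.
  final names = rows.keys.toList()
    ..sort((x, y) => rows[y]![2].abs().compareTo(rows[x]![2].abs()));
  for (final name in names) {
    final r = rows[name]!;
    print('$name\t${r[0]}\t${r[1]}\t${r[2]}');
  }
}
```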
sounds good!
|
gharchive/issue
| 2015-08-26T17:10:07 |
2025-04-01T04:33:56.170303
|
{
"authors": [
"jakemac53",
"sigmundch"
],
"repo": "dart-lang/dart2js_info",
"url": "https://github.com/dart-lang/dart2js_info/issues/5",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
268832317
|
ignore the src/ lint warning (implementation_imports)
add a comment to suppress a lint about importing from src/ directories (fix https://github.com/dart-lang/intl/issues/151)
@alan-knight
Closing (and re-opening internally).
|
gharchive/pull-request
| 2017-10-26T16:43:04 |
2025-04-01T04:33:56.173636
|
{
"authors": [
"devoncarew"
],
"repo": "dart-lang/intl_translation",
"url": "https://github.com/dart-lang/intl_translation/pull/19",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2268123285
|
[ASK] How To Use Arena in ffi
Sorry, this is a question. I was looking for a way to free memory immediately after calling the function, so that memory usage doesn't keep increasing. I found Arena as a possible solution, but I can't figure out how to use it.
How do I use Arena?
Currently my script looks like this:
import 'dart:ffi';
import 'package:ffi/ffi.dart';
typedef MyLibStringNative = Pointer<Utf8>;
typedef MyLibReceiveNative = MyLibStringNative Function(Double timeout);
typedef MyLibReceiveDart = MyLibStringNative Function(double timeout);
void main(List<String> args) {
DynamicLibrary myLib = DynamicLibrary.open("liblibrary.so");
MyLibReceiveDart myLibReceiveFunction = myLib.lookupFunction<MyLibReceiveNative, MyLibReceiveDart>("my_function");
MyLibStringNative update = myLibReceiveFunction(10);
if (update.address != 0) {
String updateString = update.toDartString();
print(updateString);
}
}
is this code correct?
import 'dart:ffi';
import 'package:ffi/ffi.dart';
typedef MyLibStringNative = Pointer<Utf8>;
typedef MyLibReceiveNative = MyLibStringNative Function(Double timeout);
typedef MyLibReceiveDart = MyLibStringNative Function(double timeout);
void main(List<String> args) {
// open library
DynamicLibrary myLib = DynamicLibrary.open("liblibrary.so");
// use arena
Arena arena = Arena();
MyLibReceiveDart myLibReceiveFunction = myLib.lookupFunction<MyLibReceiveNative, MyLibReceiveDart>("my_function");
MyLibStringNative update = arena.using(myLibReceiveFunction(10), (p0) {});
if (update.address != 0) {
String updateString = update.toDartString();
print(updateString);
}
// get free up memory
arena.releaseAll();
}
or
is this code correct?
import 'dart:ffi';
import 'package:ffi/ffi.dart';
typedef MyLibStringNative = Pointer<Utf8>;
typedef MyLibReceiveNative = MyLibStringNative Function(Double timeout);
typedef MyLibReceiveDart = MyLibStringNative Function(double timeout);
void main(List<String> args) {
// open library
DynamicLibrary myLib = DynamicLibrary.open("liblibrary.so");
// use arena
//
String? result = using((Arena arena) {
MyLibReceiveDart myLibReceiveFunction = myLib.lookupFunction<MyLibReceiveNative, MyLibReceiveDart>("my_function");
MyLibStringNative update = arena.using(myLibReceiveFunction(10), (p0) {});
if (update.address != 0) {
String updateString = update.toDartString();
print(updateString);
return updateString;
}
// get free up memory
arena.releaseAll();
return null;
});
print(result);
}
Sorry for asking so many questions. I have tried various methods, but after the program runs for a long time the memory still doesn't go down.
You are not providing a free method to using, so it will just run the empty (p0) {} closure.
For example if you have used malloc to allocate the Pointer<Utf8> in C, you can pass malloc.free to free it.
using ((arena) {
arena.using(myLibReceiveFunction(10), malloc.free);
});
// The memory is now released.
// No need for writing arena.releaseAll(), the using block has taken care of calling that.
It's easier to use using ((arena) { /* ... */ }); instead of manually running arena.releaseAll(). As it accounts for exceptions and async.
I have tried but this error appears
free(): double free detected in tcache 2
[1] 485641 IOT instructions (core dumped) dart run
And the program immediately stops
Can you post your full code?
I tried running tdlib (the Telegram Database Library). I managed to run it fine without malloc.free, but when the program runs for a long time the memory keeps increasing. I tried restarting and following the instructions from tdlib, but to no avail, so I tried using Arena and then added the code you suggested:
using ((arena) {
arena.using(myLibReceiveFunction(10), malloc.free);
});
This is my code:
// ignore_for_file: empty_catches, unused_local_variable
import 'dart:ffi';
import 'package:ffi/ffi.dart';
import 'dart:convert' as convert;
typedef TdPointerNative = Pointer;
typedef TdPointerFunctionNative = TdPointerNative Function();
typedef TdStringNative = Pointer<Utf8>;
typedef TdReceiveNative = TdStringNative Function(Double timeout);
typedef TdReceiveDart = TdStringNative Function(double timeout);
typedef TdSendNative = Void Function(TdPointerNative client, TdStringNative request);
typedef TdSendDart = void Function(TdPointerNative client, TdStringNative request);
typedef TdExecuteNative = TdStringNative Function(TdStringNative parameters);
typedef TdDestroyNative = Void Function(Pointer clientId);
typedef TdDestroyDart = void Function(Pointer clientId);
int tdCreateClientId({
required DynamicLibrary tdLib,
}) {
int clientIdNew = using((Arena arena) {
// https://core.telegram.org/tdlib/docs/td__json__client_8h.html#a7feda953a66eee36bc207eb71a55c490
TdPointerFunctionNative tdPointerNativeFunction = tdLib.lookupFunction<TdPointerFunctionNative, TdPointerFunctionNative>('td_create_client_id');
Pointer tdPointerNativeResult = arena.using(tdPointerNativeFunction(), freeMemory);
int clientIdNew = tdPointerNativeResult.address;
return clientIdNew;
});
return clientIdNew;
}
/// td_send
void tdSend({
required int clientId,
Map? parameters,
required DynamicLibrary tdLib,
}) {
using((Arena arena) {
Pointer clientIdAddresData = Pointer.fromAddress(clientId);
TdStringNative requestData = convert.json.encode(parameters).toNativeUtf8();
Arena arena = Arena();
TdSendDart tdSendFunction = tdLib.lookupFunction<TdSendNative, TdSendDart>('td_send');
void tdSendResult = arena.using(tdSendFunction(clientIdAddresData, requestData), (p0) {});
malloc.free(requestData);
});
return;
}
Map<String, dynamic>? tdReceiveStatic({
required DynamicLibrary tdLib,
double timeout = 1.0,
bool isAndroid = false,
}) {
try {
Map<String, dynamic>? result = using((Arena arena) {
/// Docs: https://core.telegram.org/tdlib/docs/td__json__client_8h.html#a62715bea8e41a554d1bac763c187b662
TdReceiveDart tdReceiveFunction = tdLib.lookupFunction<TdReceiveNative, TdReceiveDart>(
'${isAndroid ? "_" : ""}td_receive',
);
TdStringNative update = arena.using(tdReceiveFunction(timeout), freeMemory);
if (update.address != 0) {
String updateString = update.toDartString();
if (updateString.isEmpty) {
return null;
}
Map<String, dynamic>? updateOrigin;
try {
updateOrigin = convert.json.decode(update.toDartString());
} catch (e) {}
if (updateOrigin != null) {
return updateOrigin;
}
} else {}
return null;
});
return result;
} catch (e) {}
return null;
}
void freeMemory(Pointer<NativeType> pointer) {
malloc.free(pointer);
}
void main(List<String> args) async {
// open library tdlib
// Tdlib docs: https://core.telegram.org/tdlib/docs/td__json__client_8h.html
// Tdlib is Telegram Database Library for interact with telegram api so you can make
// Program Application, Bot, Userbot Custom
DynamicLibrary tdLib = DynamicLibrary.open("libtdjson.so");
int clientId = tdCreateClientId(tdLib: tdLib);
tdSend(clientId: clientId, tdLib: tdLib, parameters: {
"@type": "getOption",
"name": "version",
});
while (true) {
await Future.delayed(Duration(microseconds: 1));
Map? update = tdReceiveStatic(tdLib: tdLib);
if (update != null) {
print(update);
}
}
}
current error code
===== CRASH =====
si_signo=Segmentation fault(11), si_code=SEGV_MAPERR(1), si_addr=0xfffffffffffffff9
version=3.3.3 (stable) (Tue Mar 26 14:21:33 2024 +0000) on "linux_x64"
pid=564959, thread=564986, isolate_group=main(0x5bdb6457f540), isolate=main(0x5bdb645858d0)
os=linux, arch=x64, comp=no, sim=no
isolate_instructions=5bdb6238d580, vm_instructions=5bdb6238d580
fp=7d1cf87fe0b8, sp=7d1cf87fe070, pc=7d1d0b2a881e
pc 0x00007d1d0b2a881e fp 0x00007d1cf87fe0b8 free+0x1e
pc 0x00007d1d09ba5dd7 fp 0x00007d1cf87fe0f0 Unknown symbol
pc 0x00007d1d09ba5433 fp 0x00007d1cf87fe138 Unknown symbol
pc 0x00007d1d09ba5288 fp 0x00007d1cf87fe170 Unknown symbol
pc 0x00007d1d09ba51e9 fp 0x00007d1cf87fe1a0 Unknown symbol
pc 0x00007d1d09ba50e3 fp 0x00007d1cf87fe1f8 Unknown symbol
pc 0x00007d1d09ba4f80 fp 0x00007d1cf87fe250 Unknown symbol
pc 0x00007d1d09ba4e8a fp 0x00007d1cf87fe2a0 Unknown symbol
pc 0x00007d1d09ba4275 fp 0x00007d1cf87fe310 Unknown symbol
pc 0x00007d1d09ba2cd8 fp 0x00007d1cf87fe3a0 Unknown symbol
pc 0x00007d1d09ba28af fp 0x00007d1cf87fe400 Unknown symbol
pc 0x00007d1d09ba20a1 fp 0x00007d1cf87fe458 Unknown symbol
pc 0x00007d1d09ba1f99 fp 0x00007d1cf87fe488 Unknown symbol
pc 0x00007d1d09ba1106 fp 0x00007d1cf87fe4f0 Unknown symbol
pc 0x00007d1d09ba138c fp 0x00007d1cf87fe538 Unknown symbol
pc 0x00007d1d09ba1106 fp 0x00007d1cf87fe5a0 Unknown symbol
pc 0x00007d1d09ba01bc fp 0x00007d1cf87fe5f8 Unknown symbol
pc 0x00007d1d0ac02e46 fp 0x00007d1cf87fe670 Unknown symbol
pc 0x00005bdb624cb0e2 fp 0x00007d1cf87fe6d0 dart::DartEntry::InvokeFunction+0x162
pc 0x00005bdb624ccad3 fp 0x00007d1cf87fe710 dart::DartLibraryCalls::HandleMessage+0x123
pc 0x00005bdb624e944f fp 0x00007d1cf87feca0 dart::IsolateMessageHandler::HandleMessage+0x2bf
pc 0x00005bdb6250b7d6 fp 0x00007d1cf87fed10 dart::MessageHandler::HandleMessages+0x116
pc 0x00005bdb6250bdc8 fp 0x00007d1cf87fed60 dart::MessageHandler::TaskCallback+0x1e8
pc 0x00005bdb626098e7 fp 0x00007d1cf87fede0 dart::ThreadPool::WorkerLoop+0x137
pc 0x00005bdb62609b72 fp 0x00007d1cf87fee10 dart::ThreadPool::Worker::Main+0x72
pc 0x00005bdb62593096 fp 0x00007d1cf87feed0 dart::ThreadStart+0xd6
-- End of DumpStackTrace
pc 0x0000000000000000 fp 0x00007d1cf87fe0b8 sp 0x0000000000000000 Cannot find code object
pc 0x00007d1d09ba5dd7 fp 0x00007d1cf87fe0f0 sp 0x00007d1cf87fe0c8 [Optimized] init:posixFree.#ffiClosure2
pc 0x00007d1d09ba5433 fp 0x00007d1cf87fe138 sp 0x00007d1cf87fe100 [Unoptimized] MallocAllocator.free
pc 0x00007d1d09ba5288 fp 0x00007d1cf87fe170 sp 0x00007d1cf87fe148 [Unoptimized] freeMemory
pc 0x00007d1d09ba51e9 fp 0x00007d1cf87fe1a0 sp 0x00007d1cf87fe180 [Unoptimized] freeMemory
pc 0x00007d1d09ba50e3 fp 0x00007d1cf87fe1f8 sp 0x00007d1cf87fe1b0 [Unoptimized] _RootZone@4048458.runUnary
pc 0x00007d1d09ba4f80 fp 0x00007d1cf87fe250 sp 0x00007d1cf87fe208 [Unoptimized] _RootZone@4048458.bindUnaryCallback.<anonymous closure>
pc 0x00007d1d09ba4e8a fp 0x00007d1cf87fe2a0 sp 0x00007d1cf87fe260 [Unoptimized] Arena.using.<anonymous closure>
pc 0x00007d1d09ba4275 fp 0x00007d1cf87fe310 sp 0x00007d1cf87fe2b0 [Unoptimized] Arena.releaseAll
pc 0x00007d1d09ba2cd8 fp 0x00007d1cf87fe3a0 sp 0x00007d1cf87fe320 [Unoptimized] using
pc 0x00007d1d09ba28af fp 0x00007d1cf87fe400 sp 0x00007d1cf87fe3b0 [Unoptimized] tdCreateClientId
pc 0x00007d1d09ba20a1 fp 0x00007d1cf87fe458 sp 0x00007d1cf87fe410 [Unoptimized] main
pc 0x00007d1d09ba1f99 fp 0x00007d1cf87fe488 sp 0x00007d1cf87fe468 [Unoptimized] main
pc 0x00007d1d09ba1106 fp 0x00007d1cf87fe4f0 sp 0x00007d1cf87fe498 [Unoptimized] _Closure@0150898.dyn:call
pc 0x00007d1d09ba138c fp 0x00007d1cf87fe538 sp 0x00007d1cf87fe500 [Unoptimized] _delayEntrypointInvocation@1026248.<anonymous closure>
pc 0x00007d1d09ba1106 fp 0x00007d1cf87fe5a0 sp 0x00007d1cf87fe548 [Unoptimized] _Closure@0150898.dyn:call
pc 0x00007d1d09ba01bc fp 0x00007d1cf87fe5f8 sp 0x00007d1cf87fe5b0 [Unoptimized] _RawReceivePort@1026248._handleMessage@1026248
pc 0x00007d1d0ac02e46 fp 0x00007d1cf87fe670 sp 0x00007d1cf87fe608 [Stub] InvokeDartCode
[1] 564959 IOT instruction (core dumped) dart run
So I started reading your code, and the first problem I found: based on the URL in the comment, td_create_client_id returns an integer, not a pointer. Trying to free some random integer will cause a segmentation fault.
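Given that observation, a minimal corrected binding could look like this (typedef names are my own; the only fact taken from the comment above is that td_create_client_id returns a C int, so there is nothing to free). This is a sketch, not a full tdlib client:

```dart
import 'dart:ffi';

// td_create_client_id returns a plain C int (an opaque client id),
// not a pointer, so it must not be wrapped in Pointer.fromAddress
// and must never be passed to malloc.free.
typedef TdCreateClientIdNative = Int32 Function();
typedef TdCreateClientIdDart = int Function();

int tdCreateClientId({required DynamicLibrary tdLib}) {
  final create = tdLib.lookupFunction<TdCreateClientIdNative,
      TdCreateClientIdDart>('td_create_client_id');
  return create(); // just return the int; no memory management needed
}
```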
Your original question of how to release memory using Arena is answered. For further questions please use Q&A sites like Stack Overflow. Closing this.
|
gharchive/issue
| 2024-04-29T04:57:41 |
2025-04-01T04:33:56.196810
|
{
"authors": [
"HosseinYousefi",
"azkadev"
],
"repo": "dart-lang/native",
"url": "https://github.com/dart-lang/native/issues/1109",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
127821843
|
polymer 1 rc14 - error when accessing a polymer_element attribute
I am updating an app to polymer 1.0.0-rc14.
This code was working before this version:
PaperDrawerPanel get drawer => $['drawerPanel'];
@reflectable
pageChanged(String value, String old) {
...
drawer?.drawerWidth = "0";
...
}
Now I am having this error.
Unhandled exception:
NoSuchMethodError: method not found: 'drawerWidth='
Receiver: Instance of 'JsObjectImpl'
Arguments: ["0"]
#0 JsObject.noSuchMethod.throwError (dart:js:1084)
#1 JsObject.noSuchMethod (dart:js:1111)
#2 RootElement.pageChanged (package:walletek_app_web/elements/root_element/root_element.dart:38:17)
#3 Function._apply (dart:core-patch/function_patch.dart:7)
#4 Function.apply (dart:core-patch/function_patch.dart:28)
#5 _InstanceMirrorImpl.invoke (package:reflectable/src/reflectable_transformer_based.dart:252:21)
#6 addDeclarationToPrototype.<anonymous closure> (package:polymer/src/common/declarations.dart:134:35)
#7 JsFunction._apply (dart:js:1178)
#8 JsFunction.apply (dart:js:1176)
#9 HtmlElement&PolymerMixin&PolymerBase.notifyPath (package:polymer_interop/src/polymer_base.dart:232:28)
#10 PolymerElementPropertyNotifier.notifyPath (package:polymer_autonotify/polymer_autonotify.dart:340:33)
#11 PropertyNotifier&HasChildrenMixin&HasChildrenReflectiveMixin.observe.<anonymous closure>.<anonymous closure> (package:polymer_autonotify/polymer_autonotify.dart:223:9)
#12 _HashVMBase&MapMixin&&_LinkedHashMapMixin.forEach (dart:collection-patch/compact_hash.dart:340)
#13 PropertyNotifier&HasChildrenMixin&HasChildrenReflectiveMixin.observe.<anonymous closure> (package:polymer_autonotify/polymer_autonotify.dart:220:17)
#14 _RootZone.runUnaryGuarded (dart:async/zone.dart:1087)
#15 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:341)
#16 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:270)
#17 _SyncBroadcastStreamController._sendData (dart:async/broadcast_stream_controller.dart:381)
#18 _BroadcastStreamController.add (dart:async/broadcast_stream_controller.dart:256)
#19 PolymerElement&AutonotifyBehavior&JsProxy&ChangeNotifier.deliverChanges (package:observe/src/change_notifier.dart:49:16)
#20 _microtaskLoop (dart:async/schedule_microtask.dart:43)
#21 _microtaskLoopEntry (dart:async/schedule_microtask.dart:52)
#22 _ScheduleImmediateHelper._handleMutation (dart:html:49298)
#23 MutationObserver._create.<anonymous closure> (dart:html:27545)
#0 JsFunction._apply (dart:js:1178)
#1 JsFunction.apply (dart:js:1176)
#2 HtmlElement&PolymerMixin&PolymerBase.notifyPath (package:polymer_interop/src/polymer_base.dart:232:28)
#3 PolymerElementPropertyNotifier.notifyPath (package:polymer_autonotify/polymer_autonotify.dart:340:33)
#4 PropertyNotifier&HasChildrenMixin&HasChildrenReflectiveMixin.observe.<anonymous closure>.<anonymous closure> (package:polymer_autonotify/polymer_autonotify.dart:223:9)
#5 _HashVMBase&MapMixin&&_LinkedHashMapMixin.forEach (dart:collection-patch/compact_hash.dart:340)
#6 PropertyNotifier&HasChildrenMixin&HasChildrenReflectiveMixin.observe.<anonymous closure> (package:polymer_autonotify/polymer_autonotify.dart:220:17)
#7 _RootZone.runUnaryGuarded (dart:async/zone.dart:1087)
#8 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:341)
#9 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:270)
#10 _SyncBroadcastStreamController._sendData (dart:async/broadcast_stream_controller.dart:381)
#11 _BroadcastStreamController.add (dart:async/broadcast_stream_controller.dart:256)
#12 PolymerElement&AutonotifyBehavior&JsProxy&ChangeNotifier.deliverChanges (package:observe/src/change_notifier.dart:49:16)
#13 _microtaskLoop (dart:async/schedule_microtask.dart:43)
#14 _microtaskLoopEntry (dart:async/schedule_microtask.dart:52)
#15 _ScheduleImmediateHelper._handleMutation (dart:html:49298)
#16 MutationObserver._create.<anonymous closure> (dart:html:27545)
I tried accessing attributes of other elements; the result is the same.
I noticed you are using PolymerElementPropertyNotifier, which I think is from the polymer_autonotify package? It seems to me that it is actually causing the issue, although it's hard to tell for sure. This isn't a general systemic issue, because all the polymer_elements tests are passing.
Okay, maybe I missed or didn't understand something here.
I tried without polymer_autonotify; it's still the same.
My application look like this
my_app_library.dart
library my_app;
export "my_elements.dart";
export "material.dart"
my_elements.dart
library my_app.elements;
export "root_element.dart";
material.dart
library my_app.material;
export "package:polymer_elements/paper_drawer_panel";
And this is the root-element
import "package:my_app/my_app_library.dart";
@PolymerRegister("root-element")
class RootElement extends PolymerElement {
RootElement.created() : super.created();
PaperDrawerPanel get drawer => $['drawerPanel'];
ready() {
print(drawer.drawerWidth);
}
}
I have no error if I import the material library directly in the root-element. Why?
import "package:my_app/my_app_library.dart";
import "package:my_app/material.dart";
So the issue here is actually the order in which things are being registered. In general, the best way to avoid this type of problem is to not use the export pattern to bundle up large number of elements. If you do that you need to make sure that you put them in the right order.
Specifically in your case it looks like the way the exports were set up, root_element.dart would get loaded (and registered) before material.dart (and thus paper-drawer-panel). You could also solve the issue by just changing the order of your exports in the my_app_library.dart file to make the material.dart file come first.
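Following that suggestion, the reordered export file would look like this (file names taken from the question above):

```dart
// my_app_library.dart — export material.dart first so that
// paper-drawer-panel is registered before root-element is loaded.
library my_app;

export "material.dart";
export "my_elements.dart";
```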
|
gharchive/issue
| 2016-01-21T01:06:37 |
2025-04-01T04:33:56.206654
|
{
"authors": [
"jakemac53",
"lejard-h"
],
"repo": "dart-lang/polymer_elements",
"url": "https://github.com/dart-lang/polymer_elements/issues/120",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
84584636
|
pub upgrade reports newer versions but doesn't actually upgrade
This issue was originally filed by dave...@gmail.com
My pubspec.yaml and pub upgrade results:
https://gist.github.com/DaveNotik/d3b47c97aab76eeb58b7
What is the expected output? What do you see instead?
I expected upgraded dependencies. Instead I got a notice that newer versions are available and "no dependencies updated".
What version of the product are you using? On what operating system?
Mac OS X. Dart 1.5.1.
Please provide any additional information below.
On #dart:
15:17 floitsch: Either there is a different dependency that forbids the update or a bug. (or I don't get it)
15:17 DaveNotik: floitsch: https://gist.github.com/DaveNotik/d3b47c97aab76eeb58b7
15:19 floitsch: DaveNotik: could be that one of the packages requires an older version of XY and that one keeps all the others pinned.
15:20 DaveNotik: But wouldn't it inform me...
15:20 floitsch: I would file a bug.
15:20 floitsch: Either it should inform you, or it should upgrade.
15:20 floitsch: This is not helpful.
This issue has been moved to dart-lang/pub#1033.
|
gharchive/issue
| 2014-07-01T19:24:18 |
2025-04-01T04:33:56.225017
|
{
"authors": [
"DartBot"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/19760",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
238992965
|
Removed files not recognized as removed
I had one file a.dart and a second file b.dart that imported "a.dart" (in case the style of import is important). If I delete a.dart, the import in b.dart isn't marked as invalid. This persists even if I restart the analysis server.
@scheglov Is server finding the summary for a.dart and not checking to see whether the file currently exists? If so, should we remove the summary when a file is deleted?
This might be related to #29968.
I was not able to reproduce this issue.
The test in https://codereview.chromium.org/2959903005 does remove/re-add and the error is reported after remove.
I don't have time right now to try to reproduce it, so I'll close this.
|
gharchive/issue
| 2017-06-27T21:47:38 |
2025-04-01T04:33:56.227600
|
{
"authors": [
"bwilkerson",
"scheglov"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/30032",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
239059964
|
No compile-time error for initialization clash
According to #29656 there should be a compile-time error for the following code:
class C {
final x = 1;
C() : this.x = 2 {}
}
main() {
try {
new C();
} catch(e) {
print("Runtime exception: " + e.toString());
}
}
This program can be run without any compile-time error. The output is:
Runtime exception: 'file://...': error: line 39 pos 9: final field 'x' is already initialized.
C() : this.x = 2 {}
^
There should be a compile-time error, not a runtime error.
Tested on Dart VM version: 1.25.0-dev.3.0 (Fri Jun 23 04:25:18 2017) on "windows_x64"
This example has a static error in both CFE and analyzer now.
|
gharchive/issue
| 2017-06-28T05:55:33 |
2025-04-01T04:33:56.229881
|
{
"authors": [
"natebosch",
"sgrekhov"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/30036",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
296552178
|
Pub on Windows fails with TLS error
From https://ci.appveyor.com/project/flutter/flutter/build/1.0.9587 which keeps going on indefinitely
Build started
git clone -q https://github.com/flutter/flutter.git C:\projects\flutter
git fetch -q origin +refs/pull/14610/merge:
git checkout -qf FETCH_HEAD
Restoring build cache
Cache 'C:\Users\appveyor\AppData\Roaming\Pub\Cache' - Restored
Running Install scripts
cd ..
move flutter "flutter sdk"
1 dir(s) moved.
cd "flutter sdk"
bin\flutter.bat config --no-analytics
Checking Dart SDK version...
Downloading Dart SDK from Flutter engine f5a4a9378740c3d5996583a9ed1f7e28ff08ee85...
Unzipping Dart SDK...
Updating flutter tool...
Got TLS error trying to find package archive at https://pub.dartlang.org.
Error: Unable to 'pub upgrade' flutter tool. Retrying in five seconds...
Waiting for 5 seconds, press CTRL+C to quit ...43210
Got TLS error trying to find package archive at https://pub.dartlang.org.
Error: Unable to 'pub upgrade' flutter tool. Retrying in five seconds...
Waiting for 5 seconds, press CTRL+C to quit ...43210
Got TLS error trying to find package archive at https://pub.dartlang.org.
Error: Unable to 'pub upgrade' flutter tool. Retrying in five seconds...
Waiting for 5 seconds, press CTRL+C to quit ...43210
Got TLS error trying to find package archive at https://pub.dartlang.org.
Error: Unable to 'pub upgrade' flutter tool. Retrying in five seconds...
Waiting for 5 seconds, press CTRL+C to quit ...43210
Got TLS error trying to find package archive at https://pub.dartlang.org.
Error: Unable to 'pub upgrade' flutter tool. Retrying in five seconds...
...
Pub just uses dart:io's HTTP implementation. Any protocol errors either come from there or from https://github.com/dart-lang/pub-dartlang-dart.
Here is more output from an actual dev Windows box:
C:\src\flutter\flutter\packages\flutter_tools [use-host-dart-sdk ≡ +4 ~1 -0 !]> C:\src\flutter\flutter\bin\cache\dart-sdk\bin\pub.bat upgrade --verbosity=all --no
-packages-dir
FINE: Pub 2.0.0-edge.28757928b47b192efcec082c78258102beb03f78
IO : Spawning "cmd /c ver" in C:\src\flutter\flutter\packages\flutter_tools\.
IO : Finished ver. Exit code 0.
| stdout:
| |
| | Microsoft Windows [Version 10.0.14393]
| Nothing output on stderr.
MSG : Resolving dependencies...
SLVR: Solving dependencies:
| - coverage 0.10.0 from hosted (coverage)
| - test 0.12.30+3 from hosted (test)
| - file 2.3.6 from hosted (file)
| - mustache 1.0.0 from hosted (mustache)
| - meta 1.1.2 from hosted (meta)
| - web_socket_channel 1.0.7 from hosted (web_socket_channel)
| - http 0.11.3+16 from hosted (http)
| - xml 2.6.0 from hosted (xml)
| - json_rpc_2 2.0.7 from hosted (json_rpc_2)
| - stream_channel 1.6.3 from hosted (stream_channel)
| - process 2.0.7 from hosted (process)
| - vm_service_client 0.2.4+1 from hosted (vm_service_client)
| - front_end any from hosted (front_end)
| - linter 0.1.43 from hosted (linter)
| - quiver 0.28.0 from hosted (quiver)
| - args 1.3.0 from hosted (args)
| - package_config 1.0.3 from hosted (package_config)
| - crypto 2.0.2+1 from hosted (crypto)
| - platform 2.1.2 from hosted (platform)
| - plugin 0.2.0+2 from hosted (plugin)
| - stack_trace 1.9.1 from hosted (stack_trace)
| - usage 3.3.0 from hosted (usage)
| - intl 0.15.2 from hosted (intl)
| - archive 1.0.33 from hosted (archive)
| - cli_util 0.1.2+1 from hosted (cli_util)
| - json_schema 1.0.8 from hosted (json_schema)
| - yaml 2.1.13 from hosted (yaml)
| - analyzer any from hosted (analyzer)
IO : Get versions from https://pub.dartlang.org/api/packages/coverage.
IO : HTTP GET https://pub.dartlang.org/api/packages/coverage
| Accept: application/vnd.pub.v2+json
| X-Pub-OS: windows
| X-Pub-Command: upgrade
| X-Pub-Session-ID: 1AA465EA-FD18-4E59-AED7-AA340932B152
| X-Pub-Reason: direct
| user-agent: Dart pub 2.0.0-edge.28757928b47b192efcec082c78258102beb03f78
SLVR: Could not get versions for coverage from hosted:
| Got TLS error trying to find package coverage at https://pub.dartlang.org.
|
| package:pub/src/utils.dart 733 fail
| package:pub/src/source/hosted.dart 335 BoundHostedSource._throwFriendlyError
| package:pub/src/source/hosted.dart 141 BoundHostedSource.doGetVersions
| ===== asynchronous gap ===========================
| dart:async _Completer.completeError
| package:pub/src/source/hosted.dart BoundHostedSource.doGetVersions
| ===== asynchronous gap ===========================
| dart:async _asyncErrorWrapperHelper
| package:pub/src/source/hosted.dart 130 BoundHostedSource.doGetVersions
| package:pub/src/source.dart 169 BoundSource.getVersions
| package:pub/src/solver/version_solver.dart 237 SolverCache.getVersions.<fn>
| dart:async runZoned
| package:pub/src/http.dart 267 withDependencyType
| package:pub/src/solver/version_solver.dart 236 SolverCache.getVersions
| ===== asynchronous gap ===========================
| dart:async new Future.microtask
| package:pub/src/solver/version_solver.dart 210 SolverCache.getVersions
| package:pub/src/solver/unselected_package_queue.dart 121 UnselectedPackageQueue._getNumVersions
| ===== asynchronous gap ===========================
| dart:async new Future.microtask
| package:pub/src/solver/unselected_package_queue.dart 115 UnselectedPackageQueue._getNumVersions
| package:pub/src/solver/unselected_package_queue.dart 50 UnselectedPackageQueue.add
| ===== asynchronous gap ===========================
| dart:async new Future.microtask
| package:pub/src/solver/unselected_package_queue.dart 44 UnselectedPackageQueue.add
| package:pub/src/solver/version_selection.dart 88 VersionSelection._addDependencies
| ===== asynchronous gap ===========================
| dart:async new Future.microtask
| package:pub/src/solver/version_selection.dart 70 VersionSelection._addDependencies
| package:pub/src/solver/version_selection.dart 63 VersionSelection.select
| ===== asynchronous gap ===========================
| dart:async _asyncThenWrapperHelper
| package:pub/src/solver/version_selection.dart 58 VersionSelection.select
| package:pub/src/solver/backtracking_solver.dart 174 BacktrackingSolver.solve
| ===== asynchronous gap ===========================
| dart:async new Future.microtask
| package:pub/src/solver/backtracking_solver.dart 160 BacktrackingSolver.solve
| package:pub/src/solver/version_solver.dart 42 resolveVersions.<fn>
| package:pub/src/log.dart 409 progress
| package:pub/src/solver/version_solver.dart 40 resolveVersions
| package:pub/src/entrypoint.dart 195 Entrypoint.acquireDependencies
@zanderso any thoughts? This works fine on Linux, but fails on Windows.
Chatted a bit in person with @aam. This is a bit mystifying since the secure socket implementation hasn't been touched by anyone in months. Has something changed recently with pub.dartlang.org? @mkustermann
Let's track the Windows SSL errors here: https://github.com/dart-lang/sdk/issues/32131
I believe this is different from #32131 .
Here the problem seems to be that dart_use_fallback_root_certificates defaults to false; the Dart SDK overrides it to true in args.gn, which is generated by sdk/tools/gn.py.
This true-setting logic doesn't exist in the Flutter build.
@zanderso There have been no changes to the certificate served from pub.dartlang.org for many months.
https://github.com/flutter/engine/pull/4662 fixes this problem.
Can anyone tell me how I can get the certificate file on Windows 10?
/cc @jonasfj for pub server.
@shajji1, I'm not sure what your issue is...
Have you checked out: https://dart.dev/tools/pub/troubleshoot#pub-get-fails-from-behind-a-corporate-firewall
Or file a new issue on https://github.com/dart-lang/pub
@jonasfj, I'm getting "Got TLS error trying to find package cupertino_icons at https://pub.dartlang.org." while running flutter packages get. I checked your link, but I don't know the values for the proxy server environment variables. Kindly help.
If you're behind a proxy your browser might know the configuration.. and hopefully your IT admin.
Try internet settings, hmm, I'm not a Windows expert here, sorry..
k i'll try to solve this issue, well thnx for your prompt response :)
Try deleting the "cache" folder in Flutter "bin" folder and then run flutter pub get again
For all those getting the TLS error:
Check if you are able to SSH to any other machine.
If not, your port 22 might be blocked; get it unblocked by your IT if you are on a corporate machine.
@anujkapoor pub doesn't use SSH to talk to pub.dartlang.org.
@PStoner3, pub uses package:http, so HttpClient from dart:io, this should be HTTP 1.1, afaik there is no HTTP 2 support in the Dart SDK.
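For anyone debugging these TLS failures, a quick way to check whether the Dart HTTP stack itself can reach pub.dartlang.org is a small standalone script using the same dart:io HttpClient that pub ultimately relies on (a diagnostic sketch, not part of pub):

```dart
import 'dart:io';

Future<void> main() async {
  final client = HttpClient();
  try {
    final request = await client.getUrl(
        Uri.parse('https://pub.dartlang.org/api/packages/cupertino_icons'));
    final response = await request.close();
    print('HTTP ${response.statusCode}');
  } on HandshakeException catch (e) {
    // A failure here reproduces the "Got TLS error" independently of pub.
    print('TLS handshake failed: $e');
  } finally {
    client.close(force: true);
  }
}
```

If this script fails with a HandshakeException while a browser on the same machine can load the URL, the problem is likely a proxy or missing root certificates rather than pub itself.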
|
gharchive/issue
| 2018-02-12T22:47:12 |
2025-04-01T04:33:56.242726
|
{
"authors": [
"aam",
"anujkapoor",
"jonasfj",
"kpratihast",
"mkustermann",
"nex3",
"shajji1",
"zanderso"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/32129",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
316579906
|
object.cc: 16408: error: unreachable code when calling _typeOf<Provider>()
This code does not fail on startup, but during hot reload.
Flutter version
Flutter 0.3.0 • channel dev • https://github.com/flutter/flutter.git
Framework • revision c73b8a7cf6 (7 days ago) • 2018-04-12 16:17:26 -0700
Engine • revision 8a6e64a8ef
Tools • Dart 2.0.0-dev.47.0.flutter-4126459025
Repro code
//-------------------------------------------------------------------
import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  final String title;

  MyApp({Key key, this.title}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: "Flutter Redux Demo",
      theme: ThemeData(
        primarySwatch: Colors.red,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class AppState {
}

class Provider<T> {
}

abstract class PageState<W extends StatefulWidget, S> extends State<W> {
  // Workaround to capture generics
  static Type _typeOf<T>() => T;

  void init() {
    final type1 = _typeOf<Provider<AppState>>(); // works
    final type2 = _typeOf<Provider<S>>(); // DOES NOT WORK - object.cc: 16408: error: unreachable code
  }
}

class MyHomePage extends StatefulWidget {
  final String title;

  MyHomePage({Key key, this.title}) : super(key: key);

  @override
  _MyHomePageState createState() => new _MyHomePageState();
}

class _MyHomePageState extends PageState<MyHomePage, AppState> {
  @override
  Widget build(BuildContext context) {
    super.init(); // calls into method that causes failure
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
        ),
      ),
    );
  }
}
Log Failure:
Initializing hot reload...
E/DartVM ( 3758): ../../third_party/dart/runtime/vm/object.cc: 16408: error: unreachable code
E/DartVM ( 3758): Dumping native stack trace for thread ec6
E/DartVM ( 3758): [0x0000736d9c49df67] Unknown symbol
E/DartVM ( 3758): [0x0000736d9c49df67] Unknown symbol
E/DartVM ( 3758): [0x0000736d9c739786] Unknown symbol
E/DartVM ( 3758): -- End of DumpStackTrace
F/libc ( 3758): Fatal signal 6 (SIGABRT), code -6 in tid 3782 (Thread-2)
Build fingerprint: 'Android/sdk_google_phone_x86_64/generic_x86_64:7.0/NYC/4662066:userdebug/dev-keys'
Revision: '0'
ABI: 'x86_64'
pid: 3758, tid: 3782, name: Thread-2 >>> com.example.helloworld <<<
signal 6 (SIGABRT), code -6 (SI_TKILL), fault addr --------
rax 0000000000000000 rbx 0000736d9b5654f8 rcx ffffffffffffffff rdx 0000000000000006
rsi 0000000000000ec6 rdi 0000000000000eae
r8 0000000000000000 r9 000000000000001f r10 0000000000000008 r11 0000000000000202
r12 0000000000000ec6 r13 0000000000000006 r14 0000736d9c7a2837 r15 0000736d9b55dbd0
cs 0000000000000033 ss 000000000000002b
rip 0000736db7c93b67 rbp 0000000000000000 rsp 0000736d9b55da38 eflags 0000000000000202
backtrace:
#00 pc 000000000008db67 /system/lib64/libc.so (tgkill+7)
#1 pc 000000000008a601 /system/lib64/libc.so (pthread_kill+65)
#2 pc 0000000000030241 /system/lib64/libc.so (raise+17)
#3 pc 000000000002877d /system/lib64/libc.so (abort+77)
#4 pc 000000000078c695 /data/app/com.example.helloworld-1/lib/x86_64/libflutter.so
#5 pc 0000000000a8078a /data/app/com.example.helloworld-1/lib/x86_64/libflutter.so
#6 pc 0000000000750ed2 /data/app/com.example.helloworld-1/lib/x86_64/libflutter.so
#7 pc 0000000000704cf6 /data/app/com.example.helloworld-1/lib/x86_64/libflutter.so
#8 pc 000000000071e48f /data/app/com.example.helloworld-1/lib/x86_64/libflutter.so
#9 pc 000000000082ea2c /data/app/com.example.helloworld-1/lib/x86_64/libflutter.so
#10 pc 000000000000063a anonymous:0000736d98940000
#11 pc 0000000000020fa0 anonymous:0000736d8f580000
#12 pc 0000000000020840 anonymous:0000736d8f580000
#13 pc 0000000000002233 anonymous:0000736d8de00000
#14 pc 0000000000008523 anonymous:0000736d8e100000
#15 pc 000000000000768c anonymous:0000736d8e100000
#16 pc 0000000000006f98 anonymous:0000736d8e100000
#17 pc 000000000003f2c6 anonymous:0000736d8e100000
#18 pc 000000000000687f anonymous:0000736d8e100000
#19 pc 00000000000054b2 anonymous:0000736d8e100000
#20 pc 00000000000046af anonymous:0000736d8e100000
#21 pc 000000000000887b anonymous:0000736d8e100000
#22 pc 000000000000768c anonymous:0000736d8e100000
#23 pc 0000000000006f98 anonymous:0000736d8e100000
#24 pc 000000000000687f anonymous:0000736d8e100000
#25 pc 00000000000054b2 anonymous:0000736d8e100000
#26 pc 00000000000046af anonymous:0000736d8e100000
#27 pc 0000000000031077 anonymous:0000736d8db80000
#28 pc 00000000000054b2 anonymous:0000736d8e100000
#29 pc 00000000000046af anonymous:0000736d8e100000
#30 pc 0000000000031077 anonymous:0000736d8db80000
#31 pc 00000000000054b2 anonymous:0000736d8e100000
#32 pc 00000000000046af anonymous:0000736d8e100000
#33 pc 0000000000031077 anonymous:0000736d8db80000
#34 pc 00000000000054b2 anonymous:0000736d8e100000
#35 pc 00000000000046af anonymous:0000736d8e100000
#36 pc 0000000000031077 anonymous:0000736d8db80000
#37 pc 00000000000054b2 anonymous:0000736d8e100000
#38 pc 00000000000046af anonymous:0000736d8e100000
#39 pc 000000000000887b anonymous:0000736d8e100000
#40 pc 000000000000768c anonymous:0000736d8e100000
#41 pc 0000000000006f98 anonymous:0000736d8e100000
#42 pc 000000000003f2c6 anonymous:0000736d8e100000
#43 pc 000000000000687f anonymous:0000736d8e100000
#44 pc 00000000000054b2 anonymous:0000736d8e100000
#45 pc 00000000000046af anonymous:0000736d8e100000
#46 pc 000000000000887b anonymous:0000736d8e100000
#47 pc 000000000000768c anonymous:0000736d8e100000
#48 pc 0000000000006f98 anonymous:0000736d8e100000
#49 pc 000000000000687f anonymous:0000736d8e100000
#50 pc 00000000000054b2 anonymous:0000736d8e100000
#51 pc 00000000000046af anonymous:0000736d8e100000
#52 pc 000000000000887b anonymous:0000736d8e100000
#53 pc 000000000000768c anonymous:0000736d8e100000
#54 pc 0000000000006f98 anonymous:0000736d8e100000
#55 pc 000000000003f2c6 anonymous:0000736d8e100000
#56 pc 000000000000687f anonymous:0000736d8e100000
#57 pc 00000000000054b2 anonymous:0000736d8e100000
#58 pc 00000000000046af anonymous:0000736d8e100000
#59 pc 0000000000031077 anonymous:0000736d8db80000
#60 pc 00000000000054b2 anonymous:0000736d8e100000
#61 pc 00000000000046af anonymous:0000736d8e100000
#62 pc 000000000000887b anonymous:0000736d8e100000
#63 pc 000000000000768c anonymous:0000736d8e100000
Lost connection to device.
Tentatively targeting beta4
I am not able to reproduce this right away - hot reload seems to be working fine as I am making changes to 'Flutter Redux Demo' and 'Flutter Demo Home Page' string literals in the code sample above.
$FH/flutter/bin/flutter doctor
[✓] Flutter (Channel master, v0.3.6-pre.113, on Linux, locale en_US.UTF-8)
• Flutter version 0.3.6-pre.113 at /usr/local/google/home/aam/p/f/t11/flutter/flutter
• Framework revision d820e5f3b1 (4 days ago), 2018-05-03 22:27:29 -0700
• Engine revision e976be13c5
• Dart version 2.0.0-dev.53.0.flutter-e6d7d67f4b
@affinnen can you share more details what you change when the hot reload fails for you?
My test was done in the iOS simulator (iPhone 6).
On May 7, 2018, Alexander Aprelev wrote:
Hot reloading while changing primarySwatch: Colors.red to primarySwatch: Colors.blue also works.
What device did you run this on? I'm running on Android Nexus4, with flutter dev running on Linux host.
Let me try to reproduce it again when I get home in like an hour to give you more details.
@aam, here are the exact repro steps.
Start Simulator iPhone 6 - 11.2
Create a new app from Visual Studio Code - hello_world
Start Debugging (you may have to set your debug configuration)
Finally, paste the code example I provided above and Save.
Hot reload should trigger and you will get the exception.
Visual Studio Code: Version 1.22.2
Mac OS: 10.13.4
Flutter 0.3.5 • channel dev • https://github.com/flutter/flutter.git
Framework • revision 7ffcd3d22d (13 days ago) • 2018-04-24 14:03:41 -0700
Engine • revision ec611470b5
Tools • Dart 2.0.0-dev.48.0.flutter-fe606f890b
/cc @crelier
Can't repro this on the iOS simulator on Mac either.
Can you try the flutter master channel?
Can you try running your app from the command line (flutter run -v, then 'r' to hot reload)?
@affinnen wrote:
Create a new app from Visual Studio Code - hello_world
Start Debugging (you may have to set your debug configuration)
Finally, paste the code example I provided above and Save.
Can you please clarify: does it crash only when you replace the complete hello_world app with your sample and do a hot reload, while hot reload works fine if you start from your sample and hot-reload an incremental change (like changing the primarySwatch color)?
Here is the stacktrace with line numbers
E/DartVM (27829): ../../third_party/dart/runtime/vm/object.cc: 16493: error: unreachable code
E/DartVM (27829): Dumping native stack trace for thread 6cce
E/DartVM (27829): [0xa3af498d] Unknown symbol
E/DartVM (27829): [0xa3af498d] Unknown symbol
E/DartVM (27829): [0xa3db4341] Unknown symbol
E/DartVM (27829): [0xa3a850e5] Unknown symbol
E/DartVM (27829): [0xa3a41835] Unknown symbol
E/DartVM (27829): [0xa3a89271] Unknown symbol
E/DartVM (27829): [0xa3a41835] Unknown symbol
E/DartVM (27829): [0xa3a574c3] Unknown symbol
E/DartVM (27829): [0xa3b32b45] Unknown symbol
E/DartVM (27829): -- End of DumpStackTrace
F/libc (27829): Fatal signal 6 (SIGABRT), code -6 in tid 27854 (Thread-579)
I/DEBUG ( 188): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
I/DEBUG ( 188): Build fingerprint: 'google/occam/mako:5.1.1/LMP/2348323:userdebug/dev-keys'
I/DEBUG ( 188): Revision: '11'
I/DEBUG ( 188): ABI: 'arm'
I/DEBUG ( 188): pid: 27829, tid: 27854, name: Thread-579 >>> com.yourcompany.sample <<<
I/DEBUG ( 188): signal 6 (SIGABRT), code -6 (SI_TKILL), fault addr --------
I/DEBUG ( 188): r0 00000000 r1 00006cce r2 00000006 r3 00000000
I/DEBUG ( 188): r4 afa29dd8 r5 00000006 r6 0000000b r7 0000010c
I/DEBUG ( 188): r8 afa27fb8 r9 afa27fb4 sl 00000002 fp 00000000
I/DEBUG ( 188): ip 00006cce sp afa277b8 lr b6dbe989 pc b6de3fe4 cpsr 600f0010
I/DEBUG ( 188):
I/DEBUG ( 188): backtrace:
I/DEBUG ( 188): #00 pc 0003bfe4 /system/lib/libc.so (tgkill+12)
I/DEBUG ( 188): #01 pc 00016985 /system/lib/libc.so (pthread_kill+52)
I/DEBUG ( 188): #02 pc 00017597 /system/lib/libc.so (raise+10)
I/DEBUG ( 188): #03 pc 00013d3d /system/lib/libc.so (__libc_android_abort+36)
I/DEBUG ( 188): #04 pc 000124ec /system/lib/libc.so (abort+4)
I/DEBUG ( 188): #05 pc 003e5567 /data/app/com.yourcompany.sample-2/lib/arm/libflutter.so
I/DEBUG ( 188): #06 pc 01007343 /data/app/com.yourcompany.sample-2/lib/arm/libflutter.so
I/DEBUG ( 188): #07 pc 00cd80e1 /data/app/com.yourcompany.sample-2/lib/arm/libflutter.so
I/DEBUG ( 188): #08 pc 00c94833 /data/app/com.yourcompany.sample-2/lib/arm/libflutter.so
I/DEBUG ( 188): #09 pc 00cdc26d /data/app/com.yourcompany.sample-2/lib/arm/libflutter.so
I/DEBUG ( 188): #10 pc 00c94833 /data/app/com.yourcompany.sample-2/lib/arm/libflutter.so
I/DEBUG ( 188): #11 pc 00caa4bf /data/app/com.yourcompany.sample-2/lib/arm/libflutter.so
I/DEBUG ( 188): #12 pc 00d85b41 /data/app/com.yourcompany.sample-2/lib/arm/libflutter.so
I/DEBUG ( 188): #13 pc 0000031c <unknown>
I/DEBUG ( 188):
../../third_party/skia/src/ports/SkMemory_malloc.cpp:41
../../third_party/dart/runtime/platform/assert.cc:43
../../third_party/dart/runtime/vm/object.cc:16493
../../third_party/dart/runtime/vm/object.cc:5425
../../third_party/dart/runtime/vm/object.cc:17809
../../third_party/dart/runtime/vm/object.cc:5425
../../third_party/dart/runtime/vm/object.cc:5301
../../third_party/dart/runtime/vm/runtime_entry.cc:376
Since we are having trouble reproducing this, I could add some defensive code to prevent this from happening. Hot reload is brittle anyway, so more robust coding cannot hurt. I cannot guarantee it would not crash later, though.
From the stack trace above, I see that a type argument vector is being canonicalized, and a type argument in the vector is null. This can happen for recursive types, when they are not yet finalized. But this should not happen at runtime. This is probably an issue with hot reload, but until we can reproduce it, I do not know why this is happening.
The fix would be to guard these lines in object.cc against a null type argument:
RawTypeArguments* TypeArguments::Canonicalize(TrailPtr trail) const {
  [...]
  if (result.IsNull()) {
    // Canonicalize each type argument.
    AbstractType& type_arg = AbstractType::Handle(zone);
    for (intptr_t i = 0; i < num_types; i++) {
      type_arg = TypeAt(i);
      if (!type_arg.IsNull()) {  // <<<<<< ADD THIS LINE
        type_arg = type_arg.Canonicalize(trail);
        if (IsCanonical()) {
          // Canonicalizing this type_arg canonicalized this type.
          ASSERT(IsRecursive());
          return this->raw();
        }
        SetTypeAt(i, type_arg);
      }  // <<<<<< ADD THIS LINE
    }
  [...]
What do you think?
Starting from the app below, then changing three lines marked with //HR will crash VM with the same error.
import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  final String title;

  MyApp({Key key, this.title}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: "Flutter Redux Demo",
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class AppState {
}

class Provider<T> {
}

abstract class PageState<W extends StatefulWidget, S> extends State<W> {
  // Workaround to capture generics
  static Type _typeOf<T>() => T;

  void init() {
    final type1 = _typeOf<Provider<AppState>>(); // works
    final type2 = _typeOf<Provider<S>>(); // DOES NOT WORK - object.cc: 16408: error: unreachable code
  }
}

class MyHomePage extends StatefulWidget {
  final String title;

  MyHomePage({Key key, this.title}) : super(key: key);

  @override
  _MyHomePageState createState() => new _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> { //HR: comment this line
//HR class _MyHomePageState extends PageState<MyHomePage, AppState> {
  @override
  Widget build(BuildContext context) {
    //HR super.init(); // calls into method that causes failure
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
        ),
      ),
    );
  }
}
Changing the super type of class _MyHomePageState should invalidate all code from this class, including the code for _MyHomePageState.build and its call to super.init().
The current theory is that the hot-reload request should have been rejected because the _MyHomePageState class got a different number of type parameters (used to be 1, now it is 2). Since it was not rejected, that leads to the crash further down the pipeline.
cc @rmacnak-google
Fixed via 2ad715e
|
gharchive/issue
| 2018-04-22T13:39:15 |
2025-04-01T04:33:56.280063
|
{
"authors": [
"a-siva",
"aam",
"affinnen",
"crelier",
"dgrove"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/32942",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
362339458
|
CFE issues incorrect error with redirecting generic factory constructor
The code below passes the analyzer with no errors (I believe correctly). The CFE (somewhere between #4134b95 and #2e3f17f) has started issuing an error (I believe incorrectly).
class Foo<T> {}

class Bar<T extends Foo<T>> {
  Bar._();
  factory Bar() = _Bar<T>._;
}

class _Bar<T extends Foo<T>> extends Bar<T> {
  _Bar._() : super._();
}

void main() {
}
file:///Users/leafp/tmp/ddctest.dart:10:18: Error: The type variable 'T' has bound '#lib1::Foo<#lib1::Bar::•::T>' but the context expects a type variable with bound '#lib1::Foo<#lib1::_Bar::T>'.
Try redirecting to a different constructor.
factory Bar()= _Bar<T>._;
This is breaking the bleeding edge google roll. cc @keertip @kmillikin @stefantsov
It was introduced here: https://github.com/dart-lang/sdk/commit/de984e58cbe29a13c078e20ee992f006ad33c891
/cc @dhil
By substituting on the bounds, Fasta now accepts your program. I will give my proposed fix a spin in the CQ. Moreover, I'll add your example to the test suite.
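To see why the program is well-typed, one can close the F-bound with a concrete type; Baz below is hypothetical, added only to make the pattern instantiable:

```dart
class Foo<T> {}

class Bar<T extends Foo<T>> {
  Bar._();
  factory Bar() = _Bar<T>._;
}

class _Bar<T extends Foo<T>> extends Bar<T> {
  _Bar._() : super._();
}

// Hypothetical concrete type closing the F-bound: Baz extends Foo<Baz>.
class Baz extends Foo<Baz> {}

void main() {
  // Under the substitution T -> Baz, both Bar's and _Bar's bounds
  // become Foo<Baz>, so the redirection type-checks.
  final b = Bar<Baz>();
  print(b.runtimeType);
}
```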
|
gharchive/issue
| 2018-09-20T19:48:45 |
2025-04-01T04:33:56.283538
|
{
"authors": [
"dhil",
"kmillikin",
"leafpetersen"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/34528",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
426466537
|
Dart analyzer error
Analyzer Feedback from IntelliJ
Version information
IDEA AI-182.5107.16.33.5314842
2.2.1-edge.571ea80e1101e706980ea8aefa7fc18a0c8ba2ec
AI-182.5107.16.33.5314842, JRE 1.8.0_152-release-1248-b01x64 JetBrains s.r.o, OS Mac OS X(x86_64) v10.14.3 unknown, screens 1440x900 Retina
Exception
Dart analysis server, SDK version 2.2.1-edge.571ea80e1101e706980ea8aefa7fc18a0c8ba2ec, server version 1.24.0, error: Analysis failed: /Users/----/dev/flutter/phrases/test_driver/app_test.dart context: exception_20190328_132949_920
Bad state: Too many elements
#0 List.single (dart:core/runtime/lib/growable_array.dart:227:5)
#1 ExprBuilder.build (package:analyzer/src/summary/expr_builder.dart:293:18)
#2 ExprTypeComputer.compute (package:analyzer/src/summary/link.dart:2532:29)
#3 TypeInferenceNode.evaluate (package:analyzer/src/summary/link.dart:5292:36)
#4 TypeInferenceDependencyWalker.evaluate (package:analyzer/src/summary/link.dart:5142:7)
#5 DependencyWalker.walk.strongConnect (package:analyzer/src/summary/link.dart:2223:13)
#6 DependencyWalker.walk (package:analyzer/src/summary/link.dart:2242:18)
#7 VariableElementForLink.inferredType (package:analyzer/src/summary/link.dart:5568:47)
#8 TopLevelVariableElementForLink.link (package:analyzer/src/summary/link.dart:5127:48)
#9 CompilationUnitElementInBuildUnit.link (package:analyzer/src/summary/link.dart:1646:16)
#10 LibraryElementInBuildUnit.link (package:analyzer/src/summary/link.dart:3816:12)
#11 LibraryCycleNode.link (package:analyzer/src/summary/link.dart:3522:15)
#12 LibraryCycleDependencyWalker.evaluate (package:analyzer/src/summary/link.dart:3462:7)
#13 DependencyWalker.walk.strongConnect (package:analyzer/src/summary/link.dart:2223:13)
#14 DependencyWalker.walk.strongConnect (package:analyzer/src/summary/link.dart:2189:24)
#15 DependencyWalker.walk.strongConnect (package:analyzer/src/summary/link.dart:2189:24)
#16 DependencyWalker.walk.strongConnect (package:analyzer/src/summary/link.dart:2189:24)
#17 DependencyWalker.walk.strongConnect (package:analyzer/src/summary/link.dart:2189:24)
#18 DependencyWalker.walk (package:analyzer/src/summary/link.dart:2242:18)
#19 LibraryCycleForLink.ensureLinked (package:analyzer/src/summary/link.dart:3495:42)
#20 Linker.link (package:analyzer/src/summary/link.dart:4018:35)
#21 _relink (package:analyzer/src/summary/link.dart:305:57)
#22 link (package:analyzer/src/summary/link.dart:123:3)
#23 LibraryContext.load.<anonymous closure> (package:analyzer/src/dart/analysis/library_context.dart:170:25)
#24 PerformanceLog.run (package:analyzer/src/dart/analysis/performance_logger.dart:34:15)
#25 LibraryContext.load (package:analyzer/src/dart/analysis/library_context.dart:169:12)
#26 new LibraryContext (package:analyzer/src/dart/analysis/library_context.dart:67:5)
#27 AnalysisDriver._createLibraryContext (package:analyzer/src/dart/analysis/driver.dart:1612:29)
#28 AnalysisDriver._computeAnalysisResult.<anonymous closure> (package:analyzer/src/dart/analysis/driver.dart:1426:30)
#29 PerformanceLog.run (package:analyzer/src/dart/analysis/performance_logger.dart:34:15)
#30 AnalysisDriver._computeAnalysisResult (package:analyzer/src/dart/analysis/driver.dart:1416:20)
#31 AnalysisDriver.performWork (package:analyzer/src/dart/analysis/driver.dart:1216:17)
<asynchronous suspension>
#32 AnalysisDriverScheduler._run (package:analyzer/src/dart/analysis/driver.dart:2145:24)
<asynchronous suspension>
#33 AnalysisDriverScheduler.start (package:analyzer/src/dart/analysis/driver.dart:2075:5)
#34 new AnalysisServer (package:analysis_server/src/analysis_server.dart:213:29)
#35 SocketServer.createAnalysisServer (package:analysis_server/src/socket_server.dart:86:26)
#36 StdioAnalysisServer.serveStdio (package:analysis_server/src/server/stdio_server.dart:37:18)
#37 Driver.startAnalysisServer.<anonymous closure> (package:analysis_server/src/server/driver.dart:511:21)
#38 _rootRun (dart:async/zone.dart:1124:13)
#39 _CustomZone.run (dart:async/zone.dart:1021:19)
#40 _runZoned (dart:async/zone.dart:1516:10)
#41 runZoned (dart:async/zone.dart:1463:12)
#42 Driver._captureExceptions (package:analysis_server/src/server/driver.dart:594:12)
#43 Driver.startAnalysisServer (package:analysis_server/src/server/driver.dart:509:7)
#44 Driver.start (package:analysis_server/src/server/driver.dart:412:7)
#45 main (file:///b/s/w/ir/k/src/third_party/dart/pkg/analysis_server/bin/server.dart:12:11)
#46 _AsyncAwaitCompleter.start (dart:async/runtime/lib/async_patch.dart:49:6)
#47 main (file:///b/s/w/ir/k/src/third_party/dart/pkg/analysis_server/bin/server.dart:10:10)
#48 _startIsolate.<anonymous closure> (dart:isolate/runtime/lib/isolate_patch.dart:298:32)
#49 _RawReceivePortImpl._handleMessage (dart:isolate/runtime/lib/isolate_patch.dart:171:12)
For additional log information, please append the contents of
file:///private/var/folders/kk/l92q93c14zn1wlmfwcbt12w40000gn/T/report2.txt.
Duplicate of https://github.com/dart-lang/sdk/issues/36371
|
gharchive/issue
| 2019-03-28T12:31:35 |
2025-04-01T04:33:56.287153
|
{
"authors": [
"i-schuetz"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/36368",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
587291810
|
Migration should consistently comment out or delete @ on @required
It seems like the migration tool is commenting out the @ in @required regardless of whether fix builder is set to delete or not. Investigate.
Is the general idea behind "removeViaComments" that the tool should not be overly destructive? Or is it about removing code that may still be live in weak null-checking mode? If it's the former, we may want to keep the existing behavior, i.e. if we change @required to required and then remove the "package:meta/meta.dart" import, that seems on the same order of destruction as removing dead code branches. WDYT?
Oh, I ask also because there is a comment in edit_plan.dart:
// TODO(paulberry): don't remove comments
and comments cannot affect strong/weak checking; their removal only affects how destructive the tool is...
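For context, the edit under discussion turns the meta annotation into the null-safety language keyword, along these lines:

```dart
// Pre-null-safety: the annotation comes from package:meta.
//   void f({@required int x}) {}
//
// Post-migration: `required` is a language keyword, so the annotation's @
// is dropped (or commented out), and the package:meta import may become
// unused and removable.
void f({required int x}) {}
```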
|
gharchive/issue
| 2020-03-24T21:29:56 |
2025-04-01T04:33:56.290641
|
{
"authors": [
"MichaelRFairhurst",
"srawlins"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/41181",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
640110669
|
modular test suite does not support package_config.json
While trying to opt in package:js to nnbd, and add packages to the allow list, I got a failure on the tests/modular/js_interop test. It looks like the modular test infrastructure depends heavily on the old .packages format and does not support package_config.json at all.
My plan for now is to accept this failure, as it cannot be resolved until support for that exists, and we don't want to block on that.
I don't know what area to assign this to, so assigning to @sigmundch who I believe is the most familiar with the internals of this. It looks like it needs a pretty big overhaul though.
cc @nshahan @joshualitt @rakudrama
Thanks Jake - bypassing the failure is perfect for now.
@joshualitt did a short term fix for dart2js's modular tests (https://dart-review.googlesource.com/c/sdk/+/152361), we will properly integrate this into the modular framework, but meanwhile it may be worth looking at a similar workaround in ddc's modular tests.
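For reference, the two formats differ substantially, which is why supporting one does not cover the other (the js entry and paths below are illustrative). The legacy .packages file maps each package name to a lib/ URI on a single line:

```
# .packages (legacy)
js:../pkg/js/lib/
```

whereas .dart_tool/package_config.json is structured JSON with a separate root URI, package URI, and language version per package:

```json
{
  "configVersion": 2,
  "packages": [
    {
      "name": "js",
      "rootUri": "../../pkg/js",
      "packageUri": "lib/",
      "languageVersion": "2.12"
    }
  ]
}
```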
|
gharchive/issue
| 2020-06-17T03:25:40 |
2025-04-01T04:33:56.293460
|
{
"authors": [
"jakemac53",
"leafpetersen",
"sigmundch"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/42367",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
788319347
|
Invalid Dart code doesn't analyze/generate errors correctly
The following code does not produce any diagnostics in my IDE:
const dirHandle = await a.b();
Instead, it produces errors from the LSP server that suggest the file has not been analyzed. If I reproduce this in a test like this:
Future<void> test_danny2() async {
  createProject();
  addTestFile('const dirHandle = await a.b();');
  await waitForTasksFinished();
  await pumpEventQueue(times: 5000);
  expect(filesErrors[testFile], isNotEmpty);
}
It triggers the following exception during analysis:
This seems to occur because AstBinaryWriter doesn't have an implementation of visitAwaitExpression. I don't know if that's the problem, or if it's not supposed to have gotten here.
@bwilkerson
@scheglov
https://dart-review.googlesource.com/c/sdk/+/180460
|
gharchive/issue
| 2021-01-18T14:35:17 |
2025-04-01T04:33:56.296662
|
{
"authors": [
"DanTup",
"bwilkerson",
"scheglov"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/44699",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1274957967
|
../../runtime/vm/message_snapshot.cc: 576: error: expected: !lib.IsNull()
I'm seeing the following crash in the analyzer server in the context of working on a plugin:
[13:59:42] [Analyzer] [Error] ../../runtime/vm/message_snapshot.cc: 576: error: expected: !lib.IsNull()
version=2.18.0-190.0.dev (dev) (Tue Jun 14 21:17:02 2022 -0700) on "macos_arm64"
pid=16256, thread=3599, isolate_group=main(0x1101bae00), isolate=main(0x110221800)
isolate_instructions=100a02be0, vm_instructions=100a02be0
[13:59:42] [Analyzer] [Error] pc 0x0000000100c01bdc fp 0x0000000177386560 dart::Profiler::DumpStackTrace(void*)+0x68
[13:59:42] [Analyzer] [Error] pc 0x0000000100a02d64 fp 0x0000000177386580 dart::Assert::Fail(char const*, ...) const+0x28
[13:59:42] [Analyzer] [Error] pc 0x0000000100b73864 fp 0x0000000177386600 dart::ReadApiMessage(dart::Zone*, dart::Message*)+0x72f4
[13:59:42] [Analyzer] [Error] pc 0x0000000100b6b5c4 fp 0x0000000177386680 dart::MessageDeserializer::Deserialize()+0x244
[13:59:42] [Analyzer] [Error] pc 0x0000000100b6c4fc fp 0x00000001773867f0 dart::ReadMessage(dart::Thread*, dart::Message*)+0x148
[13:59:42] [Analyzer] [Error] pc 0x0000000100b3f18c fp 0x0000000177386da0 dart::IsolateMessageHandler::HandleMessage(std::__2::unique_ptr<dart::Message, std::__2::default_delete<dart::Message> >)+0xcc
[13:59:42] [Analyzer] [Error] pc 0x0000000100b67420 fp 0x0000000177386e10 dart::MessageHandler::HandleMessages(dart::MonitorLocker*, bool, bool)+0x154
[13:59:42] [Analyzer] [Error] pc 0x0000000100b67b5c fp 0x0000000177386e70 dart::MessageHandler::TaskCallback()+0x204
[13:59:42] [Analyzer] [Error] pc 0x0000000100c86068 fp 0x0000000177386f30 dart::ThreadPool::WorkerLoop(dart::ThreadPool::Worker*)+0x14c
[13:59:42] [Analyzer] [Error] pc 0x0000000100c86448 fp 0x0000000177386f60 dart::ThreadPool::Worker::Main(unsigned long)+0x78
[13:59:42] [Analyzer] [Error] pc 0x0000000100bfe234 fp 0x0000000177386fc0 dart::OSThread::GetMaxStackSize()+0xac
pc 0x000000019ccc026c fp 0x0000000177386fe0 _pthread_start+0x94
In an attempt to make my analyzer plugin debuggable during development, I'm replacing the actual plugin, which is spawned in its own isolate by the analyzer server, with a proxy that forwards messages to the actual plugin over a WebSocket.
import 'dart:convert';
import 'dart:isolate';

import 'package:web_socket_channel/io.dart';

class ProxyPlugin {
  late final SendPort _serverSendPort;
  late final ReceivePort _pluginReceivePort;
  late final IOWebSocketChannel _remotePluginChannel;

  Future<void> start(SendPort serverSendPort) async {
    _serverSendPort = serverSendPort;
    _pluginReceivePort = ReceivePort();
    _serverSendPort.send(_pluginReceivePort.sendPort);
    _remotePluginChannel = IOWebSocketChannel.connect('ws://localhost:9999');
    _pluginReceivePort.listen(cancelOnError: false, (message) {
      final request = json.encode(message);
      _remotePluginChannel.sink.add(request);
    });
    _remotePluginChannel.stream.listen(cancelOnError: false, (message) {
      final response = json.decode(message as String) as Map<String, Object?>;
      if (response.containsKey('event')) {
        // Workaround for a bug in the analyzer server, which expects `params`
        // to have type `Map<String, Object>` instead of `Map<String, Object?>`.
        response['params'] = {
          ...(response['params']! as Map).cast<String, Object>()
        };
      }
      _serverSendPort.send(response);
    });
  }
}
The messages that are passed to _serverSendPort.send and cause the crash consist of just plain Dart objects, as returned by json.decode.
Flutter 3.1.0-0.0.pre.1292 • channel master • https://github.com/flutter/flutter.git
Framework • revision b29c64b3f9 (2 hours ago) • 2022-06-17 06:28:07 -0400
Engine • revision 6cb83ab0f1
Tools • Dart 2.18.0 (build 2.18.0-190.0.dev) • DevTools 2.14.0
Full Logs from VS Code extension
instrumentation.log.txt
@bwilkerson
Sorry, but I have no idea what that output even means. Perhaps someone on the VM team with knowledge of isolates could provide some insight.
The crash is caused by the VM not being able to find the library for the class of an object that has been sent from one isolate to another, probably from the server isolate to the plugin isolate.
Sorry, but that doesn't help much (probably because of a lack of knowledge on my part). The analysis server communicates with its plugins by sending and receiving string encodings of JSON-compatible objects (maps and lists of strings, bools, and ints). Those then get translated into more semantically meaningful objects by the server and the plugins (assuming the plugins are built on the analyzer_plugin package). The conversion code explicitly imports and references the classes that it builds, so as far as I know the VM is guaranteed to know which library the classes came from.
Is there a mechanism that you're using to send something other than strings across the IOWebSocketChannel?
No. All that the ProxyPlugin from above does is take the json compatible objects, encode them into json strings, send them over the IOWebSocketChannel and then decode json strings coming out of the channel and send those into the _serverSendPort.
Have you tried not decoding the strings from the _remotePluginChannel and just sending the strings through the _serverSendPort? If the _serverSentPort is the port used to communicate with the analysis server, then it's expecting strings and will do its own decoding.
The _serverSendPort is the port that is passed into the main function of the plugin. I did try just passing the JSON strings from the _remotePluginChannel channel through, but then they seem to be ignored.
If I understand the analysis server side correctly, it expects either a SendPort or Map and ignores everything else:
https://github.com/dart-lang/sdk/blob/d0fc029f68a67dfe14ab4a1e7fc760ab6661c5c4/pkg/analyzer_plugin/lib/src/channel/isolate_channel.dart#L222-L237
Just FYI, the proxied plugin is built using the analyzer_plugin package, so it should be sending the correct messages.
That doesn't match my memory, but obviously my memory is wrong (or the code's been changed since I last looked at it).
Is there any way (breakpoint, print, etc.) for you to figure out what object structure is being passed? That seems like your best bet for debugging the situation.
The instrumentation log contains the messages passed between the server and the plugin.
I have created a repository with a minimal setup to reproduce the issue.
Curiously, there is a way to make the crash go away. The isolate that is spawned by the server for the plugin does not import the analyzer_plugin package. It just de/encodes to/from JSON and communicates over the network with the actual plugin running in a separate process.
When I import package:analyzer_plugin/plugin/plugin.dart in the isolate running the proxy, the crash does not occur and everything works as expected. I'm still not using any of the imported elements.
A while ago I was working on a proxy plugin just like this one. You can work around the library error by putting assert(ServerPluginStarter == ServerPluginStarter); somewhere in the file; it seems the analysis server crashes if the plugin libraries aren't loaded in the plugin isolates.
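For reference, the workaround would sit in the proxy's entry point roughly like this (a sketch; the exact library that exports ServerPluginStarter may differ between analyzer_plugin versions):

```dart
import 'dart:isolate';

// Importing any analyzer_plugin library forces its classes to be loaded in
// this isolate; the assert only exists to keep the import from looking unused.
import 'package:analyzer_plugin/starter.dart';

Future<void> main(List<String> args, SendPort sendPort) async {
  assert(ServerPluginStarter == ServerPluginStarter);
  // ... start the ProxyPlugin as before ...
}
```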
I'm closing this issue in favor of https://github.com/dart-lang/sdk/issues/50594. The root cause is that a HashMap is being sent to analyzer plugins, while only literal Maps are supported by SendPorts between isolates spawned with Isolate.spawnUri.
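Until that fix lands, a possible client-side workaround is to normalize every Map implementation into a literal Map before it crosses the SendPort. This is only a sketch; deepToPlainMap is a hypothetical helper, not part of the analyzer or plugin API:

```dart
import 'dart:collection';

/// Recursively copies [value], replacing every Map implementation
/// (e.g. HashMap) with a plain map literal and rebuilding Lists, so the
/// result contains only types known to survive a SendPort between
/// isolates spawned with Isolate.spawnUri.
Object? deepToPlainMap(Object? value) {
  if (value is Map) {
    return <Object?, Object?>{
      for (final entry in value.entries)
        entry.key: deepToPlainMap(entry.value),
    };
  }
  if (value is List) {
    return value.map(deepToPlainMap).toList();
  }
  return value;
}

void main() {
  final input = HashMap<String, Object?>.from({
    'event': 'plugin.error',
    'params': HashMap<String, Object?>.from({'isFatal': false}),
  });
  final plain = deepToPlainMap(input)! as Map;
  print(plain['event']); // plugin.error
  print(plain['params'] is HashMap); // false
}
```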
|
gharchive/issue
| 2022-06-17T12:13:48 |
2025-04-01T04:33:56.307818
|
{
"authors": [
"PixelToast",
"blaugold",
"bwilkerson",
"pq"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/49281",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2184846700
|
False crashes reported by the analysis server
The method PluginManager.recordPluginFailure is currently reporting a server "crash" when the URI for a plugin can't be resolved to a file path or when the file path points to something other than the root of a plugin package.
While both of these problems impact the user, neither one causes the server to crash, so they shouldn't be reported as such.
I suspect that the best course of action at this point is to just stop reporting these as crashes, silently ignoring them.
Fixed by https://dart-review.git.corp.google.com/c/sdk/+/357212.
|
gharchive/issue
| 2024-03-13T20:38:53 |
2025-04-01T04:33:56.310041
|
{
"authors": [
"bwilkerson"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/55188",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
757666111
|
Migrating to null safety
I was wondering if there are any plans to migrate this package to null safety?
I would be happy to have an attempt at null safety migration if there are not.
Please do!
You might want to wait until we publish pkg:shelf, though!
I second this request. I think all that needs to change (besides the tests, which I didn't check) is the following.
pubspec.yaml:
version: 1.0.0
...
environment:
sdk: '>=2.12.0-0 <3.0.0'
dependencies:
convert: '>=3.0.0 <4.0.0'
http_parser: '>=4.0.0 <5.0.0'
mime: '>=1.0.0 <2.0.0'
path: '>=1.8.0 <2.0.0'
shelf: '>=1.0.0 <2.0.0'
and static_handler.dart:
// Copyright (c) 2015, the Dart project authors. Please see the AUTHORS file
// for details. All rights reserved. Use of this source code is governed by a
// BSD-style license that can be found in the LICENSE file.
import 'dart:async';
import 'dart:io';
import 'dart:math' as math;
import 'package:convert/convert.dart';
import 'package:http_parser/http_parser.dart';
import 'package:mime/mime.dart';
import 'package:path/path.dart' as p;
import 'package:shelf/shelf.dart';
import 'directory_listing.dart';
import 'util.dart';
/// The default resolver for MIME types based on file extensions.
final _defaultMimeTypeResolver = MimeTypeResolver();
// TODO option to exclude hidden files?
/// Creates a Shelf [Handler] that serves files from the provided
/// [fileSystemPath].
///
/// Accessing a path containing symbolic links will succeed only if the resolved
/// path is within [fileSystemPath]. To allow access to paths outside of
/// [fileSystemPath], set [serveFilesOutsidePath] to true.
///
/// When an existing directory is requested and a [defaultDocument] is specified
/// the directory is checked for a file with that name. If it exists, it is
/// served.
///
/// If no [defaultDocument] is found and [listDirectories] is true, then the
/// handler produces a listing of the directory.
///
/// If [useHeaderBytesForContentType] is true, the contents of the
/// file will be used along with the file path to determine the content type.
///
/// Specify a custom [contentTypeResolver] to customize automatic content type
/// detection.
Handler createStaticHandler(String fileSystemPath,
{bool serveFilesOutsidePath = false,
String? defaultDocument,
bool listDirectories = false,
bool useHeaderBytesForContentType = false,
MimeTypeResolver? contentTypeResolver}) {
final rootDir = Directory(fileSystemPath);
if (!rootDir.existsSync()) {
throw ArgumentError('A directory corresponding to fileSystemPath '
'"$fileSystemPath" could not be found');
}
fileSystemPath = rootDir.resolveSymbolicLinksSync();
if (defaultDocument != null) {
if (defaultDocument != p.basename(defaultDocument)) {
throw ArgumentError('defaultDocument must be a file name.');
}
}
contentTypeResolver ??= _defaultMimeTypeResolver;
return (Request request) {
final segs = [fileSystemPath, ...request.url.pathSegments];
final fsPath = p.joinAll(segs);
final entityType = FileSystemEntity.typeSync(fsPath);
File? file;
if (entityType == FileSystemEntityType.file) {
file = File(fsPath);
} else if (entityType == FileSystemEntityType.directory) {
file = _tryDefaultFile(fsPath, defaultDocument);
if (file == null && listDirectories) {
final uri = request.requestedUri;
if (!uri.path.endsWith('/')) return _redirectToAddTrailingSlash(uri);
return listDirectory(fileSystemPath, fsPath);
}
}
if (file == null) {
return Response.notFound('Not Found');
}
if (!serveFilesOutsidePath) {
final resolvedPath = file.resolveSymbolicLinksSync();
// Do not serve a file outside of the original fileSystemPath
if (!p.isWithin(fileSystemPath, resolvedPath)) {
return Response.notFound('Not Found');
}
}
// when serving the default document for a directory, if the requested
// path doesn't end with '/', redirect to the path with a trailing '/'
final uri = request.requestedUri;
if (entityType == FileSystemEntityType.directory &&
!uri.path.endsWith('/')) {
return _redirectToAddTrailingSlash(uri);
}
return _handleFile(request, file, () async {
if (useHeaderBytesForContentType) {
final length = math.min(
contentTypeResolver!.magicNumbersMaxLength, file!.lengthSync());
final byteSink = ByteAccumulatorSink();
await file!.openRead(0, length).listen(byteSink.add).asFuture();
return contentTypeResolver!.lookup(file!.path,
headerBytes: byteSink.bytes);
} else {
return contentTypeResolver!.lookup(file!.path);
}
});
};
}
Response _redirectToAddTrailingSlash(Uri uri) {
final location = Uri(
scheme: uri.scheme,
userInfo: uri.userInfo,
host: uri.host,
port: uri.port,
path: '${uri.path}/',
query: uri.query);
return Response.movedPermanently(location.toString());
}
File? _tryDefaultFile(String dirPath, String? defaultFile) {
if (defaultFile == null) return null;
final filePath = p.join(dirPath, defaultFile);
final file = File(filePath);
if (file.existsSync()) {
return file;
}
return null;
}
/// Creates a shelf [Handler] that serves the file at [path].
///
/// This returns a 404 response for any requests whose [Request.url] doesn't
/// match [url]. The [url] defaults to the basename of [path].
///
/// This uses the given [contentType] for the Content-Type header. It defaults
/// to looking up a content type based on [path]'s file extension, and failing
/// that, doesn't send a [contentType] header at all.
Handler createFileHandler(String path, {String? url, String? contentType}) {
final file = File(path);
if (!file.existsSync()) {
throw ArgumentError.value(path, 'path', 'does not exist.');
} else if (url != null && !p.url.isRelative(url)) {
throw ArgumentError.value(url, 'url', 'must be relative.');
}
contentType ??= _defaultMimeTypeResolver.lookup(path);
url ??= p.toUri(p.basename(path)).toString();
return (request) {
if (request.url.path != url) return Response.notFound('Not Found');
return _handleFile(request, file, () => contentType);
};
}
/// Serves the contents of [file] in response to [request].
///
/// This handles caching, and sends a 304 Not Modified response if the request
/// indicates that it has the latest version of a file. Otherwise, it calls
/// [getContentType] and uses it to populate the Content-Type header.
Future _handleFile(Request request, File file,
FutureOr<String?> Function() getContentType) async {
final stat = file.statSync();
final ifModifiedSince = request.ifModifiedSince;
if (ifModifiedSince != null) {
final fileChangeAtSecResolution = toSecondResolution(stat.modified);
if (!fileChangeAtSecResolution.isAfter(ifModifiedSince)) {
return Response.notModified();
}
}
final headers = {
HttpHeaders.contentLengthHeader: stat.size.toString(),
HttpHeaders.lastModifiedHeader: formatHttpDate(stat.modified)
};
final contentType = await getContentType();
if (contentType != null) headers[HttpHeaders.contentTypeHeader] = contentType;
return Response.ok(file.openRead(), headers: headers);
}
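For completeness, a minimal sketch of how the migrated package would be consumed (assumes package:shelf and the null-safe shelf_static from pub; the directory and port are arbitrary):

```dart
import 'package:shelf/shelf_io.dart' as io;
import 'package:shelf_static/shelf_static.dart';

Future<void> main() async {
  // Serve ./public, falling back to index.html for directory requests.
  final handler =
      createStaticHandler('public', defaultDocument: 'index.html');
  final server = await io.serve(handler, 'localhost', 8080);
  print('Serving at http://${server.address.host}:${server.port}');
}
```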
|
gharchive/issue
| 2020-12-05T12:04:53 |
2025-04-01T04:33:56.331757
|
{
"authors": [
"kevmoo",
"sma",
"tnc1997"
],
"repo": "dart-lang/shelf_static",
"url": "https://github.com/dart-lang/shelf_static/issues/41",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
126281459
|
allow periods in package names
cc @keertip
LGTM. Maybe add a test for a..b?
|
gharchive/pull-request
| 2016-01-12T21:33:29 |
2025-04-01T04:33:56.335306
|
{
"authors": [
"jakemac53",
"sigmundch"
],
"repo": "dart-lang/web-components",
"url": "https://github.com/dart-lang/web-components/pull/42",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|