Dataset columns (as rendered by the dataset viewer):

- id: string, lengths 4 to 10
- text: string, lengths 4 to 2.14M
- source: string, 2 classes
- created: timestamp[s], ranging 2001-05-16 21:05:09 to 2025-01-01 03:38:30
- added: string date, ranging 2025-04-01 04:05:38 to 2025-04-01 07:14:06
- metadata: dict

1894594319
test rerun

Please add a meaningful description for your change here

Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

[ ] Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch): see CI.md for more information about GitHub Actions CI, or the workflows README to see a list of phrases to trigger workflows.

Run Go Samza ValidatesRunner
gharchive/pull-request
2023-09-13T13:56:33
2025-04-01T06:40:52.950238
{ "authors": [ "volatilemolotov" ], "repo": "volatilemolotov/beam", "url": "https://github.com/volatilemolotov/beam/pull/60", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1990071237
[Question] Is it possible to calculate IV if the option price is given?

Is it possible to calculate implied volatility if the option price is given?

I wrote a routine to do this. Basically, you run the model and change the IV until you reach your desired price.

1) You need an IV value to begin from.
2) There are limits to the equation, so as your IV values get very large or very small, the price of the option becomes asymptotic and creates a pseudo-infinite loop.

But I find it to be a valuable tool, because the only thing you can lean on is the bid/ask price. Everything else is theoretical and therefore relative to your model.
gharchive/issue
2023-11-13T07:56:03
2025-04-01T06:40:52.955243
{ "authors": [ "mainTrim13", "pinkfrog9" ], "repo": "vollib/py_vollib", "url": "https://github.com/vollib/py_vollib/issues/24", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1056431912
WIP add data_source_volterra_service_policy

ref: https://github.com/volterraedge/terraform-provider-volterra/issues/95

I have this working locally, I just need the test to pass:

$ terraform apply
An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols:
  + create
  <= read (data resources)

Terraform will perform the following actions:

  # data.volterra_service_policy.foobar will be read during apply
  # (config refers to values not yet known)
 <= data "volterra_service_policy" "foobar" {
      + id        = (known after apply)
      + name      = "foobar"
      + namespace = "shared"
    }

  # volterra_service_policy.foobar will be created
  + resource "volterra_service_policy" "foobar" {
      + algo               = "FIRST_MATCH"
      + allow_all_requests = true
      + any_server         = true
      + id                 = (known after apply)
      + name               = "foobar"
      + namespace          = "shared"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + test = {
      + id        = (known after apply)
      + name      = "foobar"
      + namespace = "shared"
    }

Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve.

  Enter a value: yes

volterra_service_policy.foobar: Creating...
volterra_service_policy.foobar: Creation complete after 1s [id=35ede2bf-6c9f-4920-bceb-63570a964012]
data.volterra_service_policy.foobar: Reading...
data.volterra_service_policy.foobar: Read complete after 0s [id=35ede2bf-6c9f-4920-bceb-63570a964012]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

test = {
  "id" = "35ede2bf-6c9f-4920-bceb-63570a964012"
  "name" = "foobar"
  "namespace" = "shared"
}

I have this working and the tests are creating the service policy test resource, then successfully using the data resource. But I have no idea why it is saying the plan is not empty and then failing.

https://github.com/volterraedge/terraform-provider-volterra/pull/98/checks#step:6:199

Error: testing.go:654: Step 0 error: After applying this step and refreshing, the plan was not empty:

DIFF:
UPDATE: data.volterra_service_policy.aikuqbdhiz
  id:        "" => "<computed>"
  name:      "" => "aikuqbdhiz"
  namespace: "" => "shared"

STATE:
volterra_service_policy.aikuqbdhiz:
  ID = 293dc667-86e0-459c-9394-cb8442e51e37
  provider = provider.volterra
  algo = FIRST_MATCH
  allow_all_requests = true
  any_server = true
  deny_all_requests = false
  description =
  disable = false
  name = aikuqbdhiz
  namespace = shared

rest RPC: ves.io.schema.service_policy.API.Delete , Status: OK , The 'service_policy' 'aikuqbdhiz' in namespace 'shared' was successfully deleted.

Alex, can you please update this PR?

@sanabby this MR can be closed out as we merged in the main change and this was another option that added a few tests.
gharchive/pull-request
2021-11-17T18:04:05
2025-04-01T06:40:52.968305
{ "authors": [ "cohenaj194", "sanabby" ], "repo": "volterraedge/terraform-provider-volterra", "url": "https://github.com/volterraedge/terraform-provider-volterra/pull/98", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
563843353
Correction/addition regarding userconfig.txt

According to this list, some boot options do not work when placed in a file referenced by the include option in /boot/config.txt. They are processed at a (too) early stage of the boot process, at which the included file has not yet been parsed. Thanks!
gharchive/pull-request
2020-02-12T08:47:38
2025-04-01T06:40:52.971363
{ "authors": [ "gvolt", "volumio" ], "repo": "volumio/docs", "url": "https://github.com/volumio/docs/pull/74", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2046663886
Access denied due to invalid VCC-API-KEY

I've created an app under a developer account and copied the VCC API key - Primary, but there's still an error:

{ "status": 401, "error": { "message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }

The app isn't published since it was made for testing purposes, but the API does not want to accept the VCC API key. Any ideas?

Hi @grzegorztomasiak. Thanks for raising an issue :raised_hands: To better help you, we require some more information. How are you running the request? Are you trying to run our samples in this repo? Have you verified that the VCC_API_KEY is correctly added to the .env file? If you still have issues, please try to execute a simple curl command following the instructions here: https://developer.volvocars.com/apis/docs/getting-started/

Hi @adamgronberg, I have the exact same issue when trying out the connected-vehicle-fetch-sample. This also happens when I try to execute the request in curl/python. Could this be because my application is not published? Status: Only for testing

I'm having the same issue. Have tried creating multiple 'apps' and regenerating the keys etc.

Same for me. Happens even in the sandbox.

Same here... tried multiple applications and regenerated multiple times.

Hi all, thanks for the added context. It looks like the issues are related to the API, and not related to the sample code in this repository. I've escalated this internally. You can also contact developer.portal@volvocars.com for more direct help. I will leave this issue open until we've fully investigated the reason for the errors.

Same error here. I've never managed to get it to work at all in either command-line curl or a programming environment, e.g. these node packages. I've tried regenerating my VCC API keys but it makes no difference, same 401 error. Looks like something is borked in the API system/auth itself. If helpful, here is my -vvvv curl output.
` curl -vvvv 'https://api.volvocars.com/connected-vehicle/v2/vehicles' | => -H 'accept: "application/json"' | => -H 'authorization: Bearer my-bearer-token' | => -H 'vcc-api-key: my-vcc-api-key' Trying 52.19.178.68:443... Connected to api.volvocars.com (52.19.178.68) port 443 ALPN: curl offers h2,http/1.1 (304) (OUT), TLS handshake, Client hello (1): CAfile: /etc/ssl/cert.pem CApath: none (304) (IN), TLS handshake, Server hello (2): (304) (IN), TLS handshake, Unknown (8): (304) (IN), TLS handshake, Certificate (11): (304) (IN), TLS handshake, CERT verify (15): (304) (IN), TLS handshake, Finished (20): (304) (OUT), TLS handshake, Finished (20): SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 ALPN: server accepted h2 Server certificate: subject: C=SE; L=Gothenburg; O=Volvo Car Corporation; CN=api.volvocars.com start date: Feb 15 00:00:00 2023 GMT expire date: Mar 6 23:59:59 2024 GMT subjectAltName: host "api.volvocars.com" matched cert's "api.volvocars.com" issuer: C=US; O=DigiCert Inc; CN=DigiCert TLS RSA SHA256 2020 CA1 SSL certificate verify ok. 
using HTTP/2 [HTTP/2] [1] OPENED stream for https://api.volvocars.com/connected-vehicle/v2/vehicles [HTTP/2] [1] [:method: GET] [HTTP/2] [1] [:scheme: https] [HTTP/2] [1] [:authority: api.volvocars.com] [HTTP/2] [1] [:path: /connected-vehicle/v2/vehicles] [HTTP/2] [1] [user-agent: curl/8.4.0] [HTTP/2] [1] [accept: "application/json"] [HTTP/2] [1] [authorization: Bearer my-bearer-token] [HTTP/2] [1] [vcc-api-key: my-vcc-api-key] GET /connected-vehicle/v2/vehicles HTTP/2 Host: api.volvocars.com User-Agent: curl/8.4.0 accept: "application/json" authorization: Bearer my-bearer-token vcc-api-key: my-vcc-api-key < HTTP/2 401 < content-length: 270 < content-type: application/json < date: Sun, 28 Jan 2024 19:40:15 GMT < server: vcc < access-control-allow-origin: https://developer.volvocars.com < request-context: appId=cid-v1:d08a6ac1-4942-4ce7-a466-f3dd07fd71d1 < { "status": 401, "error": { "message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } Connection #0 to host api.volvocars.com left intact } `

Have also tried accessing this via VolvoMQTT Home Assistant integration but same error message of: `Feb 01 22:43:16 volvo2mqtt [106] - INFO: Starting volvo2mqtt version v1.8.27 Feb 01 22:43:17 volvo2mqtt [106] - WARNING: VCCAPIKEY isn't working! Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application. Feb 01 22:43:17 volvo2mqtt [106] - WARNING: No working VCCAPIKEY found, waiting 10 minutes. Then trying again!` Any idea when this will be addressed?

Did you try the portal? https://developer.volvocars.com/apis/connected-vehicle/v2/specification/#openapi I noticed this week that it is working a little bit better. It still comes up some times with an 401, but pressing the execute button again and then it works. It works 3 out of 5 times for the first try.

Thanks @Michel-NL , I did try via the portal too but I have a very low success rate on good responses. Actually, I don't think I've had a valid response from the portal, only ones from MQTT via Home Assistant. To be frank, I don't have time to keep pressing a button to get a valid response, Volvo really need to get the stability of the API corrected! For info, here's the tail of the HA log file so far - just full of auth errors! Feb 02 12:40:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:40:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:40:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:40:33 volvo2mqtt [106] - INFO: Mqtt update done. Next run in 300 seconds. Feb 02 12:45:33 volvo2mqtt [106] - INFO: Sending mqtt update... Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application."
} } Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:45:33 volvo2mqtt [106] - INFO: Mqtt update done. Next run in 300 seconds. Feb 02 12:50:33 volvo2mqtt [106] - INFO: Sending mqtt update... Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." 
} } Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } } Feb 02 12:50:34 volvo2mqtt [106] - INFO: Mqtt update done. Next run in 300 seconds. Hello everyone, and thank you for your continued error reports. I've forwarded all your comments to the support team. However, since the errors you are encountering are not related to the code in this repository, we've determined that we will have to close this issue. For further assistance and error reporting, please continue to reach out to the Volvo Cars' Developer Portal support at developer.portal@volvocars.com. They are better equipped to assist with your issues.
gharchive/issue
2023-12-18T13:26:27
2025-04-01T06:40:52.998996
{ "authors": [ "DuncanSmith", "Isshin", "Michel-NL", "REELcoder", "adamgronberg", "grzegorztomasiak", "markhaines", "thiemo-seys" ], "repo": "volvo-cars/developer-portal-api-samples", "url": "https://github.com/volvo-cars/developer-portal-api-samples/issues/8", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1082635826
Create index.html and insert the fragment below into it: <!DOCTYPE html> <html> <head> <meta charset="UTF-8" /> <title>Résumé</title> </head> <body> ― Hello World! &#x1F609; </body> </html> Added index.html template
gharchive/issue
2021-12-16T20:57:17
2025-04-01T06:40:53.023445
{ "authors": [ "vovabatsyk" ], "repo": "vovabatsyk/homepage", "url": "https://github.com/vovabatsyk/homepage/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
440538737
Adding parameters to the puppet-amanda/manifests/params.pp file for the RedHat OSfamily. Added $shell, $xinetd_unsupported and $generic_package parameters to the RedHat OSfamily. Pull Request (PR) description: This Pull Request (PR) fixes the following issues: This is to support strict_variables. Thanks for the PR!
gharchive/pull-request
2019-05-06T04:46:41
2025-04-01T06:40:53.032679
{ "authors": [ "bastelfreak", "cliff-svt", "datarame" ], "repo": "voxpupuli/puppet-amanda", "url": "https://github.com/voxpupuli/puppet-amanda/pull/80", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1724388844
Add replacement ldapquery::search function

Sourcing LDAP server configuration options from puppet.conf conflated their original purpose, and a future release of Puppet may even remove these options. It's still desirable to be able to set defaults for the function from a file, but a dedicated YAML file is far more flexible than an INI file. In this commit, a new ldapquery::search function is added with a new implementation. The old version is kept, but marked as DEPRECATED.

Hi, I was looking at starting to use this module - thanks. Any chance of finishing this off? I have written against this branch and it's working well. The idea of creating a new function ldapquery::search and deprecating the current ldapquery::query one makes sense to me. I am using it like:

$_filter = "(&(objectClass=group)(|${_egroups.map | $_eg | { "(CN=${_eg})" }.join()}))"
$_results = ldapquery::query(
  'OU=e-groups,OU=Workgroups,DC=example,DC=ch',
  $_filter,
  ['member'],
  {
    'hosts' => [
      ['ldap.example.ch', 389],
      ['ldap-critical.example.ch', 389],
    ],
    'scope' => 'sub'
  },
)

which is fine I'd say. If the connection parameters came from a file: we certainly have more than one LDAP server, so that location path would need to be configurable. I would just leave loading from a YAML or Hiera as an exercise for the reader.

"Any chance of finishing this off, have written against this branch and its working well." @traylenator I've just got around to picking this up again. Decided the best approach is probably just a new function instead of messing around with multiple dispatches etc. Just doing a bit more testing locally, then I'll take this off draft.

Thanks - had it in mind to look at.
gharchive/pull-request
2023-05-24T16:41:21
2025-04-01T06:40:53.036788
{ "authors": [ "alexjfisher", "traylenator" ], "repo": "voxpupuli/puppet-ldapquery", "url": "https://github.com/voxpupuli/puppet-ldapquery/pull/47", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
204135600
Remove CentOS 5 support

Just tried to run the acceptance test for CentOS 5:

selinux::module fails (could not find /usr/share/selinux/devel/Makefile)
selinux::permissive fails (semanage permissive doesn't exist at all)

CentOS 5 will be out of support on 2017-03, so I don't think it's worth investing time. My proposed solution is to just remove CentOS 5 support from metadata.json. Legacy Fedora releases (Fedora 19-23) should be removed from metadata.json too.

Fedora version strings probably shouldn't be in the metadata.json at all.

"Fedora versions strings probably shouldn't be in the metadata.json" — Why? What are your arguments to remove them? We run tests against specific versions of Fedora, not against unspecified ones. As a user I'd like to see specific distro versions. What would happen to the rspec-puppet-facts/facterdb tests, which read the distro and version from metadata.json? I think we should list what we are running beaker acceptance tests for.

I think it makes sense to support at least: CentOS/RHEL 6 (latest minor release, and the others best-effort only) and 7.3, and additionally RHEL 7.2 (CentOS doesn't usually support point releases after the next one is released AFAIK, but RHEL does, and not everyone will have updated to 7.3); Fedora 24 and 25. I'm not sure if there are boxes available for testing against RHEL, but CentOS probably is close enough.
gharchive/issue
2017-01-30T21:30:08
2025-04-01T06:40:53.047225
{ "authors": [ "juniorsysadmin", "oranenj", "vinzent" ], "repo": "voxpupuli/puppet-selinux", "url": "https://github.com/voxpupuli/puppet-selinux/issues/190", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1091639273
release 3.0.1-rc0

Affected Puppet, Ruby, OS and module versions/distributions: Puppet: Ruby: Distribution: Module version:

How to reproduce (e.g. Puppet code you use): I'm currently unable to update to the most recent version of puppetlabs-registry because of dependency conflicts introduced by this module. These dependencies are updated in the latest RC version of this module, but it hasn't been released to the Forge yet.

What are you seeing / What behaviour did you expect instead / Output log / Any additional information you'd like to impart

Hi, based on a discussion in https://groups.io/g/voxpupuli/message/449 we decided to archive this repository. I'm going to close all issues and PRs. If you're interested in maintaining the module, please respond to our mailing list.
gharchive/issue
2021-12-31T18:01:31
2025-04-01T06:40:53.050445
{ "authors": [ "bastelfreak", "kruegerkyle95" ], "repo": "voxpupuli/puppet-windows_eventlog", "url": "https://github.com/voxpupuli/puppet-windows_eventlog/issues/72", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
278862148
06 auto var target $@ Wrote it. Together with the posts on dependencies, $@, and $^, this makes a trilogy.
gharchive/pull-request
2017-12-04T03:45:17
2025-04-01T06:40:53.051318
{ "authors": [ "ajiyoshi-vg" ], "repo": "voyagegroup/make-advent-calendar-2017", "url": "https://github.com/voyagegroup/make-advent-calendar-2017/pull/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1614784846
feat: support extraction of MJPEG video frames

This commit adds support for extracting MJPEG files out of .vraw-files. The generated files are playable in ffplay, but not VLC. This might be because MJPEG is not a standardized format and therefore not documented where it is easily found. To do something interesting with the extracted MJPEG files, they need to be converted with, for instance, ffmpeg.

Hello Voysys! We needed to extract some MJPEG video data from our cameras (which stream in MJPEG), so I thought I'd add some support in vraw_convert. I've added a start here and am happy to change it according to your requests, or just hand it over to you directly if you want to take it and adapt it. One thing that I looked into, but did not invest enough time in, is converting the MJPEG video data using the ffmpeg crates and then generating a standard-conforming MP4 file. The crates I found seemed to assume a lot of domain knowledge about the ffmpeg libraries, which I do not have.
gharchive/pull-request
2023-03-08T07:30:14
2025-04-01T06:40:53.053722
{ "authors": [ "Jassob" ], "repo": "voysys/vraw_convert", "url": "https://github.com/voysys/vraw_convert/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
600740605
resampling code Excellent work - if i wanted to change the resampling techniques from NN to interpolation, please advise where is this part of the code? Also, have you trained on the PanNuke dataset? If so are any changes are needed? Thank-you. Thank you! I assume that you are referring to upsampling here. You can alternatively upsample with interpolation by using: tf.image.resize_images( images, size, method=ResizeMethod.BILINEAR, align_corners=False ) Here, you can also use method=ResizeMethod.BICUBIC. Yes, we did train on the PanNuke dataset. For this we changes the input size to 256x256, in line with the size of the pre-extracted patches. If you are simply reporting a model on the PanNuke dataset (patches of size 256x256), then you can use same padding in the decoder and similarly output a patch of size 256x256. The valid convolution is only necessary when using a sliding window approach to prevent border artefacts. We will be releasing a repository of a HoVer-Net model trained on PanNuke that used to process ROIs and WSIs very soon. Please let me know if this is of interest. Thanks a lot and okay sure, i would be interested in seeing this!
gharchive/issue
2020-04-16T04:34:08
2025-04-01T06:40:53.061712
{ "authors": [ "CCz23", "simongraham" ], "repo": "vqdang/hover_net", "url": "https://github.com/vqdang/hover_net/issues/41", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1563416137
HTML #HTML ##User Story As a bootcamp student I want the prework notes to be structured on a webpage so that I can easily find and read information ##Acceptance Criteria GIVEN a Prework Study Guide website WHEN I visit the website in my browser THEN I see four boxes titled HTML, CSS, Git, and JavaScript with associated notes listed Added HTML update
gharchive/issue
2023-01-31T00:24:03
2025-04-01T06:40:53.065717
{ "authors": [ "vrich88" ], "repo": "vrich88/Prework-study-guide", "url": "https://github.com/vrich88/Prework-study-guide/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
683895085
Block edge 4.3.33

4.3.33 needs to be disabled: 2 clusters currently failing (14%), 0 gone (0%), and 12 successful (86%), out of 14 who attempted the update over 7d
Currently failing classification:
* Untriaged: 2

4.3.33 needs to be disabled: 12 clusters currently failing (29%), 8 gone (20%), and 19 successful (46%), out of 41 who attempted the update over 7d
Currently failing classification:
* API crashlooping: 1
* Logging issues: 1
* SDN issue: 1
* Single master: 1
* Slow etcd: 2
* UPI router degraded: 1
* Untriaged: 8

4.3.33 needs to be disabled: 15 clusters currently failing (25%), 13 gone (22%), and 30 successful (51%), out of 59 who attempted the update over 7d
Currently failing classification:
* Logging issues: 1
* SDN issue: 1
* Single master: 2
* Slow etcd: 3
* UPI router degraded: 1
* Untriaged: 11
gharchive/pull-request
2020-08-22T01:00:48
2025-04-01T06:40:53.087533
{ "authors": [ "vrutkovs" ], "repo": "vrutkovs/cincinnati-graph-data", "url": "https://github.com/vrutkovs/cincinnati-graph-data/pull/75", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
509432521
Is json supported? Laravel emails are not translated

L5.8. For example, the Reset Password email is not translated. In Laravel 5.8 we have lines like this: ->subject(Lang::getFromJson('Reset Password Notification')) And it always returns Reset Password Notification. But when I remove getFromJson from the library and the original method is used, then translation works properly.

Which version did you pick to run with Laravel 5.8?

"~2.6",
gharchive/issue
2019-10-19T10:47:30
2025-04-01T06:40:53.089349
{ "authors": [ "mgralikowski", "victorsilent" ], "repo": "vsch/laravel-translation-manager", "url": "https://github.com/vsch/laravel-translation-manager/issues/151", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
912095973
No Icon for (.fprg) files.

There is a language that works on a similar principle to XML. The language is known as Flowgorithm. It has a separate icon for its file extension (.fprg). Will you add this icon to your extension?

Do you have a link to the icon or project?

Yes, here's a link to the icon of the file with extension .fprg. Hope it helps. https://1drv.ms/u/s!Ar00rTB3hmIJjxLjEwTLfeGePq1m

On Sun, 1 Aug, 2021, 14:38 Roberto Huertas, @.***> wrote: Do you have a link to the icon or project?

Tried to implement this; the icon doesn't exist anymore.

"Tried to implement this, the icon doesn't exist anymore." — Ok, I'll attach it here directly.
gharchive/issue
2021-06-05T04:16:58
2025-04-01T06:40:53.098150
{ "authors": [ "MRS73694", "robertohuertasm", "sasial-dev" ], "repo": "vscode-icons/vscode-icons", "url": "https://github.com/vscode-icons/vscode-icons/issues/2785", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
204329214
[Performance] Slim icon has too many details

The Slim icon slows down tree-view scrolling. Even 2-3 icons cause perceptible lag. So when there are, say, 10 — it's unbearable. Please consider replacing it. Thanks :pray:

Indeed. Thanks for bringing this to our attention. We'll replace it asap.
gharchive/issue
2017-01-31T15:23:13
2025-04-01T06:40:53.100091
{ "authors": [ "JimiC", "caudatecoder" ], "repo": "vscode-icons/vscode-icons", "url": "https://github.com/vscode-icons/vscode-icons/issues/701", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1324932414
Duplicated insertion while typing korean It seems something wrong in typing korean after update 0.0.89. But when I downgraded vscode-neovim 0.0.88, it doesn't happen and everything works fine. Version: 1.69.2 (user setup) Commit: 3b889b090b5ad5793f524b5d1d39fda662b96a2a Date: 2022-07-18T16:12:52.460Z Electron: 18.3.5 Chromium: 100.0.4896.160 Node.js: 16.13.2 V8: 10.0.139.17-electron.0 OS: Windows_NT x64 10.0.22000 This might be caused by #900. @BTBMan could you look into this? Thanks! @zuhanit Could you tell me what input method that you are using to reproduce? And It's that happend after type space key? I'm sorry I'm not a korean user and I tried to reproduce with my only Mac, but It didn't happend, But I will try my best to fix that. I'm using default IME of Windows 11, from Microsoft. It happens after type space key, but can happen like below: Exit inserting mode I pressed ㄷ+ㅐ+ㅎ+ㅏ+ㄴ+ㅁ+ㅣ+ㄴ and type ESC for exit. Just type little long word I pressed ㄷ+ㅐ+ㅎ+ㅏ+ㄴ+ㅁ+ㅣ+ㄴ+ㄱ+ㅜ+ㄱ and type ㅇ(not o, 0). Type space key always produce characters before cursor. Please tell me if you need any information for solve this 👀 This also occurs in Japanese. Version: 1.70.0 Commit: da76f93349a72022ca4670c1b84860304616aaa2 Date: 2022-08-04T04:38:48.541Z Electron: 18.3.5 Chromium: 100.0.4896.160 Node.js: 16.13.2 V8: 10.0.139.17-electron.0 OS: Linux x64 5.15.0-43-generic Ubuntu 22.04 NeoVim v0.5.0-dev+1041-g1607dd071 I typed the following i[ENABLE JAPANESE INPUT]aiueo[SPACE]aiueo[ENTER] It seems to occur when the Japanese conversion is started with the space key and then the next input is started without being confirmed with the enter key. I'm facing same problem too (in Japanese). Version: 1.70.0 (user setup) Commit: da76f93349a72022ca4670c1b84860304616aaa2 Date: 2022-08-04T04:38:16.462Z Electron: 18.3.5 Chromium: 100.0.4896.160 Node.js: 16.13.2 V8: 10.0.139.17-electron.0 OS: Windows_NT x64 10.0.19043 Using with NeoVim on WSL2 (Ubuntu 20.04) 0.0.89 has this bug. 
Use 0.0.88 until new release containing the revert
gharchive/issue
2022-08-01T20:23:04
2025-04-01T06:40:53.111968
{ "authors": [ "74th", "BTBMan", "snaka", "theol0403", "vlwkaos", "zuhanit" ], "repo": "vscode-neovim/vscode-neovim", "url": "https://github.com/vscode-neovim/vscode-neovim/issues/984", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
830078177
v1.0.3. backup broken Describe the bug if a backup takes longer as some seconds, it get an error. With v1.0.2. everything works fine. Logs I0312 12:32:36.453781 1 logging.go:180] wrestic/backup/progress "level"=0 "msg"="progress of backup" "percentage"="0.00%" I0312 12:32:36.454783 1 logging.go:180] wrestic/backup/progress "level"=0 "msg"="progress of backup" "percentage"="0.00%" E0312 12:32:36.455783 1 logging.go:156] wrestic/backup/progress "msg"="can't decode restic json output" "error"="invalid character 'L' looking for beginning of value" "string"="Load(\u003cdata/4193b3e209\u003e, 0, 0) returned error, retrying after 391.517075ms: Access Denied." E0312 12:32:36.458755 1 main.go:185] wrestic "msg"="backup job failed" "error"="cmd.Wait() err: -1" ``` Environment Image Version: e.g. v1.0.3 k3s version v1.20.4+k3s1 (838a906a) go version go1.15.8 Hi @gi8lino The error Load(\u003cdata/4193b3e209\u003e, 0, 0) returned error, retrying after 391.517075ms: Access Denied. indicates that restic can't access the backup bucket. Does the exact same backup work with 1.0.2? Hi @Kidswiss Yes, with v1.0.2 everything worked fine! Can you try to run the backup with another wrestic version? e.g. 0.1.9 or 0.2.0 and see if it still happens? I tried it with wrestic v0.2.0 and got the same error: I0312 13:11:15.721907 1 utils.go:81] wrestic/backup/progress "level"=0 "msg"="progress of backup" "percentage"="0.00%" E0312 13:11:15.723861 1 utils.go:69] wrestic/backup/progress "msg"="can't decode restic json output" "error"="invalid character 'L' looking for beginning of value" "string"="Load(\u003cdata/4193b3e209\u003e, 0, 0) returned error, retrying after 391.517075ms: Copy: Access Denied." E0312 13:11:15.739853 1 main.go:182] wrestic "msg"="backup job failed" "error"="cmd.Wait() err: signal: broken pipe" @gi8lino Can you share a Backup definition which triggers this behaviour on your side? 
With that I would like to learn whether the Backup just takes a long time because it is big in size, or it because a BackupCommand is taking a long time, or what the reason is, so that I can model this locally. I think this has nothing to do with how long the command runs. --- apiVersion: apps/v1 kind: Deployment metadata: name: annotated-subject-deployment spec: replicas: 1 selector: matchLabels: app: subject template: metadata: labels: app: subject annotations: k8up.syn.tools/backup: 'true' k8up.syn.tools/backupcommand: 'echo Hello' # This command runs almost instantly k8up.syn.tools/file-extension: '.txt' spec: containers: - name: subject-container image: quay.io/prometheus/busybox:latest imagePullPolicy: IfNotPresent args: - sh - -c - | printf '$BACKUP_FILE_CONTENT' | tee '/data/$BACKUP_FILE_NAME' && \ ls -la /data && \ echo "test file '/data/$BACKUP_FILE_NAME' written, sleeping now" && \ sleep infinity securityContext: runAsUser: $ID volumeMounts: - name: volume mountPath: /data volumes: - name: volume persistentVolumeClaim: claimName: subject-pvc This simple deployment (based on our E2E-tests) fails with the same error. @cimnine Your example uses a prebackup command, does it also fail for backups from PVCs? @cimnine do you mean this: apiVersion: backup.appuio.ch/v1alpha1 kind: Backup metadata: name: test-backup-7r8bx namespace: test spec: backend: s3: bucket: test I think the backup size doesn't matter. The error occurs even it the pvc is only 4.0K @gi8lino Do make use of PreBackupPods or the k8up.syn.tools/backupcommand annotation on Pods? @cimnine The PreBackupPods I use only for the cluster-backup (single node k3s). For SQL'ish stuff I use the annotation k8up.syn.tools/backupcommand on the deployment. For PVC's (all are RWO) I set the annotation k8up.syn.tools/backup: "true". @gi8lino so far I was not able to replicate the exact same issue. I've found another problem (https://github.com/vshn/wrestic/issues/78), but that isn't actually the same problem. 
Would you mind doing two things for me? First, can you tell me what kind of S3 implementation you are using? I.e. is it minio, ceph, AWS? And – if it's self-hosted – what version you're using? Second, could you try running K8up 1.0.3 with wrestic 0.1.9? The reason I'm asking is that in version 0.2.x of wrestic, the underlying restic binary was updated, and as far as I understood it contains quite some changes to the S3-related code. Therefore I would like to rule out that this triggers some kind of incompatibility with your S3. @cimnine I have minio installed thru helm: chart version: 8.0.10 VERSION: 2021-02-14T04:01:33Z PLATFORM: Host: minio-74557b8485-s7x7v | OS: linux | Arch: amd64 RUNTIME: Version: go1.15.7 | CPUs: 4 with k8up 1.0.3 and wrestic 0.1.9 k8up it works fine. It's a self-hosted single node k3s server. Hi! I wanted to report the same error on my infrastructure. Minio, simple docker install: 2021-03-12T00:00:47Z on a baremetal host with the storage attached via NFS. k8up is version k8up-1.0.3 installed via helm, on a K3s 3 node cluster. I have updated my minio to the latest version 2021-03-17T02:33:02Z and the backup resumed. What I did not check: I didn't check the logs on the minio server side I didn't restart minio to see if this might've been the cause. Maybe @gi8lino you can check? What also seemed weird to me that it was about half of the namespaces that were failing. The other half was doing just fine continuing the backups on schedule. @cimnine I restarted the minio deployment and tried to do some backups. One PVC (~17MB) worked fine. 
For the other PVC (1.3 GB, 3.6 GB & 356 GB) I get this kind of errors: I0325 10:42:11.373640 1 logging.go:180] wrestic/backup/progress "level"=0 "msg"="progress of backup" "percentage"="35.26%" E0325 10:42:11.373800 1 logging.go:156] wrestic/backup/progress "msg"="can't decode restic json output" "error"="unexpected end of JSON input" "string"="{\"message_type\":\"status\",\"percent_done\":0.355037" E0325 10:42:11.375847 1 main.go:185] wrestic "msg"="backup job failed" "error"="cmd.Wait() err: -1" @gi8lino I get this error from time to time, it seems to be able to recover though after a few attempts. I've created an issue in https://github.com/vshn/wrestic/issues/79 for the percentage problem. While doing so, I've found the wrestic issue https://github.com/vshn/wrestic/issues/76. This seems to describe the same issue as @gi8lino reports, doesn't it? I'd suggest to collect output parsing issues in vshn/wrestic#79. We should also not abort a backup on a single output line that can't be parsed. I don't think it's related to vshn/wrestic#76 though. Because restic won't exit in those cases. But here it returns as indicated by: E0325 10:42:11.375847 1 main.go:185] wrestic "msg"="backup job failed" "error"="cmd.Wait() err: -1" We decided that we first want to improve the part in wrestic that parses the log messages from restic. Because currently we're not sure whether the backup fails because the parsing of the output failes, or whether the underlying restic failed because of some reason. I tested a bit how Minio behaves. First I uploaded something to a local Minio instance and then removed the permissions of the file: 10:53:02 in baas/minio/test ➜ ll total 664 drwxr-xr-x 3 simonbeck staff 96 Mar 30 10:52 . drwxr-xr-x@ 5 simonbeck staff 160 Mar 30 10:52 .. 
---------- 1 simonbeck staff 338373 Mar 30 10:52 Clipboard - March 29, 2021 1_18 PM.png Then I tried to download the file, minio showed these errors in the log: API: SYSTEM() Time: 10:53:12 CEST 03/30/2021 DeploymentID: 0366f684-76ec-4f28-8a40-aa47d38e5a69 Error: Prefix access is denied: test/Clipboard - March 29, 2021 1_18 PM.png (cmd.PrefixAccessDenied) 3: cmd/web-handlers.go:2442:cmd.toWebAPIError() 2: cmd/web-handlers.go:2452:cmd.writeWebErrorResponse() 1: cmd/web-handlers.go:1471:cmd.(*webAPIHandlers).Download() Also it returned an access denied to my browser: So this is not necessarily an issue of wrestic/restic itself but could indicate something wrong with the running Minio instance. fyi I updated k8up with helm to v1.0.4. Now it works: I0330 23:00:42.314151 1 backup.go:54] wrestic/backup "level"=0 "msg"="starting backup for folder" "foldername"="test-data" I0330 23:00:42.314175 1 command.go:82] wrestic/backup/command "level"=0 "msg"="Defining RESTIC_PROGRESS_FPS" "frequency"=0.016666666666666666 I0330 23:00:44.546191 1 logging.go:168] wrestic/backup/progress "level"=0 "msg"="progress of backup" "percentage"="0.00%" I0330 23:00:45.636605 1 logging.go:160] wrestic/backup/progress "level"=0 "msg"="backup finished" "changed files"=9 "errors"=0 "new files"=0 I0330 23:00:45.636744 1 logging.go:161] wrestic/backup/progress "level"=0 "msg"="stats" "bytes added"=6922430 "bytes processed"=143459545 "time"=2.782663316 I0330 23:00:45.636909 1 handler.go:44] wrestic/statsHandler/promStats "level"=0 "msg"="sending prometheus stats" "url"="http://prometheus-pushgateway.monitoring.svc.cluster.local:9091" To test it I've created a backup of a PVC and a DB. exemplary: kubectl apply -f - << EOF apiVersion: backup.appuio.ch/v1alpha1 kind: Backup metadata: name: manual-backup-2021-03-31-091709 namespace: test spec: backend: s3: bucket: test EOF I also tried to restore the DB-dump with the restic CLI. First, it didn't work, but after restarting minio it worked! 
Great to hear that your problems have been solved! Have you also tested a longer running backup? That's great! So to summarize:
- We found some issues with the restic output parsing that were fixed:
  - a buffer that got filled up, leading to crashes for longer-running backups
  - if a log line couldn't be parsed, it cancelled the backup
- The access denied errors were likely caused by some hiccups with Minio, as restarts resolved them
@gi8lino I think we can close here? @Kidswiss All my backups did run tonight without an error, so yes, you can close it. Thank you!
gharchive/issue
2021-03-12T12:36:49
2025-04-01T06:40:53.136633
{ "authors": [ "Kidswiss", "cimnine", "gi8lino", "schemen" ], "repo": "vshn/k8up", "url": "https://github.com/vshn/k8up/issues/395", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1196305134
Make certificate path length configurable The purpose of this PR is to make the certificate path length configurable:
- for the Root CA stored in KV, the minimum should be 1
- for the device identity and edge CA certificates obtained from the EST server, the path length should be 0
The PR also contains minor code refactoring and renaming. Thanks @Ioana37! Really great improvements!
gharchive/pull-request
2022-04-07T16:38:40
2025-04-01T06:40:53.144318
{ "authors": [ "Ioana37", "vslepakov" ], "repo": "vslepakov/keyvault-ca", "url": "https://github.com/vslepakov/keyvault-ca/pull/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
842638439
[BUG] Plugin CTRL+A, CTRL+., CTRL+SHIFT+. works in Obsidian help vault but not with my note vault Describe the bug Plugin CTRL+A, CTRL+., CTRL+SHIFT+. works in the Obsidian help vault but not with my note vault. To Reproduce Open the Obsidian help vault. Install the Outliner plugin and turn it on. Make a list. A B C Placing the cursor around any item, CTRL+A will work. Doing the same thing in my notes vault selects the whole note instead of the line or the whole outline. CTRL+. and CTRL+SHIFT+. work in the Obsidian help vault but not in my notes vault. I disabled all other plugins when testing. Expected behavior Shortcuts will have consistent behaviour and will not be buggy. Screenshots If applicable, add screenshots to help explain your problem. Desktop (please complete the following information): OS: Linux Obsidian Version: 0.11.9 Additional context Add any other context about the problem here. Tomorrow I will try to check the output of the debug console. Is it possible that your lists are prefixed with the * sign, not the - sign? Currently only - is supported. Yes, it was the case! Could you make it more explicit in the README? @danieltomasz It's better to support spaces :) Released in https://github.com/vslinko/obsidian-outliner/releases/tag/1.0.10 The speed of development astonishes me! Thanks for your hard work!
gharchive/issue
2021-03-28T01:31:04
2025-04-01T06:40:53.150732
{ "authors": [ "danieltomasz", "vslinko" ], "repo": "vslinko/obsidian-outliner", "url": "https://github.com/vslinko/obsidian-outliner/issues/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
856595391
[BUG] Unable to delete bullet marker in first item of list Describe the bug Cannot erase the bullet point from the first item of the list to convert it into normal text. To Reproduce Steps to reproduce the behavior: Create a list with a minimum of two items. Go to the first character of the first item. Press backspace to remove the bullet mark. Expected behavior The bullet mark should be removed converting the item to normal text (and the next item becomes the first item). Screenshots https://user-images.githubusercontent.com/29776685/114502492-7c7c5200-9c49-11eb-8be3-6d5624b0b189.mov Desktop (please complete the following information): OS: macOS Catalina 10.15.7 Obsidian Version: 0.11.13 The behaviour is happening when the list is at the top of the file The behaviour is happening in vanilla Obsidian too with the plugin turned off too. The behavior is happening when the list is at the top of the file The behavior is happening in vanilla Obsidian with the plugin turned off too. The behavior happens even if the list is not at the top of the file. Not true from my experiment: I created a new empty vault with all the default settings and no plugins - the behavior does not happen. The correct expected behavior happens. I added the Outliner plugin and the behavior happens. I disable the Outliner plugin and reopen the file and it is back to the behavior not happening (expected behavior happens). Reopening the file seems to be essential to bring back the vanilla Obsidian for some reason. @kritika-gupta Thank you for your contribution! I did it on purpose because I couldn't think of a better option. Outliner tries not to break the structure of the list, and this is important. I can create a specific rule that behaves like you suggested, but only for cases where the element is the first in the list and has no children. But it worries me that this is not intuitive behavior. @kritika-gupta Thank you for your contribution! 
I did it on purpose because I couldn't think of a better option. Outliner tries not to break the structure of the list, and this is important. I can create a specific rule that behaves like you suggested, but only for cases where the element is the first in the list and has no children. But it worries me that this is not intuitive behavior. I agree, it makes sense only when the element has no children. I came across this issue when I wanted to simply delete an item from the list, and that item happened to be the first item on the list. The behavior occurred when I started to erase from the end of the item (which is how someone would probably delete an item from a list; I should have shown that in my screenshot, my bad). Fixed in https://github.com/vslinko/obsidian-outliner/releases/tag/1.1.3 Fixed in https://github.com/vslinko/obsidian-outliner/releases/tag/1.1.3 Thanks. Your development speed is amazing. Great work on such a useful plugin! Not for me. I recently (re)installed Obsidian and Outliner, and the first bullet point in any list cannot be removed. @mathewlowry Yes, you cannot remove bullet markers when the list item contains sub-items, because this will break the structure of the list. Please try removing the sub-items first. Press ⌘ ⇧ K to delete the entire line. @mathewlowry This will screw up the entire list, but it will get you what you want.
gharchive/issue
2021-04-13T05:50:18
2025-04-01T06:40:53.162038
{ "authors": [ "danieltomasz", "kritika-gupta", "mariomui", "mathewlowry", "vslinko" ], "repo": "vslinko/obsidian-outliner", "url": "https://github.com/vslinko/obsidian-outliner/issues/77", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
384998081
Add CrownControl Project URL https://github.com/huri000/CrownControl Category Scroll View I guess (couldn't find any better category) Description Inspired by Apple Watch Digital Crown, CrownControl is a tiny accessory view that enables scrolling through scrollable content possible without lifting your thumb. Why it should be included to awesome-ios (mandatory) It is an original (haven't seen any similar library) and experimental Digital Crown implemented for iOS, and a really cool addition to the awesome library IMHO. (I've added unit-tests as required). Checklist [x] Only one project/change is in this pull request [x] Has unit tests, integration tests or UI tests [x] Addition in chronological order (bottom of category) [x] Supports iOS 9 / tvOS 10 or later [x] Supports Swift 4 or later [x] Has a commit from less than 2 years ago [x] Has a clear README in English 1 Warning :warning: Found 7 link issues, a project collaborator will take care of these, thanks :) Link issues by awesome_bot Line Status Link 206 403 https://www.udemy.com/arkit-beginner-to-professional/?couponCode=CREATORS 1497 301 https://github.com/mobilefirstinc/MFCard redirects tohttps://github.com/RC7770/MFCard 1516 301 https://github.com/IvanVorobei/RequestPermission redirects tohttps://github.com/IvanVorobei/SPPermission 1915 404 https://github.com/Cleveroad/CRParticleEffect 1959 404 https://github.com/Cleveroad/CRRulerControl 1967 404 https://github.com/Cleveroad/CRPageViewController 2273 404 https://github.com/Cleveroad/CRNetworkButton Generated by :no_entry_sign: Danger Thanks for contributing! 🎉
gharchive/pull-request
2018-11-27T21:40:39
2025-04-01T06:40:53.176349
{ "authors": [ "danger-awesome-ios", "huri000", "lfarah" ], "repo": "vsouza/awesome-ios", "url": "https://github.com/vsouza/awesome-ios/pull/2704", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
446995508
Add glide engine Project URL http://github.com/cocoatoucher/Glide Category Games Description glide is a SpriteKit and GameplayKit based engine for making 2d games, and I'm adding it under Games category within this repo's readme file. Why it should be included to awesome-ios (mandatory) glide is a well documented engine which can help interested devs learn to build games. Also, glide needs more collaborators. Checklist [x] Has 50 Github stargazers or more [x] Only one project/change is in this pull request [x] Isn't an archived project [x] Has more than one contributor [x] Has unit tests, integration tests or UI tests [x] Addition in chronological order (bottom of category) [x] Supports iOS 9 / tvOS 10 or later [x] Supports Swift 4 or later [ ] Has a commit from less than 2 years ago [x] Has a clear README in English 1 Warning :warning: Found 10 link issues, a project collaborator will take care of these, thanks :) Link issues by awesome_bot Line Status Link 371 301 https://github.com/YR/Cachyr redirects tohttps://github.com/nrkno/yr-cachyr 745 301 http://github.com/cocoatoucher/Glide redirects tohttps://github.com/cocoatoucher/Glide 1145 301 https://github.com/Inspirato/SwiftPhotoGallery redirects tohttps://github.com/justinvallely/SwiftPhotoGallery 1179 301 https://www.photoeditorsdk.com redirects tohttps://photoeditorsdk.com/ 1386 301 https://www.airship.com/products/mobile-app-engagement redirects tohttps://www.airship.com/platform/channels/mobile-app/ 1645 301 https://github.com/twitter/twitter-kit-ios redirects tohttps://github.com/twitter-archive/twitter-kit-ios 1840 301 https://github.com/dzenbot/Iconic redirects tohttps://github.com/home-assistant/Iconic 2414 301 https://github.com/jogendra/AnimatedMaskLabel redirects tohttps://github.com/jogendra/LoadingShimmer 3321 500 http://www.swiftplayhouse.com/ 3331 301 https://itunes.apple.com/us/book/swift-programming-language/id881256329?mt=11 redirects 
to https://books.apple.com/us/book/swift-programming-language/id881256329 Generated by :no_entry_sign: Danger
gharchive/pull-request
2019-05-22T08:17:30
2025-04-01T06:40:53.192080
{ "authors": [ "cocoatoucher", "danger-awesome-ios" ], "repo": "vsouza/awesome-ios", "url": "https://github.com/vsouza/awesome-ios/pull/2801", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
374602276
ImportError: cannot import name cfg when running a training Thanks for your excellent work! I got an issue when I tried to run a training. Please help me if you are willing to. @gaochen315 I tried to run:
python tools/Train_ResNet_HICO.py --num_iteration 1800000
but got this output:
Traceback (most recent call last):
  File "tools/Train_ResNet_HICO.py", line 21, in
    from models.train_Solver_HICO import train_net
  File "/home/ydm/project-HOI/iCAN/tools/../lib/models/train_Solver_HICO.py", line 12, in
    from ult.ult import Get_Next_Instance_HO_Neg_HICO
  File "/home/ydm/project-HOI/iCAN/tools/../lib/ult/ult.py", line 22, in
    from config import cfg
ImportError: cannot import name cfg
If you suspect this is an IPython bug, please report it at: https://github.com/ipython/ipython/issues or send an email to the mailing list at ipython-dev@python.org You can print a more detailed traceback right now with "%tb", or use "%debug" to interactively debug it. Extra-detailed tracebacks for bug-reporting purposes can be enabled via: %config Application.verbose_crash=True I set my environment as you said and I tried tensorflow 1.1 and 1.2. How can I fix it? Thanks again. You can see from the error that the script ult.py is not able to find the module config that is in the same directory. An easy fix would be to change line 22 in https://github.com/vt-vl-lab/iCAN/blob/master/lib/ult/ult.py to from ult.config import cfg. That should remove this error. @BestSongEver @unbiasedmodeler Thanks for your reply. It works.
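The fix in the thread (qualifying the import with the package name) can be illustrated with a throwaway package layout built in a temp directory; this only mimics the `lib/ult/config.py` situation and is not the project's actual code:

```python
# Sketch: why "from config import cfg" fails when config.py lives inside the
# ult/ package, while "from ult.config import cfg" works. We build a throwaway
# package layout in a temp dir to mimic lib/ult/config.py.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "ult")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "config.py"), "w") as f:
    f.write("cfg = {'name': 'iCAN'}\n")

sys.path.insert(0, root)  # like adding lib/ to sys.path in the training script

try:
    from config import cfg  # bare module name: not importable from here
except ImportError:
    pass

from ult.config import cfg  # package-qualified: works

print(cfg["name"])
```

The bare import only resolves if `config.py` sits directly on `sys.path`, which it does not here; qualifying it with the package name is what line 22 of `ult.py` needed.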
gharchive/issue
2018-10-27T03:45:32
2025-04-01T06:40:53.204339
{ "authors": [ "BestSongEver", "unbiasedmodeler" ], "repo": "vt-vl-lab/iCAN", "url": "https://github.com/vt-vl-lab/iCAN/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
362814326
from future import divisions when running the code, it reports that:
File "tools/Train_ResNet_VCOCO.py", line 20, in <module>
  from ult.config import cfg
SyntaxError: future feature divisions is not defined (config.py, line 8)
I updated the from __future__ import divisions line of config.py to:
from __future__ import division
and that solved the error. It seems to be a typo in lib/ult/config.py. Good catch! It is fixed. Thanks.
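The one-line fix can be checked directly; `divisions` is not a valid `__future__` feature name, but `division` is:

```python
# The corrected import from lib/ult/config.py. On Python 3 this is a no-op;
# on Python 2 it enables true division instead of integer division.
from __future__ import division

print(1 / 2)   # true division: 0.5 (bare Python 2 would print 0 here)
print(1 // 2)  # floor division is still available explicitly: 0
```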
gharchive/issue
2018-09-22T01:14:59
2025-04-01T06:40:53.206505
{ "authors": [ "gaochen315", "youjiangxu" ], "repo": "vt-vl-lab/iCAN", "url": "https://github.com/vt-vl-lab/iCAN/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2330022016
Fix several bugs:
- lws::account height update should only go up.
- Webhook confirmations can start after first new block
- Webhook confirmations could face a rescan
Found these issues while digging deeper in the remote scanning code (although not related to that code). This will get merged relatively soon as these bugs need to be back-ported to the release branch. Added some more tests, and fixed another issue within the DB scan height (it previously wasn't an issue until changes in the lws::account object).
gharchive/pull-request
2024-06-03T02:11:26
2025-04-01T06:40:53.230236
{ "authors": [ "vtnerd" ], "repo": "vtnerd/monero-lws", "url": "https://github.com/vtnerd/monero-lws/pull/119", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
798314715
Missing features Hello, is it possible to add these features?
1. clear button to remove all values in v-model. It's not very user friendly to remove 200 tags manually
2. close prop, so that I can close the options list when I pick a value in multiple/tags mode
3. limit the options list to a given length
+1 for No. 2 You can help yourself with the @select="closeOptionLists" event and:
methods: {
  closeOptionLists: function() {
    document.querySelectorAll('.multiselect-input').forEach(function (el) {
      el.blur();
    });
  }
}
Added the first 2 in 1.3.1 @MartinKravec (the third should already exist). Now it should automatically appear over the caret when multiple or tags mode has any selected options. Check out the API docs section now. Isn't limit in the props section what you're looking for? I think an interesting feature would also be loading more options on scrolling. I have an API that is using paging, so it would be dope if I could load more options from the other pages by scrolling the list to the end. I think it would be great if you could group the options together:
Group-Title 1
  Option 1
  Option 2
Group-Title 2
  Option 3
  Option 4
Closing this because we have #24 for groups. Feel free to create a new one with scroll-loading.
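The blur workaround from the thread can be isolated into a plain function; the DOM here is a hand-rolled stub so the behavior can be exercised outside a browser (`.multiselect-input` is the class used by that version of the library, and the stub shape is hypothetical):

```javascript
// The workaround, extracted: blur every multiselect input so its option list closes.
function closeOptionLists(doc) {
  doc.querySelectorAll('.multiselect-input').forEach(function (el) {
    el.blur();
  });
}

// Tiny stand-in for `document`, just enough to exercise the function.
const blurred = [];
const fakeDoc = {
  querySelectorAll: (selector) =>
    selector === '.multiselect-input'
      ? [{ blur: () => blurred.push('a') }, { blur: () => blurred.push('b') }]
      : [],
};

closeOptionLists(fakeDoc);
console.log(blurred.length); // 2
```

In a real page you would pass `document` itself; blurring works because the component closes its dropdown when its input loses focus.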
gharchive/issue
2021-02-01T12:42:39
2025-04-01T06:40:53.243778
{ "authors": [ "MartinKravec", "adamberecz", "c4y", "luxterful" ], "repo": "vueform/multiselect", "url": "https://github.com/vueform/multiselect/issues/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2618696612
Display values not updating properly when updating options and values with custom "value-prop" Version Vue version: 3 Description When updating values and options while using a custom "value-prop", the display doesn't update properly and shows the wrong information. But when you open the multiselect dropdown, the correct values are available in the dropdown, while the main multiselect input is still not updated. It seems like it is still using the old options list instead of the new one. Demo https://jsfiddle.net/edw13Lg8/ Please use our JSFiddle template to reproduce the bug. Issues without working reproduction might be ignored. In the meantime, a quick workaround is to force the multiselect's update by adding a :key attribute bound to the options:
<VueFormMultiselect :key="options" :options="options" ... >
gharchive/issue
2024-10-28T15:01:47
2025-04-01T06:40:53.246745
{ "authors": [ "GueganVictor" ], "repo": "vueform/multiselect", "url": "https://github.com/vueform/multiselect/issues/431", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1166833526
Allow ability to set default compilerOptions when compiling templates in the browser. What problem does this feature solve? I need to preserve whitespace when compiling templates in the browser. I have to manually set the compiler option every time I call createApp. What does the proposed API look like? Unsure of the best API, but it should be pretty simple. Something like:
Vue.default.config.compilerOptions.whitespace = 'preserve'
I agree with your statement, but I don't think a framework should be influencing my design choices. What do you think about some kind of callback or event handler to interact with the instance returned from createApp()? This is my solution:
window.Vue = Object.create(Vue);
Vue.createApp = function(...args){
  var app = this.__proto__.createApp(...args);
  app.config.compilerOptions.whitespace = 'preserve';
  return app;
}
I don't think a framework should be influencing my design choices
That's pretty much what a framework does: it sets a frame in which you can design as you please, but only within the boundaries set by the framework's APIs. So yes, a framework will be influencing your design choices by its very nature. Point taken. I should have expressed my thoughts a bit differently in relation to this specific issue. Vue by default handles whitespace differently to browsers, and there is no other way to change this other than coming up with a hacky workaround. On a separate note: do you think my solution will cause any problems? Can you see any issues with it?
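The wrapping technique from the thread can be sketched in isolation; here the Vue global is replaced by a minimal stand-in so the idea runs anywhere (the `createApp`/`config.compilerOptions` names follow the real Vue 3 API, but the stub itself is hypothetical):

```javascript
// Minimal stand-in for the real Vue global, just to demonstrate the wrapping idea.
const RealVue = {
  createApp(rootComponent) {
    return { rootComponent, config: { compilerOptions: {} } };
  },
};

// Wrap createApp so every app created through the wrapper gets a default
// compiler option applied before it is returned to the caller.
const Vue = Object.create(RealVue);
Vue.createApp = function (...args) {
  // Object.getPrototypeOf is the standard form of the __proto__ access
  // used in the thread's workaround.
  const app = Object.getPrototypeOf(this).createApp(...args);
  app.config.compilerOptions.whitespace = 'preserve';
  return app;
};

const app = Vue.createApp({ template: '<div>  hi  </div>' });
console.log(app.config.compilerOptions.whitespace); // 'preserve'
```

Because the wrapper delegates to the prototype, the original `createApp` stays untouched and every app still gets its own `config` object.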
gharchive/issue
2022-03-11T20:23:37
2025-04-01T06:40:53.252010
{ "authors": [ "LinusBorg", "ryanalbrecht" ], "repo": "vuejs/core", "url": "https://github.com/vuejs/core/issues/5574", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1384479129
fix: return the correct name when stubbing a script setup component When stubbing a component using the <script setup> syntax, test-utils can't return its name correctly, so it skips the stub and the stub fails. There's an issue that describes the problem. This is because the getComponentName() function doesn't return its __name property. OK, I'll do it later @cexbrayat If I want the test case to run as expected I have to install unplugin-vue-components and add it to the Vitest configuration file like this: Is it OK? I guess we can, if that solves a specific issue for this plugin. Is it possible to limit the application of this plugin only to the test you are adding? I think I can't. As long as a component is imported, the plugin will take effect, so I can't limit the scope of this plugin to a test, but only control the target it transforms. So I specified that it can only transform the AutoImportScriptSetup component. Ok, it should be good enough. Push your test and we'll take a look 👍 You need to run pnpm i again to update the lockfile, and we should be ready to merge Ok, sorry I forgot Awesome, thanks @joeyhuang0235
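The fix can be illustrated with a simplified, hypothetical name-resolution helper (the real implementation lives in Vue Test Utils; `__name` is the property the SFC compiler attaches to `<script setup>` components in place of an explicit `name` option):

```javascript
// Simplified sketch: resolve a display name for a component definition.
// Classic components carry an explicit `name` option; <script setup>
// components only get a compiler-generated `__name`, so the lookup must
// fall back to it instead of returning undefined.
function getComponentName(component) {
  if (!component) return undefined;
  return component.name || component.__name || undefined;
}

const classic = { name: 'Hello' };
const scriptSetup = { __name: 'AutoImportScriptSetup' }; // shape as emitted by the compiler

console.log(getComponentName(classic));     // 'Hello'
console.log(getComponentName(scriptSetup)); // 'AutoImportScriptSetup'
```

Without the `__name` fallback, the second lookup yields `undefined`, which is exactly why the stub matching failed in the report.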
gharchive/pull-request
2022-09-24T01:43:46
2025-04-01T06:40:53.261315
{ "authors": [ "cexbrayat", "joeyhuang0235" ], "repo": "vuejs/test-utils", "url": "https://github.com/vuejs/test-utils/pull/1783", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
723004650
feat(computed): add cleanup method See https://github.com/vuejs/vue-next/issues/2261#issuecomment-705170322 The cleanup method will remove all dep references to and from dependency sets. This allows the computed to be GC collected (if no other references remain). Another reason to call this cleanup is to reduce the number of dependencies. Triggering a ref may become slow when there are many stale computeds that depend on it - as they will all have to be triggered. When calling cleanup, you can reduce the amount of stale computeds. Notice that the computed will remain active and fully functional after cleanup. By simply setting the dirty flag upon cleanup, when getting the computed value later, all dependencies will simply be recovered and the correct value will be returned! One question @yyx990803 In the final commit, I clear the cached value upon cleanup. Personally, I think that it should also be cleared immediately after setting the dirty flag in the scheduler. It will never be used anyway and will use memory until the computed is read again. This is indeed a good addition. However, I am a bit concerned about the consistency of the APIs we are going to have.
Currently & ongoing, to clean up effects:
Effect:
import { stop, effect } from '@vue/reactivity'
const runner = effect(/* ... */)
stop(runner)
Watch / Watch Effect:
import { watch } from 'vue'
const stopHandler = watch(/* ... */)
stopHandler()
Scope (PR #2195):
import { effectScope, stop } from '@vue/reactivity'
const scope = effectScope(/* ... */)
stop(scope)
Computed (this PR):
import { computed } from 'vue'
const foo = computed(/* ... */)
foo.cleanup()
Proposal: to have a general interface of disposable effect, and let the return values of effect, scope, watch and computed extend it:
export interface Disposable { stop: () => void }
const runner = effect(/* ... */)
runner.stop()
const scope = effectScope(/* ... */)
scope.stop()
const foo = computed(/* ... */)
foo.stop()
// as it's a breaking change, not sure if it's worth it to do it:
const watcher = watch(/* ... */)
watcher.stop()
Wondering what you think? @antfu This occurred to me as well, but computed.cleanup is different from watch stop handles, and even from effect.stop. When running stop on a watcher, it makes the effect inactive. It's really destroyed. In the case of computed.cleanup it merely dereferences it from all dependencies. It does not make the effect inactive, and it could be re-used without problems. Maybe I should provide some background on why this is important. I've run into this situation in a complex scroller with 20'000 rows that all had computeds depending on one other computed. As more and more rows were scrolled into view, scrolling performance degraded. The base computed, upon any trigger, had to trigger all those thousands of computeds that belonged to rows that were no longer 'within view'. Really wasteful. On top of that, as computeds cache their values, it held on to vast amounts of memory unnecessarily. Even when removing those rows later on, the base computed held the references, causing a memory leak. I misunderstood the original motivation, thanks for the detailed explanation. The naming is good to me then 👍 I was intrigued when I saw this added to the 3.1 plan. I think that a stop concept is needed (either in addition to or as a replacement of cleanup). The issue #2261 notes that a computed having a long-lived dependency will leak itself and all its dependencies. Looking at WPF for prior art, the "best" (but not really, read on) solution is to use WeakRef for dependency tracking. A weak ref means that a long-lived dependency would not keep a computed alive if nothing can read from that computed anymore. Problem is that weak references are a recent JS addition. They won't work in pre-2020 browsers, won't work in either Safari or Opera, and it's not something you can polyfill. So I suppose they're out of the question here.
This means that every effect depending on long-lived dependencies must be carefully tracked and stopped -- otherwise one leaks memory. There are high-level tools for that in Vue: it's done automatically in components, which are a special case of (upcoming) effectScope. It seems natural to me that there should be a low-level stop concept as well, for the simple case where a user wants to stop a single computed that they created. I don't know if a cleanup is needed. I'd tend to think you could destroy (stop) and re-create computeds just as well but I didn't fully grasp the background given by @basvanmeurs. So why not. When it comes to bikeshedding the name, I find cleanup not a great name. It conveys an after-stop / dispose idea that's more about the internals than the use-cases. Other ideas: pause or suspend? In contrast to stop it conveys that the computed would not be active anymore but could be started again. Hey, I was intrigued as well by these changes to the effect API and the cleanup additions. I haven't gotten to the point where I need them in a project, but I really think that the differences in the APIs would be a struggle for every dev that wants to use them. I get that there is a difference at a low level in how they behave, but most of the people who will use it won't really care about that. The only thing they would want is just to stop something from being reactive. If I know that when I define const myComputedProp = computed(()=>{}) I can then stop it with just myComputedProp.stop(), then tomorrow when I have const myEffect = effect(()=>{}) I will probably first try myEffect.stop(). And as I think more about it, maybe stop is not the best name, maybe as mentioned above dereference or deref, or dereact ( as of de-React * badum-ts * ), jokes aside I think a standardized API would make it more accessible Cheers Problem is that weak references are a recent JS addition.
They won't work in pre-2020 browsers, won't work in either Safari or Opera, and it's not something you can polyfill. So I suppose they're out of the question here. Maybe we should rethink this. Safari now supports this as well. Now that IE11 support has been dropped, that only leaves Opera. WeakRef provides a native solution for cleaning up. Notice that it also provides a solution for the current problem where async-created computeds will leak even when they are created from within a component. I actually tested a basic implementation by simply replacing the Dep set (Set<ReactiveEffect>) by Set<WeakRef<ReactiveEffect>> (https://github.com/Planning-nl/vue-next/commit/701e4e06ff4d67d5ab941da582b216c77ef307d0). I found little to no performance impact, and it provides memory safety with little to no work, which is a huge asset! It may also affect the need for the effectScope PR, as its primary motivations are: If you'd like to make your own framework. For example, @vue/lit, it does not handle effects in the component instance lifecycles, which will cause mem leakage if you try to mount and unmount the sub-component multiple times. In order to solve this, you will need to implement recordInstanceBoundEffect for your framework, which is not an easy & simple task from the outside. And also they will lose the ability to be reused interchangeably with Vue's ecosystem. Clean up side-effects, preventing mem leak.
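For illustration, here is a rough sketch of the Set<WeakRef<effect>> idea from the linked commit. This is not Vue's implementation -- just a plain-JavaScript toy showing the mechanics (it requires a runtime with WeakRef, i.e. 2020+ engines), with all names being illustrative:

```javascript
// A dep that holds its subscribers only weakly: dropping the last
// strong reference to an effect elsewhere is enough for the GC to
// reclaim it, and the dead slot is pruned on the next trigger.
class WeakDep {
  constructor() { this.subs = new Set(); }
  depend(effect) { this.subs.add(new WeakRef(effect)); }
  trigger() {
    for (const ref of this.subs) {
      const effect = ref.deref();
      if (effect) effect();          // still alive: notify it
      else this.subs.delete(ref);    // collected: prune the stale slot
    }
  }
}

const dep = new WeakDep();
let calls = 0;
const effect = () => { calls++; };
dep.depend(effect);
dep.trigger();   // `effect` is strongly referenced here, so it runs
dep.trigger();
// Whether a ref is ever actually collected depends on the GC, which
// cannot be forced portably -- the point is only that the dep itself
// holds no strong reference, so no explicit cleanup call is needed.
```

This also mirrors the caveat in the discussion: the behaviour is correct but non-deterministic, since reclamation timing is entirely up to the garbage collector.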
Let's close this PR because of the following reasons:
- computeds can be stopped by wrapping them in an effectScope (https://github.com/vuejs/vue-next/pull/2195), and stopping that
- for convenience and to avoid having to create effectScopes, we could add a stop method to the computed class which simply invokes this.effect.stop(), but that might be better suited for a separate PR (or @yyx990803 could just commit it to 3.2 directly)
- on second thought, cleaning up references (which is an edge case) can be done manually by re-creating the same computed and exposing it wrapped in a ref
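To make the performance motivation in this thread concrete, here is a minimal self-contained sketch (plain JavaScript -- these are NOT Vue's actual Dep/computed classes, all names are illustrative) of how dereferencing a stale computed stops it from being triggered while leaving it fully usable:

```javascript
// Toy dependency: notifies every subscriber on trigger. Thousands of
// stale subscribers would make every trigger slow -- the problem the
// scroller example describes.
class Dep {
  constructor() { this.subs = new Set(); }
  depend(sub) { this.subs.add(sub); sub.deps.add(this); }
  trigger() { this.subs.forEach(s => s.update()); }
}

class ComputedStub {
  constructor(getter) {
    this.getter = getter;
    this.deps = new Set();
    this.dirty = true;
    this.value = undefined;
    this.staleUpdates = 0;
  }
  update() { this.dirty = true; this.staleUpdates++; }
  get(dep) {
    if (this.dirty) {
      dep.depend(this);            // (re)collect the dependency on read
      this.value = this.getter();
      this.dirty = false;
    }
    return this.value;
  }
  // The cleanup idea from this PR: drop all dep links plus the cached
  // value; the computed stays usable and re-collects deps on next read.
  cleanup() {
    this.deps.forEach(d => d.subs.delete(this));
    this.deps.clear();
    this.dirty = true;
    this.value = undefined;
  }
}

const base = new Dep();
let n = 1;
const double = new ComputedStub(() => n * 2);

const first = double.get(base);   // 2; collects `base` as a dependency
base.trigger();                   // marks `double` dirty (one stale update)
double.cleanup();                 // dereferenced: `base` forgets `double`
base.trigger();                   // no longer reaches the cleaned computed
n = 5;
const second = double.get(base);  // 10; dependencies recovered on read
```

After cleanup() the dep holds no reference to the computed, so a trigger no longer reaches it, and reading it again transparently re-collects the dependency -- the behaviour the PR description claims.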
gharchive/pull-request
2020-10-16T08:08:12
2025-04-01T06:40:53.302074
{ "authors": [ "Valter4o", "antfu", "basvanmeurs", "jods4" ], "repo": "vuejs/vue-next", "url": "https://github.com/vuejs/vue-next/pull/2389", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
187938964
hashbang backwards compatibility? Is it somehow possible to use the old hashbang behaviour with vue-router 2.x? For our application we need the URLs to be with #! instead of just # in hash mode. I don't see why you need to use a deprecated practice but you can use history mode and set a base of /#!/ as a workaround. I doubt we add back support for it. I didn't know about this workaround. Seems to work just fine. Thanks for your help! Now I am updating vue-router 1.x to vue-router 2, and I have the same problem: I want URLs to have #!/ in hash mode, but I cannot find the config option. The doc is here: https://router.vuejs.org/en/api/options.html#base history mode with /#!/ does not seem to work properly when serving a sub-path web app, such as /appA/#!/ or /appB/#!/. I found an easy way to keep v1 & v2 route paths compatible in hash mode. Just write an inline script before vue-router is loaded:

```javascript
var hash = location.hash;
if (hash && hash.indexOf('#!') === 0) {
  location.hash = hash.slice(2);
}
```

The old v1 route path will work fine in v2.
gharchive/issue
2016-11-08T09:17:54
2025-04-01T06:40:53.307660
{ "authors": [ "Mr-Hero", "aftdotleo", "pehbehbeh", "posva" ], "repo": "vuejs/vue-router", "url": "https://github.com/vuejs/vue-router/issues/885", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
280238326
testing event props Say we can test a component like this: <someComponent @wasUpdated="someFunction" /> How should the emitted event be tested? Based on the Vuex example found in the docs, I tried the following and it didn't work: const propsData = { onWasUpdated: jest.fn() } const wrapper = mount(someComponent, { propsData }) expect(propsData.onWasUpdated).toHaveBeenCalled() }) Is this currently possible? If so, how should it be done? Thanks For questions like this, please use StackOverflow 🙂. The issue tracker is reserved for bugs and feature requests. You can't add a click handler to a component, you need to add it to an element inside the component. This can definitely be done: <someComponent @wasUpdated="someFunction" /> I use it all the time to emit actions back up to the parent component. My issue was asking whether this is currently testable (couldn't find how in the documentation), if not I'd like to file a feature request. Sorry @alidcastano , I misread your example as a click handler ☺️. You could find the component with the handler in the parent component and emit on the instance: find(someComponent).vm.$emit('wasUpdated') This will trigger the someFunction method. There are a few ways to test that someFunction was called. You can also test that someComponent emitted the someFunction method inside the component using the emitted method. What feature would you like to request? In my specific situation, the method is not emitted directly by the nested component. For example: <rootComponent @wasUpdated="someFunction" /> <parentComponent @wasUpdated="someFunction" /> // child component this.$emit('wasUpdated') I'll try finding the nested component and see if I can trigger the emit. But ideally, this is the feature I had in mind: const wrapper = mount(someComponent, { propsData: { onWasUpdated: jest.fn() // jsx syntax for @wasUpdated } }) expect(wrapper.vm.wasUpdated).toHaveBeenCalled()
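Since the thread is about wiring @wasUpdated to a parent handler, here is a tiny self-contained sketch of the pattern under test. Note this uses a hypothetical stand-in emitter, not Vue or vue-test-utils -- in a real test you would use wrapper.emitted() or find(someComponent).vm.$emit(...) as suggested above:

```javascript
// Minimal stand-in for a component's $emit/$on (this is NOT Vue --
// just enough plumbing to show the parent-listener pattern).
function makeEmitter() {
  const listeners = {};
  return {
    $on(event, fn) { (listeners[event] = listeners[event] || []).push(fn); },
    $emit(event, ...args) { (listeners[event] || []).forEach(fn => fn(...args)); },
  };
}

// "Parent" wires @wasUpdated="someFunction"; "child" emits it.
let receivedPayload = null;
let callCount = 0;
const someFunction = payload => { receivedPayload = payload; callCount++; };

const child = makeEmitter();
child.$on('wasUpdated', someFunction);  // roughly what @wasUpdated sets up
child.$emit('wasUpdated', { id: 1 });   // roughly what this.$emit(...) does
```

The assertion shape then mirrors the Jest snippets above: check that the handler ran and with which payload.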
gharchive/issue
2017-12-07T18:50:49
2025-04-01T06:40:53.318477
{ "authors": [ "alidcastano", "eddyerburgh" ], "repo": "vuejs/vue-test-utils", "url": "https://github.com/vuejs/vue-test-utils/issues/239", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
361679463
ReferenceError: requestAnimationFrame is not defined Version 1.0.0-beta.20 Reproduction link https://jsfiddle.net/50wL7mdz/734408/ Steps to reproduce mocha/chai test a .vue file that contains requestAnimationFrame default eslint config ReferenceError: requestAnimationFrame is not defined What is expected? No such error. What is actually happening? ReferenceError: requestAnimationFrame is not defined package.json (including eslint config) { "name": "xxx", "version": "0.1.0", "private": true, "scripts": { "serve": "vue-cli-service serve", "build": "vue-cli-service build", "lint": "vue-cli-service lint", "test:unit": "vue-cli-service test:unit", "test:e2e": "vue-cli-service test:e2e" }, "dependencies": { "@fortawesome/fontawesome-svg-core": "^1.2.4", "@fortawesome/free-solid-svg-icons": "^5.3.1", "@fortawesome/vue-fontawesome": "^0.1.1", "@tweenjs/tween.js": "^17.2.0", "axios": "^0.18.0", "echarts": "^4.2.0-rc.1", "element-ui": "^2.4.7", "normalize.css": "^8.0.0", "vue": "^2.5.17", "vue-router": "^3.0.1", "vuex": "^3.0.1" }, "devDependencies": { "@vue/cli-plugin-babel": "^3.0.3", "@vue/cli-plugin-e2e-cypress": "^3.0.3", "@vue/cli-plugin-eslint": "^3.0.3", "@vue/cli-plugin-unit-mocha": "^3.0.3", "@vue/cli-service": "^3.0.3", "@vue/eslint-config-standard": "^3.0.3", "@vue/test-utils": "^1.0.0-beta.20", "chai": "^4.1.2", "node-sass": "^4.9.0", "sass-loader": "^7.0.1", "vue-template-compiler": "^2.5.17" }, "eslintConfig": { "root": true, "env": { "node": true }, "extends": [ "plugin:vue/essential", "@vue/standard" ], "rules": {}, "parserOptions": { "parser": "babel-eslint" } }, "postcss": { "plugins": { "autoprefixer": {} } }, "browserslist": [ "> 1%", "last 2 versions", "not ie <= 8" ] } I have recently been experiencing a similar issue - I even got as far as creating a repro repo here but you beat me to the bug report. In my case, I'm using typescript with mocha+chai in addition to vuetify (repo created with @vue/cli v3).
My instinct tells me that we're just missing a polyfill or a shim somewhere but despite my best efforts, I've not been able to reliably find a fix for this myself. @bugsduggan I still don't know how to fix it.A lot of information has been searched, but no solution has been found :( The issue is that jsdom-global doesn't add requestAnimationFrame to the global object. The immediate fix is to add requestAnimationFrame to the global object before your tests run: global.requestAnimationFrame = cb => cb() To include requestAnimation with jsdom, you must pass in the pretendToBeVisible option. const window = (new JSDOM(``, { pretendToBeVisual: true })).window; I created a PR in vue-cli to get this added to the mocha unit plugin—https://github.com/vuejs/vue-cli/pull/2573 @eddyerburgh I still see this issue and I am making an assumption that the merge PR you linked to is now released. Is there anything I need to do specifically to use the functionality you added in the PR? Anything I need to set in my configuration? Quite possible I am seeing an issue with the same symptom but different cause, but wanted to double check. Hi @JamesMcMahon, sorry that you're experiencing this issue. Can you open an issue in the vue-cli repo. That's the repo where the fix was made to Hi @eddyerburgh . I still see this issue when trying to run my tests. I am using mocha/chai, "@vue/cli-plugin-unit-mocha": "^4.0.5", @vue/cli-service": "^4.0.5" and "@vue/test-utils": "1.0.0-beta.29". I still don't know how to fix this issue. Can you help me to figure out the source of the problem, and a way to be able to fix it? Just a note this could happen in ts projects too, and to resolve this problem you have to define the global function in runtime. 
tests/setup.ts: const requestAnimationFrame = (fn: Function) => fn(); globalThis.requestAnimationFrame = requestAnimationFrame; And make sure that the setup.ts file is required before the tests run, via the npm script in package.json: "test": "vue-cli-service test:unit --include ./tests/setup.ts --recursive ./__tests__/", I'm using Mocha+Chai for my testing, and Vuetify for my component library. To fix this issue I just added the following to the top of my component.spec.js file global.requestAnimationFrame = function () {} I was having this issue when I added a Vuetify v-text-field component in the template of my Vue file I'm using versions (not sure all packages are relevant), "devDependencies": { ... "@vue/cli-plugin-unit-mocha": "~4.2.0", "@vue/cli-service": "~4.2.0", "@vue/test-utils": "1.0.0-beta.31", "chai": "^4.1.2", "vue-cli-plugin-vuetify": "~2.0.5", "vue-template-compiler": "^2.6.11", "vuetify-loader": "^1.3.0" ... } Do want to say that I'm new to Vue and frontend testing so I don't have JSdom installed for my testing (mainly because I just heard of it and I assumed this was for while using JEST test runner) the loop is event based, based on a particular wavelength or something, I think it's related to electricity pulses. otherwise you could have user input with a mouse which has x and y as input and when it moves it executes the code, but if you were to make a loop without input such as raf, it would have to go into stable diffusion
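Pulling the suggestions above together, a minimal framework-agnostic setup stub might look like this. It is a sketch: the synchronous callback differs from a real frame callback (which fires asynchronously with a high-resolution timestamp), but that is usually fine for unit tests:

```javascript
// Test-setup stub for environments (e.g. jsdom without
// `pretendToBeVisual`, or plain Node) that lack requestAnimationFrame.
// Installed only when missing, so it is safe to load unconditionally.
if (typeof globalThis.requestAnimationFrame !== 'function') {
  globalThis.requestAnimationFrame = cb => { cb(Date.now()); return 0; };
  globalThis.cancelAnimationFrame = () => {};
}

let frames = 0;
requestAnimationFrame(() => { frames++; });  // fires immediately with this stub
```

Load it before the specs (e.g. via mocha's `--file`/`--include` flag as shown above) so any component that schedules a frame during mount finds the global defined.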
gharchive/issue
2018-09-19T10:23:15
2025-04-01T06:40:53.331799
{ "authors": [ "JamesMcMahon", "bugsduggan", "campbellgoe", "eddyerburgh", "marklytonh", "nargeszmn", "xinde", "zzhenryquezz" ], "repo": "vuejs/vue-test-utils", "url": "https://github.com/vuejs/vue-test-utils/issues/974", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
218188528
Cannot set property 'exports' of undefined I've recently started getting this error with vue-touch. I'm using webpack 2 / vue 2. It works fine in development, but when I create my production build, I get this error. It looks like the module variable (renamed to t by webpack's uglifier) is being overwritten in the main anonymous method's global. I suppose that a simple global 'use strict' somewhere in the application would prevent the global object being passed in. I think webpack is forcing strict mode because when I removed the -p flag (but kept the uglifier etc. switched on) it works fine. ... Is there any solution to this? I'm getting the same error. Removing -p from the webpack command helps, but this is not optimal. @MikaelEdebro this was quite a while ago but I think I got around this in the end by using vue-touch/dist/vue-touch.min.js rather than just vue-touch @jackmellis tried vue-touch/dist/vue-touch.min.js and it has the same behavior
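For background, this class of crash typically comes from UMD-style wrappers that pass `this` as the global object: at the top level of an ES module (and in strict mode) `this` is undefined, so when the bundler rewrites it the wrapper ends up reading `.exports` off `undefined`. Here is an illustrative sketch of a global lookup that avoids relying on `this` -- this is NOT vue-touch's actual wrapper, just the general pattern:

```javascript
// Classic broken pattern:
//   (function (global, factory) { global.module.exports = ... })(this, ...)
// where `this` is undefined in a strict-mode/ESM context. A detection
// that never touches `this`:
function detectGlobal() {
  if (typeof globalThis !== 'undefined') return globalThis; // modern engines
  if (typeof self !== 'undefined') return self;             // browsers/workers
  if (typeof global !== 'undefined') return global;         // Node
  throw new Error('unable to locate global object');
}

const root = detectGlobal();
// A UMD wrapper would then attach its export here instead of on `this`,
// e.g. root.VueTouch = factory();
```

This is essentially why the issue only shows up once `-p` forces the minified/strict build: the non-strict build still receives a real global as `this`.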
gharchive/issue
2017-03-30T12:58:31
2025-04-01T06:40:53.335881
{ "authors": [ "MikaelEdebro", "aziev", "jackmellis" ], "repo": "vuejs/vue-touch", "url": "https://github.com/vuejs/vue-touch/issues/75", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
114156007
[suggestion] Add a transition mode for transition on list (v-for) Hi, I have a list of components which are rendered via an array prop like so: <child-component is="example" v-for="item in myArrayProp" transition="bounce" stagger="200"> </child-component> Fact is myArrayProp is already filled from the beginning and I would like to transition (bounce-in) the list at the beginning for the first time. I could do it by toggling some class myself but I'll lose the benefit of the stagger effect. To achieve it, I found no other way than to dirty-hack it by flushing the prop at the beginning and refilling it after nextTick. So why not give the option to trigger the "enter" transition for the first time? Thanks Yeah, I've been thinking about this, could be useful. Glad to hear it will eventually be implemented in future versions. Thanks I believe this should not take the name of transition-mode, because this would clash with the existing meaning of transition-mode. This is implemented in 2.0 as the appear prop. Although we'd like to backport it to 1.x, this is a non-critical change that requires non-trivial effort. Given the bandwidth we have, we are reducing the scope of 1.1 to a number of critical features and low hanging fruits, so unfortunately this will not be implemented for 1.x. It's been 2 years now. Is there any chance of implementing this on Vue 2? Did you read the comment just above yours? 😆 🤦‍♂️, Sorry. I am a terrible reader.
gharchive/issue
2015-10-29T21:55:31
2025-04-01T06:40:53.340689
{ "authors": [ "AlexandreBonaventure", "posva", "simplesmiler", "the94air", "yyx990803" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/1654", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
156681673
[Bug Report] Template conflicted between multi components when dynamically changing template Vue.js version 1.0.24 Reproduction Link here is the code: http://codepen.io/hectorguo/pen/bebEoV Short Code: // When I change template like this, it goes not right when multi `<ad-time>` are used in one page. Vue.component('ad-time', { created: function() { this.$options.template = NEW_TEMPLATE; } } What is Expected? All three components should show different result depending on the format attribute. What is actually happening? Now you can see, all three components show the same template (same as the first component) result. And if you delete the first example, it will show the second example's template. I'm sure that each component's $options.template is right. So I am not sure if i could change the template dynamically like this. No, you can not change the template like this. You should consider anything that is inside $options to be read-only and non-reactive. If you need template to depend on data, then make multiple components with different template and use dynamic :is="type" to pick the one you need for particular data.
gharchive/issue
2016-05-25T07:32:28
2025-04-01T06:40:53.345043
{ "authors": [ "hectorguo", "simplesmiler" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/2951", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
172692653
Sorting components produces duplicate nodes in the dom Thank you so much for this awesome framework. I've only been working with the framework for a couple of months now, and still unsure of all of the limitations. This looks like a limitation of the component system. Vue.js version 1.0.26 Reproduction Link https://jsfiddle.net/CristianGiordano/cdLp4uz2/ Steps to reproduce Move a person from one list to another (The list component is re-rendered alongside the moved dom node) What is Expected? No duplication of elements. Here is a working version (same code) without a person component. https://jsfiddle.net/CristianGiordano/ghwgzgtx/ What is actually happening? It looks like VueJS cannot re-use existing nodes or find the node which was moved. I have tried adding a track-by id with the relevant object properties. I have tried SortableJS and Dragula sorting libraries with the same issue. p.s. Apologies if this is not the right forum to raise such an issue. It works when you remove the element: https://jsfiddle.net/Linusborg/cdLp4uz2/8/ (JS Line 54) The reason that it does not work is that the dropped element somehow loses the connection to the component instance (it's usually saved on the element in an attribute called __vue__), and I think Vue can't do much about it. Ok cool thanks. I thought it was as such but thought I could be doing something wrong. Thanks for your time :)
gharchive/issue
2016-08-23T13:00:51
2025-04-01T06:40:53.349682
{ "authors": [ "CristianGiordano", "LinusBorg" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/3503", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
179562384
Not importing correctly with Rollup Vue.js version 2.0.0-rc.7 Reproduction Link http://vue-rollup-undefined-this.surge.sh Steps to reproduce https://github.com/danreeves/vue-rollup-undefined-this-bug-report What is Expected? Vue should init What is actually happening? From Rollup's mouth: "The this keyword is equivalent to undefined at the top level of an ES module, and has been rewritten". A quick fix to test this out is to replace any calls to this with Vue.prototype, and it starts working. Let me check it. I got vue + rollup working on multiple projects Vue is a constructor. You have to call it with new I'm such an idiot! Thanks The two inline comments in the createComponentInstanceForVnode method cause a Rollup error; please adjust the position of the comments. Could the comments for this method be moved above the method signature? Thanks. node_modules\vue\dist\vue.runtime.esm.js function createComponentInstanceForVnode ( vnode, // we know it's MountedComponentVNode but flow doesn't parent // activeInstance in lifecycle state ) {
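To illustrate posva's answer above ("Vue is a constructor. You have to call it with new"), here is a small stand-in demo. FakeVue is hypothetical: an ES class throws a TypeError when invoked without new, while Vue 2's function-style constructor logs a warning instead -- but either way new is required:

```javascript
// Stand-in for the mistake in the bug report: invoking a constructor
// as a plain function.
class FakeVue {
  constructor(options) { this.options = options; }
}

let threw = false;
try {
  FakeVue({ el: '#app' });   // missing `new` -- throws for a class
} catch (e) {
  threw = e instanceof TypeError;
}

const app = new FakeVue({ el: '#app' }); // correct usage
```

The "undefined this" Rollup warning in the original report is a red herring here; the actual failure was the missing `new`.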
gharchive/issue
2016-09-27T17:39:15
2025-04-01T06:40:53.354192
{ "authors": [ "danreeves", "posva", "standino" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/3790", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
236633068
Sync v-model during IME composition What problem does this feature solve? http://vuejs.org/v2/guide/forms.html#Basic-Usage For languages that require an IME (Chinese, Japanese, Korean etc.), you’ll notice that v-model doesn’t get updated during IME composition. <p>msg</p><input v-model="msg"> is OK, but type="search" is Not OK <p>msg</p><input type="search" v-model="msg"> What does the proposed API look like? For languages that require an IME (Chinese, Japanese, Korean etc.), v-model (type="search") doesn’t get updated during IME composition. This behavior is intentional because syncing during composition often leads to awkward UX. What's the use case for this? example: https://jsfiddle.net/xiaohan1219/k9qbs068/ That's not a use case... what I am asking is why do you want the in-composition string to show up. I don't want the in-composition string to show up (when using type="search"). My English is quite poor; the problem is described in the screenshot. I think a search box, being an input, should have the same typing experience as a text input. I haven't yet understood the code involved here; Vue apparently handles this, but clearly handles it differently, which leads to these two different display results. ok I get it now... @yyx990803 Android has the same problem, why not handle composition events? Just stumbled on this issue myself. Handling composition events in an MVC way does not seem to be doable with current browser behavior. It is a breaker for a live search/live autocompletion functionality. As of now, the best way for me to handle this was to make a form with an input element and re-read it on every compositionupdate event. This seems to be the only hack that works with UC browser. For some reason, UC does update the inputs inside forms on composition, yet it gives undefined as event data... It would be very good if there were a standard way, set by upstream, for how to handle that.
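For reference, the gating v-model applies during IME input can be sketched roughly like this (simplified and illustrative -- not Vue's actual listener code): input events fired mid-composition are ignored, and the value is synced once on compositionend:

```javascript
// Toy model of IME-aware input handling: `onSync` plays the role of
// the v-model assignment. Events are plain objects here so the sketch
// runs without a DOM.
function makeComposingModel(onSync) {
  let composing = false;
  return {
    compositionstart() { composing = true; },
    compositionend(e) { composing = false; onSync(e.target.value); },
    input(e) { if (!composing) onSync(e.target.value); },
  };
}

const synced = [];
const model = makeComposingModel(v => synced.push(v));

model.input({ target: { value: 'a' } });            // plain typing: synced
model.compositionstart();
model.input({ target: { value: 'a n' } });          // mid-IME: ignored
model.input({ target: { value: 'a ni' } });         // mid-IME: ignored
model.compositionend({ target: { value: 'a你' } }); // committed text: synced
```

This also shows why "sync during composition" is a deliberate trade-off: lifting the gate would push every intermediate IME candidate string into the model, which is exactly the awkward UX the maintainer mentions.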
gharchive/issue
2017-06-17T02:26:47
2025-04-01T06:40:53.360066
{ "authors": [ "baybal", "cloudyan", "yyx990803", "zuibunan" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/5902", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
243280237
Native Events/Functional Components: Event.stopImmediatePropagation does not cancel event loop. Version 2.4.1 Reproduction link https://jsfiddle.net/alexsasharegan/ojymho07/ Steps to reproduce Create a functional component: create an event handler that calls Event.stopImmediatePropagation put the context's event handler in an array with the component's handler, but place the component's handler at the first index. (Allowing the component's handler the first chance to stop propagation) Using this functional component in a parent component, bind a listener to the same event (e.g. "click") Execute the event What is expected? The functional component's handler should be called first, since it is first in the array The functional component's handler should execute the stopImmediatePropagation call, and kill the event loop. What is actually happening? Execution order is preserved correctly. The parent component's handler is still being invoked (but additional handlers bound through EventTarget.addEventListener are not). My use case starts at creating a link component that will respect a disabled property, but continues on to various other components to create a cohesive UI component API. It would be beneficial to have some way to cancel the Vue event loop. I don't know my way around the source well enough, but I imagine that Vue internally tracks all event handlers for a given node, intercepts the native event first, then invokes all callbacks. This is why natively bound callbacks do not get invoked, but callbacks bound by Vue are invoked. That's because Vue is wrapping all of those functions in one function so it only has to register 1 instead of n listeners. https://github.com/vuejs/vue/blob/dev/src/core/vdom/helpers/update-listeners.js#L26 You should simply do the same - wrap it all in your function instead of pushing your own into the array.
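Following LinusBorg's suggestion, here is a sketch of wrapping all handlers in a single invoker with your own stop flag. It is illustrative only -- loosely inspired by, not copied from, the linked update-listeners.js:

```javascript
// One invoker owns the handler array, so an earlier handler can halt
// the later ones -- the behaviour the reporter expected from
// stopImmediatePropagation inside a handler array.
function makeInvoker(handlers) {
  return function invoker(event) {
    let stopped = false;
    const wrappedEvent = {
      ...event,
      stopImmediatePropagation() { stopped = true; },
    };
    for (const handler of handlers) {
      if (stopped) break;
      handler(wrappedEvent);
    }
  };
}

const calls = [];
const invoker = makeInvoker([
  e => { calls.push('component'); e.stopImmediatePropagation(); },
  e => { calls.push('context'); },  // never runs: the loop checks the flag
]);
invoker({ type: 'click' });
```

Registering `invoker` as the single listener mirrors Vue's own strategy of binding one function instead of n, while giving you full control over early termination.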
gharchive/issue
2017-07-17T01:17:02
2025-04-01T06:40:53.366188
{ "authors": [ "LinusBorg", "alexsasharegan" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/6130", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
248564456
[bug] conditional change of type attribute not working? Version 2.4.2 Reproduction link https://jsfiddle.net/trusktr/50wL7mdz/52198/ Steps to reproduce Try to type in the text field. It will keep clearing your input every second. What is expected? It should toggle between showing and hiding the password. What is actually happening? It is destroying the input (or clearing its value) each time the value of hidden changes. Using v-if/v-else also doesn't work as would be intuitively expected: https://jsfiddle.net/trusktr/50wL7mdz/52199 Why is it not dom-diffing and applying the change only to the attribute value like one would expect? Here's the same thing, broken, using a self-closing input tag: https://jsfiddle.net/trusktr/50wL7mdz/52202/ Here's a fiddle showing that it works with vanilla HTML/JS: https://jsfiddle.net/trusktr/50wL7mdz/52205/ This means that if Vue is doing dom diffing like it claims, then this issue should not exist. @syropian suggested to add a dummy v-model as a workaround, which works, but is unintuitive and wasteful: https://jsfiddle.net/50wL7mdz/52203 @trusktr, because the type attribute on input is very special, this is expected behaviour without v-model. You cannot carry the same value around (think of input[type="file"] or even input[type="number"]). @nickmessing That's not intuitive, please fix. This is the sort of thing that should work in Vue because it works in the browser. Sure, switching from password to file won't work, and in that case you should expect a re-render. But switching to text should work. Vue can easily make this intuitively easy. (Isn't it a framework like Vue's goal to make these things easy?). Do you not see the fact that someone telling me to unintuitively use v-model as a workaround is a problem? @trusktr, Well, re-rendering diffs indeed, let's imagine a simple scenario. Step 1. First render, DOM looks like: <input type="text" /> Step 2.
You add some text, DOM looks like (consider that value is describing a domProp and not an attr here) <input type="text" domProp-value="asdasd" /> Step 3. You change hidden so now vue should calculate the diff and apply it between these two: <input type="text" domProp-value="asdasd" /> <input type="password" /> Result: value gets removed. This is intuitive if you understand vue's rendering. It does remove value because there is no value in the new input. v-model helps only because it is replaced by :value="str" @input="str = $event.target.value" and as you see we have value. That example isn't really applicable to my case, and you also mentioned some implementation details that end users don't need to care about. The case is simple: if the only thing changing is type password to type text or vice versa, then you can optimize by DOM diffing as expected in that case, and end users won't run into unexpected behavior. Another way to put it is: if I can simply change from type password to type text when using vanilla DOM+JS, then it should be as simple in Vue too, because it is an extension of what we expect with vanilla DOM. Honestly this is good enough reason to make this optimization on your end, in this case and any other similar cases, and make it just work. I've updated the title to remove the question mark. Even with the v-model hack, DOM diffing is not working. Vue is destroying the existing input element and creating a whole new one (rather than just modifying the type attribute). This is REALLY bad. This destroys interoperability with outside code. For example, bootstrap validator stops working because it will have a reference to a no-longer-existing input element. If Vue is supposed to be incrementally adoptable, and interoperable with outside code (f.e. jQuery plugins) then this simply is not acceptable.
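nickmessing's three steps above can be condensed into a toy patch function (purely illustrative -- Vue's real patching is far more involved) that shows exactly why the typed value disappears:

```javascript
// Minimal sketch of the diff step described above: properties present
// on the old vnode but absent from the new one are cleared, then the
// new props are applied.
function patchProps(el, oldProps, newProps) {
  for (const key of Object.keys(oldProps)) {
    if (!(key in newProps)) el[key] = '';  // dropped in the new render
  }
  for (const [key, val] of Object.entries(newProps)) {
    el[key] = val;
  }
  return el;
}

// Step 2 -> step 3: the user typed "asdasd", then the template
// re-renders with a new type and no value binding.
const input = { type: 'text', value: 'asdasd' };
patchProps(input, { type: 'text', value: 'asdasd' }, { type: 'password' });
// input.value is now '' -- the "value gets removed" outcome, which is
// why binding :value (via v-model) keeps the text through the toggle.
```

With a `value` key present in both old and new props, the same function would carry the text across the type change -- the optimization the reporter is asking for.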
@nickmessing If Vue is always dom-diffing like it claims, here's a fiddle that should work: https://jsfiddle.net/trusktr/50wL7mdz/52661/ Start typing in the input field, and you will see the values logged to console. Then, when you toggle the show/hide of the password, if Vue were dom-diffing properly, it would be changing only the type attribute of the input. If this were the case, then you'd continue to see output in the console. But this isn't the case because now there's no input in the console. Each time the show/hide is toggled, Vue replaces the entire input element with a new one. This broke the simple logValues tool, and it will lead to code that will leak unused DOM into memory. Here's the same example showing that dom-diffing is working as expected with React: https://jsfiddle.net/trusktr/69z2wepo/84211/ @trusktr, It indeed makes sense to change the type only when it goes from one text-like input to another. Vue was never claiming to be interoperable with outside code. Rather than that, vue guarantees to keep the actual dom in sync with the virtual dom when using its reactivity system, and it's impossible to predict how it's going to do that since the internal API can and will change over time. This is not a bug since it's keeping the actual dom in sync with the virtual dom defined by the template. It's probably sub-optimal to replace the whole input when switching from one text-like type to another and I will try to optimize that, but Vue was never designed to "be friendly" with jQuery plugins.
sub-optimal to replace whole input when switching from one text-like to another It's probably suboptimal to ever replace the input no matter what the type transition. When changing type from text to file, DOM changes value to empty string. I don't think a user is going to try that though. If they do, they can simply use either (or both) v-on:input and v-on:change as needed. They can make that choice the same way they would with vanilla DOM. In any case, I don't think there's any need to replace the input element. Here's a fiddle that shows input and change events with type change from text to file: https://jsfiddle.net/trusktr/02gvxyLv/1/ I think it is the interest of a tool to work with existing standardized DOM functionality, not modifying expectations. FYI this is an intentional mechanism for dealing with <input> with v-model bindings + different type bindings. When you toggle between, say type="text" and type="checkbox", the event/value bindings (generated at compile time) would be different and it is more straightforward to replace the element. Toggling between text and password is a use case that's not been considered before and is easy to fix. Hi, I am using Vue 3.8.4 and this exact behavior it still happening to me: When text is typed into the input itself, its setting the value of the type property to whatever is typed into the input field
gharchive/issue
2017-08-08T00:05:16
2025-04-01T06:40:53.388059
{ "authors": [ "AdamBD", "nickmessing", "trusktr", "yyx990803" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/6313", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
327760954
Transition behavior is different when an uncalled done is provided in hook method Version 2.5.16 Reproduction link http://jsbin.com/wotazalama/1/edit?html,css,js,output Steps to reproduce In the demo link, click toggle to show and hide the element. What is expected? The two boxes are configured with <transition>, differing only in the enter and leave hooks. One is declared as enter(el){...} and leave(el){ ... }. The other one is declared as enter(el, done){...} and leave(el, done){ .... }. Both functions do the same thing and the done callback is not called. It is expected that the two boxes enter and leave the same way. What is actually happening? The box hooked with enter(el, done){...} and leave(el, done){...} transitions properly. The box hooked with enter(el){...} and leave(el){...} does not have a leave transition. This is a re-submit of #8275; I had messed up that demo. This one should properly demo the case. It also said // the done callback is optional when // used in combination with CSS In the demo case, the transition is actually a CSS one. The js hook part only sets a max-height which can only be determined using script (el.scrollHeight). I looked into some source code; the function's length does have an effect. From what I read, if done is not provided, it will try to detect the transition end in CSS. Most interestingly, the behavior is linked with both enter and leave. enter(el) + leave(el) ==> ❌ leave transition not run enter(el, done) + leave(el) ==> ✔️ leave transition run enter(el) + leave(el, done) ==> ❌ leave transition not run enter(el, done) + leave(el, done) ==> ✔️ leave transition run // the done callback is optional when // used in combination with CSS "optional" does not mean it's optional to call it. It's optional to add it. If you add the argument however, you have to call done(). If you don't add it, Vue knows that it can only wait for the CSS to finish.
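The fn.length check the reporter found can be sketched like this. It is a simplified stand-in: the "CSS end detection" fallback here is just a synchronous call, whereas Vue actually listens for transitionend/animationend events:

```javascript
// If the hook declares `done` (fn.length > 1), the hook owns the end
// timing and must call done(); otherwise the runner falls back to
// auto-detection (simulated synchronously here).
function runEnterHook(hook, el, onEnd) {
  const expectsDone = hook.length > 1;
  if (expectsDone) {
    hook(el, onEnd);   // user must invoke done() -> onEnd
  } else {
    hook(el);
    onEnd();           // stand-in for CSS transition-end detection
  }
}

const log = [];
// declares done and calls it:
runEnterHook((el, done) => { log.push('explicit'); done(); }, {},
             () => log.push('end:explicit'));
// declares done but never calls it -> the end callback never fires:
runEnterHook((el, done) => { log.push('forgotten'); }, {},
             () => log.push('end:forgotten'));
// no done declared -> end is auto-detected:
runEnterHook(el => { log.push('auto'); }, {},
             () => log.push('end:auto'));
```

The middle case reproduces the bug report: listing `done` in the signature without calling it silently disables the automatic detection, which is the "rule of thumb" the maintainers state above.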
this behaviour of the done parameter is similar to other libraries that provide done callbacks, testing frameworks like mocha, jasmine, jest, ava, etc. pp. @yyx990803 @LinusBorg I'm using CSS transition here, I should not provide the done call to make it work. The example itself is trying to use CSS transition, which is supposed to work with enter(el) and leave(el), but it is not. You can check this, which is what I originally intended to do. It's an accidental discovery that by provide using enter(el, done), the CSS transition is working unexpectedly. I tend to think it's a browser limitation on the event looping. So with CSS transition on, these code will not run the transition: el.style.maxHeight = '50px'; el.style.maxHeight = 0; As the doc says, providing done means you intend to explicitly control the end timing of the transition by eventually calling it at some point, so it disables the CSS-based auto transition end timing detection. Rule of thumb: if you listed done in your arguments you must call it. @yyx990803 I think I'm not making myself clear here. The CSS transition below is not working. It has no done in the parameter. /* css */ .slide-down-enter-active, .slide-down-leave-active { transition: all .5s ease; overflow: hidden; } /* js */ beforeEnter(el){ el.style.maxHeight = '0px'; }, enter(el){ el.style.maxHeight = `${el.scrollHeight}px`; }, afterEnter(el){ el.style.maxHeight = ''; }, beforeLeave(el){ el.style.maxHeight = `${el.scrollHeight}px`; }, leave(el){ el.style.maxHeight = '0px'; }, afterLeave(el){ el.style.maxHeight = ''; } Should I file another issue on this only? Sorry that my description here is confusing. 
@jackysee First, consider the following example:
/* css */
.slide-down-enter-active,
.slide-down-leave-active {
  transition: all .5s ease;
  overflow: hidden;
}
/* js */
// el.style.maxHeight starts at 0
el.style.maxHeight = '100px';
el.style.maxHeight = '0px';
Clearly, running the code above produces no transition effect at all.
For the enter hook (and the same goes for leave): if you do not declare done (the second parameter), the afterEnter hook runs after the transition finishes. If you do declare done: 1) if you call done, the UI change completes immediately once done runs, and then afterEnter is called; 2) if you never call done, afterEnter is not called even after the animation finishes.
Example:
beforeEnter(el){
  el.style.maxHeight = '0px';
},
enter(el){
  el.style.maxHeight = `100px`;
},
afterEnter(el){
  el.style.maxHeight = '';
},
beforeLeave(el){
  el.style.maxHeight = `100px`;
},
leave(el){
  el.style.maxHeight = '0px';
},
afterLeave(el){
  el.style.maxHeight = '';
}
enter run result:
beforeEnter: el.style.maxHeight = '0px';
enter: el.style.maxHeight = `100px`;
// afterEnter runs 0.5s later, so the transition is visible here
afterEnter: el.style.maxHeight = '';
leave run result (initial el.style.maxHeight = ''):
beforeLeave: el.style.maxHeight = `100px`;
leave: el.style.maxHeight = '0px';
// afterLeave runs 0.5s later
afterLeave: el.style.maxHeight = '';
My guess is that when el.style.maxHeight = '' and a transition kicks in, maxHeight is initialized to 0. So the three assignments above take maxHeight through 0 -> 100 -> 0 -> 0, and the time between the first two changes is essentially zero, so no transition is visible.
If the enter hook is changed to the following form:
enter(el, done){
  el.style.maxHeight = `100px`;
}
then afterEnter is never executed.
enter run result:
beforeEnter: el.style.maxHeight = '0px';
enter: el.style.maxHeight = `100px`;
The transition is visible during the 0.5s, and afterEnter does not run afterwards.
leave run result (initial el.style.maxHeight = '100px'):
beforeLeave: el.style.maxHeight = `100px`;
leave: el.style.maxHeight = '0px';
afterLeave: el.style.maxHeight = '';
maxHeight goes from 100px to 0, and the transition is visible over the following 0.5s.
What @yyx990803 said above is exactly right. @jackysee
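The arity distinction discussed in this thread — Vue deciding whether to wait for done() based on whether the hook declares that parameter — can be checked with Function.prototype.length. A minimal sketch of that rule; `expectsDone` is an illustrative name, not Vue's internal API:

```javascript
// Vue chooses between CSS transition-end detection and an explicit done()
// call by looking at how many parameters the hook declares.
function expectsDone(hook) {
  // Function.prototype.length counts declared parameters, so
  // (el) => {} has length 1 and (el, done) => {} has length 2.
  return typeof hook === 'function' && hook.length > 1;
}

const enterCssOnly = (el) => { el.style.maxHeight = '100px'; };
const enterExplicit = (el, done) => { el.style.maxHeight = '100px'; done(); };

console.log(expectsDone(enterCssOnly));  // false → wait for CSS transitionend
console.log(expectsDone(enterExplicit)); // true  → wait for done()
```

This matches the rule of thumb quoted above: once done appears in the hook's signature, CSS-based end-timing detection is disabled and you must call done() yourself.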
gharchive/issue
2018-05-30T14:37:18
2025-04-01T06:40:53.408642
{ "authors": [ "LinusBorg", "jackysee", "leeezhou", "yyx990803" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/8279", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
416744098
Event propagation before component is created and conditional rendering Version 2.6.8 Reproduction link https://codesandbox.io/s/wwoxp636jk Steps to reproduce Add a click listener on window in a child component What is expected? Before Vue 2.6, the click event was not propagated to the child component because it was not created yet. What is actually happening? The click event is now propagated to the child after it is created and mounted. Do I now need to explicitly add the .stop modifier to all my parent click events if I add click listeners on window in my child components? Duplicate of #9616 Also see https://github.com/vuejs/vue/issues/9478 and https://github.com/vuejs/vue/issues/9464
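A workaround pattern sometimes suggested for this class of issue is to ignore events that were already dispatched when the listener was attached, by comparing timestamps (in modern browsers Event.timeStamp and performance.now() use the same clock). A sketch with illustrative names — this is not a Vue API:

```javascript
// Wraps a handler so that events dispatched before `attachedAt`
// (e.g. the click that caused the component to mount) are dropped.
function makeMountGuard(handler, attachedAt) {
  return (event) => {
    if (event.timeStamp <= attachedAt) return; // event predates the mount
    handler(event);
  };
}

// Simulated with plain objects instead of real DOM events:
let handled = 0;
const onWindowClick = makeMountGuard(() => { handled += 1; }, 100);

onWindowClick({ timeStamp: 50 });  // in flight before mount → ignored
onWindowClick({ timeStamp: 150 }); // after mount → handled
console.log(handled); // 1
```

In a component this would mean capturing performance.now() in mounted and wrapping the window listener with the guard, rather than adding .stop to every parent click handler.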
gharchive/issue
2019-03-04T10:55:23
2025-04-01T06:40:53.412512
{ "authors": [ "lucpotage", "posva" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/9615", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1306213314
fix #12666: vue.d.ts: Relativise to v3-component-public-instance What kind of change does this PR introduce? (check at least one) [x] Bugfix [ ] Feature [ ] Code style update [ ] Refactor [ ] Build-related changes [ ] Other, please describe: Does this PR introduce a breaking change? (check one) [ ] Yes [x] No If yes, please describe the impact and migration path for existing applications: The PR fulfills these requirements: [x] It's submitted to the main branch for v2.x (or to a previous version branch), not the master branch [x] When resolving a specific issue, it's referenced in the PR's title (e.g. fix #xxx[,#xxx], where "xxx" is the issue number) [ ] All tests are passing: https://github.com/vuejs/vue/blob/dev/.github/CONTRIBUTING.md#development-setup [ ] New/updated tests are included Other information: I didn't investigate why vue's tsconfig incorrectly resolves an absolute reference to v3-component-public-instance, so I was unable to start with a failing test. The setting that allows types/tsconfig.json to resolve absolute references by mistake is baseUrl: ".". I removed it, although I don't know if it'll break the tests.
gharchive/pull-request
2022-07-15T16:10:49
2025-04-01T06:40:53.418362
{ "authors": [ "sandersn" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/pull/12668", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
188138530
Nested version search index problem @maxiloc sorry to bother again, but as we are planning to merge multiple major versions of the docs under one domain (from vuejs.org+v1.vuejs.org to vuejs.org/v2 and vuejs.org/v1), we realized that Algolia may end up including both versions under the same index, and the search results can be confusing. Basically, our question is - is it possible to make one index only crawl pages under vuejs.org/v1, while the other vuejs.org/v2? Yes, we do have multi-version capabilities. I updated the vuejs config to prepare for multiversion (https://github.com/algolia/docsearch-configs/commit/4b816ab9660c818ecb23bea97870cd0d58dcf52a) You just need to update the snippet <!-- at the end of the HEAD --> <link rel="stylesheet" href="https://cdn.jsdelivr.net/docsearch.js/2/docsearch.min.css" /> <!-- at the end of the BODY --> <script type="text/javascript" src="https://cdn.jsdelivr.net/docsearch.js/2/docsearch.min.js"></script> <script type="text/javascript"> docsearch({ apiKey: '85cc3221c9f23bfbaa4e3913dd7625ea', indexName: 'vuejs', inputSelector: '### REPLACE ME ####', algoliaOptions: { 'facetFilters': ["version:$VERSION"] }, debug: false // Set debug to true if you want to inspect the dropdown }); </script> $VERSION being v2. When you'll have v1 ready, we'll add it to the config. And you can change the $VERSION to v1 for the v1 part of the website Does that make sense? @maxiloc awesome, thanks again! @maxiloc Does this support the Japanese translation? Yes, we can support it as well, but right now http://jp.vuejs.org/v2/api/ does not exist so I cannot move it to the new config. @maxiloc thanks! I don't deploy the latest version (2.0 Japanese translation) yet. I have already set the new configuration. https://github.com/vuejs/jp.vuejs.org/blob/lang-ja-2.0/themes/vue/layout/layout.ejs#L78-L84 Are we OK with the new configuration? Yes, as soon as you deploy, ping me and I'll update the config @maxiloc thanks! @maxiloc deploy done. 
search configuration is the following: configuration for v2 Japanese translation https://github.com/vuejs/jp.vuejs.org/blob/lang-ja/themes/vue/layout/layout.ejs#L78-L84 configuration for v1 Japanese translation https://github.com/vuejs/v1-jp.vuejs.org/blob/master/themes/vue/source/js/common.js#L20-L25 Ok, it's deployed on our side. You just need to update the v1 with this one: docsearch({ apiKey: '0a75952972806d9ad07e387d08e9cc4c', indexName: 'vuejs_jp', inputSelector: selector, algoliaOptions: { facetFilters: ["version:v1"] } }) @maxiloc thanks for the quick reply! I've fixed the configuration. https://github.com/vuejs/v1-jp.vuejs.org/commit/267b40f9cdddd6846798f089f908f3b4daee1134 @maxiloc I think Algolia DocSearch is not indexing the Vue.js Japanese translation. I checked the docsearch config of vuejs_jp. https://github.com/algolia/docsearch-configs/blob/master/configs/vuejs_jp.json#L21-L37 It should not be "url": "http://(?P<version>.*?).vuejs.org/guide/", but "url": "http://(?P<version>.*?)-jp.vuejs.org/guide/", shouldn't it? @kazupon My bad, it should be fixed now @maxiloc Thank you very much! 😺 @maxiloc, @yyx990803 — could you please give me a hint on how to update the Algolia config for the Russian translation? Comparing the Japanese and English versions I've realized I need to provide a different apiKey and indexName, but I don't seem to understand where I could get them. I need to configure it. What is the URL? https://ru.vuejs.org/ Here you go: index_name: vuejs_ru api_key: c6f9366f6f7fe057ee3e01747b603d9f @maxiloc awesome! Thank you! I believe this is all resolved now, so closing. 🙂
gharchive/issue
2016-11-09T00:09:33
2025-04-01T06:40:53.431517
{ "authors": [ "chrisvfritz", "gbezyuk", "kazupon", "maxiloc", "yyx990803" ], "repo": "vuejs/vuejs.org", "url": "https://github.com/vuejs/vuejs.org/issues/575", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2129518377
Text direction should be RTL by default Please provide a clear way to make the text direction RTL by default. @Ali7med did you find a clear way?
gharchive/issue
2024-02-12T06:47:20
2025-04-01T06:40:53.530824
{ "authors": [ "Ali7med", "MmKargar" ], "repo": "vueup/vue-quill", "url": "https://github.com/vueup/vue-quill/issues/507", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
381097206
Version 0.10 fails to build after cargo upgrade I have a previously working application that uses vulkano version 0.10 and vulkano-shader-derive version 0.10. After a recent cargo update vulkano fails to build with error[E0422]: cannot find struct, variant or union type `MirSurfaceCreateInfoKHR` in module `vk` --> C:\Users\Michael\.cargo\registry\src\github.com-1ecc6299db9ec823\vulkano-0.10.0\src\swapchain\surface.rs:314:29 | 314 | let infos = vk::MirSurfaceCreateInfoKHR { | ^^^^^^^^^^^^^^^^^^^^^^^ did you mean `XcbSurfaceCreateInfoKHR`? error[E0425]: cannot find value `STRUCTURE_TYPE_MIR_SURFACE_CREATE_INFO_KHR` in module `vk` --> C:\Users\Michael\.cargo\registry\src\github.com-1ecc6299db9ec823\vulkano-0.10.0\src\swapchain\surface.rs:315:28 | 315 | sType: vk::STRUCTURE_TYPE_MIR_SURFACE_CREATE_INFO_KHR, | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ did you mean `STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR`? Building [====================================================> ] 123/127: vulkano Might be related to issue #1108. Oh oops, I published a breaking change to vk-sys as a minor change. Maybe I should have a separate changelog for vk-sys so this doesn't happen again. I can't yank vk-sys 0.3.4 because I don't have access. @tomaka In the meantime you can manually set it to 0.3.3 in your Cargo.lock I can't yank vk-sys 0.3.4 because I don't have access. @tomaka Done! @tomaka Oh no, this broke 0.11 When https://github.com/vulkano-rs/vulkano/pull/1115 is merged can you yank 0.11.0? @tomaka I think you forgot to yank v0.11.0. @tomaka 0.11.0 still needs to be yanked @rukai Done!
gharchive/issue
2018-11-15T10:31:39
2025-04-01T06:40:53.549113
{ "authors": [ "MichaelMauderer", "newpavlov", "rukai", "tomaka" ], "repo": "vulkano-rs/vulkano", "url": "https://github.com/vulkano-rs/vulkano/issues/1113", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
810962785
Memory leaks when trying to unblind a lot of utxos in FetchAndUnblindUtxos Steps to reproduce: run the tests many times without resetting the nigiri instance (~25 times for me) now the test accounts have a lot of UTXOs (~120) yarn test fails: "Failed: "abort("Cannot enlarge memory arrays. Either (1) compile with -s TOTAL_MEMORY=X with X higher than the current value 16777216, (2) compile with -s ALLOW_MEMORY_GROWTH=1 which allows increasing the size at runtime, or (3) if you want malloc to return NULL (0) instead of this abort, compile with -s ABORTING_MALLOC=0 "). Build with -s ASSERTIONS=1 for more info." ---> This is probably due to the secp256k1-zkp WASM version. I investigated a bit; it seems to come down to how the JS bindings are written, which is what creates this leak. This may be similar: https://stackoverflow.com/questions/55884378/why-in-webassembly-does-allow-memory-growth-1-fail-while-total-memory-512mb-succ Worth investigating https://github.com/emscripten-core/emscripten/issues/6860
gharchive/issue
2021-02-18T10:22:50
2025-04-01T06:40:53.552727
{ "authors": [ "louisinger", "tiero" ], "repo": "vulpemventures/ldk", "url": "https://github.com/vulpemventures/ldk/issues/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2696244581
Standardizing Login Error Messages Admin and manager-related API tests failed because their emails were not marked as verified in the test fixtures. When trying to authenticate admin or manager users in tests, the server returned a 401 Unauthorized error with the message Email not verified. Fixtures Update: The admin_user and manager_user fixtures in conftest.py were updated: Set email_verified=True for these users to ensure they could log in during tests. Verified Admin and Manager Users: Verified these users in the fixture setup so they could successfully pass login validation in tests.
gharchive/issue
2024-11-26T22:12:07
2025-04-01T06:40:53.557684
{ "authors": [ "vvh24" ], "repo": "vvh24/event_manager", "url": "https://github.com/vvh24/event_manager/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
570293137
Please share your setup Would be nice if you can share your setup with a reverse proxy in the form of a docker-compose.yml ready to go for popular reverse proxy web apps like: nginx (i will add mine) traefik caddy apache maybe Thank you Testing discord webhook Traefik 2 version: '3.6'
services:
  wg-gen-web:
    image: vx3r/wg-gen-web:latest
    container_name: wg-gen-web
    restart: unless-stopped
    environment:
      - WG_CONF_DIR=/data
      - WG_INTERFACE_NAME=wg0.conf
      - SMTP_HOST=your.smtp.host
      - SMTP_PORT=465
      - SMTP_USERNAME=your_smtp_username
      - SMTP_PASSWORD=your_smtp_password
      - SMTP_FROM=Wg Gen Web <address@to.send.from>
    volumes:
      - /etc/docker/container-data/wg-gen-web:/data
    labels:
      - traefik.http.routers.wg-gen-web.entryPoints=http
      - traefik.enable=true
simple setup with Caddy (i am using my own built container that uses digitalocean token for dns validation - based upon abiosoft/caddy)
version: '3.6'
services:
  caddy:
    image: "sphen/caddy-digitalocean"
    container_name: caddy
    environment:
      - DO_AUTH_TOKEN=abc123
      - ACME_AGREE=TRUE
    volumes:
      - /home/user/caddy/Caddyfile:/etc/Caddyfile
      - /home/user/caddy:/root/.caddy
    ports:
      - 443:443
    depends_on:
      - wg-gen-web
    restart: always
  wg-gen-web:
    image: vx3r/wg-gen-web
    container_name: wg-gen-web
    restart: always
    environment:
      - WG_CONF_DIR=/data
      - WG_INTERFACE_NAME=wg0.conf
    volumes:
      - /etc/wireguard:/data
Caddyfile:
vpn.xxx.com {
  basicauth / user password
  proxy / http://wg-gen-web:8080 {
    transparent
  }
  tls {
    dns digitalocean
  }
}
simple setup with Caddy @sphen13 you may be interested in https://github.com/lucaslorentz/caddy-docker-proxy - a Caddy proxy to docker containers with automatic reload of the configuration and detection of container exposed ports. 
I used it happily for a few months but eventually moved to https://github.com/vx3r/wg-gen-web/issues/19#issuecomment-603488807 Hi (x-posted from the Discord channel), here is a setup for easily running Wg Gen Web on Kubernetes with Kilo: https://github.com/squat/kilo-wg-gen-web The manifests can be found at https://raw.githubusercontent.com/squat/kilo-wg-gen-web/master/manifests/kilo-wg-gen-web.yaml I used it happily for a few months but eventually moved to Traefik ... and then moved back to caddy v2 (using the new API in v2) Wg Dashboard with caddy
version: '3.6'
networks:
  monitor-net:
    driver: bridge
services:
  wgweb:
    container_name: wgweb
    build:
      context: .
    volumes:
      - /etc/wireguard:/data
    expose:
      - 8888/tcp
    networks:
      - monitor-net
  caddy:
    image: stefanprodan/caddy
    container_name: caddy
    ports:
      - "8282:8888"
    volumes:
      - ./caddy:/etc/caddy
    environment:
      - ADMIN_USER=${ADMIN_USER}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD}
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"
Dockerfile
FROM ubuntu
RUN apt update && \
    apt install curl vim net-tools iputils-ping -y
RUN mkdir /data
WORKDIR /app
COPY . /app
WORKDIR /app
EXPOSE 8888
CMD [ "./wg-gen-web" ]
Wireguard API
version: '3.6'
services:
  wg-json-api:
    image: james/wg-api:latest
    container_name: wg-json-api
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    network_mode: "host"
    command: wg-api --device wg0 --listen 172.27.0.1:8080
Caddyfile
:8888 {
  basicauth / {$ADMIN_USER} {$ADMIN_PASSWORD}
  proxy / wgweb:8888 {
    transparent
  }
  errors stderr
  tls off
}
Env
# IP address to listen to
SERVER=0.0.0.0
# port to bind
PORT=8888
# Gin framework release mode
GIN_MODE=release
# where to write all generated config files
WG_CONF_DIR=/data
# WireGuard main config file name, generally <interface name>.conf
WG_INTERFACE_NAME=wg0.conf
# SMTP settings to send email to clients
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=**************************
SMTP_PASSWORD=**************************
SMTP_FROM=**************************
#fake
OAUTH2_PROVIDER_NAME=fake
ADMIN_USER=**************************
ADMIN_PASSWORD=**************************
WG_STATS_API=http://172.27.0.1:8080
gharchive/issue
2020-02-25T03:50:28
2025-04-01T06:40:53.576377
{ "authors": [ "rahmadsandy", "sphen13", "squat", "vx3r", "wsw70" ], "repo": "vx3r/wg-gen-web", "url": "https://github.com/vx3r/wg-gen-web/issues/19", "license": "WTFPL", "license_type": "permissive", "license_source": "github-api" }
451773391
Error when deploying on the local machine python3 manage.py runserver 0.0.0.0:8000 Please post the detailed error message. It doesn't listen on 0.0.0.0 by default, so after Docker brings it up the page can't be opened in the browser; you need to enter the w12scan_web_1 container and run it manually.
gharchive/issue
2019-06-04T02:55:30
2025-04-01T06:40:53.599516
{ "authors": [ "boy-hack", "coffeehb", "myxss" ], "repo": "w-digital-scanner/w12scan", "url": "https://github.com/w-digital-scanner/w12scan/issues/33", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
415227061
Apply one hint at cursor position I want to apply one hint at cursor position. How should I go about this? Could you elaborate? Could you elaborate? I basically want to do something like this: https://github.com/mpickering/hlint-refactor-vim With ALE's default setting, when I do :ALEFix, the hlint fixer would apply fixes to all suggestions in the file. But I want to have the ability to apply the suggested fix to where the cursor is at. Okay. Fixing a range of lines isn't supported and isn't likely to be soon.
gharchive/issue
2019-02-27T16:57:43
2025-04-01T06:40:53.601602
{ "authors": [ "arbitary", "w0rp" ], "repo": "w0rp/ale", "url": "https://github.com/w0rp/ale/issues/2320", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
92673004
the async API breaks process.nextTick() within the callback I discovered this issue when I wrapped eval(expr,cb) into a Promise-returning API. Promises execute their callbacks via process.nextTick() If you call process.nextTick(foo) from within the evaluation callback, then foo will not execute until something else happens (such as another timeout expires). See these Mocha tests: import * as julia from 'node-julia'; import { expect } from 'chai'; describe('node-julia', () => { it('should work asynchronously', done => { julia.eval("2+3", (err, result) => { if (!err) { expect(result).to.equal(5); } done(err); }); }); it('should not mess up process.nextTick() when called asynchronously', done => { julia.eval("2+3", (err, result) => { if (!err) { expect(result).to.equal(5); } process.nextTick(() => done(err)); }); }); it('should not require calling setTimeout() to fix process.nextTick() when called asynchronously', done => { julia.eval("2+3", (err, result) => { if (!err) { expect(result).to.equal(5); } process.nextTick(() => done(err)); setTimeout(() => {}, 0); }); }); }); The first test passes. The second test fails (times out) The third test (where I added an empty setTimeout) passes. It seems like it is related to this issue: https://github.com/joyent/node/issues/7714 I'm guessing your callbacks are somehow bypassing the nodejs event loop and so the event loop does not know to drain the nextTick queue until something else happens to wake up the event loop. FYI this is the JavaScript stack trace I get if I print it via (console.log(new Error().stack)). 
Inside my callback: at /.../node-julia-test.es6:17:36 Inside my nextTick callback: at /.../node-julia-test.es6:28:61 at process._tickCallback (node.js:355:11) node-julia version: node-julia@1.1.2 (git://github.com/wateim/node-julia.git#73eb78f3873eeaafa96e4c81b7b5560f30e35435) node version: v0.12.5 OS: Linux ubuntu-14 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux and if it helps, https://github.com/JuliaLang/julia/commit/fe7203e8b747a7caf472b59e5fa41f6cc86b1489 I don't remember doing anything non standard as far as async notification. Just basic uv_async_send, but I do remember there can be only 1 notify delivery for multiple things in uv_async_send so knowing that there's a similar thing that happens in nj maybe something subtle about that. This might prove difficult as inserting logging will alter the eventing behavior. Might be worth attempting with iojs just for perspective and (lack of) reproducibility. BTW just for info, are you using Q or bluebird or something else for the promises wrapping? I'm guessing that it may not be enough to notify libuv about your callback, but you may need to specifically kick in nodejs's bookkeeping, e.g. whatever's going on with process._tickCallback It looks like one way or another you need to call MakeCallback() to actually run the callback and allow nodejs to trigger its nextTick processing logic (and also its domain logic which I also notice is not working with node-julia. It looks like all of the node native libraries use this method. 
Here's the timers, for example: https://github.com/joyent/node/blob/master/src/timer_wrap.cc#L140 FYI, here's some unit tests showing the callbacks also are not running on the current domain: it('this one works', done => { const d = domain.create(); d.on('error', () => done()); d.run(() => { process.nextTick(() => { throw new Error("some error"); }); }); }); it('this one fails', done => { const d = domain.create(); d.on('error', () => done()); d.run(() => { julia.eval("2+3", () => { throw new Error("some error"); }); }); }); Re: promises. We use RxJs for most of our async stuff so I chose not to pull in a library like bluebird or Q. So I just use this function to wrap: /** * Converts a Node.js callback style function to a Promise. This must be in function (err, ...) format. * @param {Function} func The function to call * @param {Function} [selector] A selector which takes the arguments from the callback minus the error to produce a single item to yield on next. * @returns {Function} An async function which when applied, returns a Promise with the callback results (if there are multiple results, then they are supplied as an array) */ export function fromNodeCallback(func, selector) { var newFunc = function (...args) { return new Promise((resolve, reject) => { func(...args, (err, ...results) => { if (err) { reject(err); } else { let finalResults = results.length > 1 ? results : results[0]; // run the selector if provided if (selector) { try { finalResults = selector(finalResults); } catch (e) { reject(e); } } resolve(finalResults); } }); }); }; newFunc.displayName = func.displayName; return newFunc; }; Usage: const evalAsync = fromNodeCallback(julia.eval); const myPromise = evalAsync("2+3"); Hey sorry for the catch up questions. Are you guys using a transpiler like traceur or babel, I was able to get mocha to process the test but only after changing back to the using require rather than import. yes I'm using babel + babel runtime + webpack. 
I believe a7cbe3b2 has addressed this problem, the test case passes, and according to nodejs/nan#284 and this stackoverflow question. Yes this checkin resolves the nextTick issue. The domain test still fails. I'll open a different issue for that. Thanks
gharchive/issue
2015-07-02T16:02:13
2025-04-01T06:40:54.037201
{ "authors": [ "bman654", "sebastiang", "waTeim" ], "repo": "waTeim/node-julia", "url": "https://github.com/waTeim/node-julia/issues/14", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2106353660
🛑 qBit is down In 3b35e51, qBit ($TORRENT) was down: HTTP code: 530 Response time: 267 ms Resolved: qBit is back up in e5bb252 after 9 minutes.
gharchive/issue
2024-01-29T20:23:52
2025-04-01T06:40:54.039854
{ "authors": [ "waallaby" ], "repo": "waallaby/up-time", "url": "https://github.com/waallaby/up-time/issues/232", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1857292754
🛑 Plex Server | Waallaby-Net is down In ebf43b4, Plex Server | Waallaby-Net ($PLEX) was down: HTTP code: 502 Response time: 1004 ms Resolved: Plex Server | Waallaby-Net is back up in bc045f6 after 5 days, 21 hours, 56 minutes.
gharchive/issue
2023-08-18T21:06:57
2025-04-01T06:40:54.042088
{ "authors": [ "waallaby" ], "repo": "waallaby/up-time", "url": "https://github.com/waallaby/up-time/issues/62", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2598220110
Support wasm32-wasip2 on the stable channel Hey! 👋🏻 First of all, I'm hugely inspired by how simple this crate makes wasi:http in Rust. It looks really nice, and your implementations feel nice and idiomatic. I was going to do something similar to this for https://wasmcloud.com/, but it looks like you've already done a lot of what I was hoping for 😄 Since this crate depends on url, I'm pretty familiar with one of the small hiccups that affected wasm32-wasip2 on the nightly toolchain https://github.com/servo/rust-url/pull/960. I just filed an upstream PR to fix this for the stable toolchain as well https://github.com/servo/rust-url/pull/983, but I want to be sensitive to how prevalent the url crate is and that this target just landed in Rust 1.82.0. I would love to use this library, selfishly for https://crates.io/crates/wasmcloud-component. To be clear, we're using the wasmcloud-component essentially as a stopgap for things that don't exist in the wasi crate and its wrappers are all wasi-p2 compatible. Anyways, do you have any thoughts on ways we can support this crate for wasm32-wasip2 on the stable channel in the meantime? I suppose another possible rabbit hole here is that the request code looks similarly structured to reqwest https://github.com/seanmonstar/reqwest/issues/2294, what do you think about collaborating there to add support to that library, or are you more interested in having a wasi-first library here Hi @brooksmtownsend, thanks for your interest in this project, I do want to have a wasi-first library here. Actually, I have already been working on fixing this issue. Since the release of Rust 1.82, I have noticed the problem, and I have also seen your patch in the rust-url crate. I am planning to stop relying on the url crate. A new version will be released in the next few days. I have released v0.4.0, please give it a try. Amazing! I'll give it a shot today. Great Monday present @iawia002
gharchive/issue
2024-10-18T19:16:47
2025-04-01T06:40:54.048394
{ "authors": [ "brooksmtownsend", "iawia002" ], "repo": "wacker-dev/waki", "url": "https://github.com/wacker-dev/waki/issues/25", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
279241274
您好,get_k_data 时间参数没传进去 您好,get_k_data函数调用并打印如下: sxdata = ts.get_k_data(code,start='2017-05-01',end='2017-11-29',ktype='60') print(sxdata) 结果显示起始日期是2017-06-05,结束日期是2017-12-04(昨日),如附件 分钟数据只有最近两周的,建议用ts.bar
gharchive/issue
2017-12-05T04:22:31
2025-04-01T06:40:54.050646
{ "authors": [ "jimmysoa", "mcze333" ], "repo": "waditu/tushare", "url": "https://github.com/waditu/tushare/issues/540", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2068410262
Auto-generate tag slugs when creating tags directly in the Snippet panel Is your proposal related to a problem? Problem 1: If you create a tag from the Snippet panel, a slug for the tag is not autogenerated (like how a slug is autogenerated when you create a Page for the first time). Note: This is creating a new slug from the Snippet panel. This is not an issue if creating a new tag directly from a page. Problem 2: If you create a tag from the Snippet panel, and you paste the name of the tag into the slug field, it does no convert it into an acceptable format (similar to when you paste text into a page slug). For example, let's say I have a tag called "Life Skills". If this was a page and I pasted "Life Skills" into the page slug field, it would convert the text to "life-skills" (all lower case with dashes instead of spaces). If I paste that into the Tag slug field, it remains "Life Skills". Setup: Wagtail 5.2 Python 3.8 Django 4.2.6 Describe the solution you'd like Solution to Problem 1: Go to the Snippet panel and click the button to create a new tag. Enter the name of the tag. As the name is typed, the tag is created automatically in the proper format (Ex: If I type "Life Skills", the slug becomes "life-skills" automatically). Solution to Problem 2: Go to the Snippet panel and click the button to update a tag slug. Paste some text with or without spaces (with capital letters or not) into the slug field (Ex: "CHORES" or "House Chores"). When pasted, it converts the text to a lower case version of the text with dashes between spaces (EX: "chores" or "house-chores"). Describe alternatives you've considered I manually write the slug names and try to follow the same rules used when Page slugs are generated or when tags get created directly from a page. Not sure how feasible a solution would be. It seems to be limitations on tags in general from this discussion: #4109 . So perhaps part of the problem is a side effect of the Taggit library? 
Additional context Since pages use slugs as well, it would be nice from a usability perspective if page slugs and tag slugs shared similar behavior. We have 3 options.

1. We can use the slugify utility:

from django.utils.text import slugify
slug_value = slugify(page_title)

We might need to generate a unique slug using uuid:

import uuid
# while slug_value already exists in the database:
slug_value = f'{slugify(slug_value)}-{str(uuid.uuid4())[:4]}'

2. We also have AutoSlugField, which can be added directly in models. First we need to install it with pip install django-autoslug:

from autoslug import AutoSlugField
slug_value = AutoSlugField(populate_from='title', unique=True, null=True, default=None)

3. We can also use it directly in Jinja:

<p>{{ text|slugify }}</p>
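Putting the options above together, a minimal sketch of a slugify-plus-uniqueness helper might look like the following. The `slugify` here is a rough stdlib stand-in for `django.utils.text.slugify`, and `exists` is a hypothetical callback standing in for a database check such as `Tag.objects.filter(slug=slug).exists()`:

```python
import re
import uuid

def slugify(value: str) -> str:
    # Rough approximation of django.utils.text.slugify: lowercase,
    # drop punctuation, collapse spaces/underscores into hyphens.
    value = re.sub(r"[^\w\s-]", "", value.lower()).strip()
    return re.sub(r"[-\s_]+", "-", value)

def unique_slug(name: str, exists) -> str:
    # `exists` is a hypothetical callback (e.g. a database lookup).
    slug = slugify(name)
    while exists(slug):
        # Append a short random suffix until the slug is free.
        slug = f"{slugify(name)}-{str(uuid.uuid4())[:4]}"
    return slug

print(slugify("Life Skills"))  # life-skills
print(slugify("House Chores"))  # house-chores
```

This mirrors the behavior requested in the issue: "Life Skills" becomes "life-skills", and "CHORES" becomes "chores".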
gharchive/issue
2024-01-06T05:51:51
2025-04-01T06:40:54.059495
{ "authors": [ "Nishikant00", "RoseOfSteel" ], "repo": "wagtail/wagtail", "url": "https://github.com/wagtail/wagtail/issues/11422", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2548278783
Issue with column widths in document chooser when there are long filenames Issue Summary When there are long filenames (without spaces) for documents, the column widths of the table in the document chooser are stretched (especially when the browser width is narrow), so it is difficult to read the document title. Steps to Reproduce Upload a document with a long filename (with no spaces) Edit a page with a document chooser View the modal in a narrow browser window See attached screengrab Technical details Python version: unknown, I'm just raising this from an editorial perspective Django version: unknown, I'm just raising this from an editorial perspective Wagtail version: 6.1.3 Browser version: Chrome 128 Working on this Anyone can contribute to this. View our contributing guidelines, add a comment to the issue once you’re ready to start. Thanks @davidjamesharris! Confirmed on current main branch - it's mitigated somewhat by the fix to the excessive right margin (which was caused by the sidebar styles from the main page leaking into the chooser) but still exists (screenshot omitted). @davidjamesharris Can I work on this issue? @Sumitsh28 Thanks for your interest! Yes, anyone is welcome to work on any issue. I will give this a go! I will give it a try!! I will also try to give a solution I want to work on this issue; please assign it to me. @davidjamesharris is this issue still open? @shauryapanchal Please see #12390 - a fix has been submitted already, but needs more work. You're welcome to pick this up if you're interested. @gasman understood. I'll try to come up with a solution. Thanks for the update! @shauryapanchal Would you mind if I pick this up real quick?
gharchive/issue
2024-09-25T15:14:10
2025-04-01T06:40:54.066665
{ "authors": [ "Sambodhi-Roy", "SayanDutta651", "Sumitsh28", "TejasSaraf", "aminechm", "davidjamesharris", "frankyiu", "gasman", "shauryapanchal" ], "repo": "wagtail/wagtail", "url": "https://github.com/wagtail/wagtail/issues/12357", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1704816107
Docs/tutorial getting started facelift No code changes. Only documentation revision for the getting started tutorial. I wanted to learn Wagtail, so I started with the tutorial and made changes along the way when I got confused. Definitely open to any feedback or improvements that I can make :) Changes: Addition of learning objectives describing what the tutorial will accomplish with the learner. Added base prerequisite knowledge and the description of the intended audience of the tutorial. Fixed up typos and a couple of inconsistent language occurrences. Rewrote sections that were lengthy and used numbered lists for complex steps Simple explanations for admin users, database migrations, SQLite, and Wagtail's QuerySet modifiers. Added hierarchical tree structure visuals. Improved call to action at the end of tutorial. 100% more jazz hands (when I was relieved something worked) Stayed the same: General tutorial layout, I found it adequate to get me started Tutorial images, I saw no reason to change, since I think the interfaces are the same. Odd references like (virtual_environment_creation)= and (tutorial_categories)= which I do not see a reference anywhere in the document? Tested on Ubuntu 22.04 LTS and Windows 10. I ran it through a word processor to catch spelling mistakes and generated a working local version using Sphinx. Please check the following: [x] Do the tests still pass?[^1] [x] Does the code comply with the style guide? [ ] Run make lint from the Wagtail root. [ ] For Python changes: Have you added tests to cover the new/fixed behaviour? [ ] For front-end changes: Did you test on all of Wagtail’s supported environments?[^2] [ ] Please list the exact browser and operating system versions you tested: [ ] Please list which assistive technologies [^3] you tested: [ ] For new features: Has the documentation been updated accordingly? 
One more helpful resource from our docs for your first contribution - https://docs.wagtail.org/en/stable/contributing/first_contribution_guide.html It includes some of the general feedback already given, but hopefully helps you get to a first few PRs that can be easily reviewed and merged in. @lb- @thibaudcolas Thank you both so much for the feedback! I can definitely split these edits into smaller reviewable chunks. Those resources that you listed are incredibly helpful; I am a newbie at contributing to open source, so I really appreciate the guidance. I will work on getting these suggested changes in over the coming week. Spending most of my free time in Hyrule this weekend :) To be clear, should I open separate PRs for each edit to make it easier for the core team to review? Does that mean we should keep this PR open and reference the smaller PRs in this specific thread? I do not mind if the tutorial goes in a different direction; this exercise helped me solidify the concepts after writing about them, and hopefully this is useful to others in their journey. The most confusing portion of the tutorial was the QuerySet modifiers, which is why I created a simple tree visual for the reader. Also, it took me some time to figure out why Parent and Child pages were referenced before the Tree hierarchy explanation. Cheers! @kev-odin in regards to your question about the PR: it's up to you what you want to do. You can close this and open discrete PRs, or change this PR to reflect your first desired change based on feedback. Maybe in the meantime you can change this to a draft until it's ready for another review.
gharchive/pull-request
2023-05-11T00:57:03
2025-04-01T06:40:54.076662
{ "authors": [ "kev-odin", "lb-" ], "repo": "wagtail/wagtail", "url": "https://github.com/wagtail/wagtail/pull/10425", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1996692523
Restore the ability to have content type facets when searching pages with an empty query This restores the ability to have content type facets when searching pages with an empty query: We lost the ability to do this in 5.1 when the view was refactored from an FBV to a CBV: https://github.com/wagtail/wagtail/pull/10615/commits/52abcee04363ea9dae36ecbd87b01f4137a7ce3e#diff-dc3a1012fc5394a6a1abd8e254ed29cfd5107ba2fc6bb7b6127b4e6386d8dea9L108 The facets are only applied if there's a search query. Previously this was detected with if "q" in request.GET, but it was changed to if self.q (with self.q = request.GET.get("q", "")), so empty queries (e.g. clicking search on the left menu and pressing enter without typing anything) will not show the content type facets. In addition, the adoption of #10645 to make the header search use the Stimulus SwapController means the facets will also be lost if you type something in the header search and clear it afterwards, as the q query param will be removed: https://github.com/wagtail/wagtail/blob/ba17ef19d333570d9cca0195178e5e15babae93c/client/src/controllers/SwapController.ts#L178-L183 To fix this, we can either: always do a search and apply the facets even if the query param is empty (which I've done here) or, make a distinction between an empty q vs a non-existent q, and then also update SwapController so that it doesn't remove empty query params. I tried doing it this way in 7ec15f7aa08f25df5d27e273d43d98fe3fe13b4e, but it seems more intrusive. This will probably be made redundant by page type filters in Universal Listings (#10446), but at the moment there's no alternative when you'd like to see all pages filtered by page type. (Maybe #10850, but that's a separate topic.) Please check the following: [x] Do the tests still pass?[^1] [x] Does the code comply with the style guide? [x] Run make lint from the Wagtail root. [x] For Python changes: Have you added tests to cover the new/fixed behaviour? 
[^1]: Development Testing [^2]: Browser and device support [^3]: Accessibility Target Merged in 3af26aa30e82bbb6d5c9988fc7d23e1fcc9fe74b (main) / e8ff6a2fa3 (stable/5.2.x).
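The empty-versus-missing distinction described in this PR can be illustrated without Django; in this sketch a plain dict stands in for `request.GET`:

```python
def has_search_param(params: dict) -> bool:
    # Mirrors the old `if "q" in request.GET` check: True even when
    # the user submitted an empty search box (a bare `?q=`).
    return "q" in params

def has_search_terms(params: dict) -> bool:
    # Mirrors the newer `if self.q` check, where
    # self.q = request.GET.get("q", ""): False for an empty query.
    return bool(params.get("q", ""))

empty_submit = {"q": ""}  # search submitted with an empty box
no_submit = {}            # search never submitted at all
```

The old check treats an empty submission as "a search happened" (so facets are shown); the new check does not, which is the regression this PR addresses.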
gharchive/pull-request
2023-11-16T11:55:08
2025-04-01T06:40:54.084135
{ "authors": [ "gasman", "laymonage" ], "repo": "wagtail/wagtail", "url": "https://github.com/wagtail/wagtail/pull/11243", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2103771343
Update Ordering, wording and help text for private pages Fixes #11535 After Changes: Please check the following: [x] Do the tests still pass?[^1] [x] Does the code comply with the style guide? [ ] Run make lint from the Wagtail root. [ ] For Python changes: Have you added tests to cover the new/fixed behaviour? [ ] For front-end changes: Did you test on all of Wagtail’s supported environments?[^2] [ ] Please list the exact browser and operating system versions you tested: [ ] Please list which assistive technologies [^3] you tested: [ ] For new features: Has the documentation been updated accordingly? Please describe additional details for testing this change. [^1]: Development Testing [^2]: Browser and device support [^3]: Accessibility Target @lb- I would greatly appreciate your feedback and suggestions on the current changes. I'll update the tests once I am finished with the updates and everything looks satisfactory. Sorry it was not clear in the issue: can you please put the shared password option under 'public'? Thanks @lb-, I've made the changes
gharchive/pull-request
2024-01-27T19:58:12
2025-04-01T06:40:54.090343
{ "authors": [ "lb-", "rohitsrma" ], "repo": "wagtail/wagtail", "url": "https://github.com/wagtail/wagtail/pull/11546", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
225540704
onActivityResult issue Good day, I just started using your library. Nice work. But I can't get two static Strings from LongImageActivity, which are LONG_IMAGE_RESULT_CODE and IMAGE_PATH_KEY. Any solution? Thanks. Hello, these are declared in LongImageActivity like this:

public static final String IMAGE_PATH_KEY = "imagePathKey";
public static final int LONG_IMAGE_RESULT_CODE = 1234;

You can statically import these as well, like:

import static com.wajahatkarim3.longimagecamera.LongImageCamera.*;

This should solve your issue. If you are still facing the same kind of issue, you can use the values directly instead of the variables.
gharchive/issue
2017-05-01T22:10:49
2025-04-01T06:40:54.104119
{ "authors": [ "tayorh27", "wajahatkarim3" ], "repo": "wajahatkarim3/LongImageCamera", "url": "https://github.com/wajahatkarim3/LongImageCamera/issues/1", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1553615205
Sign cli with digital signature I used to use WakaTime at work to track my programming time; however, the CLI tool kept getting flagged by security because it isn't signed. I would like to keep using the WakaTime service. Is it possible to sign your CLI tool? What OS are you using? Would you share the report you got? I'm assuming Windows? Hi, yes it's Windows 11. I'll get you the full details when I'm at work tomorrow. Hi, the exact reports I got from security were
The device running it was operating W11 Pro. The timing coincides with this change by @alanhamlett . Probably not related to that change, but the fact that 1 hr ago we did a release so the binary signature changed. Usually once the AV programs all see the new binary signature and start trusting it the false positives go away, but the first day of a release it's more likely to get flagged. That tracks - thanks for the update and great product! I cited that change as the timing matched with the first alert. Likely the second matched as well for the same reason.
gharchive/issue
2023-01-23T19:09:46
2025-04-01T06:40:54.112305
{ "authors": [ "AlfredSimpson", "IzStriker", "alanhamlett", "gandarez", "smladenoff" ], "repo": "wakatime/wakatime-cli", "url": "https://github.com/wakatime/wakatime-cli/issues/817", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
97518566
SECURITY_VIOLATION in new version Hello, does anybody already use this latest version with DHL-Geschäftskundenportal? In production mode (not using the sandbox) I always get the following error message from DHL: <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <soap:Fault> <faultcode>soap:Client</faultcode> <faultstring>SECURITY_VIOLATION</faultstring> </soap:Fault> </soap:Body> </soap:Envelope> The funny thing is that the request works perfectly in the sandbox environment. I would appreciate it if anybody who has already had this problem knows what was missing. Cheers, Hannes PS: Yes, I have a valid AppID with the necessary approved action items. You'll want to contact DHL about this. I've not seen it before. I already did. But until now, I didn't get any helpful responses. Do you already use this latest version with the new endpoints? Which new endpoints are you referring to? To the new one: https://cig.dhl.de/services/production/soap I figured out what the problem was. The namespace for the envelope was wrong. I used SOAP 1.2 with the Envelope namespace for SOAP 1.1 (or the other way around? doesn't matter anymore). Hi vinett-de. Could you please provide more information on how you solved the problem? I am using Soap::Lite (v1.1) and getting the same error. My Envelope looks exactly like the one on the developer pages, but my code only works in sandbox mode. Thanks, Chris Hi vinett-de. I also have the same issue and I am not able to find the solution. Could you please explain how you did it? Hello. I have the same error, "SECURITY_VIOLATION". I have sent several queries to DHL support, and in the last reply I got the reason: this error can be caused by the name of the action. For example, for live mode it is SOAPAction: "urn:getVersion" but for test it is SOAPAction: "getVersion". 
So in the PHP SOAP class just set, for example for createShipmentOrder:

$location = "https://cig.dhl.de/services/production/soap";
$options = array(
    'login' => $user,        // ApplikationsID
    'password' => $password, // Applikationstoken
    'soap_version' => SOAP_1_1,
    'exceptions' => false,
    'trace' => 1,
);
$client = new SoapClient($wsdl, $options);
$answer = $client->__doRequest($request, $location, 'urn:createShipmentOrder', 1);

This error can also occur if you have not set the correct permissions for your application inside your DHL developer account. Each method you want to call needs to be activated there.
How do I activate that? The status of the application is "requested granted".
To fix SECURITY_VIOLATION, the following notice on this page helped me: "Additionally, note the following for the Geschäftskundenversand API: in the production environment, the SOAPAction of the respective operation must also be included in the header, e.g. SOAPAction: "urn:createShipmentOrder""
After setting this header, the error was gone.
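The fix is language-agnostic: what matters is the raw HTTP headers sent to the production endpoint. The sketch below builds those headers in Python using only the standard library; the credential values are placeholders, the envelope body is omitted, and the assumption (consistent with the PHP snippet above) is that CIG uses HTTP basic auth with the application ID/token pair:

```python
import base64

def build_soap_headers(app_id: str, app_token: str) -> dict:
    # HTTP basic auth: base64 of "id:token".
    credentials = base64.b64encode(f"{app_id}:{app_token}".encode()).decode()
    return {
        "Content-Type": "text/xml; charset=utf-8",
        "Authorization": f"Basic {credentials}",
        # Required in production; note the header *value* itself is quoted,
        # as in SOAP 1.1's SOAPAction convention.
        "SOAPAction": '"urn:createShipmentOrder"',
    }

URL = "https://cig.dhl.de/services/production/soap"
headers = build_soap_headers("my-app-id", "my-app-token")
```

Whatever SOAP client you use, inspecting the outgoing request for this SOAPAction header is a quick way to verify the fix.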
gharchive/issue
2015-07-27T17:55:03
2025-04-01T06:40:54.131732
{ "authors": [ "FloPinguin", "JahangirNaik", "bindablue", "gmile-chris", "lgbr", "poviljaj", "saschanos", "vinett-de" ], "repo": "waldher/dhl-intraship", "url": "https://github.com/waldher/dhl-intraship/issues/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
650066531
[linux] reload command does not pick up source file changes because workers are forks Hi, if I modify server.php and then run php server.php reload on Linux, the workers are still running the old code, because they are forked processes and thus their memory is a copy of the parent process. How then can we reload source code changes on Linux without having to restart the master process every time? For example. start.php $worker = new Worker('websocket://0.0.0.0:6666'); $worker->onWorkerStart = function($worker){ // You can also use spl_autoload to require MyClass file. require __DIR__ . '/MyClass.php'; $my_class = new MyClass(); $worker->onConnect = [$my_class, 'onConnect']; $worker->onMessage = [$my_class, 'onMessage']; $worker->onClose = [$my_class, 'onClose']; }; Worker::runAll(); MyClass.php class MyClass { public function onConnect($connection) { } public function onMessage($connection, $data) { } public function onClose($connection) { } }
gharchive/issue
2020-07-02T16:35:41
2025-04-01T06:40:54.146799
{ "authors": [ "char101", "shunhua", "walkor" ], "repo": "walkor/Workerman", "url": "https://github.com/walkor/Workerman/issues/542", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1394487955
[Feature]: Overwrite nested default config in sweep Description I would love better handling of nested configs during sweeping. Specifically, I would like to be able to provide a default config in wandb.init(config=defaults) whose parameters are only updated by the sweep config. This is an example sweep configuration (in Jupyter): sweep_configuration = { 'method': 'bayes', 'metric': { 'goal': 'minimize', 'name': 'valid/best_loss' }, 'parameters': { 'model': { 'parameters': { 'cgnn': { 'parameters': { 'train': { 'parameters': { 'batch_s': {'values': [4, 8, 16]}, 'lr': {'max': 0.1, 'min': 0.0001} } }, 'model': { 'parameters': { 'num_layers': {'values': [2, 4, 6]}, 'head_num': {'values': [4, 8, 16]}, 'embed_dim': {'values': [64, 128]}, 'ff_hidden_dim': {'values': [256, 512]}, 'softmax': {'values': [True, False]}, 'extra_out': {'values': [True, False]}, } } } } } } } } If there is a better way to define a nested sweep config, let me know. A solution could massively reduce overhead when setting up a sweep config. Suggested Solution When providing a nested default config in wandb.init(config=defaults), the config provided by the sweep should preserve the original nested structure of the default config when adding/overwriting values. For the above sweep configuration, this would mean the wandb.config would also contain additional training parameters like weight decay etc. I think a solution should just do a recursive config merge. Alternatives No response Additional Context No response Thank you for writing in @dschaub95. We have recent and in-progress updates to how nested sweep configs are handled. I will review your input and provide feedback soon. Any updates on this? It unfortunately makes nested sweep configs impractical to use. 
Here is a minimal reproducible example: import wandb print(wandb.__version__) def main(): default_config = {'model': {'dim': 2, 'lr': 0.1}} wandb.init(config=default_config, mode='disabled') print(wandb.config) wandb.finish() print('Wandb context') main() print('Sweep Context') sweep_config = { 'method': 'grid', 'parameters': { 'model.dim': { 'values': [8, 16] } } } sweep_id = wandb.sweep(sweep_config) wandb.agent(sweep_id, function=main, count=1) outputs: 0.14.0 Wandb context {'model': {'dim': 2, 'lr': 0.1}} Sweep Context Create sweep with ID: 2pt4a8no Sweep URL: https://wandb.ai/rapharomero/uncategorized/sweeps/2pt4a8no wandb: Agent Starting Run: n9il6acn with config: wandb: model.dim: 8 {'model.dim': 8, 'model': {'dim': 2, 'lr': 0.1}} Instead we would expect the sweep output to be {'model': {'dim': 8, 'lr': 0.1}} i.e. the model.dim should get updated during the sweep. A second MRE: 
"/home/raphael/anaconda3/envs/pylab/lib/python3.8/site-packages/wandb/sdk/wandb_init.py", line 513, in _make_run_disabled drun.config.update(self.config) File "/home/raphael/anaconda3/envs/pylab/lib/python3.8/site-packages/wandb/sdk/wandb_config.py", line 185, in update sanitized = self._update(d, allow_val_change) File "/home/raphael/anaconda3/envs/pylab/lib/python3.8/site-packages/wandb/sdk/wandb_config.py", line 178, in _update sanitized = self._sanitize_dict( File "/home/raphael/anaconda3/envs/pylab/lib/python3.8/site-packages/wandb/sdk/wandb_config.py", line 238, in _sanitize_dict k, v = self._sanitize(k, v, allow_val_change) File "/home/raphael/anaconda3/envs/pylab/lib/python3.8/site-packages/wandb/sdk/wandb_config.py", line 262, in _sanitize raise config_util.ConfigError( wandb.sdk.lib.config_util.ConfigError: Attempted to change value of key "model" from {'dim': 2} to {'dim': 2, 'lr': 0.1} If you really want to do this, pass allow_val_change=True to config.update() During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/raphael/anaconda3/envs/pylab/lib/python3.8/site-packages/wandb/sdk/wandb_init.py", line 1152, in init getcaller() File "/home/raphael/anaconda3/envs/pylab/lib/python3.8/site-packages/wandb/sdk/wandb_init.py", line 838, in getcaller src, line, func, stack = logger.findCaller(stack_info=True) AttributeError: '_EarlyLogger' object has no attribute 'findCaller' wandb: ERROR Abnormal program exit Run zdg3kzf2 errored: Exception('problem') wandb: ERROR Run zdg3kzf2 errored: Exception('problem') However, if I run the same code as before with wandb mode = 'offline',i.e. 
import wandb print(wandb.__version__) default_config = {'model': {'dim': 2, 'lr': 0.1}} def main(): wandb.init(config=default_config, mode='offline') print(wandb.config) wandb.finish() print('Wandb context') main() print('Sweep Context') sweep_config = { 'method': 'grid', 'parameters': { 'model': { 'parameters': { 'dim': { 'values': [8, 16] } } } } } sweep_id = wandb.sweep(sweep_config) wandb.agent(sweep_id, function=main, count=1) I get 0.14.0 Wandb context wandb: Tracking run with wandb version 0.14.0 wandb: W&B syncing is set to `offline` in this directory. wandb: Run `wandb online` or set WANDB_MODE=online to enable cloud syncing. {'model': {'dim': 2, 'lr': 0.1}} wandb: Waiting for W&B process to finish... (success). wandb: You can sync this run to the cloud by running: wandb: wandb sync /home/raphael/Work/aida_ugent/latent_trajectory-private/code/wandb/offline-run-20230403_163408-tv1q7fyx wandb: Find logs at: ./wandb/offline-run-20230403_163408-tv1q7fyx/logs Sweep Context Create sweep with ID: lvqiwynh Sweep URL: https://wandb.ai/rapharomero/uncategorized/sweeps/lvqiwynh wandb: Agent Starting Run: iir10yjz with config: wandb: model: {'dim': 8} wandb: Tracking run with wandb version 0.14.0 wandb: W&B syncing is set to `offline` in this directory. wandb: Run `wandb online` or set WANDB_MODE=online to enable cloud syncing. {'model': {'dim': 2}} wandb: Waiting for W&B process to finish... (success). wandb: You can sync this run to the cloud by running: wandb: wandb sync /home/raphael/Work/aida_ugent/latent_trajectory-private/code/wandb/offline-run-20230403_163419-iir10yjz wandb: Find logs at: ./wandb/offline-run-20230403_163419-iir10yjz/logs However we would expect the printed output during the sweep to be {'model': {'dim': 8, 'lr': 0.1}} Instead it seems that during the sweep run, the default configs get erased and replaced by the configs specified in the sweep config. 
To summarize, it would be nice if both the 'dot-trick' syntax and the 'nested parameter' sweep config version would replace the relevant default config with the values specified by the sweep. Just ran into the same issue, where the default configs for a nested parameter got erased and were replaced by the ones defined in the sweep config. Having the ability to just overwrite the specified keys would be immensely helpful (as well as intuitive). If not, it would be great if this were mentioned in the docs so users know what to expect.
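The behavior both reporters expect is a recursive merge of sweep parameters into the default config. A minimal sketch of such a merge (this is not wandb's actual behavior, which is what the issue is about) that handles both the nested form and the dotted "model.dim" form:

```python
from copy import deepcopy

def deep_merge(defaults: dict, overrides: dict) -> dict:
    # Recursively overlay `overrides` on `defaults`, expanding dotted
    # keys like "model.dim" into nested dicts along the way.
    merged = deepcopy(defaults)
    for key, value in overrides.items():
        if "." in key:
            # "model.dim" -> key "model", value {"dim": ...}
            head, rest = key.split(".", 1)
            key, value = head, {rest: value}
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```

With the MRE's values, `deep_merge({'model': {'dim': 2, 'lr': 0.1}}, {'model.dim': 8})` yields the expected `{'model': {'dim': 8, 'lr': 0.1}}` instead of erasing `lr`.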
gharchive/issue
2022-10-03T10:16:56
2025-04-01T06:40:54.166683
{ "authors": [ "MBakirWB", "dschaub95", "maartenbuyl", "pecey", "rapharomero" ], "repo": "wandb/wandb", "url": "https://github.com/wandb/wandb/issues/4345", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2719108111
chore(ui): add custom styles to column menu items Description Fixes WB-NNNNN Fixes #NNNN What does the PR do? Include a concise description of the PR contents. Testing How was this PR tested? #3152 👈 (View in Graphite) master This stack of pull requests is managed by Graphite. Learn more about stacking.
gharchive/pull-request
2024-12-05T01:28:30
2025-04-01T06:40:54.170815
{ "authors": [ "bcsherma" ], "repo": "wandb/weave", "url": "https://github.com/wandb/weave/pull/3152", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
353634530
Ported a PHP version Our company project collected some points with Baidu Maps and needed to convert them to WGS-84 and GCJ-02 (national survey bureau) coordinates; thank you for providing this conversion library! Because the mobile side needed it, I ported a PHP version and published a composer package, hosted at: https://github.com/billy-poon/coordtransform I hope you don't mind! Not at all, feel free to use it.
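For reference, the BD-09 (Baidu) to GCJ-02 leg of the conversion that this library and its ports implement is compact enough to sketch; the magic constants below come from the commonly published conversion formulas, so treat this as an illustrative sketch rather than the canonical source:

```python
import math

X_PI = math.pi * 3000.0 / 180.0

def bd09_to_gcj02(lng: float, lat: float) -> tuple:
    # Baidu (BD-09) coordinates to Mars (GCJ-02) coordinates.
    x, y = lng - 0.0065, lat - 0.006
    z = math.sqrt(x * x + y * y) - 0.00002 * math.sin(y * X_PI)
    theta = math.atan2(y, x) - 0.000003 * math.cos(x * X_PI)
    return z * math.cos(theta), z * math.sin(theta)

def gcj02_to_bd09(lng: float, lat: float) -> tuple:
    # Inverse direction: GCJ-02 back to BD-09.
    z = math.sqrt(lng * lng + lat * lat) + 0.00002 * math.sin(lat * X_PI)
    theta = math.atan2(lat, lng) + 0.000003 * math.cos(lng * X_PI)
    return z * math.cos(theta) + 0.0065, z * math.sin(theta) + 0.006
```

The two functions are approximate inverses: converting a Baidu coordinate to GCJ-02 and back reproduces the original to within a small tolerance, which is a handy sanity check for any port of the library.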
gharchive/issue
2018-08-24T03:50:12
2025-04-01T06:40:54.172304
{ "authors": [ "billy-poon", "wandergis" ], "repo": "wandergis/coordtransform", "url": "https://github.com/wandergis/coordtransform/issues/25", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
485101715
So good. I need it. So good. Thanks.
gharchive/issue
2019-08-26T07:59:47
2025-04-01T06:40:54.173228
{ "authors": [ "apimello", "wandergis" ], "repo": "wandergis/coordtransform", "url": "https://github.com/wandergis/coordtransform/issues/27", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
602096912
Problems caused by embedding multiple editors in one page After embedding multiple editor instances in one page, some toolbar buttons misbehave, e.g. background color, font color, heading size, font, etc. Resolved: using the editor on mobile makes the toolbar overflow; adding overflow-x then breaks the window.
gharchive/issue
2020-04-17T16:32:18
2025-04-01T06:40:54.177962
{ "authors": [ "Li-Lian1069" ], "repo": "wangfupeng1988/wangEditor", "url": "https://github.com/wangfupeng1988/wangEditor/issues/2180", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
216680973
Setting editor.config.withCredentials = false has no effect After setting editor.config.withCredentials = false, the AJAX request for the image upload still carries cookies. Tracing the source code, I found: // xhr.withCredentials = editor.config.withCredentials || true; isn't this always true? Modify the source yourself for now; this spot will be fixed in a unified way later. We suggest upgrading to v3, which has resolved this issue.
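The root cause generalizes beyond JavaScript: any truthiness-based default, whether `x || fallback` or Python's `or`, can never yield a falsy configured value, so an explicit `false` is silently replaced by the default. A Python sketch of the same bug and its fix:

```python
def with_credentials_buggy(config: dict) -> bool:
    # Same shape as `editor.config.withCredentials || true`:
    # an explicit False is swallowed by the truthiness default.
    return config.get("withCredentials") or True

def with_credentials_fixed(config: dict) -> bool:
    # Fall back to True only when the key is actually absent.
    value = config.get("withCredentials")
    return True if value is None else value
```

The fixed version distinguishes "not set" from "set to False", which is exactly what the original line failed to do.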
gharchive/issue
2017-03-24T07:17:20
2025-04-01T06:40:54.179253
{ "authors": [ "liuxx001", "wangfupeng1988" ], "repo": "wangfupeng1988/wangEditor", "url": "https://github.com/wangfupeng1988/wangEditor/issues/626", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1226089645
How do you use this? Could you show us a guide on how to use this, please? It is very confusing. Please check example.py under the root folder. I mean, everything is so confusing and hard to understand, to be honest.
gharchive/issue
2022-05-05T00:23:44
2025-04-01T06:40:54.182881
{ "authors": [ "viktorkovach", "wanghaisheng" ], "repo": "wanghaisheng/youtube-auto-upload", "url": "https://github.com/wanghaisheng/youtube-auto-upload/issues/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1253151459
cookie editing getting error on upload Checklist [X] I'm reporting a bug unrelated to a specific site [X] I've verified that I'm running yt-dlp version 2022.05.18 (update instructions) or later (specify commit) [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details [X] I've checked that all URLs and arguments with special characters are properly quoted or escaped [X] I've searched the bugtracker for similar issues including closed ones. DO NOT post duplicates [X] I've read the guidelines for opening an issue Description Hey there, I am getting this error on upload. No info? Open any editor, edit cookie.json, and replace 'no_restriction' with None; that should be fine. Like this? Like this. Yeah, still the same error. Use None instead of none. Working, thanks. Let me know. Even putting 1 for policy, it's posting and the videos are private. Can you paste the content of the console? Stuck here: 1. try to find 'this is a schedule video task' Hey, Wang! I am getting the exact same Timeout 30000ms error as mentioned by @simpelecase above. 
It seems to be occurring whenever publishpolicy is 1. Is there a workaround?
What do you mean by 1. try to find "this is a schedule task"?
I need screen recording and full log for that
for the timeout issue, 99% is caused by your network quality
for 1 it means publish instantly
for 0 it means private
for 2 it means publish at your scheduled datetime
Is there somewhere in your code you could change the timeout limits of a playwright page? And I can not seem to upload a custom thumbnail.
----The thumbnail file exists and I have the correct path to it
----I did not make many changes to "one-file example.py" except for setting the path for video, cookie and thumbnail
I left tags and description empty bc I have youtube upload defaults set, and turned watcheverystep & record screen to False
Good to see you are still responsive, many other people on github simply publish their git repo and forget about it after.
I need screen recording and full log for that
I really did not intend to ask twice
1. try uninstall using pip uninstall ytb_up
2. install at the root directory: python setup.py install
3. run demo code
I reinstalled Pip, so the thumbnail part is working now. But the program still times out when publish policy is not 0.
ScreenRecording: d1db7ebc-21f2-416e-a102-0b9bfe91f2f7.webm
Log:
whether run in view mode True
start web page without proxy
DEBUG: Firefox is now running
============tags ['']
cookies existing C:\Users\maxxu\PycharmProjects\AgentRoxyProject\cookie.json
<Locator frame=<Frame name= url='https://www.youtube.com/'> selector='yt-img-shadow.ytd-topbar-menu-button-renderer > img:nth-child(1)'>
checking login status True
start change locale to english
Click your profile icon . Click Language or Location icon choose the language or location you like to use.
finish change locale to english
DEBUG: Found YouTube upload Dialog Modal
DEBUG: Trying to upload "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\Final.mp4" to YouTube...
DEBUG: Trying to detect verify...
there is no verification at all
DEBUG: Trying to set "[War Scene] The Japanese army fired planes to bomb the Chinese fleet ." as title...
click title field to input
clear existing title
filling new title
DEBUG: Trying to set "[War Scene] The Japanese army fired planes to bomb the Chinese fleet ." as description...
DEBUG: Trying to set "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\AgentRoxyThumbnail.png" as thumbnail...
DEBUG: Trying to set video to "Not made for kids"...
not made for kids task done
tags you give ['']
overwrite prefined channel tags
click show more button
DEBUG: Trying to set "" as tags...
clear existing tags
filling new tags
uploading progress check task done
next next!
next next!
next next!
DEBUG: Trying to set video visibility to public...
Error feed: Traceback (most recent call last):
  File "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\GunKingDownload.py", line 345, in <module>
    task()
  File "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\GunKingDownload.py", line 340, in task
    instantpublish()
  File "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\GunKingDownload.py", line 249, in instantpublish
    asyncio.run(upload.upload(
  File "D:\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "D:\lib\asyncio\base_events.py", line 646, in run_until_complete
    return future.result()
  File "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\venv\lib\site-packages\ytb_up\youtube.py", line 401, in upload
    await page.locator(PUBLIC_RADIO_LABEL).click()
  File "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\venv\lib\site-packages\playwright\async_api\_generated.py", line 12189, in click
    await self._impl_obj.click(
  File "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\venv\lib\site-packages\playwright\_impl\_locator.py", line 144, in click
    return await self._frame.click(self._selector, strict=True, **params)
  File "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\venv\lib\site-packages\playwright\_impl\_frame.py", line 474, in click
    await self._channel.send("click", locals_to_params(locals()))
  File "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\venv\lib\site-packages\playwright\_impl\_connection.py", line 43, in send
    return await self._connection.wrap_api_call(
  File "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\venv\lib\site-packages\playwright\_impl\_connection.py", line 370, in _
    return await result
  File "C:\Users\maxxu\PycharmProjects\AgentRoxyProject\venv\lib\site-packages\playwright\_impl\_connection.py", line 78, in inner_send
    result = next(iter(done)).result()
playwright._impl._api_types.TimeoutError: Timeout 30000ms exceeded.
=========================== logs ===========================
waiting for selector "tp-yt-paper-radio-button.style-scope:nth-child(20)"
Exception ignored in: <function BaseSubprocessTransport.__del__ at 0x00000288FEADD360>
Traceback (most recent call last):
  File "D:\lib\asyncio\base_subprocess.py", line 126, in __del__
  File "D:\lib\asyncio\base_subprocess.py", line 104, in close
  File "D:\lib\asyncio\proactor_events.py", line 108, in close
  File "D:\lib\asyncio\base_events.py", line 750, in call_soon
  File "D:\lib\asyncio\base_events.py", line 515, in _check_closed
RuntimeError: Event loop is closed
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x00000288FEADECB0>
Traceback (most recent call last):
  File "D:\lib\asyncio\proactor_events.py", line 116, in __del__
  File "D:\lib\asyncio\proactor_events.py", line 108, in close
  File "D:\lib\asyncio\base_events.py", line 750, in call_soon
  File "D:\lib\asyncio\base_events.py", line 515, in _check_closed
RuntimeError: Event loop is closed
Process finished with exit code 1
@simpelecase 18926010461 try add me
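The cookie.json fix suggested earlier in this thread can be scripted instead of edited by hand. This is a minimal sketch, not part of ytb_up; it assumes cookie.json is a JSON array of cookie objects (as exported by common cookie-editor browser extensions) where Playwright rejects the sameSite value 'no_restriction':

```python
import json

def fix_same_site(path="cookie.json"):
    """Replace the sameSite value 'no_restriction' with None in a cookie dump."""
    with open(path) as f:
        cookies = json.load(f)
    for cookie in cookies:
        # Only touch cookies carrying the offending value; leave the rest alone.
        if cookie.get("sameSite") == "no_restriction":
            cookie["sameSite"] = None
    with open(path, "w") as f:
        json.dump(cookies, f, indent=2)
```

Run it once against the exported cookie.json before starting the upload; other sameSite values are left untouched.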
gharchive/issue
2022-05-30T22:07:51
2025-04-01T06:40:54.216619
{ "authors": [ "roxydoxyGits", "simpelecase", "wanghaisheng" ], "repo": "wanghaisheng/youtube-auto-upload", "url": "https://github.com/wanghaisheng/youtube-auto-upload/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1896425891
When the canvas background image is set to a network image from a different origin than the editor, exporting throws the exception "Failed to execute 'toDataURL' on 'HTMLCanvasElement': Tainted canvases may not be exported".
There is nothing much that can be done about this.
Does this plugin support secondary development (customization)?
Secondary development won't help either; it's a browser restriction.
OK, understood. For now I will just convert the network image to Base64 and pass that in. Thanks, author. 🎊
gharchive/issue
2023-09-14T12:04:17
2025-04-01T06:40:54.219623
{ "authors": [ "wanglin2", "zhangXiaoMin007" ], "repo": "wanglin2/mind-map", "url": "https://github.com/wanglin2/mind-map/issues/332", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
776808635
Datasets May I know how to get the datasets? Sure, you can find them on Google drive. https://drive.google.com/file/d/14bdK0qd-AvLfOASjX2G-ErGtiSV4LPfH/view?usp=sharing
gharchive/issue
2020-12-31T06:09:37
2025-04-01T06:40:54.223782
{ "authors": [ "Robinson98", "wangyifan411" ], "repo": "wangyifan411/Face-Mask-Type-Detector", "url": "https://github.com/wangyifan411/Face-Mask-Type-Detector/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1417075486
Add pull request template
🤔 Not Existing Feature Request?
[X] Yes, I'm sure, this is a new requested feature!
🤔 Not an Idea or Suggestion?
[X] Yes, I'm sure, this is not an idea or suggestion!
📋 Request Details
Add a pull request template. Lots of templates are already available, you can use any one.
📜 Code of Conduct
[X] I agree to follow this project's Code of Conduct.
Do you have an example of it @theritikchoure ?
@warengonzaga , below I have mentioned a link for pull request templates. https://github.com/axolo-co/pull_request_template You can use any, I personally use Example - 6
Interesting, I'll take a look.
gharchive/issue
2022-10-20T18:28:54
2025-04-01T06:40:54.235945
{ "authors": [ "theritikchoure", "warengonzaga" ], "repo": "warengonzaga/gathertown.js", "url": "https://github.com/warengonzaga/gathertown.js/issues/48", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1220382435
Can we create multiple child webgl contexts on a parent widget?
I have trouble when I call 2 widgets which contain a webgl context on my parent widget. If I take your webgl_animation_keyframes.dart and webgl_loader_gltf.dart from your examples:
Column(
  children: [
    webgl_loader_gltf(),
    webgl_animation_keyframes(),
  ]
)
I get this result (only the second loads):
When I use 2 different animated gltf files, I have an animation conflict on the second child widget and the first is black. (Normally the character should be in the top widget and c3po in the bottom widget)
I already checked this kind of solution but it's hard to apply in a flutter/dart context: https://webglfundamentals.org/webgl/lessons/webgl-multiple-views.html
Do you have an explanation or solution? Thanks for your library 🙏
Hi @MapleNoise, flutter_gl does not support multiple opengl contexts now, but I think what you want is to use a viewport to show multiple views, and that is supported. Have a look at the threejs example https://threejs.org/examples/?q=view#webgl_multiple_views or the three_dart example https://wasabia.github.io/three_dart_example/#/examples/webgl_camera
use viewport to show multiple
It's ok for me. Don't forget to add/update the autoClear params in the render() function. Works fine with .gltf:
render() {
  int _t = DateTime.now().millisecondsSinceEpoch;
  final _gl = three3dRender.gl;
  renderer!.setViewport(0, 0, 100, 100);
  renderer!.render(scene, camera);
  renderer!.autoClear = false;
  renderer!.setViewport(100, 0, 100, 100);
  renderer!.render(scene, camera2);
  renderer!.autoClear = false;
  renderer!.setViewport(200, 0, 100, 100);
  renderer!.render(scene, camera3);
  renderer!.autoClear = true;
  int _t1 = DateTime.now().millisecondsSinceEpoch;
  if (verbose) {
    print("render cost: ${_t1 - _t} ");
    print(renderer!.info.memory);
    print(renderer!.info.render);
  }
  _gl.flush();
  if (verbose) print(" render: sourceTexture: $sourceTexture ");
  if (!kIsWeb) {
    three3dRender.updateTexture(sourceTexture);
  }
}
gharchive/issue
2022-04-29T09:48:06
2025-04-01T06:40:54.315450
{ "authors": [ "MapleNoise", "wasabia" ], "repo": "wasabia/three_dart", "url": "https://github.com/wasabia/three_dart/issues/51", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1307796493
BarItem
Can I customize BarItem to use not only an icon but also a jpg or svg?
yes, you can convert SVG to iconData
gharchive/issue
2022-07-18T11:15:21
2025-04-01T06:40:54.388713
{ "authors": [ "krisnachy", "watery-desert" ], "repo": "watery-desert/water_drop_nav_bar", "url": "https://github.com/watery-desert/water_drop_nav_bar/issues/6", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
165789978
Can't make it work
Hi, I'm trying to validate all the .html files under the /dist folder, but apparently I can't make this task work:
gulp.task('html', function() {
  return gulp.src("dist/**/*.html")
    .pipe(validator())
    .pipe(gulp.dest('dist/'));
});
I constantly get:
[My directory]\node_modules\gulp-util\lib\PluginError.js:73
if (!this.message) throw new Error('Missing error message');
^
Error: Missing error message
    at new PluginError ([My directory]\node_modules\gulp-util\lib\PluginError.js:73:28)
    at [My directory]\node_modules\gulp-html\index.js:33:17
    at ChildProcess.exithandler (child_process.js:209:5)
    at emitTwo (events.js:100:13)
    at ChildProcess.emit (events.js:185:7)
    at maybeClose (internal/child_process.js:850:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:215:5)
What am I missing? Help is appreciated, thanks!
P.S. Node version is 5.8.0
Hello! Thanks for reporting this. This gulp plugin requires the java command. Could you please try to display the version of Java?
$ java -version
Hi, thanks for the reply. I've checked and it seems that I was running an older version of Java 7, but now I've just updated to the latest 8 version.
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b15)
So, now using the vnu from the CLI seems to work just fine:
java -Xss512k -jar vnu/vnu.jar --skip-non-html dist/
The only thing is that I'm still getting the above error message for gulp-html. I guess I'm doing something wrong...
Hmm, I think it's a kind of bug in gulp-html, because this plugin unfolds the html file due to the gulp way.
I see, thanks for taking a look. Looking forward to a fix maybe :)
Hey, is there any news on this? :)
@catalinred check my pull request
gharchive/issue
2016-07-15T13:45:18
2025-04-01T06:40:54.393013
{ "authors": [ "Vernando05", "catalinred", "watilde" ], "repo": "watilde/gulp-html", "url": "https://github.com/watilde/gulp-html/issues/13", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
624338057
Setup rebar3_format https://github.com/AdRoll/rebar3_format 4feb840bde0087d15251ce627ea141a0a2ed052c
gharchive/issue
2020-05-25T14:35:15
2025-04-01T06:40:54.401422
{ "authors": [ "marianoguerra", "wattlebirdaz" ], "repo": "wattlebirdaz/rclref", "url": "https://github.com/wattlebirdaz/rclref/issues/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1047581175
Fix Fullscreen
Why
Currently, when trying to make the current window fullscreen it throws an exception:
Command FullScreenWindow not valid for browser Marionette::WebDriver (Exception)
  from lib/marionette/src/marionette/web_driver.cr:84:9 in 'execute'
  from lib/marionette/src/marionette/session.cr:833:16 in 'execute'
  from lib/marionette/src/marionette/window.cr:142:7 in 'execute'
  from lib/marionette/src/marionette/window.cr:134:5 in 'execute'
  from lib/marionette/src/marionette/window.cr:131:7 in 'fullscreen'
  from lib/flux/src/flux.cr:27:5 in 'fullscreen'
  from spec/flows/authorization_code_flux.cr:15:7 in 'call'
  from spec/flows/authorization_code_flux.cr:6:5 in 'flow'
  from spec/helpers/token_helper.cr:15:26 in 'prepare_code_challenge_url'
  from spec/token_spec.cr:11:45 in '->'
  from /usr/share/crystal/src/primitives.cr:266:3 in 'internal_run'
  from /usr/share/crystal/src/spec/example.cr:33:16 in 'run'
  from /usr/share/crystal/src/spec/context.cr:18:23 in 'internal_run'
  from /usr/share/crystal/src/spec/context.cr:330:7 in 'run'
  from /usr/share/crystal/src/spec/context.cr:18:23 in 'internal_run'
  from /usr/share/crystal/src/spec/context.cr:330:7 in 'run'
  from /usr/share/crystal/src/spec/context.cr:18:23 in 'internal_run'
  from /usr/share/crystal/src/spec/context.cr:147:7 in 'run'
  from /usr/share/crystal/src/spec/dsl.cr:201:7 in '->'
  from /usr/share/crystal/src/primitives.cr:266:3 in 'run'
  from /usr/share/crystal/src/crystal/main.cr:45:14 in 'main'
  from /usr/share/crystal/src/crystal/main.cr:119:3 in 'main'
  from __libc_start_main
  from _start
  from ???
Side Effects
When calling current_window.fullscreen, no exception is thrown and the window is resized to full screen.
Thanks!
gharchive/pull-request
2021-11-08T15:22:00
2025-04-01T06:40:54.403228
{ "authors": [ "eliasjpr", "watzon" ], "repo": "watzon/marionette", "url": "https://github.com/watzon/marionette/pull/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2449020770
Hide Unused Columns from Logbook
By submitting feature requests, you have the opportunity to contribute your ideas and inspiration to the Wavelog Dev-Team. Please be aware that not every feature request will be addressed or answered.
What Feature is Missing?
A way to hide unused/unwanted columns from the logbook. A second part is to then hide the data from the logbook dashboard if the column that is referenced is hidden.
Why is This Feature Important?
As an example, I use LotW and QRZ in place of sending a paper card. As such, I'd like to have an option to hide this column from the logbook as I don't need it, and seeing the red arrows makes me think the contact has not been confirmed. Another example is the QSL via column and QSL Msg columns. As part of this, removing these options from the overall tracking on the main logbook dashboard would allow for a cleaner dashboard to only show what each operator would like to see/uses.
Is This Feature Personal or Beneficial to Others as Well?
I believe this feature will benefit all users as not everyone uses all the columns, and by allowing user selection on what to hide, Wavelog becomes more customizable to suit each operator's needs. It would also allow the logbook to be easier to read on mobile devices.
Hi @stevepanaghi - is this still an issue? You can disable the QSL-Methods for Overview and search at Usersettings. This simply disables the icons in the UI, and preselects the chosen Methods at most of the analytics views. Disabling doesn't mean that the sync stops (if sync was set-up). Screenshot: https://github.com/user-attachments/assets/9c9144c9-4079-43b6-b741-ece3da7357b3
Looks like that solved my issue/request!
gharchive/issue
2024-08-05T16:53:37
2025-04-01T06:40:54.412145
{ "authors": [ "int2001", "stevepanaghi" ], "repo": "wavelog/wavelog", "url": "https://github.com/wavelog/wavelog/issues/697", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
852243563
Allow EdgeText to render child elements
Hello, me again! Going one step further than in https://github.com/wbkd/react-flow/issues/1045, it would be great if the EdgeText component could render the specified children inside the wrapping <g> element (eg: just after the <rect> and <text>). Eg:
<EdgeText
  x={x}
  y={y}
  label="Hello world"
>
  <circle cx="40" cy="40" r="25" />
</EdgeText>
The reason I'm asking is that I'm going into some customisation that requires drawing some extra gimmicks next to the edge text. Since I'm already happy to use the EdgeText component, which handles the drawing of text and background rectangle perfectly, it would be ideal to keep using it instead of duplicating its code inside my own component.
Note: I'm happy to handle the positioning of the child elements myself
hey @gfox1984 :) good idea, I will add it!
Actually, I'm thinking it would be helpful to raise an event on EdgeText when you update its state (setEdgeTextBbox), along with the rectangle you're using. It would save the hassle of recomputing it and getting the timing right when we want to re-position the children...
Released in 9.5.1. The children should re-render when we update the sizes.
gharchive/issue
2021-04-07T10:04:40
2025-04-01T06:40:54.760766
{ "authors": [ "gfox1984", "moklick" ], "repo": "wbkd/react-flow", "url": "https://github.com/wbkd/react-flow/issues/1073", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1463635996
reproduce with berlin/brandenburg data
Remaining issue to figure out: The Bing Key is exposed in the source code at the moment, which is suboptimal.
Oops... sorry guys. I'm in the wrong place here.
gharchive/pull-request
2022-11-24T17:14:50
2025-04-01T06:40:54.831988
{ "authors": [ "sophiamersmann" ], "repo": "wdr-data/reichweitenchecker", "url": "https://github.com/wdr-data/reichweitenchecker/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
131788936
Have three textviews and an image view, hence a custom listview
We have four views, and when we drag right to left, we randomly get a motion:CANCEL and the swipe aborts. Why? It seems to be because we are leaving one view and entering another. Thanks.
Do any of your views react to scrolls or swipes as well? As long as your views are static ones like textviews, buttons or (clipped) images, it should work properly.
Fixed, thanks!
gharchive/issue
2016-02-05T23:32:50
2025-04-01T06:40:54.834609
{ "authors": [ "buryware", "wdullaer" ], "repo": "wdullaer/SwipeActionAdapter", "url": "https://github.com/wdullaer/SwipeActionAdapter/issues/43", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
202564082
Api/flux prefix migration
Make flux respond both at / and at /api/flux so we can swap authfe over, then remove support for /. Part of https://github.com/weaveworks/flux/issues/217
Will this also need changes to standalone instructions?
at this stage no, but quite right.
In trying to figure out how to migrate this, I've realized that changing the router also changes the client. So, if we change the router to have a prefixed subrouter, the client will begin adding /api/flux to each request route. This makes the migration (as far as what we tell users to do, and whether they should upgrade fluxctl) really hard to predict.
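As a language-agnostic illustration of the transition scheme (flux itself is written in Go, and the route names below are invented for the sketch), the idea is to register every handler under both the legacy root and the new /api/flux prefix, so old and new clients keep working until root support is removed:

```python
def build_routes(handlers, prefix="/api/flux"):
    """Register each handler at its legacy path and at the prefixed path.

    Hypothetical sketch of the dual-prefix migration idea, not flux code.
    """
    routes = {}
    for path, handler in handlers.items():
        routes[path] = handler           # legacy location, kept during migration
        routes[prefix + path] = handler  # new prefixed location
    return routes
```

Once all clients send the prefixed form, the legacy registrations can be dropped in a later release.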
gharchive/pull-request
2017-01-23T15:24:46
2025-04-01T06:40:54.862988
{ "authors": [ "paulbellamy", "squaremo" ], "repo": "weaveworks/flux", "url": "https://github.com/weaveworks/flux/pull/397", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
243395038
Gather Weave Net plugin and proxy info from report
Instead of using Docker, because after Weave Net 2.0 there are no proxy nor plugin containers. This has the drawback of not detecting the plugin/proxy in systems running Weave Net < 2.0, but I think we can live with it. Fixes #2634
@2opremio did you check that this gets rid of the docker daemon log entries shown in #2628? Both for Weave Net 2.0 and pre-2.0 (not essential, but would be lovely).
did you check that this gets rid of the docker daemon log entries shown
Yes it does.
gharchive/pull-request
2017-07-17T13:26:44
2025-04-01T06:40:54.866755
{ "authors": [ "2opremio", "rade" ], "repo": "weaveworks/scope", "url": "https://github.com/weaveworks/scope/pull/2719", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
489060633
WeaveNet issue. Continuously creating network breakages between kubernetes pods
HI, I am running a simple Kubernetes cluster with a single node that acts as both (Master & worker) as well.......
1). In that I am facing an issue with WeaveNet Networking in kubernetes. Suddenly my pods are not able to communicate with mongoDB running on another host, and both are in the same zone...... But I am able to telnet from the server.
[root@prod-app1 weave]# curl http://172.31.1.231:27017
It looks like you are trying to access MongoDB over HTTP on the native driver port.
[root@prod-app1 weave]#
Only after restarting the weavenet pod are the application pods able to connect to mongodb.
K8s version: 1.15
WeaveNet Version: 2.5.2
uname -a: Linux prod-app1.gati.com 3.10.0-862.el7.x86_64
Docker version: 18.09.7
Can anyone please help me to resolve the issue. Thank you
Due to the WeaveNet issue the pods suddenly stop working and go into CrashLoopBackOff state
[root@prod-app1 ~]# kubectl get pod -n capiot
NAME                     READY   STATUS             RESTARTS   AGE
b2b-6d44c5d847-82zgp     1/1     Running            0          28h
b2bgw-66f9f95c85-j9pxf   0/1     CrashLoopBackOff   12         4h12m
dm-5cbd787874-bp6ld      1/1     Running            0          28h
gw-59f69487f7-8f7j8      0/1     CrashLoopBackOff   7          28h
mon-5fb8d485cb-jrt76     0/1     CrashLoopBackOff   8          28h
nats-c666bb65b-qlg6g     1/1     Running            0          28h
ne-f9b6468cc-tx74j       0/1     CrashLoopBackOff   7          28h
nginx-859c9759f8-2jqj6   1/1     Running            0          28h
pm-64f765f7f5-fngbw      0/1     CrashLoopBackOff   7          28h
redis-547fdbb749-nfw5r   1/1     Running            0          28h
sec-5bcb5f85fb-752xv     0/1     CrashLoopBackOff   7          4h12m
sm-59677d5bcc-l72g8      0/1     Running            7          28h
user-6c886f554-vcwl2     0/1     CrashLoopBackOff   8          28h
wf-d85b7c498-gpkpp       0/1     CrashLoopBackOff   9          28h
[root@prod-app1 ~]#
When looking into one of the pod logs it gives the following error "Not able to connect to Mongodb"
[root@prod-app1 ~]# kubectl logs -f -n capiot sec-5bcb5f85fb-752xv
WARNING: No configurations found in configuration directory:/app/config
WARNING: To disable this warning set SUPPRESS_NO_CONFIG_WARNING in the environment.
[2019-09-04T12:12:57.794] [INFO] [security] [sec-5bcb5f85fb-752xv] - Server started on port 10007
[2019-09-04T12:13:02.443] [ERROR] [odp-utils-nats-streaming] - Could not connect to server: Error: getaddrinfo EAI_AGAIN nats.capiot:4222
[2019-09-04T12:13:02.577] [ERROR] security [sec-5bcb5f85fb-752xv] - ERROR :: Unable to connect to Kubernetes API server
[2019-09-04T12:13:02.579] [INFO] security [sec-5bcb5f85fb-752xv] -
[2019-09-04T12:13:13.642] [ERROR] [security] [sec-5bcb5f85fb-752xv] - ------------------------- Database connection lost -------------------------
[2019-09-04T12:13:13.644] [ERROR] [security] [sec-5bcb5f85fb-752xv] - { MongoNetworkError: failed to connect to server [172.31.1.231:27019] on first connect [MongoNetworkError: connect EHOSTUNREACH 172.31.1.231:27019]
    at Pool.<anonymous> (/app/node_modules/mongodb-core/lib/topologies/server.js:564:11)
    at emitOne (events.js:116:13)
    at Pool.emit (events.js:211:7)
    at Connection.<anonymous> (/app/node_modules/mongodb-core/lib/connection/pool.js:317:12)
    at Object.onceWrapper (events.js:317:30)
    at emitTwo (events.js:126:13)
    at Connection.emit (events.js:214:7)
    at Socket.<anonymous> (/app/node_modules/mongodb-core/lib/connection/connection.js:246:50)
    at Object.onceWrapper (events.js:315:30)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at emitErrorNT (internal/streams/destroy.js:66:8)
    at _combinedTickCallback (internal/process/next_tick.js:139:11)
    at process._tickDomainCallback (internal/process/next_tick.js:219:9)
  name: 'MongoNetworkError',
  errorLabels: [ 'TransientTransactionError' ],
  [Symbol(mongoErrorContextSymbol)]: {} }
[root@prod-app1 ~]#
But I am able to telnet from the host machine.
[root@prod-app1 ~]# curl http://172.31.1.231:27017
It looks like you are trying to access MongoDB over HTTP on the native driver port.
[root@prod-app1 ~]#
After restarting the weavenet pod it is working fine and the application pods are able to connect to the mongodb...... But this issue only gets resolved for a specific amount of time.
After some use it is the same issue again.
[root@prod-app1 ~]# kubectl get pod -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-7xqff                     1/1     Running   4          5d5h
coredns-5c98db65d4-wpkkl                     1/1     Running   4          5d5h
etcd-prod-app1.gati.com                      1/1     Running   4          5d5h
kube-apiserver-prod-app1.gati.com            1/1     Running   4          5d5h
kube-controller-manager-prod-app1.gati.com   1/1     Running   4          5d5h
kube-proxy-g8p52                             1/1     Running   1          5d4h
kube-proxy-wt2kr                             1/1     Running   4          5d5h
kube-scheduler-prod-app1.gati.com            1/1     Running   4          5d5h
weave-net-qcftw                              2/2     Running   0          4h14m
[root@prod-app1 ~]# kubectl delete pod -n kube-system weave-net-qcftw
pod "weave-net-qcftw" deleted
[root@prod-app1 ~]#
Now all the pods are in Running state and the issue is resolved for the moment. It will come again after some time..
[root@prod-app1 ~]# kubectl get pod -n capiot
NAME                     READY   STATUS    RESTARTS   AGE
b2b-6d44c5d847-j7fnr     1/1     Running   0          95s
b2bgw-66f9f95c85-qlzn8   1/1     Running   3          95s
dm-5cbd787874-c6n8f      1/1     Running   0          95s
gw-59f69487f7-qlskd      1/1     Running   0          95s
mon-5fb8d485cb-hmsms     1/1     Running   0          95s
nats-c666bb65b-gwc7z     1/1     Running   0          95s
ne-f9b6468cc-z9mfn       1/1     Running   0          95s
nginx-859c9759f8-28jlq   1/1     Running   0          95s
pm-64f765f7f5-ghz8b      1/1     Running   0          95s
redis-547fdbb749-pfhn9   1/1     Running   0          94s
sec-5bcb5f85fb-4w4nn     1/1     Running   0          94s
sm-59677d5bcc-7jgwz      1/1     Running   0          94s
user-6c886f554-z629v     1/1     Running   0          94s
wf-d85b7c498-nx2lv       1/1     Running   1          94s
So, please can anyone help me to resolve the issue. Thank you.
Weave-net simply sets up iptables to masquerade the outbound traffic from the pods that is not destined for other pods (i.e. traffic that leaves weave's overlay network). Check if the traffic is leaving the node but getting dropped in between the nodes? Does your node have multiple interfaces?
Can you tell me how do we know the node has multiple interfaces? I am running Kubernetes in Offline mode and using a separate volume for the working Directory....
Iptables configuration in /etc/sysctl.d/k8s.conf is
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@app-uat ~]# lsmod | grep br_netfilter
br_netfilter           22256  1 xt_physdev
bridge                146976  2 br_netfilter,ebtable_broute
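To check whether the traffic actually leaves the node, one quick test is to run the same TCP probe once on the host and once from inside an affected pod (for example via kubectl exec) and compare the results. This is a hedged, generic sketch, not part of Weave Net; the host and port are whatever MongoDB endpoint is being debugged:

```python
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Return True if a plain TCP connect to host:port succeeds within timeout."""
    try:
        # create_connection handles name resolution and the connect() in one call,
        # so a DNS failure (like the EAI_AGAIN in the logs) also shows up as False.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the probe succeeds on the node but fails inside the pod, the packets are being dropped between the pod network and the host's uplink, which points at the masquerade/iptables path rather than at MongoDB itself.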
Iptables configuration in /etc/sysctl.d/k8s.conf is:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

[root@app-uat ~]# lsmod | grep br_netfilter
br_netfilter          22256  1 xt_physdev
bridge               146976  2 br_netfilter,ebtable_broute

Can you tell me how we can know whether the node has multiple interfaces?

If you run ip link show you should get a list of devices; then, if you discount the loopback device lo, any bridges such as docker0 or weave, and any virtual devices beginning with v, whatever is left are the interfaces on your node.

Below is the output of the ip link show command. Here I am able to see only one interface.

[root@app-uat yamlfile]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:50:56:a9:bf:67 brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:6b:4e:6e:9e brd ff:ff:ff:ff:ff:ff
516: vethwepl2b8a17b@if515: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 2a:dc:ab:02:37:46 brd ff:ff:ff:ff:ff:ff link-netnsid 3
4: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 3a:59:78:c1:f2:ff brd ff:ff:ff:ff:ff:ff
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:80:4d:e6:0e:34 brd ff:ff:ff:ff:ff:ff
7: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 8a:e3:c3:1c:f7:7f brd ff:ff:ff:ff:ff:ff
9: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP mode DEFAULT group default
    link/ether 0e:47:11:ff:c4:52 brd ff:ff:ff:ff:ff:ff
522: vethwepl4578f4a@if521: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 92:8d:a9:09:8d:33 brd ff:ff:ff:ff:ff:ff link-netnsid 6
10: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 66:07:05:f5:ea:e4 brd ff:ff:ff:ff:ff:ff
524: vethwepl452680a@if523: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 02:43:a1:a3:d7:83 brd ff:ff:ff:ff:ff:ff link-netnsid 17
526: vethwepl58ab21b@if525: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 32:db:10:1b:ce:92 brd ff:ff:ff:ff:ff:ff link-netnsid 16
528: vethwepl705aa52@if527: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether be:d0:fe:ee:d3:d3 brd ff:ff:ff:ff:ff:ff link-netnsid 19
530: vethwepl604c904@if529: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 7e:a3:8c:79:6d:8b brd ff:ff:ff:ff:ff:ff link-netnsid 20
532: vethwepl07bcd2c@if531: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether e6:70:f1:51:e5:dd brd ff:ff:ff:ff:ff:ff link-netnsid 22
534: vethwepl66458ae@if533: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether be:be:8d:43:c0:2e brd ff:ff:ff:ff:ff:ff link-netnsid 23
536: vethwepl9aed25b@if535: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether d2:e3:36:74:d4:9b brd ff:ff:ff:ff:ff:ff link-netnsid 24
538: vethweplc460359@if537: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether ae:b2:90:c3:0c:29 brd ff:ff:ff:ff:ff:ff link-netnsid 27
540: vethwepled86a8e@if539: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether ae:4f:a6:b9:d2:56 brd ff:ff:ff:ff:ff:ff link-netnsid 36
542: vethwepld2ac26e@if541: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether ce:a0:9b:a8:f1:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 37
544: vethwepl645331d@if543: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 0a:f4:f0:83:83:b8 brd ff:ff:ff:ff:ff:ff link-netnsid 38
546: vethwepld97fcb6@if545: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 22:c1:31:15:c9:7f brd ff:ff:ff:ff:ff:ff link-netnsid 39
548: vethwepl899db35@if547: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 3a:92:b8:24:6a:89 brd ff:ff:ff:ff:ff:ff link-netnsid 40
550: vethwepl7660b31@if549: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 2e:47:64:27:ea:28 brd ff:ff:ff:ff:ff:ff link-netnsid 1
552: vethwepld7e9f57@if551: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether c2:75:e9:27:65:53 brd ff:ff:ff:ff:ff:ff link-netnsid 2
554: vethwepl1346b4b@if553: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 7a:92:2e:ff:ea:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 4
556: vethwepl9f6de44@if555: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether c6:cd:98:55:a0:fc brd ff:ff:ff:ff:ff:ff link-netnsid 5
350: veth4c0239b@if349: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 46:8c:26:49:52:75 brd ff:ff:ff:ff:ff:ff link-netnsid 0
373: vethweple2775c0@if372: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether ce:45:07:f8:4c:2c brd ff:ff:ff:ff:ff:ff link-netnsid 10
377: vethwepld9432f5@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether ca:22:22:e9:3f:38 brd ff:ff:ff:ff:ff:ff link-netnsid 12
488: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc noqueue master datapath state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 92:c3:b7:4e:5e:b2 brd ff:ff:ff:ff:ff:ff
494: vethwepl2d035bd@if493: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 66:05:70:f8:29:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 11
496: vethwepl7457b53@if495: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether ba:e6:8c:42:05:29 brd ff:ff:ff:ff:ff:ff link-netnsid 26
498: vethweplf944488@if497: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 9e:e6:86:57:02:ca brd ff:ff:ff:ff:ff:ff link-netnsid 29
506: vethweplb38c623@if505: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether 9a:1a:79:f6:f8:c6 brd ff:ff:ff:ff:ff:ff link-netnsid 32
510: vethwepl8ea2003@if509: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether c6:08:d4:72:7c:99 brd ff:ff:ff:ff:ff:ff link-netnsid 35
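The rule of thumb quoted in this issue (discount lo, bridges such as docker0 and weave, and virtual devices beginning with v) can be sketched as a small filter. The prefix list below is an assumption taken from the device names in this particular output, not a general classification:

```javascript
// Filter `ip link show` style output down to likely physical interfaces.
// The prefix list is an assumption based on the devices seen in this issue
// (docker bridge, weave devices, veth pairs, vxlan, datapath, dummy).
const VIRTUAL_PREFIXES = ['docker', 'weave', 'veth', 'vxlan', 'datapath', 'dummy'];

function physicalInterfaces(ipLinkOutput) {
  return ipLinkOutput
    .split('\n')
    // Device lines start with "N: name:"; indented continuation lines do not.
    .map((line) => /^\d+:\s+([^:@\s]+)/.exec(line))
    .filter(Boolean)
    .map((match) => match[1])
    .filter(
      (name) => name !== 'lo' && !VIRTUAL_PREFIXES.some((p) => name.startsWith(p)),
    );
}
```

Fed the output from this node, the filter leaves only ens224, matching the observation that the node has a single physical interface.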
gharchive/issue
2019-09-04T09:56:29
2025-04-01T06:40:54.893048
{ "authors": [ "KrishnaKoppineni", "bboreham", "murali-reddy" ], "repo": "weaveworks/weave", "url": "https://github.com/weaveworks/weave/issues/3696", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
227097766
WIP: In the build container, request install/upgrade of openssl

It seems that the install of libssl causes builds to break, because Python can no longer load the SSL code.

Example: https://circleci.com/gh/weaveworks/weave/8855

Replaced by #2940
gharchive/pull-request
2017-05-08T16:14:59
2025-04-01T06:40:54.895262
{ "authors": [ "bboreham" ], "repo": "weaveworks/weave", "url": "https://github.com/weaveworks/weave/pull/2937", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1376550796
[FAD-95] IPFS

Description
Containerized IPFS and file uploader. Read the packages/ipfs/README.md and I can demo it on Monday before you two look at the PR.

Checklist
[ ] CHANGELOG has been updated (if appropriate)
[ ] Environment variables updated (if appropriate)

Closing due to the changes when implementing [FAD-108]
gharchive/pull-request
2022-09-16T22:18:39
2025-04-01T06:40:54.897657
{ "authors": [ "sachasmart-weavik" ], "repo": "weavik/free-artist-dao", "url": "https://github.com/weavik/free-artist-dao/pull/26", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1650862482
fix(doc-core): When the close action of the search box is triggered, …

Description
Global search box: when the user clicks close directly, the search content should be cleared.

Related Issue

Types of changes
[ ] Docs change / Dependency upgrade
[x] Bug fix
[ ] New feature / Improvement
[ ] Refactoring
[ ] Breaking change

Checklist
[ ] I have added changeset via pnpm run change.
[ ] I have updated the documentation.
[ ] I have added tests to cover my changes.

Thank you
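A minimal sketch of the behavior described in this fix (the names here are illustrative, not the actual doc-core implementation): the close handler should reset the query as well as the visibility flag, so reopening the search box starts from an empty input.

```javascript
// Hypothetical search-box state holder; the fix is the extra reset in close().
function createSearchBox() {
  const state = { open: false, query: '' };
  return {
    state,
    open() { state.open = true; },
    setQuery(q) { state.query = q; },
    close() {
      state.open = false;
      state.query = ''; // clear the search content when close is triggered
    },
  };
}
```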
gharchive/pull-request
2023-04-02T08:41:22
2025-04-01T06:40:54.900633
{ "authors": [ "niaogege", "sanyuan0704" ], "repo": "web-infra-dev/modern.js", "url": "https://github.com/web-infra-dev/modern.js/pull/3323", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1915905291
Release v2.36.0

What's Changed

New Features 🎉
feat(module-tools): improve logs in watch mode by @chenjiahan in https://github.com/web-infra-dev/modern.js/pull/4729
feat(builder): improve time logs format by @chenjiahan in https://github.com/web-infra-dev/modern.js/pull/4702
feat(builder): include tslib in lib-polyfill.js by @chenjiahan in https://github.com/web-infra-dev/modern.js/pull/4712

Bug Fixes 🐞
fix(server): use cjs format hmr-client to fix hmr issue by @chenjiahan in https://github.com/web-infra-dev/modern.js/pull/4719
fix(module-tools): move init watcher from onStart hook to createCompiler by @10Derozan in https://github.com/web-infra-dev/modern.js/pull/4733
fix(plugin-garfish): only override assetPrefix default value by @chenjiahan in https://github.com/web-infra-dev/modern.js/pull/4725
fix(plugin-proxy): failed to run networksetup command in Windows by @chenjiahan in https://github.com/web-infra-dev/modern.js/pull/4700
fix(builder): mismatched directory name containing node_modules by @chenjiahan in https://github.com/web-infra-dev/modern.js/pull/4720
fix(main-doc): fixed wrong ports in MFE docs by @kirillbashtenko in https://github.com/web-infra-dev/modern.js/pull/4722

Docs update 📄
docs: add descriptions for AppContext properties in https://github.com/web-infra-dev/modern.js/pull/4726

Other Changes
chore(builder): simplify assets rule by @9aoy in https://github.com/web-infra-dev/modern.js/pull/4701
chore(builder): update rspack to 0.3.5 by @9aoy in https://github.com/web-infra-dev/modern.js/pull/4730
chore(builder): use rspack.xxxPlugin instead of builtins configuration by @9aoy in https://github.com/web-infra-dev/modern.js/pull/4728
chore(module-tools): bump swc-plugins 0.6.4, remove unused deps by @chenjiahan in https://github.com/web-infra-dev/modern.js/pull/4709
refactor(plugin-module): use buildConfig.hooks to reimplement each plugin's functionality by @10Derozan in https://github.com/web-infra-dev/modern.js/pull/4651
chore(runtime): remove unused redux-logger dependencies by @chenjiahan in https://github.com/web-infra-dev/modern.js/pull/4713
refactor(module-tools): by @10Derozan in https://github.com/web-infra-dev/modern.js/pull/4651
  - merge libuild into module tools; add buildConfig.hooks to support load, transform and renderChunk
  - support buildConfig.tsconfig to refine the scenarios for custom tsconfig; use it to replace dts.tsconfigPath
  - disable buildConfig.transformLodash by default: this optimisation was introduced in version 2.22.0 to reduce code size by modularising lodash imports, but it can also cause compatibility issues, so version 2.32.0 added a transformLodash option to disable it manually. In this version the optimisation is off by default, and lodash is no longer processed separately.
  - only use the swc transform when transformImport, transformLodash or externalHelpers is enabled: swc conversion was introduced in version 2.16.0, but the implementation still has some problems, such as format cjs lacking "Annotate the CommonJS export names for ESM import in node" and poor sourceType commonjs support. In this version swc conversion is no longer used across the board; the various limitations and checks are removed, and swc only supplements specific features.
  - remove unused dependencies and improve code quality
  - support a debug mode that prints debug logs
  - fix some css module bugs
  - support buildConfig.jsx: preserve
  - support glob input in the js and dts generators
  - support banner and footer

The v2.36.0 version will be released on October 14th as we are on vacation.
gharchive/pull-request
2023-09-27T16:07:42
2025-04-01T06:40:54.927445
{ "authors": [ "caohuilin", "chenjiahan" ], "repo": "web-infra-dev/modern.js", "url": "https://github.com/web-infra-dev/modern.js/pull/4739", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2745003744
[Feature]: Support asset module

What problem does this feature solve?

Bundle
[ ] support preserve relative path in publicPath: auto https://github.com/web-infra-dev/rspack/issues/8748#issuecomment-2548493524
[ ] turn on output.publicPath: 'auto' for bundle
[ ] support module.generator.experimentalLibPreserveImport in https://github.com/web-infra-dev/rspack/pull/8724

Bundleless
[ ] close https://github.com/web-infra-dev/rspack/issues/8748
[ ] turn on module.generator['asset'].publicPath: 'auto' for bundleless
[ ] support module.generator['asset'].experimentalLibReExport in https://github.com/web-infra-dev/rspack/pull/8724

What does the proposed API look like?

input
└── src
    ├── assets
    │   └── react.svg // <--
    └── index.tsx

output
./dist
├── esm
│   ├── assets
│   │   └── react.mjs // <--
│   ├── index.d.ts
│   └── index.mjs
└── static/svg
    └── react.svg // <--

// dist/esm/assets/react.mjs
import url from '../../../static/svg/react.svg';
export default url;

related issue:
https://github.com/web-infra-dev/rslib/issues/230
https://github.com/web-infra-dev/rslib/issues/199

When will this feature be released?

When will this feature be released?

There's no exact time. Check this issue to track.
gharchive/issue
2024-12-17T13:52:18
2025-04-01T06:40:54.933978
{ "authors": [ "SoonIter", "Timeless0911", "noshower" ], "repo": "web-infra-dev/rslib", "url": "https://github.com/web-infra-dev/rslib/issues/570", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1702906579
chore: add rust hot update test feature

Related issue (if exists)

Summary

🤖 Generated by Copilot at 50a05bb

This pull request adds a new function test_hmr_fixture to the rspack_testing crate, which tests the hot module reloading (HMR) feature of the rspack tool. It also updates and adds some examples to demonstrate and compare the HMR feature with the normal build feature. The examples use hard-coded fixture paths and console log statements.

Walkthrough

🤖 Generated by Copilot at 50a05bb

Add a new function test_hmr_fixture to the rspack_testing crate that tests the hot module replacement (HMR) feature of the compiler for a given fixture (link, link, link)
Add two examples of how to use the test_hmr_fixture function in the test-cli.rs and test-hmr-cli.rs files (link, link, link)
Add a new test.config.json file to the simple example to provide some configuration options for the testing framework or the compiler (link)
Add some console log statements to the index.js file in the simple example to demonstrate or debug the HMR feature or the module caching behavior (link, link)
Add the expected output of the compiler for the simple example in the main.js file (link)

Do we need hot update tests for Rust? I ask because I don't know how to write hot update tests in Rust, given the lack of tooling for this. @ahabhgk @h-a-n-a

I don't suggest adding HMR tests on the Rust side; HMR can't work without @rspack/dev-server, which is written in JS, so normally we test HMR on the JS side.

Maybe there is something wrong with the description of my proposal. I want to add a rebuild test method to the Rust layer, driven by changed and removed files, to make it easier to test possible problems in the rebuild process.

IMO tests on the Rust side enable a better debugging experience, so a rebuild test on the Rust side is nice to have. You can check out packages/rspack/tests/WatchTestCases.test.ts for reference; it's written in JS, but the main idea is using writeFile to trigger the rebuild, and I think we can do the same on the Rust side.

So it makes sense, right? I'm going to follow up. Thank you!
gharchive/pull-request
2023-05-10T00:12:10
2025-04-01T06:40:54.944846
{ "authors": [ "ahabhgk", "hyf0", "suxin2017" ], "repo": "web-infra-dev/rspack", "url": "https://github.com/web-infra-dev/rspack/pull/3091", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2229130109
Generate .dist.yml where compat_features are sourced from BCD

These are cases where the source YAML doesn't include compat_features, but the generated .dist.yml does, based on BCD tags.

@foolip rebase needed here, otherwise ready to go. Thank you!
gharchive/pull-request
2024-04-06T07:53:06
2025-04-01T06:40:54.946312
{ "authors": [ "ddbeck", "foolip" ], "repo": "web-platform-dx/web-features", "url": "https://github.com/web-platform-dx/web-features/pull/797", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1458630752
[CHECKLIST] Implement successful deposit flow

Overview
The goal of this checklist is to create a comprehensive list of tasks that collectively make up the successful and unsuccessful deposit flow for dApp users.

Deposit Flow
Note: Assumes a user has already connected their wallet and created their note account.

Deposit Scene-1 | Data Input
Selects token to deposit into bridge - if non-bridged asset (e.g. webbETH) then force deposit and wrap flow
Select destination chain
Select amount to deposit

Deposit Scene-2 | Confirmation
User confirms they have copied the new spend note
User selects 'Deposit'

Deposit Scene-3 | Deposit In-progress
Display the progress of the deposit
User can select 'New Transaction' that will move progress view into accordion component to the right side and display deposit scene-1

Right-sided component

Task Checklist
[x] https://github.com/webb-tools/webb-dapp/issues/705
[ ] https://github.com/webb-tools/webb-dapp/issues/717
[ ] [TASK] Fix notification card for deposits and wrap and deposit
[ ] [TASK] Validating deposit inputs (e.g. balance)
[x] [TASK] Improve leaf fetching and fix caching
[x] Implement Deposit Scene-1 | Data Input
[x] Implement Deposit Scene-2 | Confirmation
[x] Implement Deposit Scene-3 | Deposit In-progress
[x] Implement successful Deposit transaction
[x] Test deposits for each chain with bridged asset
[x] Test deposits for each chain with non-bridged asset (e.g. deposit and wrap flow)

Relevant Figma Links
Figma Prototype
Wireframe File

Closed in #719
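The Scene-1 branching in this checklist amounts to a single decision; a hedged sketch follows, with field and flow names that are illustrative rather than the dApp's actual API:

```javascript
// Non-bridged assets are forced through the deposit-and-wrap flow
// (the checklist's example is webbETH); bridged assets take plain deposit.
function selectDepositFlow(asset) {
  return asset.isBridged ? 'deposit' : 'wrapAndDeposit';
}
```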
gharchive/issue
2022-11-21T21:46:35
2025-04-01T06:40:55.013357
{ "authors": [ "dutterbutter" ], "repo": "webb-tools/webb-dapp", "url": "https://github.com/webb-tools/webb-dapp/issues/711", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }