Dataset schema:
id: string, length 4 to 10
text: string, length 4 to 2.14M
source: string, 2 classes
created: timestamp[s], from 2001-05-16 21:05:09 to 2025-01-01 03:38:30
added: string date, from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
metadata: dict
2289971290
ZeroSSL 429 Too Many Requests ZeroSSL returns a 429 Too Many Requests error if you fire too many requests at once. Even though the code does one thing after another rather than all at once, you can still hit this limit. What should happen is: IF you hit a 429 error, it should retry the request 1-2 seconds later. I'm hitting this problem at different stages: sometimes when requesting 5 domains, at the newNonce stage, and sometimes after the newNonce, when it tries to get the challenges. https://github.com/cert-manager/cert-manager/issues/5867 Should be fixed with https://github.com/publishlab/node-acme-client/commit/e4e8bde250b78fd07d28ebdf035b6f7b0b71aacb - it will now retry requests that receive 429 or 5xx status codes, respecting the Retry-After header if included with the response. Landed in v5.4.0. Have not been able to test this in the wild yet, please let me know if this resolved your issue and feel free to re-open if the problems persist.
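The retry behaviour described in the thread above can be sketched generically. The TypeScript below is an illustrative sketch only, not node-acme-client's actual implementation: the function name requestWithRetry, the attempt limit, and the fallback delays are assumptions; only the general idea (retry on 429/5xx, honor Retry-After when present) comes from the thread.

// Illustrative sketch: requestWithRetry and its defaults are invented names,
// not part of node-acme-client's API.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function requestWithRetry(url: string, init?: RequestInit, attempts = 3): Promise<Response> {
    for (let attempt = 1; ; attempt += 1) {
        const response = await fetch(url, init);
        const retryable = response.status === 429 || response.status >= 500;
        if (!retryable || attempt >= attempts) {
            return response;
        }
        // Honor Retry-After (in seconds) when the server sends it; otherwise wait 1-2 seconds.
        const retryAfter = Number(response.headers.get("retry-after"));
        const delayMs = Number.isFinite(retryAfter) && retryAfter > 0 ? retryAfter * 1000 : 1000 * attempt;
        await sleep(delayMs);
    }
}

With attempts = 3 and no Retry-After header, a 429 hit at the newNonce stage would be retried after roughly 1 s and then 2 s before the error is surfaced to the caller.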
gharchive/issue
2024-05-10T15:44:09
2025-04-01T06:40:06.606663
{ "authors": [ "nmorsman", "si458" ], "repo": "publishlab/node-acme-client", "url": "https://github.com/publishlab/node-acme-client/issues/89", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2548119985
CTM-576: add environment variables for email template. Add error status if match is unsuccessful. With the new error state, you need to update the code where it was checking for pending or matched and include the newly added state: ctml-json.service run_match needs to include a case to update from the error state, useGetMatchResult, trialResultService ...
gharchive/pull-request
2024-09-25T14:13:35
2025-04-01T06:40:06.621097
{ "authors": [ "kevinwangcanada", "mickey-ng" ], "repo": "pughlab/ctims", "url": "https://github.com/pughlab/ctims/pull/193", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
842221210
Rename 'Rare Books and Special Collections' to 'Special Collections' This was changed in Blacklight but stayed with the old name in Voyager. We should change it in Alma. Update 'Rare Books and Special Collections' to 'Special Collections'. I changed the Organization Unit Name for now. What does the location prefix "Rare Books:" relate to? Is it shorthand for "Rare Books and Special Collections"? In that case, we should change it to "Special Collections:". Or does it mean the umbrella collection "ex" (=rare books)? In that case, some of them need correcting, e.g. "hsvm" (=manuscripts high security vault): "Rare Books: South East (MSS)" Thank you for changing the name @regineheberlein. Your questions make a lot of sense. @kevinreiss @mzelesky who is the right person/group of people to answer these questions? Per conversation with Alexis: change the external name to the human-readable expanded string; change the internal name to the code + abbreviated human-readable string; remove all prefixes. I'll get started now. Also pinged Joshua about the East Asian location labels (currently all "East Asian Rare"). @mzelesky @kevinreiss Can we delete the location "UNASSIGNED"? @christinach What gets displayed in Blacklight, Name or External Name? We cannot delete the location "UNASSIGNED"; it's used for migration purposes. At least, that's my understanding. If there is nothing in the locations after go-live we can delete them. In general, we cannot delete any locations prior to go-live unless those locations are also deleted in Voyager. @regineheberlein we use 'name', not the 'external_name'. @regineheberlein but we can change it to the 'external_name' if that makes more sense. @christinach Yes please. There is a difference between what staff need to see in the UI and what we want patrons to see in Blacklight, so changing the source to external_name would be great. Now reaching out to Martin per Joshua. Martin confirms that the labels for EA rare, while all the same, are good as is.
gharchive/issue
2021-03-26T18:52:38
2025-04-01T06:40:06.639543
{ "authors": [ "christinach", "mzelesky", "regineheberlein" ], "repo": "pulibrary/bibdata", "url": "https://github.com/pulibrary/bibdata/issues/1166", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1479165303
Bring coverage up to 90% https://coveralls.io/builds/54801434 After merging a pulbot pr it went above 90%
gharchive/issue
2022-12-06T13:55:24
2025-04-01T06:40:06.640813
{ "authors": [ "christinach" ], "repo": "pulibrary/bibdata", "url": "https://github.com/pulibrary/bibdata/issues/2042", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1678780505
Regenerating catalogo item pages removed catalogo info When I run bin/generate_item_pages, it updates the item pages, but the info from the Catalogo is removed from the top of the pages. Fixed by 734c5bf61c9d10a8207977fcfec3e6af89dbaa32
gharchive/issue
2023-04-21T16:20:02
2025-04-01T06:40:06.641738
{ "authors": [ "escowles" ], "repo": "pulibrary/cicognara-static", "url": "https://github.com/pulibrary/cicognara-static/issues/51", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
866476665
American Jewess MVW is not displaying in DPUL or online Catalog the MVW is not coming up and only a black screen is visible for https://catalog.princeton.edu/catalog/9138143 and https://dpul.princeton.edu/wa/catalog/d217qt45s. Displaying fine in figgy If I sign in, still a no go. cannot see anything. I do not know if others are affected (and why). Tested on Chrome, Firefox, and Safari. Reported by Steve Ferguson This is because the first volume is marked private (https://figgy.princeton.edu/catalog/20c031a8-70fc-4d75-a30f-4438034f0e4f) Changed to open just now Works! Thank you!
gharchive/issue
2021-04-23T22:51:56
2025-04-01T06:40:06.644285
{ "authors": [ "kelea99", "tpendragon" ], "repo": "pulibrary/dpul", "url": "https://github.com/pulibrary/dpul/issues/896", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1929317686
Add unit testing to the provider Fixes https://github.com/pulumi/ci-mgmt/issues/661 The reason I'm working on this now is that I'd like to really move upgrade tests out of examples and under provider/ in AWS (and soon GCP). https://github.com/pulumi/pulumi-aws/pull/2855 FYI: If you want to get tests working "right now", you can add the extra test to extraTests and run make ci-mgmt. We already have an existing extra test for pulumi-aws: https://github.com/pulumi/pulumi-aws/blob/0b33b02036fcf1cae354aae98a9d278e957e0199/.ci-mgmt.yaml#L69-L94 This simpler version seems to work in: https://github.com/pulumi/pulumi-azuread/actions/runs/6437059888/job/17481565040#step:10:37 By running tests after building as a separate step, this bypasses the need to model the dependency in Make. A bit awkward but the least evil quick option at the moment it seems like. Locally, you just have to "know" you need to run tfgen before. @iwahbe another look? I think in the ideal state it really should not need tfgen. There is the issue with go:embed pointing to non-existing files; this is really a papercut, this can be worked around. Beyond that, why should testing the provider depend on tfgen?
gharchive/pull-request
2023-10-06T02:03:41
2025-04-01T06:40:06.691271
{ "authors": [ "iwahbe", "t0yv0" ], "repo": "pulumi/ci-mgmt", "url": "https://github.com/pulumi/ci-mgmt/pull/662", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
467220106
feat request: preview a staged hugo web stack in PRs For doc and blog writers, it'd be great if we could see and share with others a preview of a staged hugo web stack when a PR is opened. This would help the reviewers and users see the changes being proposed from the PR itself without having to pull down the PR and run hugo locally. This would indeed be very nice. My ideal wishlist here is to have a Netlify-like experience, but using Pulumi: upon a new PR being opened, stand up a fresh, new static website served out of an S3 bucket, and post back that URL using our GitHub App, as an output property of the Pulumi stack; upon a PR being merged, destroy and remove the stack. If we do this using S3 websites, we won't get great URLs, but it'll be fast. Not only would this be a terrific flow, it would be a great real-world example of Pulumi in action. For the immediate term, I was thinking we'd just hook up Netlify so we can have something to unblock people while we work on revamping our documentation. Having a "Pulumi-powered Review App" would be the ideal end-result though. It should be very doable using GitHub Actions, I think it's just a matter of getting the workflow right. For reference https://github.com/pulumi/actions/issues/9 is tracking this type of feature in that repo, and is linked to some other PRs that have been sent out around enabling that type of thing. Christian created a solution for this internally. We'll still want to create a Pulumi stack that can provide this service, and using GitHub Actions would be an ideal complement. But that effort is out of scope for the current milestone.
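The wishlist above (a fresh S3-backed static site per PR, its URL posted back as a stack output, and the stack destroyed on merge) maps to a small Pulumi program. The TypeScript below is a hedged sketch assuming the classic @pulumi/aws provider; the resource names, file path, and public-read object ACL are illustrative choices, not the team's actual setup.

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// One stack per open PR (e.g. "preview-pr-1332"), created and destroyed by the CI workflow.
const siteBucket = new aws.s3.Bucket("preview-site", {
    website: { indexDocument: "index.html" },
});

// In practice the workflow would upload the whole Hugo "public/" directory;
// a single object is shown here to keep the sketch short.
new aws.s3.BucketObject("index", {
    bucket: siteBucket,
    source: new pulumi.asset.FileAsset("public/index.html"),
    contentType: "text/html",
    acl: "public-read",
});

// The output the GitHub App would post back on the PR.
export const previewUrl = pulumi.interpolate`http://${siteBucket.websiteEndpoint}`;

On merge, the workflow would simply run pulumi destroy against the same stack, matching the "destroy and remove the stack" step described above.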
gharchive/issue
2019-07-12T04:52:34
2025-04-01T06:40:06.695520
{ "authors": [ "chrsmith", "joeduffy", "metral" ], "repo": "pulumi/docs", "url": "https://github.com/pulumi/docs/issues/1332", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1824936697
Reading cloudflare:index:Ruleset Causes Panic What happened? On the latest version of the provider (v5.7.0 at time of writing) we're attempting to import a cloudflare:index:Ruleset resource into our stack and running into the following panic: Diagnostics: cloudflare:index:Ruleset (CDN Optimization Rules): error: Preview failed: error reading from server: EOF pulumi:pulumi:Stack (infrastructure-main): panic: DefaultValue() should not be called during schema generation goroutine 106 [running]: github.com/pulumi/pulumi-terraform-bridge/pf/internal/schemashim.(*blockSchema).DefaultValue(0xc000730870) /home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/pf@v0.9.2/internal/schemashim/block_schema.go:123 +0x27 github.com/pulumi/pulumi-terraform-bridge/v3/pkg/tfbridge.getDefaultValue(...) /home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/v3@v3.51.0/pkg/tfbridge/schema.go:1350 github.com/pulumi/pulumi-terraform-bridge/v3/pkg/tfbridge.isDefaultOrZeroValue({0x18c2780, 0xc00116c840}, 0x18c2780?, {{0x13819e0?, 0xc0001a5278?}}) /home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/v3@v3.51.0/pkg/tfbridge/schema.go:1361 +0x44 github.com/pulumi/pulumi-terraform-bridge/v3/pkg/tfbridge.extractSchemaInputs({{0x158f780?, 0xc001211920?}}, {0x18c29c0, 0xc0011115c0}, 0xc000d712c0?, 0x0) /home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/v3@v3.51.0/pkg/tfbridge/schema.go:1438 +0x3e6 github.com/pulumi/pulumi-terraform-bridge/v3/pkg/tfbridge.ExtractInputsFromOutputs(0x18b9810?, 0xc0005fa820?, {0x18b7448?, 0xc001179ed0}, 0xc000730870, 0x0) /home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/v3@v3.51.0/pkg/tfbridge/schema.go:1468 +0x15f github.com/pulumi/pulumi-terraform-bridge/pf/tfbridge.(*provider).ReadWithContext(0xc00025f600, {0x18b36c8?, 0xc000614b70?}, {0xc0011c6a00, 0x9d}, {0xc000aa89b0, 0x46}, 0x0?, 0xc000614bd0) /home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/pf@v0.9.2/tfbridge/provider_read.go:69 +0x2f9 github.com/pulumi/pulumi-terraform-bridge/pf/internal/plugin.(*providerServer).Read(0xc000436870, {0x18b36c8, 0xc000614b70}, 0xc0007bf9e0) /home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/pf@v0.9.2/internal/plugin/provider_server.go:370 +0x188 github.com/pulumi/pulumi-terraform-bridge/x/muxer.(*muxer).Read.func1({0x18c1770?, 0xc000436870?}) /home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/x/muxer@v0.0.4/muxer.go:350 +0x39 github.com/pulumi/pulumi-terraform-bridge/x/muxer.resourceMethod[...](0xc0005fad20?, 0x40, 0xc00010f780?) /home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/x/muxer@v0.0.4/muxer.go:303 +0xbd github.com/pulumi/pulumi-terraform-bridge/x/muxer.(*muxer).Read(0x0?, {0x18b36c8?, 0xc000614b70?}, 0x40?) 
/home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/x/muxer@v0.0.4/muxer.go:349 +0x68 github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Read_Handler.func1({0x18b36c8, 0xc000614b70}, {0x154f420?, 0xc0007bf9e0}) /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.72.2/proto/go/provider_grpc.pb.go:591 +0x7b github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1({0x18b36c8, 0xc0006148a0}, {0x154f420, 0xc0007bf9e0}, 0xc00003ff00, 0xc0005e5ab8) /home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x3e8 github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Read_Handler({0x15d1b60?, 0xc0005fad20}, {0x18b36c8, 0xc0006148a0}, 0xc0006af810, 0xc000434080) /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.72.2/proto/go/provider_grpc.pb.go:593 +0x138 google.golang.org/grpc.(*Server).processUnaryRPC(0xc000630000, {0x18bb640, 0xc000aaa000}, 0xc00030fe60, 0xc000436900, 0x23a1ed8, 0x0) /home/runner/go/pkg/mod/google.golang.org/grpc@v1.56.0/server.go:1337 +0xdf3 google.golang.org/grpc.(*Server).handleStream(0xc000630000, {0x18bb640, 0xc000aaa000}, 0xc00030fe60, 0x0) /home/runner/go/pkg/mod/google.golang.org/grpc@v1.56.0/server.go:1714 +0xa36 google.golang.org/grpc.(*Server).serveStreams.func1.1() /home/runner/go/pkg/mod/google.golang.org/grpc@v1.56.0/server.go:959 +0x98 created by google.golang.org/grpc.(*Server).serveStreams.func1 /home/runner/go/pkg/mod/google.golang.org/grpc@v1.56.0/server.go:957 +0x18c error: preview failed We thought that it was originally how we were importing the resource (something like maybe one of the parameters we were passing) but upon trying to create a minimal reproduction of the issue by just reading the RulesSet via a getter, we see the same panic. See the "Steps to Reproduce" section for how we were able to reproduce it. When we downgraded back to v5.0.0, we were able to read the resource successfully. The issue may have been introduced in v5.1.0, but we are unable to run the provider (there was a bug fixed in #406). v5.1.1 was the first actual version where we could see the bug when we were attempting to "walk" versions. Expected Behavior We initially expected to be able to import the resource successfully. In our minimum reproduction, we should be able to pulumi.export() the attributes of the resource without a panic. Steps to reproduce I created our minimal reproduction via Docker. Start a container with: $ docker run --rm -it ubuntu:22.04 Inside the container, run a few things to bootstrap the environment: apt update && apt install -y curl python3 python3.10-venv curl -fsSL https://get.pulumi.com/ | sh export PATH=~/.pulumi/bin:$PATH pulumi login --local mkdir /tmp/testing cd /tmp/testing python3 -m venv . source bin/activate pip install pulumi pulumi-cloudflare export CLOUDFLARE_API_TOKEN="OUR_TOKEN_HERE" export PULUMI_CONFIG_PASSPHRASE="foobar" Create a requirements.txt with the following: Arpeggio==2.0.2 attrs==23.1.0 dill==0.3.7 grpcio==1.56.0 parver==0.4 protobuf==4.23.4 pulumi==3.76.1 pulumi-cloudflare==5.7.0 PyYAML==6.0.1 semver==2.13.0 six==1.16.0 Install the requirements with: pip install -r requirements.txt Using the automation API, create a script like the following. Be sure to replace the values of ZONE_ID and RULESET_ID. 
import json
import sys

import pulumi
import pulumi_cloudflare as cloudflare
from pulumi import automation as auto

ZONE_ID = "09deadbeef"
RULESET_ID = "09badcaffe"

def prog():
    ruleset = cloudflare.Ruleset.get(
        "ruleset",
        f"zone/{ZONE_ID}/{RULESET_ID}",
    )
    pulumi.export("ruleset", ruleset)

project_name = "testing"
stack_name = "testing"
stack = auto.create_or_select_stack(
    stack_name=stack_name, project_name=project_name, program=prog
)
stack.workspace.install_plugin("cloudflare", "v5.7.9")
print("plugins installed")
stack.refresh(on_output=print)
stack.preview(on_output=print)

The refresh is successful, but when you start doing the preview, the panic occurs.
Output of pulumi about
Other notes: Arch is aarch64 as I am on an M1 Mac currently. Our runners where the initial panic occurred during CI run x86_64. The warnings are just because I'm using the automation API, so there's no project or stack file. root because I'm lazy and in a Docker container.
Version 3.76.1
Go Version go1.20.6
Go Compiler gc
Host
OS ubuntu
Version 22.04
Arch aarch64
Backend
Name c131e0286c4d
URL file://~
User root
Organizations
Pulumi locates its logs in /tmp by default
warning: Failed to read project: no Pulumi.yaml project file found (searching upwards from /tmp/testing). If you have not created a project yet, use `pulumi new` to do so: no project file found
warning: Failed to get information about the current stack: no Pulumi.yaml project file found (searching upwards from /tmp/testing). If you have not created a project yet, use `pulumi new` to do so: no project file found
Additional context No response Contributing Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already). @guineveresaenger Could you please take a look? I got context here. It's a plugin framework bug in the bridge. What happened is that tfbridge.ExtractInputsFromOutputs got reused from the sdk/v2 bridge which is no longer safe as it passes schema-only provider shim into sdk/v2 bridge machinery. The schema-only provider is not equipped for runtime use, hence we get DefaultValue() should not be called during schema generation while not being in the schema generation context anymore. A possible fix here is to wrap the schema-only provider when passing to tfbridge.ExtractInputsFromOutputs or use a weaker interface in tfbridge.ExtractInputsFromOutputs. In the Plugin Framework case there is no capability currently to detect default values for properties; but proceeding as if no default value is defined is preferable to having a panic. Hi @dmizelle - this should be fixed on the next release of this provider!
gharchive/issue
2023-07-27T18:19:39
2025-04-01T06:40:06.721824
{ "authors": [ "dmizelle", "guineveresaenger", "mikhailshilkov", "t0yv0" ], "repo": "pulumi/pulumi-cloudflare", "url": "https://github.com/pulumi/pulumi-cloudflare/issues/460", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1012696203
Shared Services Platforms page "How Pulumi Helps" section should be updated The "How Pulumi Helps" section on Shared Services looks really strange following the section above it; I'd consider moving it to the bottom, or changing it to 3 cards and doing a row. This is about https://www.pulumi.com/solutions/shared-services-platforms/.
gharchive/issue
2021-09-30T22:46:31
2025-04-01T06:40:06.723919
{ "authors": [ "lukehoban", "zchase" ], "repo": "pulumi/pulumi-hugo", "url": "https://github.com/pulumi/pulumi-hugo/issues/665", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1711145663
follow up bug fixes to docs nav reorg Description These aren't blocking, only bugs I found. Checklist: [x] I have reviewed the style guide. [x] If blogging, I have reviewed the blogging guide. [x] I have manually confirmed that all new links work. [x] I added aliases (i.e., redirects) for all filename changes. [x] If making css changes, I rebuilt the bundle. This will go green once docs stuff lands.
gharchive/pull-request
2023-05-16T03:05:51
2025-04-01T06:40:06.726701
{ "authors": [ "susanev" ], "repo": "pulumi/pulumi-hugo", "url": "https://github.com/pulumi/pulumi-hugo/pull/2839", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
955184555
Learn Pulumi with a module and some topics This example takes the Introduction to Pulumi on AWS tutorial and ports it to a Learn Pulumi "module" (which we will soon rename to "tutorial") to give us a sense of what we're currently working with. also just in case people are looking for it https://github.com/pulumi/pulumi-hugo/blob/master/LEARN.md
gharchive/pull-request
2021-07-28T19:51:22
2025-04-01T06:40:06.728308
{ "authors": [ "cnunciato", "mattstratton" ], "repo": "pulumi/pulumi-hugo", "url": "https://github.com/pulumi/pulumi-hugo/pull/454", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
142078592
connection error: socket closed Hello, since the update I'm always getting the error: connection error: socket closed. I think the server closes the socket. I don't need a solution, I only want to know what happened. Did the agar.io server reject the client? (I changed my init key.) The proxies I use aren't bad ones, and before the update they worked perfectly. My other thought was that agar.io banned my IPs... do you know what happened? Thanks, David Do you use agario-client version 1.3.0? examples/basic.js works fine for me, is it broken for you? yes I do....
gharchive/issue
2016-03-19T16:23:09
2025-04-01T06:40:06.817281
{ "authors": [ "d-bots-client", "pulviscriptor" ], "repo": "pulviscriptor/agario-client", "url": "https://github.com/pulviscriptor/agario-client/issues/118", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
146813128
NOT WORKING Please tell how to install it with detailed instructions, and doesn't it have a code to install in Tampermonkey? If not, just explain how to make it work. This is not a bot, this is a library for the agar.io protocol. If you know the Node.JS/JavaScript language then you install it via npm install agario-client, read the documentation https://github.com/pulviscriptor/agario-client/blob/master/README.md and write your own code. If you don't, then you're in the wrong place. I did make one, but only normal ones are coming; for Facebook it says failed to get token. Dude.. how often do I have to say it?? ALL LEAKED ACCOUNTS ARE BANNED FROM $MONEY$CLIP$ Can you help me make it work, at least trying with my one account? Set the cookies of YOUR Facebook account and then log off your account from agario and then run the feeder. Not working. Do you have your script that is working, or can you say how to get the token for agario fb? @kurama129 you don't have to access the token, just get COOKIES!!!
gharchive/issue
2016-04-08T04:47:25
2025-04-01T06:40:06.821265
{ "authors": [ "kurama129", "philw1508", "pulviscriptor", "sejo153" ], "repo": "pulviscriptor/agario-client", "url": "https://github.com/pulviscriptor/agario-client/issues/146", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
258567123
Searching through items in a piece while on a page other than 1 will not show results When I am on the second page of items in a piece (in this example the second page of People) and try to search, I get back a page that says it has results but displays none. If I choose at the bottom to go to Page 1 they will display, so it seems like the Search feature does not bring you to page 1 of search results. This got fixed.
gharchive/issue
2017-09-18T17:54:46
2025-04-01T06:40:06.825398
{ "authors": [ "boutell", "runyx" ], "repo": "punkave/apostrophe", "url": "https://github.com/punkave/apostrophe/issues/1013", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1888763886
[Bug]: Cannot use the browser immediately after installing it Steps to reproduce With an empty cache (~/.cache/puppeteer does not exist), after installing the browser using browsers.install, we cannot use the browser with puppeteer.launch. You can reproduce the problem with the code at https://github.com/rgl/try-puppeteer-in-bun: git clone https://github.com/rgl/try-puppeteer-in-bun.git cd try-puppeteer-in-bun bun install --frozen-lockfile rm -rf ~/.cache/puppeteer bun run main.js Expected results Expected it to launch the just installed browser. Actual results It fails with: 2023-09-09T14:17:43.077Z Downloading the browser Chrome/116.0.5845.96... 2023-09-09T14:17:49.732Z Launching the browser... 610 | } 611 | get stdio() { 612 | return this.#stdioObject ??= this.#createStdioObject(); 613 | } 614 | 615 | spawn(options) { ^ ESRCH: No such process syscall: "open" errno: -3 at spawn (node:child_process:615:14) at node:child_process:2:40 at new Process (/home/vagrant/Projects/try-puppeteer-in-bun/node_modules/@puppeteer/browsers/lib/esm/launch.js:104:31) at launch (/home/vagrant/Projects/try-puppeteer-in-bun/node_modules/@puppeteer/browsers/lib/esm/launch.js:58:11) at /home/vagrant/Projects/try-puppeteer-in-bun/node_modules/puppeteer-core/lib/esm/puppeteer/node/ProductLauncher.js:60:31 at processTicksAndRejections (:1:2602) @rgl does it reproduce in node? @OrKoN, it does work fine with node 18.17.1, but not with bun 1.0.0. @rgl is it the problem with bun perhaps? I forgot to mention, the second time its executed in bun, it works fine. It only fails when starting with a clean cache. I see, it would sound like some of the buns API are not completely compatible. Note that we don't support bun so if you can figure out what's the difference between bun and node, I would recommend filing an issue against bun.
gharchive/issue
2023-09-09T14:21:23
2025-04-01T06:40:06.853081
{ "authors": [ "OrKoN", "rgl" ], "repo": "puppeteer/puppeteer", "url": "https://github.com/puppeteer/puppeteer/issues/10867", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
534095372
Request.continue(overrides) does not set cookie Steps to reproduce Tell us about your environment: Puppeteer version: 2.0.0 Platform / OS version: Ubuntu Node.js version: 10.15.3 What steps will reproduce the problem? Start a puppeteer browser with the request interception code: await page.setRequestInterception(true); page.on('request', request => { // Override headers const headers = {cookie: 'hello=world;'}; request.continue({headers}); }); Browse to a page which already has a cookie set by the browser, then check the request received by the server. What is the expected result? The server should receive the cookie header: "hello=world". What happens instead? The server receives the original cookie header set by the page, but it does not equal "hello=world" as it should. Setting the cookie header only works if the original request does not have a cookie set at all. What is the use case? I'm developing an app for pen-testing web apps; I need to let the user tamper with the request data, i.e. change header values (including cookies!), change POST data, etc. I have the same problem, I can't solve it. Have a look at the fetchURLResponse() function located in the file puppeteer-bot-2a.js in my project: https://puppeteer.theater/ Use page.setCookie. The same problem. This is still an issue for me on puppeteer v22.11.2. However, I find that I can override cookies if I use the stealth plugin: const puppeteer = require('puppeteer-extra'); const StealthPlugin = require('puppeteer-extra-plugin-stealth'); puppeteer.use(StealthPlugin());
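Since the interception override above does not replace an existing Cookie header, the workaround the later comments point at is to set the cookie at the page level before navigating. The sketch below assumes a Puppeteer version where page.setCookie is still available; the URL, domain, and cookie values are placeholders.

import puppeteer from "puppeteer";

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    // Registered before navigation, so the very first request already carries the cookie,
    // instead of trying to rewrite the Cookie header per request in the interception handler.
    await page.setCookie({ name: "hello", value: "world", domain: "example.com" });

    await page.goto("https://example.com/");
    await browser.close();
})();

For the pen-testing use case (tampering with arbitrary headers and POST data), request interception remains the right tool; per the report above, it is specifically the Cookie header that the browser's own cookie jar overrides.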
gharchive/issue
2019-12-06T15:53:56
2025-04-01T06:40:06.859196
{ "authors": [ "entrity", "evanrolfe", "nicoandmee", "zhaojunlike", "ztob" ], "repo": "puppeteer/puppeteer", "url": "https://github.com/puppeteer/puppeteer/issues/5231", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2403066814
Chore: expose quality variable Sometimes users do not want very high quality video that may take up a lot of disk space, or they may want clearer video. This PR exposes the quality variable so that users can adjust it themselves to meet the needs of testing. Hi @Lightning00Blade I saw this PR failed. Can you help check the failure?
gharchive/pull-request
2024-07-11T12:31:59
2025-04-01T06:40:06.860362
{ "authors": [ "minggeorgelei" ], "repo": "puppeteer/puppeteer", "url": "https://github.com/puppeteer/puppeteer/pull/12735", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
106693330
(BKR-536) skip ssh installation for images that already provide it Add a check to see if a host tells us it already has ssh installed; if so, don't attempt to install it again. To skip installation, add the key to your node definition, e.g.: HOSTS: pe-puppet.localdomain: roles: - "agent" - "master" platform: "el-7-x86_64" image: "pe2015-2-0_aio-master" ssh_installed: true Refer to this link for build results (access rights to CI server needed): http://jenkins-beaker.delivery.puppetlabs.net//job/qe_beaker_btc-intn/1522/ Test PASSed. I'm -1 on this. We should either do auto detection, which is not possible, or just not care about it. It brings no extra overhead anyway. @electrical how would we autodetect? I need something along these lines as I've got a docker image that isn't bootable with stock beaker. What do you think about my suggestion to have static Dockerfiles (hipchat channel)? That would solve this for me. As for not caring - how can we bypass the above failure? thanks {"stream":"Step 3 : RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key\n"} {"stream":" ---\u003e Running in 6c122e50aff0\n"} {"stream":"Generating public/private rsa key pair.\n/etc/ssh/ssh_host_rsa_key already exists.\nOverwrite (y/n)? "} {"errorDetail":{"code":1,"message":"The command '/bin/sh -c ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key' returned a non-zero code: 1"},"error":"The command '/bin/sh -c ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key' returned a non-zero code: 1"} Your problem is not so much that ssh is installed but that we always do the keygen part. There are 2 ways we can solve this: remove the generated keys from the main image you use and let beaker generate them, which ensures they're random at all times; or test whether the file exists and only run ssh-keygen if it doesn't. For example: test -f /etc/ssh/ssh_host_rsa_key || ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key This should only run the ssh-keygen part if the file doesn't exist. @electrical I had a think about this and came up with a different approach - closed in favour of https://github.com/puppetlabs/beaker/pull/961
gharchive/pull-request
2015-09-16T03:52:17
2025-04-01T06:40:06.865491
{ "authors": [ "GeoffWilliams", "electrical", "puppetlabs-jenkins" ], "repo": "puppetlabs/beaker", "url": "https://github.com/puppetlabs/beaker/pull/958", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
708398644
(GH-2210) Configure 'modules' with 'bolt project init' This updates the bolt project init command to configure modules in bolt-project.yaml, allowing users with new projects to immediately start using the module workflow. If bolt project init is run, modules will be an empty array [] If bolt project init --modules is run, modules will be an array of declarations for the specified modules This also updates the command so that it can only be run once per project. Previously, if a project had a configuration file, but no Puppetfile, the bolt project init --modules command would still install modules. Now, Bolt will error in this case and display a helpful error that the user should instead use the bolt module add command. Closes #2210 !feature Configure modules with bolt project init (#2110) The bolt project init command will now configure the modules key in the bolt-project.yaml file, enabling the bolt module command. CLA signed by all contributors.
gharchive/pull-request
2020-09-24T18:50:49
2025-04-01T06:40:06.870547
{ "authors": [ "beechtom", "puppetcla" ], "repo": "puppetlabs/bolt", "url": "https://github.com/puppetlabs/bolt/pull/2211", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
434291262
(BOLT-1195) Convert a YAML plan to a Puppet plan This creates a new command bolt plan convert <path-to-plan> which accepts either a relative or absolute path a YAML plan and prints to stdout the equivalent Puppet plan. This relies on https://github.com/puppetlabs/puppet/pull/7485 Would be helpful to (temporarily) commit your Gemfile pointing to the change in puppet. Travis failures are just rubocop on the Gemfile :P CLA signed by all contributors. it looks like puppet/pal is not being set up properly bundle exec bolt plan convert yaml/plans/foo.yaml bundler: failed to load command: bolt (/Users/adreyer/src/bolt/.bundle/ruby/2.5.0/bin/bolt) NameError: uninitialized constant Bolt::PAL::YamlPlan::Puppet Did you mean? Bolt::PuppetDB /Users/adreyer/src/bolt/lib/bolt/pal/yaml_plan.rb:77:in `block in initialize' /Users/adreyer/src/bolt/lib/bolt/pal/yaml_plan.rb:68:in `each' /Users/adreyer/src/bolt/lib/bolt/pal/yaml_plan.rb:68:in `map' /Users/adreyer/src/bolt/lib/bolt/pal/yaml_plan.rb:68:in `initialize' /Users/adreyer/src/bolt/lib/bolt/transpiler.rb:59:in `new' /Users/adreyer/src/bolt/lib/bolt/transpiler.rb:59:in `parse_plan' /Users/adreyer/src/bolt/lib/bolt/transpiler.rb:22:in `transpile' /Users/adreyer/src/bolt/lib/bolt/cli.rb:275:in `execute' /Users/adreyer/src/bolt/exe/bolt:10:in `<top (required)>' /Users/adreyer/src/bolt/.bundle/ruby/2.5.0/bin/bolt:23:in `load' /Users/adreyer/src/bolt/.bundle/ruby/2.5.0/bin/bolt:23:in `<top (required)>' Re: Templating: I see a few negatives to templating without any gain over building a string. 1. It's less readable, especially if like me you're not familiar with it 2. It's trickier to support inline logic, like "only add a parenthesis here if there are no parameters otherwise add a newline" all on one line is very long, 3. It adds a new tool to learn (granted it's ruby-ish, but it's still adding complexity). I didn't think it was worth exploring further, but if there are advantages I'm not considering I'm happy to listen. It looks like strings are always being emitted in single quotes. We should be checking whether it's an EvaluableString and switching based on that. Strings which were double-quoted in the YAML should be double-quoted in the output. WIthout having looked at code and just playing with this I see a couple things. No convert description in bolt help cas@cas-ThinkPad-T460p:~/working_dir/bolt$ bolt Usage: bolt <subcommand> <action> [options] Available subcommands: bolt command run <command> Run a command remotely bolt file upload <src> <dest> Upload a local file bolt script run <script> Upload a local script and run it remotely bolt task show Show list of available tasks bolt task show <task> Show documentation for task bolt task run <task> [params] Run a Puppet task bolt plan show Show list of available plans bolt plan show <plan> Show details for plan bolt plan run <plan> [params] Run a Puppet task plan bolt apply <manifest> Apply Puppet manifest code bolt puppetfile install Install modules from a Puppetfile into a Boltdir bolt puppetfile show-modules List modules available to Bolt Having a step description can result in double quoting in arg list parameters: nodes: type: TargetSpec foo: type: String[1] description: foo default: bar steps: - name: run_task task: sample target: $nodes description: hi parameters: message: hello world return: $run_task cas@cas-ThinkPad-T460p:~/working_dir/bolt$ bolt plan convert Boltdir/site/yaml_plans/plans/test_convert.yaml # WARNING: This is an autogenerated plan. It may not behave as expected. 
plan yaml_plans::test_convert( TargetSpec $nodes, String[1] $foo = 'bar' ) { $run_task = run_task('sample', $nodes, ''hi'', {'message' => 'hello world'}) return $run_task } The converted puppet plan contains invalid puppet code: Syntax error at 'hi' (line: 6, column: 42) Eval can be used to have duplicate variable assignment that is not caught. parameters: nodes: type: TargetSpec foo: type: String[1] description: foo default: bar steps: - name: run_task task: sample target: $nodes parameters: message: hello world - eval: | $run_task = $foo name: eval_step return: $run_task cas@cas-ThinkPad-T460p:~/working_dir/bolt$ bolt plan convert Boltdir/site/yaml_plans/plans/test_convert.yaml # WARNING: This is an autogenerated plan. It may not behave as expected. plan yaml_plans::test_convert( TargetSpec $nodes, String[1] $foo = 'bar' ) { $run_task = run_task('sample', $nodes, {'message' => 'hello world'}) $eval_step = $run_task = $foo return $run_task } Also I think that the warnings/errors about invalid puppet are getting printed to both stdout and stderr. @donoghuc: I made adding the help text part of #977 for docs review Woops! Removed the extra quotes. I'm not sure what to do about that...I run the plan through the evaluating parser once it's converted, perhaps there's another puppet evaluator I need to use? Investinating... Conflicting variable assignments are explicitly one of the things we can't detect, and are indeed a key reason we emit the warning about how the plan may not be correct. @donoghuc I think I've addressed all your comments - using with() || seems to work for me for the eval code block use case. @nicklewis I think I'm going to head out for the night, but will make those changes first thing tomorrow! I thought I remembered talking about printing the error/warning when we detect invalid puppet code to stderr only (thought i dont see that in the ticket)? I see that it is exiting with exit code 1 but it seems like the error is still being printed to stdout. Do we need to give parameters the with() treatment like for eval? parameters: nodes: type: TargetSpec x: type: Integer y: type: Integer steps: - name: run_task task: sample::echo description: "foo" target: localhost parameters: message: | $z = $x + $y $z + 1 return: $run_task cas@cas-ThinkPad-T460p:~/working_dir/bolt$ bolt plan run yaml_plans::test_convert x=1 y=1 -t localhost Starting: plan yaml_plans::test_convert Starting: foo on localhost Finished: foo with 0 failures in 0.01 sec Finished: plan yaml_plans::test_convert in 0.01 sec Finished on localhost: cas-ThinkPad-T460p got passed the message: 3 { } Successful on 1 node: localhost Ran on 1 node cas@cas-ThinkPad-T460p:~/working_dir/bolt$ bolt plan convert Boltdir/site/yaml_plans/plans/test_convert.yaml # WARNING: This is an autogenerated plan. It may not behave as expected. plan yaml_plans::test_convert( TargetSpec $nodes, Integer $x, Integer $y ) { $run_task = run_task('sample::echo', 'localhost', "foo", {'message' => $z = $x + $y $z + 1 }) return $run_task } The converted puppet plan contains invalid puppet code: Syntax error at 'z' (line: 8, column: 1) Re: Code blocks, I made https://tickets.puppetlabs.com/browse/BOLT-1285 - I'm going to resolve to comment just since it's kind of long.
gharchive/pull-request
2019-04-17T13:43:24
2025-04-01T06:40:06.881732
{ "authors": [ "adreyer", "donoghuc", "lucywyman", "nicklewis", "puppetcla" ], "repo": "puppetlabs/bolt", "url": "https://github.com/puppetlabs/bolt/pull/957", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
89086797
1.1.1 release notes This includes all the changes from Jeff's original PR, plus some minor wording changes. We can merge it once we have the go/no-go meeting, since merging it will post the release notes to the docs site. :+1: LGTM. Will hold on merge until we do the go/no-go decision - hopefully later today.
gharchive/pull-request
2015-06-17T18:11:55
2025-04-01T06:40:06.886321
{ "authors": [ "camlow325", "nfagerlund" ], "repo": "puppetlabs/puppet-server", "url": "https://github.com/puppetlabs/puppet-server/pull/618", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
473093361
(PUP-9927) Exclude PKey.read monkey patch on JRuby 9.1 JRuby 9.1 (openssl-jruby) doesn't implement the OpenSSL::PKey.read method despite being "compatible" with MRI 2.3. We don't need that generic PKey.read method on JRuby, so don't try to monkey patch it. We don't test master against jruby 9.1, but a quick test shows it loads successfully: $ ruby --version jruby 9.1.17.0 (2.3.3) 2018-04-20 d8b1ff9 Java HotSpot(TM) 64-Bit Server VM 25.162-b12 on 1.8.0_162-b12 +jit [darwin-x86_64] $ bx rspec spec/unit/x509/cert_provider_spec.rb Run options: exclude {:broken=>true, :benchmark=>true} ...........*........................................................ Pending: (Failures listed here are expected and do not affect your suite's status) 1) Puppet::X509::CertProvider when loading crls and input is invalid raises when invalid input is inside BEGIN-END block # jruby bug: https://github.com/jruby/jruby/issues/5619 Failure/Error: expect { create_provider(crlpath: crl_path).load_crls }.to raise_error(OpenSSL::X509::CRLError, 'nested asn1 error') expected OpenSSL::X509::CRLError with "nested asn1 error", got #<OpenSSL::X509::CRLError: java.io.IOException: malformed PEM data encountered> with backtrace: # ./lib/puppet/x509/cert_provider.rb:109:in `block in load_crls_from_pem' # ./lib/puppet/x509/cert_provider.rb:108:in `load_crls_from_pem' # ./lib/puppet/x509/cert_provider.rb:93:in `load_crls' # ./spec/unit/x509/cert_provider_spec.rb:137:in `block in (root)' # ./spec/unit/x509/cert_provider_spec.rb:136:in `block in (root)' # ./spec/unit/x509/cert_provider_spec.rb:136:in `block in (root)' Finished in 2.34 seconds (files took 2.25 seconds to load) 68 examples, 0 failures, 1 pending CLA signed by all contributors. I should just remove our calls to OpenSSL::PKey.read and the related monkey patches. Let me fix that. I haven't functionally tested this yet, but it seems clear and correct.
gharchive/pull-request
2019-07-25T22:16:29
2025-04-01T06:40:06.894210
{ "authors": [ "joshcooper", "justinstoller", "puppetcla" ], "repo": "puppetlabs/puppet", "url": "https://github.com/puppetlabs/puppet/pull/7639", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
251041373
(maint) Update EZBake to 1.5.0 This version of EZBake stores the build metadata (including the deployed snapshot version and fully-resolved versions of all snapshot dependencies) in a persistent location, rather than one that gets deleted after two weeks. I tested this by creating a build job with lein with-profile ezbake ezbake build and observing that it completed successfully. According to the EZBake changelog the only difference between these two versions is the generation of this metadata. Test PASSed CLA signed by all contributors.
gharchive/pull-request
2017-08-17T19:04:44
2025-04-01T06:40:06.896624
{ "authors": [ "aperiodic", "puppetcla", "puppetlabs-jenkins" ], "repo": "puppetlabs/puppetdb", "url": "https://github.com/puppetlabs/puppetdb/pull/2358", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
336300445
(PDB-2700) Migrate log test utils to trapperkeeper Prior to this commit, testutil functions related to logging were duplicated in trapperkeeper and puppetdb. This commit removes the duplicated functions and switches to the ones provided by trapperkeeper. Test PASSed CLA signed by all contributors. Test PASSed Test PASSed Test PASSed Test PASSed Test PASSed
gharchive/pull-request
2018-06-27T16:25:19
2025-04-01T06:40:06.899156
{ "authors": [ "austb", "puppetcla", "puppetlabs-jenkins" ], "repo": "puppetlabs/puppetdb", "url": "https://github.com/puppetlabs/puppetdb/pull/2501", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
143369535
(PDOC-75) Work with both versions of 'interpret_any' Prior to this commit, strings would fail with puppet 4.4.0 and newer. This was because strings was making use of a method that was marked API private and thus subject to change. The method was changed in the 4.4.0 release of puppet to take two variables, meaning that strings must adjust how it calls the method based on the version of puppet it's running with. @hlindberg @kylog Updated and tests are passing!
gharchive/pull-request
2016-03-24T21:49:58
2025-04-01T06:40:06.908518
{ "authors": [ "HAIL9000" ], "repo": "puppetlabs/puppetlabs-strings", "url": "https://github.com/puppetlabs/puppetlabs-strings/pull/77", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
430204322
Add initial version of wash find This commit adds the initial support for wash find. wash find is very similar to GNU find and BSD find in that it supports an expression based language composed of primaries and operators. Primaries are predicates constructed on a specific entry. This commit supports the -true, -false, and -name primaries, where -true specifies a predicate that always returns true; -false specifies a predicate that always returns false; and -name specifies a predicate that returns true if the entry’s cname matches the specified shell glob. The supported operators are “()”, “!/-not”, “-a/-and”, and “-o/-or” (in order of decreasing precedence). “-not” takes a single predicate p and returns a new predicate q s.t. q(e) = ! p(e), where ! is the logical not operator. -and takes predicates p1 and p2 and returns a new predicate q s.t. q(e) = p1(e) && p2(e), where && is the short-circuited logical and operator. Similarly, -or takes predicates p1 and p2 and returns q(e) = p1(e) || p2(e) where || is the logical or operator. “()” is useful to control precedence. For example, wash find <path> -true -o -true -a -false will evaluate to true since -a has greater precedence than -o. However, wash find <path> ( -true -o - true ) -a -false will evaluate to false. Like the existing find command, wash find also supports concatenating predicates. For example, something like wash find <path> -true -false will be parsed as wash find <path> -true -a -false. Subsequent commits will add more primaries, a better usage, and will also properly recurse down into all the listed subdirectories. Signed-off-by: Enis Inan enis.inan@puppet.com Contributions to this project require sign-off consistent with the Developers Certificate of Origin. This can be as simple as using git commit -s on each commit. Pushed an update addressing the feedback.
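The primaries and operators described above are essentially predicate combinators. The sketch below models them in TypeScript purely for illustration; wash itself is written in Go, and the Entry type here is trimmed down to the single cname field that -name needs, with only a simplified glob matcher.

// Illustrative model of the expression language described above (not wash's Go code).
type Entry = { cname: string };
type Predicate = (e: Entry) => boolean;

const alwaysTrue: Predicate = () => true;   // -true
const alwaysFalse: Predicate = () => false; // -false
const not = (p: Predicate): Predicate => (e) => !p(e);                          // ! / -not
const and = (p1: Predicate, p2: Predicate): Predicate => (e) => p1(e) && p2(e); // -a / -and
const or = (p1: Predicate, p2: Predicate): Predicate => (e) => p1(e) || p2(e);  // -o / -or

// -name: shell-glob match against the entry's cname ('*' and '?' only, for brevity).
const name = (glob: string): Predicate => {
    const re = new RegExp("^" + glob.replace(/[.+^${}()|[\]\\]/g, "\\$&")
        .replace(/\*/g, ".*").replace(/\?/g, ".") + "$");
    return (e) => re.test(e.cname);
};

// `-true -o -true -a -false` parses as or(true, and(true, false)), which is true,
// while `( -true -o -true ) -a -false` is and(or(true, true), false), which is false.
console.log(or(alwaysTrue, and(alwaysTrue, alwaysFalse))({ cname: "x" })); // true
console.log(and(or(alwaysTrue, alwaysTrue), alwaysFalse)({ cname: "x" })); // false
console.log(name("*.txt")({ cname: "notes.txt" }));                        // true

The operator precedence from the PR description is reflected purely in how the combinators are nested; in the real command the parser builds that nesting from the flag order and any parentheses.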
gharchive/pull-request
2019-04-07T23:16:07
2025-04-01T06:40:06.917914
{ "authors": [ "ekinanp" ], "repo": "puppetlabs/wash", "url": "https://github.com/puppetlabs/wash/pull/189", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1092795921
Any plan for a dashboard? So we can control and see all my stuff within Emacs. I've thought about it and that was initially the plan when I started this project. It's a feature I'd love to add one day. The ground work is there now so maybe I'll work on that soon I've been able to make a lot of progress on this already! I'd be happy if you'd be willing to try it out so far. It's in the dashboard branch. I still have a lot of testing to do, code cleanup, and documentation writing, but I think it's in a decent state so far. If you use straight, you can use this recipe :straight (:type git :host github :repo "purplg/hass" :branch "dashboard") I'd like to be able to add "rows" of widgets in the future, but that'd add a decent amount of additional complexity and I haven't decided on a good way to do it, so I'll hold off on that for now. Also I haven't done anything, except color the icons blue and make the group names larger, on the appearance of the dashboard. Eventually I wanna make it prettier and customizable. If you have any other suggestions or ideas, I'd be happy to hear them. The basic required setup is to configure hass-dash-layout according to the example in the docstring for that variable. Then you can call (hass-dash-open) and it'll open the dashboard. I can help you with setup if you decide to try it out. I realize the documentation is a bit lacking right now. Awesome :tada: !! With the dashboard branch I got this when I run (hass-setup). I'm not using straight btw, since I'm using a special nixos emacs config. #s(request-response nil nil nil nil nil "https://my_url:443/api/" nil (:sync nil :type "GET" :headers (("User-Agent" . hass--user-agent) ("Authorization" . "Bearer my_token") ("Content-Type" . "application/json")) :data nil :parser #[0 "ÀÁ !‡" [hass--deserialize buffer-string] 2] :error hass--request-error ...) #<buffer *request curl*-43543> nil nil ...) No trace if I enable toggle-debug-on-error. Thanks for testing! I don't think that's an error but rather just a request object being returned that doesn't need to be and it outputs to the minibuffer. I've commit a change to fix that, something I should have done a long time ago haha. Do hass--available-entities and hass--available-services populate after hass-setup is called? Ah that's an important thing to handle! Good catch, I'll be sure to fix that soon Should work now. I also added the default service for a scene as scene.turn_on so you no longer need to specify that :service if that's what you're calling. I'll be adding more before release. Just tested. it works. thanks! Btw, the first request takes a while. Btw, the first request takes a while. It might be only on my side. I think I saw something similar with the normal HA dashboard. Any recommended key binding for (hass-dash-open)? As an Evil and former Doom user, I'm using SPC-a-d for hass-dash-open. SPC-a-c for hass-call-service. Works well for me. @bbigras Thanks for your suggestion and help! Sorry it took so long to complete this. Been very busy with life lately. Let me know if you find any issues or have any other suggestions. :)
gharchive/issue
2022-01-03T20:25:30
2025-04-01T06:40:06.987148
{ "authors": [ "bbigras", "purplg" ], "repo": "purplg/hass", "url": "https://github.com/purplg/hass/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
396271900
Intentionally empty repo? Hi, dotenv's Cargo.toml refers to this repo, but it's empty. Is that intentional? https://github.com/apiraino/rust-dotenv is the most up to date, based on the network graph. Its Cargo.toml points to apiraino/rust-dotenv, so it should be fixed on the next publish. I've been meaning to write this for a couple of days now... sorry for not having got to it. Sean Griffin was so kind to point me to the crates.io API to get the latest available version as source code, which currently is 0.13.0. It would be lovely if someone could spawn or update a fork of this repository with it to take over the head of the network. curl -L crates.io/api/v1/crates/dotenv/0.13.0/download --output dotenv_0.13.0.tar.gz We could fork it, but we'd also need to get added as owners on crates.io. Otherwise, we wouldn't be able to publish new versions of the crate. @sgrif, are you looking for new maintainers? Yes, the repo is intentionally blank. Since the original author of dotenv (the real user named purplinimal) deleted the repo, it left a chance for a malicious user to create a github user with the same name and then create a repo named dotenv. Since crates.io will point to purplinimal/dotenv (which is how you got here) you won't realise the difference. This user and repo was created just as a placeholder so that nobody misuses the repo link. It will be deleted once we restore the contents of dotenv in another repo (or it is moved to this repo). I should have left a note here regarding this) Just curious, what's holding up restoring the contents in another repo? If you do that, update the link in Cargo.toml, and publish a new version, then crates.io will point to the correct repo, won't it? I know finding time for open source work isn't always easy. It just seems like the time it takes to create a new user and blank repo is about the same time as it takes to restore the repo under an active user's account and update the crates.io metadata. Thank you for the explanation. I'm also wondering where the source code lives now. I'm not able to build dotenv 0.10.1. I get this error: error[E0659]: `error_chain` is ambiguous (derive helper attribute vs any other name) --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/dotenv-0.10.1/src/lib.rs:23:40 | 23 | #[cfg_attr(not(feature = "backtrace"), error_chain(backtrace = "false"))] | ^^^^^^^^^^^ ambiguous name | note: `error_chain` could refer to the derive helper attribute defined here --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/dotenv-0.10.1/src/lib.rs:22:17 | 22 | #[derive(Debug, error_chain)] | ^^^^^^^^^^^ note: `error_chain` could also refer to the derive macro imported here --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/dotenv-0.10.1/src/lib.rs:10:1 | 10 | #[macro_use] | ^^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0659`. But I don't know where to report it since this repo is empty and it hasn't been restored somewhere else. It's been two months since the last activity on this issue. Anyone know what's going on? @jimmycuadra I've started taking over maintainership of dotenv. You can find the repo at https://github.com/dotenv-rs/dotenv. @dgriffen great. Would be worth adding in the readme that this (your repo) isn't the official dotenv but a maintained fork from it. Thanks @Dylan-DPC it will be published as the official dotenv. The dotenv-rs org has permissions to publish crates. @dgriffen okay that's great. 
Let me know once there is a new release on crates.io and it reflects your repo, then I'll delete this repo. @Dylan-DPC the new release is up. The only repo we couldn't get ahold of was dotenv_codegen_impl since there was only one listed owner. We renamed that crate since it wasn't a public dependency. One thing I've noticed: this account will have to remain around since it is still listed as an owner on the crates. @Dylan-DPC would it be possible for you to add @sgrif as an owner of dotenv_codegen_impl? They wouldn't be able to. Oh, for some reason I thought that it was tied to the GitHub username. I'm guessing it's some more unique identifier? @dgriffen crates.io still considers the original name and my account as 2 different accounts (a good thing) so I can't add you as a co-owner. Either Sean has permissions or you could start maintaining a fork and swap it for that (worst-case).
gharchive/issue
2019-01-06T16:20:12
2025-04-01T06:40:07.000246
{ "authors": [ "Dylan-DPC", "azriel91", "dgriffen", "dguo", "jimmycuadra", "lweberk", "sgrif" ], "repo": "purpliminal/rust-dotenv", "url": "https://github.com/purpliminal/rust-dotenv/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
820014226
Xcode does not works pac device ios 14.3 JB Describe the bug Xcode does not deploy anymore after JB. Xcode: 12.4 Ios: 14.3 Iphone 11 When the phone is not JB Xcode work perfectly. When the phone is JB you get the error: Errors were encountered while preparing your device for development. Please check the Devices and Simulators Window. or: Could not start debugserver on "dev name" when trying to launch "app name". if you reboot the phone ( so no JB) everything work fine Same problem +1 if you reboot the phone ( so no JB) everything work fine Details Failed to start remote service "com.apple.debugserver" on device. Domain: com.apple.dtdevicekit Code: 811 Recovery Suggestion: Please check your connection to your device. User Info: { DVTRadarComponentKey = 261622; } The service is invalid. Domain: com.apple.dt.MobileDeviceErrorDomain Code: -402653150 User Info: { DVTRadarComponentKey = 261622; MobileDeviceErrorCode = "(0xE8000022)"; "com.apple.dtdevicekit.stacktrace" = ( 0 DTDeviceKitBase 0x000000012c236c8f DTDKCreateNSErrorFromAMDErrorCode + 220 1 DTDeviceKitBase 0x000000012c2465a4 __63-[DTDKRemoteDeviceConnection startFirstServiceOf:unlockKeybag:]_block_invoke + 613 2 DTDeviceKitBase 0x000000012c245c70 __48-[DTDKRemoteDeviceConnection futureWithSession:]_block_invoke_3 + 22 3 DTDeviceKitBase 0x000000012c238def __DTDKExecuteInSession_block_invoke_2 + 35 4 DTDeviceKitBase 0x000000012c2384c9 __DTDKExecuteWithConnection_block_invoke_2 + 218 5 DTDeviceKitBase 0x000000012c2383c6 __DTDKExecuteWithConnection_block_invoke + 106 6 libdispatch.dylib 0x00007fff202c37c7 _dispatch_client_callout + 8 7 libdispatch.dylib 0x00007fff202d0605 _dispatch_lane_barrier_sync_invoke_and_complete + 60 8 DVTFoundation 0x000000010f646cc3 DVTDispatchBarrierSync + 208 9 DVTFoundation 0x000000010f61dd76 -[DVTDispatchLock performLockedBlock:] + 60 10 DTDeviceKitBase 0x000000012c2382c7 DTDKExecuteWithConnection + 226 11 DTDeviceKitBase 0x000000012c238c93 DTDKExecuteInSession + 239 12 DTDeviceKitBase 0x000000012c245ac2 __48-[DTDKRemoteDeviceConnection futureWithSession:]_block_invoke_2 + 131 13 DVTFoundation 0x000000010f64417e DVT_CALLING_CLIENT_BLOCK + 7 14 DVTFoundation 0x000000010f645da0 __DVTDispatchAsync_block_invoke + 1191 15 libdispatch.dylib 0x00007fff202c25dd _dispatch_call_block_and_release + 12 16 libdispatch.dylib 0x00007fff202c37c7 _dispatch_client_callout + 8 17 libdispatch.dylib 0x00007fff202c95fe _dispatch_lane_serial_drain + 606 18 libdispatch.dylib 0x00007fff202ca0fe _dispatch_lane_invoke + 426 19 libdispatch.dylib 0x00007fff202d3c5d _dispatch_workloop_worker_thread + 819 20 libsystem_pthread.dylib 0x00007fff2046b499 _pthread_wqthread + 314 21 libsystem_pthread.dylib 0x00007fff2046a467 start_wqthread + 15 ); } System Information macOS Version 11.2.1 (Build 20D74) Xcode 12.4 (17801) (Build 12D4e) Timestamp: 2021-03-19T12:11:37+03:00 If I restart the phone everything is fine +1 Just update u0 to 6.1.1 and rejailbreak,it has already solved. ok, thank you fixed on 6.1.1 as mention by @wstclzy2010
gharchive/issue
2021-03-02T13:22:34
2025-04-01T06:40:07.031181
{ "authors": [ "Andre85to", "burakCokyildirim", "wstclzy2010" ], "repo": "pwn20wndstuff/Undecimus", "url": "https://github.com/pwn20wndstuff/Undecimus/issues/2227", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
435502172
API Developer API for My App Developer API Uncover https://clk.sh/api?api=4f614bbab5372e223605a5fee810d641a66d17b5&url=yourdestinationlink.com&ali
gharchive/pull-request
2019-04-21T11:54:21
2025-04-01T06:40:07.032747
{ "authors": [ "Joe0077Rayyan" ], "repo": "pwn20wndstuff/Undecimus", "url": "https://github.com/pwn20wndstuff/Undecimus/pull/973", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
638353050
Discussion: Examples for mowing automations with this integration I think it helps everyone (especially new users) if we collect and discuss automations (here for mowing control) and the prerequisites for those. For irrigation we discuss this here -> #30

Target: Notification over the Home Assistant Companion App, as a Telegram Messenger message, or over Amazon Alexa with the help of the Alexa Media Player integration.

Why not with the Gardena App: It is not possible to get messages about the status of the Gardena mowers. It is not possible to get these notifications over Amazon Alexa from the app.

**Requirements**
Installed hass-gardena-smart-system integration
Home Assistant Companion iOS or Android App
Gardena Mower
(optional) Telegram Messenger Integration
(optional) Alexa Media Player Integration (you can find and install this integration over HACS)

Configuration: to get the activity status code as a sensor for the automation trigger, you need the activity attribute as a separate template sensor in configuration.yaml:

- platform: template
  sensors:
    sileno_activity:
      value_template: "{{ states.vacuum.sileno.attributes.activity }}"
      friendly_name: "Sileno Aktivität"

Automation example:

- alias: "Docked with Autotimer"
  trigger:
    platform: state
    entity_id: sensor.sileno_activity
    to: 'PARKED_AUTOTIMER'
  action:
    - service: notify.notify
      data:
        title: "Mower docked"
        message: "Lawn mower was parked due to autotimer"
    - service: notify.telegram_[TELEGRAM_CHANNEL]
      data_template:
        title: '*Mower docked!*'
        message: "Lawn mower was parked due to autotimer -> https://[HA_URL]/lovelace/terrasse"

- alias: "Lawnmower mows"
  trigger:
    platform: state
    entity_id: sensor.sileno_activity
    to: 'OK_CUTTING'
  action:
    - service: notify.notify
      data:
        title: "Mower Status"
        message: "Lawnmower mows"
    - service: notify.telegram_[TELEGRAM_CHANNEL]
      data_template:
        title: '*Mower Status*'
        message: "Lawnmower mows -> https://[HA_URL]/lovelace/terrasse"

- alias: "Lawnmower loads"
  trigger:
    platform: state
    entity_id: sensor.sileno_activity
    to: 'OK_CHARGING'
  action:
    - service: notify.notify
      data:
        title: "Mower Status"
        message: "Lawnmower loads"
    - service: notify.telegram_[TELEGRAM_CHANNEL]
      data_template:
        title: '*Lawnmower loads*'
        message: "Lawnmower loads -> https://[HA_URL]/lovelace/terrasse"

- alias: "Mower Error"
  trigger:
    - platform: state
      entity_id: vacuum.sileno
      to: 'error'
  action:
    - service: notify.notify
      data:
        title: "Mower Fehler"
        message: "Error: {{ states.sensor.sileno_error_status.state }}!"
    - service: notify.telegram_[TELEGRAM_CHANNEL]
      data_template:
        title: '*Mower Error*'
        message: "Error: {{ states.sensor.sileno_error_status.state }}! -> https://[HA_URL]/lovelace/terrasse"

This has been added to a RECIPE.md. Thanks @northpower25
gharchive/issue
2020-06-14T13:31:55
2025-04-01T06:40:07.055470
{ "authors": [ "grm", "northpower25" ], "repo": "py-smart-gardena/hass-gardena-smart-system", "url": "https://github.com/py-smart-gardena/hass-gardena-smart-system/issues/32", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
487232287
Implement GeoTIFF exporter Some users have expressed their interest in reading the pysteps output in GeoTIFF format. This is because GeoTIFF is the standard format for GIS applications (e.g. QGIS). Being able to easily import pysteps output into GIS applications would be especially helpful for visualization purposes. @pulkkins You might not want to pull in wradlib as a dependency, but maybe your users (e.g. me) already have it installed. Then you might get some inspiration from this wradlib-notebook. This uses GDAL as another big dependency (which QGIS normally also has installed). It can output to all available GDAL raster formats. PR #118 closes this issue. Does somebody know how I can import the fmi_geotiff dataset? When I use "fmi" it uses the PGM files, but how can I use the GeoTIFF dataset?
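Returning to the exporter idea itself: below is a minimal sketch of writing a single 2-D field to GeoTIFF with GDAL. It is not the actual pysteps implementation from PR #118; the function name, the geotransform, and the EPSG code are placeholders for illustration only.

import numpy as np
from osgeo import gdal, osr

def write_geotiff(path, field, geotransform, epsg=4326):
    """Illustrative sketch: write a 2-D array as a single-band GeoTIFF."""
    driver = gdal.GetDriverByName("GTiff")
    rows, cols = field.shape
    ds = driver.Create(path, cols, rows, 1, gdal.GDT_Float32)
    ds.SetGeoTransform(geotransform)  # (x_min, dx, 0, y_max, 0, -dy)
    srs = osr.SpatialReference()
    srs.ImportFromEPSG(epsg)
    ds.SetProjection(srs.ExportToWkt())
    band = ds.GetRasterBand(1)
    band.WriteArray(field.astype(np.float32))
    band.SetNoDataValue(-9999.0)
    ds.FlushCache()

# example with a dummy 100x100 field on a 1-degree grid
write_geotiff("nowcast.tif", np.random.rand(100, 100), (0.0, 1.0, 0.0, 50.0, 0.0, -1.0))

QGIS can open the resulting file directly, which is the visualization use case mentioned above.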
gharchive/issue
2019-08-30T00:37:36
2025-04-01T06:40:07.057796
{ "authors": [ "Afshinshafei", "aperezhortal", "kmuehlbauer", "pulkkins" ], "repo": "pySTEPS/pysteps", "url": "https://github.com/pySTEPS/pysteps/issues/116", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1288873201
Implementing containers Alternative to #1203 This run the whole GA in the MAPDL container as suggested by @koubaa I think this could be an alternative to the mentioned PR limitations. For Windows, we still need to do #1221 Installing Python using GitHub Actions seems to uncorrectly link the executables. I'm going with installing manually Python 3.8 I couldn't make XVFB run... assuming it pipes plots to it. We have a marker for skip_no_xserver which should skip tests that do plots, but then the whole test_post is not using it. And CICD passes. So I guess xvfb should simulate a display (that is what is mean to be). But in the docker image I just keep getting Core dumped error every time we run a plot function in the CICD. As a temporal fix, I'm just adding the pytest marker to all the functions that do plots. Running this locally I see an error when running .ci/display_test.py (venv) [root@fd92f49248e6 pymapdl]# xvfb-run python .ci/display_test.py system_supports_plotting True 2022-07-04 12:50:23.809 ( 0.687s) [ F7CCF740]vtkXOpenGLRenderWindow.:251 ERR| vtkXOpenGLRenderWindow (0x21351b0): Could not find a decent config ERROR:root:Could not find a decent config 2022-07-04 12:50:23.810 ( 0.688s) [ F7CCF740]vtkXOpenGLRenderWindow.:468 ERR| vtkXOpenGLRenderWindow (0x21351b0): Could not find a decent visual ERROR:root:Could not find a decent visual /usr/bin/xvfb-run: line 181: 1321 Aborted DISPLAY=:$SERVERNUM XAUTHORITY=$AUTHFILE "$@" 2>&1 https://github.com/pyvista/pyvista/issues/2155 I feel like this probably a Pyvista/vtk issue. I'm still investigating. Running this locally I see an error when running .ci/display_test.py (venv) [root@fd92f49248e6 pymapdl]# xvfb-run python .ci/display_test.py system_supports_plotting True 2022-07-04 12:50:23.809 ( 0.687s) [ F7CCF740]vtkXOpenGLRenderWindow.:251 ERR| vtkXOpenGLRenderWindow (0x21351b0): Could not find a decent config ERROR:root:Could not find a decent config 2022-07-04 12:50:23.810 ( 0.688s) [ F7CCF740]vtkXOpenGLRenderWindow.:468 ERR| vtkXOpenGLRenderWindow (0x21351b0): Could not find a decent visual ERROR:root:Could not find a decent visual /usr/bin/xvfb-run: line 181: 1321 Aborted DISPLAY=:$SERVERNUM XAUTHORITY=$AUTHFILE "$@" 2>&1 pyvista/pyvista#2155 I feel like this probably a Pyvista/vtk issue because it has been recorded by pyvista error file: ERROR: In ../Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 251 vtkXOpenGLRenderWindow (0x3867750): Could not find a decent config ERROR: In ../Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 468 vtkXOpenGLRenderWindow (0x3867750): Could not find a decent visual I'm still investigating Soo........ Discussion So it seems it is a CentOS issue. I tried two images: centos/python-36-centos7 and ubuntu:latest. I had to install xvfb and pyvista in both (python also in ubuntu). 
Outputs CentOS7 (app-root) xvfb-run python -c "import pyvista;from pyvista.plotting import system_supports_plotting;print("system_supports_plotting", system_supports_plotting());pyvista.OFF_SCREEN = True;pyvista.plot(pyvista.Sphere())" <function system_supports_plotting at 0x7f0bbee4f730> False ERROR:root:Could not find a decent config ERROR:root:Could not find a decent visual /usr/bin/xvfb-run: line 181: 430 Aborted DISPLAY=:$SERVERNUM XAUTHORITY=$AUTHFILE "$@" 2>&1 Ubuntu root@18378420f4f8:/# xvfb-run python3 -c "import pyvista;from pyvista.plotting import system_supports_plotting;print("system_supports_plotting", system_supports_plotting());pyvista.OFF_SCREEN = True;pyvista.plot(pyvista.Sphere())" <function system_supports_plotting at 0x7f7dc6f16170> False Code to replicate CentOS Pull image: docker run -i -t -u root centos/python-36-centos7 /bin/bash Install pyvista: pip install pyvista Installing xvfb: yum -y install mesa-libGL xorg-x11-server-Xvfb Test: xvfb-run python -c "import pyvista;from pyvista.plotting import system_supports_plotting;print("system_supports_plotting", system_supports_plotting());pyvista.OFF_SCREEN = True;pyvista.plot(pyvista.Sphere())" Ubuntu Pull image: docker run -it -u root ubuntu Update: apt-get update -y Install python: apt-get install -y python3 apt-get install -y python3-pip Install pyvista: pip3 install pyvista Install xvfb: apt install xvfb. Test: xvfb-run python3 -c "import pyvista;from pyvista.plotting import system_supports_plotting;print("system_supports_plotting", system_supports_plotting());pyvista.OFF_SCREEN = True;pyvista.plot(pyvista.Sphere())" Conclusion After trying to do my best to fix the issue on the CentOS image, I think this requires quite a few changes in xvfb and maybe CentOS. I am not ready to identify, and implement those changes. Hence, I'm recommending to CLOSE THIS ISSUE without implement it. Future A lot of work has been devoted to this PR, and the situation is that it partially works. You could implement this PR in only the unit tests side, and leave the docs building using the old implementation. But this will just hide the fact that we are using an OS which is quite old and at the end of its life support. This PR could be reused in the future, once the MAPDL docker image has been moved to ubuntu. But that will take some time. So we can close this issue for now, and then reopen once the docker image has been migrated. FYI: Pinging @akaszynski and @koubaa @akaszynski @germa89 Agree that we can try this again with a more modern docker image. In fact - a lot of the work in this PR has to do with setting up the machine to have both mapdl and pymapdl installed on it. Maybe it is worth it to generate such a docker image internally so that we can pick a container for CI that already has pymapdl + mapdl.
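For readers wondering what the skip_no_xserver marker mentioned earlier in the thread amounts to, here is a hypothetical sketch of such a marker built on pyvista's display check. The marker name and the mapdl fixture are assumptions; the actual pymapdl conftest may differ.

# Hypothetical sketch of a "skip if no X server" pytest marker
import pytest
from pyvista.plotting import system_supports_plotting

HAS_DISPLAY = system_supports_plotting()

skip_no_xserver = pytest.mark.skipif(
    not HAS_DISPLAY, reason="Requires an X server (or xvfb) for plotting"
)

@skip_no_xserver
def test_eplot(mapdl):  # `mapdl` fixture assumed to exist in the test suite
    mapdl.eplot()       # any call that opens a VTK render window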
gharchive/pull-request
2022-06-29T15:08:49
2025-04-01T06:40:07.080924
{ "authors": [ "germa89", "koubaa" ], "repo": "pyansys/pymapdl", "url": "https://github.com/pyansys/pymapdl/pull/1238", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1108844700
Alternate digest algorithms Needed SHA-512 digests. Implemented SHA-1, SHA-256, and SHA-512 instead of hardcoding SHA-256 (The spec offers CRC32C and ADLER and a couple others, but they're in zlib and would require a bit more code -- not sure there'd be many users for the extra work). Also, the current spec calls for the tokens ('sha-256') to be lowercase with a dash. Hashlib, unfortunately, returns 'sha256' for .name, so I used a dict (known_digests) to apply the correct tokens for the spec. In addition, I separated out a method to return the request body instead of using it directly for the digest. The spec does not specify that ONLY the body may be used for the digest (In my case, I need one of the headers included as well), so this allows a developer to override the content that is, er, digested. Thank you for your contribution! This ended up being implemented a bit differently, but the library now supports SHA-512 digests via subclassing:

class MySigner(HTTPSignatureAuth):
    def add_digest(self, request):
        super().add_digest(request, algorithm="sha-512")
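To make the token-mapping point concrete, here is a rough sketch of the kind of dict the PR describes, translating hashlib's .name values into lowercase dash-separated tokens and building a digest header from the body. The dict name known_digests comes from the PR text, but the surrounding helper and the exact header format are illustrative assumptions.

import base64
import hashlib

# hashlib reports e.g. "sha256", while the spec wants lowercase tokens with a dash
known_digests = {
    "sha1": "sha-1",
    "sha256": "sha-256",
    "sha512": "sha-512",
}

def digest_header(body: bytes, algorithm: str = "sha512") -> str:
    """Illustrative helper: build a digest header value for the given body."""
    h = hashlib.new(algorithm, body)
    token = known_digests[h.name]
    return f"{token}={base64.b64encode(h.digest()).decode()}"

print(digest_header(b'{"hello": "world"}'))  # e.g. 'sha-512=...'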
gharchive/pull-request
2022-01-20T04:20:07
2025-04-01T06:40:07.096224
{ "authors": [ "jimduchek", "kislyuk" ], "repo": "pyauth/requests-http-signature", "url": "https://github.com/pyauth/requests-http-signature/pull/29", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1651732229
Change StrConstrained name For https://github.com/pydantic/pydantic-core/issues/386 Right, that was silly :smile: I think this is a simple way of doing what the issue says. Codecov Report Merging #517 (10e2467) into main (1276e37) will increase coverage by 0.00%. The diff coverage is 100.00%. :mega: This organization is not using Codecov’s GitHub App Integration. We recommend you install it so Codecov can continue to function properly for your repositories. Learn more Additional details and impacted files @@ Coverage Diff @@ ## main #517 +/- ## ======================================= Coverage 94.80% 94.80% ======================================= Files 93 93 Lines 12259 12266 +7 Branches 25 25 ======================================= + Hits 11622 11629 +7 Misses 630 630 Partials 7 7 Impacted Files Coverage Δ src/validators/string.rs 100.00% <100.00%> (ø) ... and 1 file with indirect coverage changes Continue to review full report in Codecov by Sentry. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update 1276e37...10e2467. Read the comment docs. honestly, maybe this isn't worth the effort. I was also thinking it will a few checks to just change the name dynamically. So you're thinking to just change constrained-str to str or just close this? Thanks so much for looking into this. Let's leave this for now, there' are more impactful things to work on.
gharchive/pull-request
2023-04-03T09:56:47
2025-04-01T06:40:07.151090
{ "authors": [ "aminalaee", "codecov-commenter", "samuelcolvin" ], "repo": "pydantic/pydantic-core", "url": "https://github.com/pydantic/pydantic-core/pull/517", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1693097157
Expand validate_call docstring Expands docstring and moves page in API docs to preserve alpha ordering Change Summary Related issue number Checklist [ ] Unit tests for the changes exist [ ] Tests pass on CI and coverage remains at 100% [x] Documentation reflects the changes where applicable [ ] changes/<pull request or issue id>-<github username>.md file added describing change (see changes/README.md for details) [ ] My PR is ready to review, please add a comment including the phrase "please review" to assign reviewers please review
gharchive/pull-request
2023-05-02T21:03:05
2025-04-01T06:40:07.154636
{ "authors": [ "tpdorsey" ], "repo": "pydantic/pydantic", "url": "https://github.com/pydantic/pydantic/pull/5670", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
159525995
helper function - detect index columns in dataframe Has anyone seen a method to detect index column(s) in a dataframe based on uniqueness of rows? If you are asking questions on an issue (not recommended), please post a complete example; otherwise ask on the mailing list (much better).
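For what it's worth, there is no built-in pandas helper for this; a rough sketch of the idea in the question could look like the following (the dataframe is a made-up example).

import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "group": ["a", "a", "b", "b"],
    "member": [1, 2, 1, 2],
})

# single columns whose values are unique across rows (index candidates)
single_candidates = [c for c in df.columns if df[c].is_unique]
print(single_candidates)  # ['id']

# a column combination is a candidate if it produces no duplicated rows
combo = ["group", "member"]
print(not df.duplicated(subset=combo).any())  # True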
gharchive/issue
2016-06-09T22:50:21
2025-04-01T06:40:07.161351
{ "authors": [ "denfromufa", "jreback" ], "repo": "pydata/pandas", "url": "https://github.com/pydata/pandas/issues/13417", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
238990919
CF conventions for time doesn't support years CF conventions code supports: {'microseconds': 'us', 'milliseconds': 'ms', 'seconds': 's', 'minutes': 'm', 'hours': 'h', 'days': 'D'}, but not 'years'. See example file https://www.dropbox.com/s/34dcpliko928yaj/histsoc_population_0.5deg_1861-2005.nc4?dl=0 I am not sure to understand what you are asking us to do here. The problem with "years" is that their use is not recommended by the CF conventions. Very often (and I think your file means it this way), users would like years to be simple "calendar years" , i.e. : 1901-01-01, 1902-01-01, but this is not what the unit "years" means in the CF conventions: see http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#time-coordinate I think I do mean 'years' in the CF convention sense, in this case the time dimension is: double time(time=145); :standard_name = "time"; :units = "years since 1860-1-1 12:00:00"; :calendar = "proleptic_gregorian"; This is correctly interpreted by the NASA Panoply NetCDF file viewer. From glancing at the xarray code, it seems it depends on the pandas Timedelta object which in turn doesn't support years as delta objects (although date ranges can be generated at year intervals so it should be possible to implement). I think I do mean 'years' in the CF convention sense Can you pinpoint to which part of the CF convention? From the link I read: a year is exactly 365.242198781 days, which would lead to highly unlikely calendar dates. I agree however that interpreting "years" as being "calendar years" is the only way that makes sense. For the record, netCDF4 also doesn't like "years": import netCDF4 ds = netCDF4.Dataset('/home/mowglie/Downloads/histsoc_population_0.5deg_1861-2005.nc4') time = ds.variables['time'] netCDF4.num2date(time[:], units=time.units) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-15-b38f64c7bce4> in <module>() 2 ds = netCDF4.Dataset('histsoc_population_0.5deg_1861-2005.nc4') 3 time = ds.variables['time'] ----> 4 netCDF4.num2date(time[:], units=time.units) netCDF4/_netCDF4.pyx in netCDF4._netCDF4.num2date (netCDF4/_netCDF4.c:66463)() ValueError: unsupported time units Although I'm not a specialist of CF conventions, this issue may be related to this one: https://github.com/Unidata/netcdftime/issues/5. The forthcoming NetCDFTimeIndex (#1252) uses the netcdftime package. It's rather about using common_year with noleap calendar, though. I would think that this sort of feature belongs in netcdftime, not xarray. There are obvious issues with defining what a year (or a month) is but I image we can sort those out. I actually have a similar issues with respect to 'months'. I want to write out my xarray dataarray as a netcdf file, with months as time intervals (one value per month, doesn't matter what day of the month is used as a reference). As with the 'years' described above, this does not seem to work in the current framework? In order to construct a netcdf file with a 2D field on a monthly resolution (for X number of years), I currently use the lines of code mentioned below. Since I do not care about the type of calendar, I just use 360_day, in which each month of the year has 30 days. Perhaps this can be useful for others. In case a better solution is available, please let me know! import numpy as np import pandas as pd import xarray as xr # 51 years, saving first day of each month. 
mmhours = np.arange(0,(51*360*24),30*24) attrs = {'units': 'Hours since 1955-01-01T12:00:00', 'calendar' : '360_day'} target = np.random.rand(len(mmhours),10,10) lat = np.arange(50,51,0.1) lon = np.arange(3,4,0.1) target_xr = xr.Dataset({'test': (['time', 'lat', 'lon'], target)}, coords={'time': ('time', mmhours, attrs) ,'lat': lat, 'lon': lon}) target_xr.to_netcdf('test.nc', encoding={'test': {'zlib': True}}) Hi Matthias, I think your solution is fine. The best is simply to avoid "months" as units altogether. I one has a "real" calendar one can also let pandas and xarray do the job: t = pd.date_range(start='1980-01', end='2010-12', freq='MS') target = np.random.rand(len(t), 10, 10) lat = np.arange(50, 51, 0.1) lon = np.arange(3, 4, 0.1) target_xr = xr.Dataset({'test': (['time', 'lat', 'lon'], target)}, coords={'time': ('time', t), 'lat': lat, 'lon': lon} ) target_xr.to_netcdf('test_2.nc') which creates the following time units automatically: int64 time(time) ; time:units = "days since 1980-01-01 00:00:00" ; time:calendar = "proleptic_gregorian" ; Month unit support in cftime is being discussed in in https://github.com/Unidata/cftime/pull/69 Perhaps xarray folks would like to weigh in. I have run into this problem multiple times. The latest example I found were some [CORE ocean model runs] (https://rda.ucar.edu/datasets/ds262.0/index.html#!description). The time dimension of some (they mix units) of these files is given as netcdf MRI-A_sigma0_monthly { dimensions: level = 51 ; latitude = 368 ; longitude = 364 ; time = UNLIMITED ; // (720 currently) time_bnds = 2 ; variables: double latitude(latitude) ; latitude:units = "degrees_north " ; latitude:axis = "Y" ; double longitude(longitude) ; longitude:units = "degrees_east " ; longitude:axis = "X" ; double level(level) ; level:units = "m " ; level:axis = "Z" ; double time(time) ; time:units = "years since 1948-1-1 00:00:00 " ; time:axis = "T" ; time:bounds = "time_bnds" ; time:calendar = "noleap" ; double time_bnds(time, time_bnds) ; float sigma0(time, level, latitude, longitude) ; sigma0:units = "kg/m^3 " ; sigma0:long_name = "Monthly-mean potential density (sigma-0) " ; sigma0:missing_value = -9.99e+33f ; } I understand that 'fully' supporting to decode this unit is hard and should probably addressed upstream. But I think it might be useful to have a utility function that converts a dataset with these units into someting quickly useable with xarray? E.g. one could load the dataset with ds = xr.open_dataset(..., decode_times=False) and then maybe call xr.decode_funky_units(ds, units='calendaryears', ...), which could default to the first day of a year (or the first day of a month for units of months since. This way the user is aware that something is not decoded exactly, but can work with the data. Is this something that people could see useful here? Id be happy to give an implementation a shot if there is interest.
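As a sketch of the "decode as calendar years" helper proposed at the end of the thread (something like the suggested decode_funky_units), one could post-process a dataset opened with decode_times=False. The helper name, the January 1st convention, and the restriction to integer years are all assumptions, not an existing xarray or cftime feature.

import pandas as pd
import xarray as xr

def decode_calendar_years(ds, time_name="time"):
    """Illustrative: interpret 'years since YYYY-M-D ...' as plain calendar years."""
    units = ds[time_name].attrs.get("units", "")
    assert units.startswith("years since"), "only handles 'years since' units"
    origin = pd.Timestamp(units.replace("years since", "").strip())
    dates = [origin + pd.DateOffset(years=int(v)) for v in ds[time_name].values]
    return ds.assign_coords({time_name: dates})

# usage (assuming the file was opened with decode_times=False):
# ds = xr.open_dataset("histsoc_population_0.5deg_1861-2005.nc4", decode_times=False)
# ds = decode_calendar_years(ds)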
gharchive/issue
2017-06-27T21:38:32
2025-04-01T06:40:07.173313
{ "authors": [ "benbovy", "fmaussion", "jbusecke", "jhamman", "mangecoeur", "matthiasdemuzere", "rabernat" ], "repo": "pydata/xarray", "url": "https://github.com/pydata/xarray/issues/1467", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
264517839
arithmetics should ignore nans created by alignment Can anybody tell me if there is anybody who benefits from this behaviour? I can't think of any good use cases. wallet = xarray.DataArray([50, 70], dims=['currency'], coords={'currency': ['EUR', 'USD']}) restaurant_bill = xarray.DataArray([30], dims=['currency'], coords={'currency': ['USD']}) with xarray.set_options(arithmetic_join="outer"): print(wallet - restaurant_bill) <xarray.DataArray (currency: 2)> array([ nan, 40.]) Coordinates: * currency (currency) object 'EUR' 'USD' While it is fairly clear why it can be desirable to have nan + not nan = nan as a default in arithmetic when the nan is already present in one of the input arrays, when the nan is introduced as part of an automatic align things become much less intuitive. Proposal: add a parameter to xarray.align, fillvalue=numpy.nan, which determines what will appear in the newly created array elements change __add__, __sub__ etc. to invoke xarray.align(fillvalue=0) change __mul__, __truediv__ etc. to invoke xarray.align(fillvalue=1) In theory the setting could be left as an opt-in as set_options(arithmetic_align_fillvalue='neutral'), yet I wonder who would actually want the current behaviour? Related pure-numpy thread https://stackoverflow.com/questions/42209838/treat-nan-as-zero-in-numpy-array-summation-except-for-nan-in-all-arrays This behavior is consistent with the default behavior on pandas, which always does an outer join for arithmetic: In [6]: wallet.to_series() - restaurant_bill.to_series() currency EUR NaN USD 40.0 dtype: float64 So I don't think we want to change this in an inconsistent way in xarray. I do agree that we should have support for an explicit fill value in alignment/reindexing and arithmetic. For consistency with pandas (and elsewhere in xarray), let's call it fill_value. change __add__, __sub__ etc. to invoke xarray.align(fillvalue=0) change __mul__, __truediv__ etc. to invoke xarray.align(fillvalue=1) I can see the logic in using an identity value instead of NaN as a default in arithmetic. One peril of this approach is that it isn't always evident what the right identity is. In fact, according to NumPy: In [16]: import numpy as np In [17]: print(np.add.identity) 0 In [18]: print(np.multiply.identity) 1 In [19]: print(np.subtract.identity) None In [20]: print(np.divide.identity) None Let me give a couple other examples of why I don't think we should use an identity of some sort as the default: Suppose we are comparing a model to observations. model - observations gives the residuals -- unless the variables are mis-aligned, in which case it would give a residual equal to one of the variables. Suppose we want to do a weighted average. (observations * weights).sum() gives a sensible answer -- but not if a missing weight defaults to 1 (all the weights put together are supposed to sum to 1!). Closed by #2876
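Since the thread ends with "Closed by #2876" and current xarray versions do expose a fill_value keyword on align/reindex, the wallet example can opt into a neutral fill explicitly instead of relying on a new default (assuming a reasonably recent xarray):

import xarray as xr

wallet = xr.DataArray([50, 70], dims=["currency"], coords={"currency": ["EUR", "USD"]})
restaurant_bill = xr.DataArray([30], dims=["currency"], coords={"currency": ["USD"]})

# outer-align with an explicit neutral element instead of NaN
w, b = xr.align(wallet, restaurant_bill, join="outer", fill_value=0)
print(w - b)  # EUR -> 50, USD -> 40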
gharchive/issue
2017-10-11T09:33:34
2025-04-01T06:40:07.181059
{ "authors": [ "crusaderky", "shoyer" ], "repo": "pydata/xarray", "url": "https://github.com/pydata/xarray/issues/1625", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1953001043
Add metadata_only param to .to_zarr? Is your feature request related to a problem? A leaf from https://github.com/pydata/xarray/issues/8245, which has a bullet: compute=False is arguably a less-than-obvious kwarg meaning "write metadata". Maybe this should be a method, maybe it's a candidate for renaming? Or maybe make_template can be an abstraction over it I've also noticed that for large arrays, running compute=False can take several minutes, despite the indexes being very small. I think this is because it's building a dask task graph — which is then discarded, since the array is written from different machines with the region pattern. Describe the solution you'd like Would introducing a metadata_only parameter to to_zarr help here: Better name No dask graph Describe alternatives you've considered No response Additional context No response Yes, is a great idea! +1, this is a really nice idea. Related to this could also be a write-through cache of sorts. For high-latency stores (e.g. S3), synchronously populating the store metadata can really add up. If we knew we were only writing metadata, we could safely populate all the Zarr json objects then send them in one bulk write step. The combination of these two features would be a lightning fast Zarr initialization routine 🚀 I came across https://github.com/pydata/xarray/issues/8343 recently — this seems to be a similar suggestion to what I was intending. Is that correct? The challenge is that Xarray needs some way to represent the "schema" for the desired entire dataset. I'm very open to alternatives, but so far, the most convenient way to do this has been to load Dask arrays into an xarray.Dataset. Is anyone more familiar with whether there is a cost to producing the dask task graph? I'm seeing .to_zarr(compute=False) take well over 1 minutes with large arrays with lots of chunks. And it's only writing very small metadata. If to_zarr(compute=False) is slow, that's more likely due to Xarray doing lots of sequential blocking IO operations. Once you've made the Dataset object the dask graphs have already been created. This is an area I don't understand well, so obv I defer. I had thought that it was relevant to driven by dask since that's most of the time is spent in dask/array/optimization.py. But maybe we're saying that this is because xarray is writing indexes during that time? (Though the indexes are fairly small, and it's spending lots of time in a fuse function?) %Own %Total OwnTime TotalTime Function (filename) 0.00% 0.00% 14.07s 16.42s fuse (dask/optimization.py) 0.00% 0.00% 8.15s 8.15s reverse_dict (dask/core.py) 0.00% 0.00% 3.17s 4.17s make_blockwise_graph (dask/blockwise.py) 0.00% 0.00% 2.81s 2.94s _cull_dependencies (dask/blockwise.py) 20.00% 25.00% 2.50s 3.04s fuse_slice (dask/array/optimization.py) 0.00% 0.00% 2.49s 2.49s keys_in_tasks (dask/core.py) 0.00% 0.00% 2.19s 2.90s functions_of (dask/optimization.py) 0.00% 0.00% 1.32s 9.20s cull (dask/highlevelgraph.py) 0.00% 0.00% 0.930s 48.16s optimize (dask/array/optimization.py) Max I think you're right. In recent times, dask has a "lazy graph" (HighLevelGraph) that gets lowered down to an old-style graph expressed as dicts. That lowering is still slow and potentially whats happening here. Yeah here's the call to optimize in dask.array.store https://github.com/dask/dask/blob/1203b1bb6d52b1fb00d54656af4af8e6e35aa615/dask/array/core.py#L1169-L1171 I really think we're just taking advantage of a side-effect by using compute=False for this use-case. 
I like the idea of a separate method, say initialize_zarr, for this. Yes we could either do: .initialize_zarr / .to_zarr_metadata / xarray_beam.make_template](https://xarray-beam.readthedocs.io/en/latest/_autosummary/xarray_beam.make_template.html) or .to_zarr(metadata_only=True) How difficult do we think this is? Is it something I can bang out in 45 mins or is it a bigger effort that requires more context? FYI — for the moment, xr.ones_like(ds).to_zarr(compute=False) saves a few minutes each go! (though the data needs to be floats) FYI — for the moment, xr.ones_like(ds).to_zarr(compute=False) saves a few minutes each go! (though the data needs to be floats) In xarray-beam we use zeros_like, which I believe works for any NumPy dtype. How difficult do we think this is? Is it something I can bang out in 45 mins or is it a bigger effort that requires more context? In xarray-beam we use zeros_like, which I believe works for any NumPy dtype. Could this just be running xr.zero_like(ds).to_zarr(compute=False)?? Are there any data types that zarr supports which wouldn't be covered by zeros_like? Possibly strings... Possibly strings... Seem to work! ds = xr.Dataset( { "a": (("x", "y"), np.arange(20).reshape(4, 5)), "b": ("x", ["aa", "b", "c", "d"]), "c": ("x", np.array(["aa", "b", "c", "d"], dtype="S")), "time": ("t", pd.date_range("2001-01-01", periods=4, freq='D')), }, coords={"x": np.arange(4), "y": [33, 44, 22, 11, 55]}, ) I'm playing around with this piece of code. Does it make sense? There's a fair bit of complexity around the write_empty_chunks=False optimization. def make_template(ds, *, encoding=None): fillvalues = { name: var.encoding["_FillValue"] for name, var in ds._variables.items() if var.encoding and "_FillValue" in var.encoding } fillvalues.update( { name: enc["_FillValue"] for name, enc in encoding.items() if "_FillValue" in enc } ) to_drop = {var for var, varenc in encoding.items() if "_FillValue" in varenc} dropped = ds.drop_vars(to_drop) template = xr.zeros_like(ds) for var in to_drop: template[var] = xr.full_like(ds[var], encoding[var]["_FillValue"]) return template def initialize_zarr(ds, repo, *, region_dims=None, append_dim=None, **kwargs): if "compute" in kwargs: raise ValueError("The ``compute`` kwarg is not supported in `initialize_zarr`.") if kwargs.get("mode", "w") != "w": raise ValueError( f"Only mode='w' is allowed for initialize_zarr. 
Received {kwargs['mode']}" ) encoding = kwargs.get("encoding", {}) template = make_template(ds, encoding=encoding) # TODO: handle `write_empty_chunks` in init_kwargs["encoding"] init_kwargs = kwargs.copy() init_kwargs.pop("write_empty_chunks", None) template.to_zarr( repo.store, group="foo/", compute=False, write_empty_chunks=False, **init_kwargs ) if region_dims: # At this point, the store has been initialized (and potentially overwritten) kwargs.pop("mode") dropped = ds.drop_dims(region_dims) new_encoding = kwargs.pop("encoding", None) if new_encoding: new_encoding = {k: v for k, v in new_encoding.items() if k in dropped} dropped.to_zarr( repo.store, group="foo/", **kwargs, encoding=new_encoding, compute=True, mode="a", ) # can't use drop_dims since that will also remove any variable # with the dims to be dropped # even if they have anything in region_dims dims_to_drop = set(ds.dims) - set(region_dims) vars_to_drop = [ name for name, var in ds._variables.items() if set(var.dims).issubset(dims_to_drop) ] return ds.drop_vars(vars_to_drop) elif append_dim: # TODO pass else: return ds encoding = {"a": {"_FillValue": -1}} initialized = initialize_zarr(ds, store, region_dims="y", mode="w", encoding=encoding) n00b question — why do we need the code around _FillValue? If there's a dataset with two dims, and they're both region_dims, does this drop all the vars? (very minor point — we also want to allow w-, indeed that should be the default, given that Zarr will wipe everything, including a whole bucket, for any path it's given) why do we need the code around _FillValue? 🤦🏾‍♂️ I was testing with a numpy dataset, so compute=False made no difference :/ If there's a dataset with two dims, and they're both region_dims, does this drop all the vars? Yes. But the indexes for the dims get written during template.to_zarr. After the drop, we are looking to write non-dim coordinate variables and data variables without region dims. So if there was a big dask array that had no dimensions in common with region_dims it would get written at this point. Perhaps initialize_ isn't a good prefix for this function. Thoughts? we also want to allow w-, indeed that should be the default, given that Zarr will wipe everything, :+1: If there's a dataset with two dims, and they're both region_dims, does this drop all the vars? Yes. But the indexes for the dims get written during template.to_zarr. After the drop, we are looking to write non-dim coordinate variables and data variables without region dims. So if there was a big dask array that had no dimensions in common with region_dims it would get written at this point. I think the code is doing the right thing — dims_to_drop = set(ds.dims) - set(region_dims) means that we'll drop anything with no dims in common, as you replied. A clearer way to ask my initial question was "If there's a dataset with one array with two dims, and both dims are region_dims, does this drop the array?" — it doesn't drop anything (dims_to_drop is empty). Perhaps initialize_ isn't a good prefix for this function. I like it! make_template etc seems fine too if anyone has strong preferences. I see there's a logic branch for append_dim; I haven't used region with append_dim; is that a commonly used pattern?
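For reference, the core of the zeros_like workaround mentioned earlier in the thread fits in a few lines; this is just the compute=False side-effect the discussion wants to wrap in a clearer name, and the store path and region bounds below are placeholders.

import xarray as xr

def init_zarr_metadata(ds, store, **kwargs):
    """Sketch: initialize a Zarr store's metadata without computing any data."""
    template = xr.zeros_like(ds)        # cheap lazy template with the same schema as ds
    template.to_zarr(store, compute=False, **kwargs)

# later, each worker writes its own slab:
# ds.isel(time=slice(0, 10)).to_zarr(store, region={"time": slice(0, 10)})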
gharchive/issue
2023-10-19T20:25:11
2025-04-01T06:40:07.202863
{ "authors": [ "dcherian", "jhamman", "max-sixty", "shoyer" ], "repo": "pydata/xarray", "url": "https://github.com/pydata/xarray/issues/8343", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
778875117
Interactive cells in test_coordinates Description The test functions in test_coordinates.py are separated using interactive cells in the current develop branch. Is this a leftover or intended? That's a leftover - I think they could be deleted and committed without a pull-request...
gharchive/issue
2021-01-05T11:04:52
2025-04-01T06:40:07.253043
{ "authors": [ "f-brinkmann", "sikersten" ], "repo": "pyfar/pyfar", "url": "https://github.com/pyfar/pyfar/issues/70", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2512062395
[patch] Expose StaticIO.for_node as a shortcut to for-nodes So you don't need to type body_node_class=...

from pyiron_workflow import Workflow

loop1 = Workflow.create.standard.Add.for_node(
    iter_on="other",
    obj=1,
    other=[1, 2],
    output_as_dataframe=False,
)
loop2 = Workflow.create.standard.Multiply.for_node(
    zip_on=("obj", "other"),
    obj=loop1.outputs.add,
    other=[1, 2],
    output_as_dataframe=False,
)
out = loop2()
out.mul
>>> [2, 6]
gharchive/pull-request
2024-09-07T20:51:44
2025-04-01T06:40:07.389759
{ "authors": [ "coveralls", "liamhuber" ], "repo": "pyiron/pyiron_workflow", "url": "https://github.com/pyiron/pyiron_workflow/pull/446", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1915585614
Initial structure of the sponsor page @cmaureir It's so great to see this coming together. Thank you 💯
gharchive/pull-request
2023-09-27T13:24:54
2025-04-01T06:40:07.407996
{ "authors": [ "cmaureir", "willingc" ], "repo": "pyladies/global-conference", "url": "https://github.com/pyladies/global-conference/pull/47", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
647473892
WIP: Add Full Rank Approximation This PR adds Full Rank ADVI interface. It is ready to review. Just to experiment with the API, I have created a gist inspired from @ColCarroll's notebook. I look forward to include mixture comparisons as well. [X] Add Full Rank ADVI [X] Add docstrings [X] Add tests [X] Change variable flattening to model flattening [ ] Rerun quick start notebook Just testing this now. Not sure what to make of this: Should be 10K loss values, correct? Hi @fonnesbeck Thanks for checking this out. The implementation of Full Rank ADVI(especially the model flattening part) is wrong. I am working on this! Soon, this will be resolved. Mean field ADVI does not run at all. Fails with: ValueError: Dimensions must be equal, but are 4 and 1086 for '{{node monte_carlo_variational_loss/expectation/JointDistributionSequential/log_prob/add_1}} = AddV2[T=DT_FLOAT](monte_carlo_variational_loss/expectation/JointDistributionSequential/log_prob/add, monte_carlo_variational_loss/expectation/JointDistributionSequential/log_prob/Normal_1/log_prob/sub)' with input shapes: [1,4], [1,1086]. Seems like a shape issue. Can you share a reproducible code snippet? I suspect sort of adjusting sample_size parameter to fit function. Is this supposed to be working now? I'm getting the following: It was running with the original PR, though not getting good results. Now it quits after a handful of iterations with NaNs. There are some concerns about transformations not being stable Same behavior for mean-field ADVI Yes. Some transformations were missing for Bounded Distributions. Highlighting the same, I had opened an issue #283 some time back. Commit d61431b adds the remaining transformations. Before this commit, passing validate_args=True to each distribution while doing VI, leads to various constraint errors from TFP. But I do not feel like its a good approach to manually add transformations to respective distributions. I tried to make it more general with a single class - class Interval(BackwardTransform): name = "interval" def __init__(self, lower_limit, upper_limit): transform = tf.cond( tf.math.is_inf(lower_limit), lambda: tf.cond( tf.math.is_inf(upper_limit), lambda: tfb.Identity(), lambda: tfb.Chain([tfb.Shift(upper_limit), tfb.Scale(-1), tfb.Exp()]), # upper - exp(x) ), lambda: tf.cond( tf.math.is_inf(upper_limit), lambda: tfb.Chain([tfb.Shift(lower_limit), tfb.Exp()]), # exp(x) + lower lambda: tfb.Sigmoid(low=lower_limit, high=upper_limit), # interval ), ) super().__init__(transform) But this leads to many eager execution issues for lambda functions. Please suggest if there is a way to improve existing approach. @fonnesbeck, what is park bias model? @ferrine its a relatively large model that I was using to test ADVI, ported over from PyMC3. Its got a hierarchical component as well as Gaussian proccesses. Samples very slowly, so a good candidate for ADVI. Hi @fonnesbeck The model seems interesting. Can you experiment again in your free time with the current state of PR? Issues related to transformations has been resolved. 
Thanks With the current PR, I am getting the following failure: --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) <ipython-input-29-6a1ed8631a36> in <module>() 1 m = park_bias_model() 2 # trace = pm.sample(m, num_samples=100, burn_in=2000, xla=True) ----> 3 approx = pm.fit(m, num_steps=20000, method='fullrank_advi') 6 frames /usr/local/lib/python3.6/dist-packages/pymc4/variational/approximations.py in fit(model, method, num_steps, sample_size, random_seed, optimizer, **kwargs) 246 return losses 247 --> 248 return ADVIFit(inference, run_approximation()) /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 778 else: 779 compiler = "nonXla" --> 780 result = self._call(*args, **kwds) 781 782 new_tracing_count = self._get_tracing_count() /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 844 *args, **kwds) 845 # If we did not create any variables the trace we have is good enough. --> 846 return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds) # pylint: disable=protected-access 847 848 def fn_with_cond(*inner_args, **inner_kwds): /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _filtered_call(self, args, kwargs, cancellation_manager) 1845 resource_variable_ops.BaseResourceVariable))], 1846 captured_inputs=self.captured_inputs, -> 1847 cancellation_manager=cancellation_manager) 1848 1849 def _call_flat(self, args, captured_inputs, cancellation_manager=None): /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager) 1921 # No tape is watching; skip to running the function. 1922 return self._build_call_outputs(self._inference_function.call( -> 1923 ctx, args, cancellation_manager=cancellation_manager)) 1924 forward_backward = self._select_forward_and_backward_functions( 1925 args, /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in call(self, ctx, args, cancellation_manager) 548 inputs=args, 549 attrs=attrs, --> 550 ctx=ctx) 551 else: 552 outputs = execute.execute_with_cancellation( /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 58 ctx.ensure_initialized() 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ---> 60 inputs, attrs, num_outputs) 61 except core._NotOkStatusException as e: 62 if name is not None: InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: Cholesky decomposition was not successful. The input might not be valid. [[{{node fit_surrogate_posterior/while/body/_22/fit_surrogate_posterior/while/StatefulPartitionedCall/monte_carlo_variational_loss/expectation/loop_body/PartitionedCall/pfor/PartitionedCall/Cholesky/pfor/Cholesky}}]] [[fit_surrogate_posterior/while/body/_22/fit_surrogate_posterior/while/StatefulPartitionedCall/Adam/Adam/AssignAddVariableOp/_51]] (1) Invalid argument: Cholesky decomposition was not successful. The input might not be valid. [[{{node fit_surrogate_posterior/while/body/_22/fit_surrogate_posterior/while/StatefulPartitionedCall/monte_carlo_variational_loss/expectation/loop_body/PartitionedCall/pfor/PartitionedCall/Cholesky/pfor/Cholesky}}]] 0 successful operations. 0 derived errors ignored. 
[Op:__inference_run_approximation_207085] Function call stack: run_approximation -> run_approximation Can you rebase to current master so that all the covariance functions are available? and I also leave this experiment by @Sayam753 This is a major feature, congrats @Sayam753! @ferrine should we open an issue to keep track of the issue? In another PR I would add full rank to the example NB, which also still reads "Full Rank ADVI - Coming Soon".
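For readers unfamiliar with what "full rank" means operationally, here is a generic TFP sketch of a full-rank Gaussian surrogate fitted with fit_surrogate_posterior. It is not PyMC4's internal implementation; the event size and the stand-in target distribution are assumptions made purely for illustration.

import tensorflow as tf
import tensorflow_probability as tfp

tfd, tfb = tfp.distributions, tfp.bijectors

ndims = 4  # assumed flattened size of all free model parameters

# stand-in target: an isotropic Gaussian, purely for illustration
target = tfd.MultivariateNormalDiag(loc=tf.ones(ndims))
target_log_prob_fn = target.log_prob

# full-rank Gaussian surrogate: trainable mean plus trainable lower-triangular scale
loc = tf.Variable(tf.zeros(ndims), name="loc")
scale_tril = tfp.util.TransformedVariable(
    tf.eye(ndims), bijector=tfb.FillScaleTriL(), name="scale_tril"
)
surrogate_posterior = tfd.MultivariateNormalTriL(loc=loc, scale_tril=scale_tril)

losses = tfp.vi.fit_surrogate_posterior(
    target_log_prob_fn=target_log_prob_fn,
    surrogate_posterior=surrogate_posterior,
    optimizer=tf.optimizers.Adam(learning_rate=0.1),
    num_steps=10_000,
    sample_size=10,
)

The off-diagonal entries of scale_tril are what distinguish this from mean-field ADVI, which only learns a diagonal scale.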
gharchive/pull-request
2020-06-29T15:33:36
2025-04-01T06:40:07.460350
{ "authors": [ "Sayam753", "ferrine", "fonnesbeck", "twiecki" ], "repo": "pymc-devs/pymc4", "url": "https://github.com/pymc-devs/pymc4/pull/289", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2349424955
Consolidate ModelBuilder InferenceData getters (and setters) Think the ModelBuilder class itself could just get the getters (and setters) for:
prior
prior_predictive
fit_result (posterior)
posterior_predictive

They are defined here for mmm: https://github.com/pymc-labs/pymc-marketing/blob/664b5caef1f20785a8f73cac3c4f83a13ceeeee7/pymc_marketing/mmm/base.py#L266-L290 and here for clv: https://github.com/pymc-labs/pymc-marketing/blob/664b5caef1f20785a8f73cac3c4f83a13ceeeee7/pymc_marketing/clv/models/basic.py#L242-L256

Would the clv modules benefit from this? @ColtAllen I'm leaning more toward standardization of code and having informative error messages.

CLVModel overwrites quite a few of the inherited ModelBuilder methods because the latter is built around an "X,Y" variable convention that doesn't apply to the CLV models. That said, the idata methods could benefit from some cleanup, particularly if it gets rid of the following warning that pops up during testing:

~/site-packages/arviz/data/inference_data.py:1538: UserWarning: The group fit_data is not defined in the InferenceData scheme

This warning is probably extraneous from the ModelBuilder class, but also worth mentioning:

~/site-packages/pymc/model/core.py:518: FutureWarning: All coords are now mutable by default. coords_mutable will be removed in a future release.

Yeah, I think we should get rid of these. Have this in mind #669

Maybe there can be two versions of the ModelBuilder? There can be some base class that will handle this common logic of getters and setters. Then two child classes with the X at initialize and X,y in the methods. Just an initial thought...
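A sketch of the consolidation being proposed: hoist the idata-group accessors onto a shared base so the MMM and CLV models do not each redefine them. The property names mirror the issue text, but the class itself is hypothetical and not the actual pymc-marketing code.

class ModelBuilderIdataMixin:
    """Hypothetical shared accessors for the InferenceData groups listed above."""

    idata = None  # arviz.InferenceData, set by fit()/sampling methods

    def _get_group(self, group):
        if self.idata is None or not hasattr(self.idata, group):
            raise RuntimeError(
                f"No '{group}' group found. Call the corresponding sampling method first."
            )
        return getattr(self.idata, group)

    @property
    def prior(self):
        return self._get_group("prior")

    @property
    def prior_predictive(self):
        return self._get_group("prior_predictive")

    @property
    def fit_result(self):
        return self._get_group("posterior")

    @property
    def posterior_predictive(self):
        return self._get_group("posterior_predictive")

The informative RuntimeError is the "standardized error messages" part of the suggestion; both model families would inherit it unchanged.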
gharchive/issue
2024-06-12T18:17:12
2025-04-01T06:40:07.468001
{ "authors": [ "ColtAllen", "wd60622" ], "repo": "pymc-labs/pymc-marketing", "url": "https://github.com/pymc-labs/pymc-marketing/issues/742", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
433055620
pipenv --py crashes Issue description Running pipenv --py in a project with no venv setup crashes. Expected result No stack trace, just an error message. Actual result $ pipenv --py Traceback (most recent call last): File "/home/david/bin/pipenv", line 11, in <module> sys.exit(cli()) File "/home/david/.local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/home/david/.local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 717, in main rv = self.invoke(ctx) File "/home/david/.local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 1114, in invoke return Command.invoke(self, ctx) File "/home/david/.local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/david/.local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/home/david/.local/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py", line 64, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "/home/david/.local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/home/david/.local/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py", line 17, in new_func return f(get_current_context(), *args, **kwargs) File "/home/david/.local/lib/python3.6/site-packages/pipenv/cli/command.py", line 140, in cli do_py() File "/home/david/.local/lib/python3.6/site-packages/pipenv/core.py", line 1635, in do_py click.echo(which("python", allow_global=system)) File "/home/david/.local/lib/python3.6/site-packages/pipenv/core.py", line 108, in which raise RuntimeError("location not created nor specified") RuntimeError: location not created nor specified Steps to replicate Using Ubuntu 18.04: sudo apt update sudo apt install python3-pip pip3 install pipenv # --user is implied so pipenv executable gets installed in /home/$USER/.local/bin/ git clone https://github.com/dlech/vscode-python-issue4866.git cd vscode-python-issue4866 pipenv --py Please run $ pipenv --support, and paste the results here. 
$ pipenv --support Pipenv version: '2018.11.26' Pipenv location: '/home/david/.local/lib/python3.6/site-packages/pipenv' Python location: '/usr/bin/python3' Python installations found: 3.6.7: /usr/bin/python3 3.6.7: /usr/bin/python3.6m 2.7.15rc1: /usr/bin/python2 2.7.1: /usr/bin/jython PEP 508 Information: {'implementation_name': 'cpython', 'implementation_version': '3.6.7', 'os_name': 'posix', 'platform_machine': 'x86_64', 'platform_python_implementation': 'CPython', 'platform_release': '4.18.0-16-generic', 'platform_system': 'Linux', 'platform_version': '#17~18.04.1-Ubuntu SMP Tue Feb 12 13:35:51 UTC 2019', 'python_full_version': '3.6.7', 'python_version': '3.6', 'sys_platform': 'linux'} System environment variables: CLUTTER_IM_MODULE NVM_DIR LS_COLORS LESSCLOSE XDG_MENU_PREFIX LANG DISPLAY GTK2_MODULES DEBFULLNAME GTK_CSD NVM_CD_FLAGS USERNAME CHROME_DESKTOP NO_AT_BRIDGE XDG_VTNR GIO_LAUNCHED_DESKTOP_FILE_PID SSH_AUTH_SOCK PRU_C_DIR MANDATORY_PATH XDG_SESSION_ID USER DESKTOP_SESSION RBENV_SHELL QT4_IM_MODULE GOPATH TEXTDOMAINDIR DROPBOX_USE_LIBAPPINDICATOR DEFAULTS_PATH QT_QPA_PLATFORMTHEME PWD HOME TEXTDOMAIN SSH_AGENT_PID TERM_PROGRAM TERM_PROGRAM_VERSION QT_ACCESSIBILITY LIBVIRT_DEFAULT_URI XDG_SESSION_TYPE XDG_DATA_DIRS GSETTINGS_SCHEMA_DIR XDG_SESSION_DESKTOP GTK_MODULES WINDOWPATH TERM SHELL QT_IM_MODULE XMODIFIERS IM_CONFIG_PHASE NVM_BIN XDG_CURRENT_DESKTOP GPG_AGENT_INFO GIO_LAUNCHED_DESKTOP_FILE XDG_SEAT SHLVL DEBEMAIL GDMSESSION GNOME_DESKTOP_SESSION_ID LOGNAME DBUS_SESSION_BUS_ADDRESS XDG_RUNTIME_DIR XAUTHORITY XDG_CONFIG_DIRS PATH RBENV_VERSION SESSION_MANAGER LESSOPEN GTK_IM_MODULE _ PIP_DISABLE_PIP_VERSION_CHECK PYTHONDONTWRITEBYTECODE PIP_SHIMS_BASE_MODULE PIP_PYTHON_PATH PYTHONFINDER_IGNORE_UNSUPPORTED Pipenv–specific environment variables: Debug–specific environment variables: PATH: /home/david/.nvm/versions/node/v10.15.3/bin:/home/david/work/gocode/bin:/home/david/.rbenv/shims:/home/david/bin:/home/david/work/gocode/bin:/home/david/.rbenv/shims:/home/david/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/david/.dotnet/tools:/home/david/.dotnet/tools SHELL: /bin/bash LANG: en_US.UTF-8 PWD: /home/david/work/junk/vscode-python-issue4866 Contents of Pipfile ('/home/david/work/junk/vscode-python-issue4866/Pipfile'): [[source]] name = "pypi" url = "https://pypi.org/simple" verify_ssl = true [dev-packages] [packages] [requires] python_version = "3.6" Contents of Pipfile.lock ('/home/david/work/junk/vscode-python-issue4866/Pipfile.lock'): { "_meta": { "hash": { "sha256": "415dfdcb118dd9bdfef17671cb7dcd78dbd69b6ae7d4f39e8b44e71d60ca72e7" }, "pipfile-spec": 6, "requires": { "python_version": "3.6" }, "sources": [ { "name": "pypi", "url": "https://pypi.org/simple", "verify_ssl": true } ] }, "default": {}, "develop": {} } Yes the message RuntimeError: location not created nor specified tells what is happening, do you want to improve the message? Welcome It seems that most errors result in a stack trace being printed, so I guess this is normal for pipenv. @frostming maybe pipenv --venv and pipenv --py should have the same style: It makes sense to me. Feel free to shoot a PR.
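A rough sketch of the guard the thread converges on: have pipenv --py fail with a short message, the way pipenv --venv does, instead of printing a traceback. This is illustrative only and not the actual patch that landed in pipenv; `which` stands in for pipenv's internal helper seen in the traceback above.

# Illustrative sketch (not the actual pipenv patch)
import sys
import click

def do_py(system=False):
    try:
        click.echo(which("python", allow_global=system))  # `which` is pipenv's helper
    except RuntimeError:
        click.echo(
            click.style("No virtualenv has been created for this project yet!", fg="red"),
            err=True,
        )
        sys.exit(1)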
gharchive/issue
2019-04-15T00:55:44
2025-04-01T06:40:07.603848
{ "authors": [ "Cologler", "dlech", "frostming" ], "repo": "pypa/pipenv", "url": "https://github.com/pypa/pipenv/issues/3694", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1196485565
Add setuptools.command.build Summary of changes In order to override distutils.command.build on downstream projects it is good to have a setuptools-specific command which allows downstream projects to avoid importing distutils. Closes Pull Request Checklist [x] Changes have tests [x] News fragment added in changelog.d/. (See documentation for details) Thank you very much @isuruf, sorry for the confusion with the extra commits, I was trying to fix the error with flake8, I hope that is OK with you. I am OK with this change, but I will let the other maintainers have a look because I don't know if there are other implications. Ping on this Thank you very much @isuruf and sorry for the delay. Since no one has found any problem with this implementation, I believe we can go ahead... Should we add any protection/migration warnings for the case a user was previously relying on distutils.command.build to set up subcommands? For example, how about something like:

_ORIGINAL_SUBCOMMANDS = {"build_py", "build_clib", "build_ext", "build_scripts"}

class build(_build):
    def run(self):
        subcommands = {cmd[0] for cmd in _build.sub_commands}
        if subcommands - _ORIGINAL_SUBCOMMANDS:
            msg = """
            It seems that you are using `distutils.command.build.build` to add
            new subcommands. Using `distutils` directly is considered deprecated,
            please use `setuptools.command.build`.
            """
            warning.warns(msg, SetuptoolsDeprecationWarning)
        self.subcommands = _build.sub_commands
        super().run()
    ...

What do you think? Makes sense to me. There's a small typo in the line before last. self.subcommands should be self.sub_commands. Can you push into this PR or should I? I can do that later today, but if you would like to go ahead and work on it, that would be great (since I would have to spend some time to create a simple test case). Hi @isuruf, I tried to fix the tests failing in the CI. Please feel free to revert my changes and do something different.
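To make the "downstream projects can avoid importing distutils" point concrete, a hypothetical downstream setup.py could hook an extra sub-command through the new class roughly as below. The build_custom command is invented for illustration; only the setuptools.command.build import reflects what this PR actually adds.

# Hypothetical downstream setup.py using the new setuptools-specific command
from setuptools import Command, setup
from setuptools.command.build import build as _build

class build_custom(Command):
    """Invented example sub-command (e.g. generate resources before build)."""
    user_options = []
    def initialize_options(self): ...
    def finalize_options(self): ...
    def run(self):
        print("generating extra build artifacts")

class build(_build):
    # run build_custom in addition to the standard sub-commands
    sub_commands = [("build_custom", None), *_build.sub_commands]

setup(cmdclass={"build": build, "build_custom": build_custom})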
gharchive/pull-request
2022-04-07T19:34:40
2025-04-01T06:40:07.639557
{ "authors": [ "abravalheri", "isuruf" ], "repo": "pypa/setuptools", "url": "https://github.com/pypa/setuptools/pull/3256", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1268605067
Update dependency management docs Summary of changes
- Extract the dependency_links section to a new file: deprecated/dependency_links.rst
- Add a note about direct URLs not being accepted on PyPI.
- Simplify intro about build system requirement.
- Simplify intro about optional dependencies.
- Fix confusion in example about "Project" and "Package".
- "Demote" section about extras in entry-points to a note.

Pull Request Checklist [ ] Changes have tests [ ] News fragment added in changelog.d/. (See documentation for details)

@abravalheri thanks for this PR. Now that dependency_links is deprecated, is there an alternative way to specify a different non-PyPI package index URL [e.g. in Artifactory/JFrog] to setuptools.setup()? I've tried the install_requires argument to setuptools.setup() like so: install_requires=[mypackage @ http://:@:8082/api/pypi/pypi/simple] but install_requires seems to expect to download source code as a [.zip] archive. Ideally, I'm looking for something equivalent to pip's extra-index-url setting, which is specifiable directly to setuptools.setup().
gharchive/pull-request
2022-06-12T14:46:20
2025-04-01T06:40:07.644885
{ "authors": [ "abravalheri", "dkondoetsy" ], "repo": "pypa/setuptools", "url": "https://github.com/pypa/setuptools/pull/3364", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
453752406
"beta" badge for WebAuthn beta feature within Account Settings What's the problem this feature will solve? The new WebAuthn two-factor auth method is going to -- initially -- be in beta, and we want to alert users to that when letting them use it. Describe the solution you'd like For the WebAuthn rollout, @ewdurbin suggested that we initially add a badge in the Account Settings marking the WebAuthn 2FA method as "beta". This badge should be on the same line as the WebAuthn entry (and clearly NOT apply to the TOTP method) and should probably link to an FAQ entry or GitHub issue or wiki page about the beta. Additional context Because we don't have #5869 yet. Fixed in #5977! Thank you @nlhkabu!
gharchive/issue
2019-06-08T03:54:30
2025-04-01T06:40:07.651321
{ "authors": [ "brainwane" ], "repo": "pypa/warehouse", "url": "https://github.com/pypa/warehouse/issues/5976", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
478480120
Increasing package size limit for apache-flink Hi, I would like to request that you increase the size limit of the package "apache-flink" to 400MB, because in the next release apache-flink will package the Flink core JAR files and opt (connector JARs); in that case the size will increase to almost 400MB. Please help me increase the upload limit, thank you. There is a developmental release on both PyPI and Test PyPI for the moment, but it is incomplete. It's actually just a test release package. A link to your project on PyPI (or Test PyPI): https://test.pypi.org/project/apache-flink/ https://pypi.org/project/apache-flink/ The size of your release, in megabytes: Almost 400MB (After official release) Which index/indexes you need the increase for (PyPI, Test PyPI, or both): Both A brief description of your project, including the reason for the additional size. Apache Flink allows you to write Flink programs in Python. The additional size comes from the Flink JAR files. BTW: The discussion and vote mail thread can be found here: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Publish-the-PyFlink-into-PyPI-td31201.html I would appreciate it if you could have a look at this application @jamadden @nlhkabu Best, Jincheng There is a previous distribution of apache flink on PyPI. In https://github.com/pypa/warehouse/issues/6116 it was given a limit of 300MB, though it appears that was never used. If this project is to officially replace that one, perhaps there could be some coordination to remove it from PyPI so as to prevent user confusion and excess resource usage. 300MB is on the very upper end of sizes that moderators are suggested to approve without consulting with other moderators. 400MB is in the area that requires more scrutiny. In particular, if "the package is attempting to ship ... non-Python stuff like OpenJDK" moderators are suggested to seek further clarification from the requestor and discuss with the other moderators. Besides the PyPI resource usage, large packages don't produce a very good user experience, so we like to help maintainers find alternate solutions. @sunjincheng121 could you please provide more details about why this project needs to be so large? Were any other distribution options considered and rejected (and if so, why)? For example, one suggestion is that the package could "side-load large models or datasets via downloads from elsewhere at runtime/installtime." Or, since you mention multiple different JARs, another option might be to package them independently and "break the package into smaller packages that depend on each other as necessary." Hi @jamadden, I am very sorry that I did not explain the relationship between https://pypi.org/project/pyflink/ and https://pypi.org/project/apache-flink/. Before the Flink community discussion, we wanted to use pyflink as the release project, but pyflink was created by other people not related to Apache Flink. After discussions in the Flink community, we finally decided to create a new apache-flink project as the official release. (The pyflink project is not part of Flink, so we cannot delete it for now.) Regarding the size of the package, I am worried about future expansion, so I applied for 400MB. It is also possible to apply for 300MB at present. For other packages needed in the future, as you suggested, we can also download them dynamically with a script.
So a brief summary: https://test.pypi.org/project/apache-flink/ is an official project of Apache Flink (on behalf of the Flink PMC) Apply for a package size of up to 300MB If there is anything I did not explain clearly, please let me know :) Best, Jincheng @jamadden @nlhkabu is there something I should explain? :) @jamadden @sunjincheng121 Thanks a lot for the discussion. Any progress? @jamadden Do you think it makes sense to increase the package size for now? We can continue to investigate other options, such as breaking the package into a few smaller packages per your suggestion. @jamadden @nlhkabu @di could you please have a look at this application? Are there any suggestions for what I should explain or improve? My apologies for the delay, my time for this has been extremely limited lately. Thank you for the detailed information. Based on it, I've taken the following actions: Increase apache-flink to 300MB on both pypi.org and test.pypi.org (larger sizes might need other moderators to take a look) Restore pyflink to the default limit, reverting https://github.com/pypa/warehouse/issues/6116 Thanks @jamadden !
gharchive/issue
2019-08-08T13:43:03
2025-04-01T06:40:07.662945
{ "authors": [ "dianfu", "jamadden", "sunjincheng121" ], "repo": "pypa/warehouse", "url": "https://github.com/pypa/warehouse/issues/6394", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
502276446
Add Travis CI badge to README Warehouse uses Travis CI to run integration tests. Our README.md file doesn't have a badge at the top saying that our Travis CI builds are passing. You can look at https://shields.io/category/build to see how to add one. Good First Issue: This issue is good for first time contributors. If you've already contributed to Warehouse, work on another issue without this label instead. If there is not a corresponding pull request for this issue, it is up for grabs. For directions for getting set up, see our Getting Started Guide. If you are working on this issue and have questions, feel free to ask them here, #pypa-dev on Freenode, or the pypa-dev mailing list. @brainwane Can I work on this issue? Sure, go ahead @Patil2099! A longer response for @Patil2099 -- Yes, please go ahead and grab this issue! At least within Warehouse, our rule is: If there is not a corresponding pull request for this issue (visibly linked by issue number/PR number), and it's not marked as assigned to anyone, it is up for grabs. So in cases like this, you don't even need to ask. :) Just comment and say that you're starting to work on it, and link in a comment to your work-in-progress branch once you start. As always, if you have questions along the way as you work on this, please feel free to ask them here, in #pypa-dev on Freenode, or the pypa-dev mailing list. Thank you!
gharchive/issue
2019-10-03T20:18:59
2025-04-01T06:40:07.668577
{ "authors": [ "Patil2099", "brainwane" ], "repo": "pypa/warehouse", "url": "https://github.com/pypa/warehouse/issues/6752", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
629191079
fix parenthesis error in deprecation docs I'd like for @Manthan03583 to take this one. https://warehouse.readthedocs.io/application/#historical-context-deprecations has a grammar error in the line download counts visible in the API: instead, use the Google BigQuery service) This is a good first contribution for someone who's new to this repository. You open up docs/application.rst and change - `download counts visible in the API <https://warehouse.readthedocs.io/api-reference/xml-rpc/#changes-to-legacy-api>`_: instead, use `the Google BigQuery service <https://packaging.python.org/guides/analyzing-pypi-package-downloads/>`_) to - `download counts visible in the API <https://warehouse.readthedocs.io/api-reference/xml-rpc/#changes-to-legacy-api>`_ (instead, use `the Google BigQuery service <https://packaging.python.org/guides/analyzing-pypi-package-downloads/>`_) and then open a pull request. -- Good First Issue for @Manthan03583: For directions for getting set up, see our Getting Started Guide. If you are working on this issue and have questions, feel free to ask them here, #pypa-dev on Freenode, or the distutils-sig.python.org mailing list. Thank you mailing list link in developer docs updated #8030 Hi @Manthan03583 - please come into IRC https://webchat.freenode.net/?channels=%23pypa-dev now, if you can. sorry mam , my internet connection is running slowly now. And I am sorry i mixed both the commits in one pull request. @Manthan03583 I think you need some basic help using git before you continue. Instead of committing your proposed changes to your master branch, you need to create a new branch and commit the changes to that new branch. I suggest that you read a training kit from https://github.github.com/training-kit/ (available in several languages) from start to finish, and that you read https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/proposing-changes-to-your-work-with-pull-requests as well. This will probably take you a few days. If you need some more guidance to understand how to make a new branch and create a pull request from it, please take a look at https://zulip.readthedocs.io/en/latest/git/ which may also help you. Then, after you've done that, please open a new PR in this repository, one that just contains https://github.com/pypa/warehouse/pull/8030/commits/bbeb7d11e9cf4ddd6770d91f0d48ae144d47fa2c . Thanks. parenthesis error in deprecation docs fixed #8046
gharchive/issue
2020-06-02T13:10:57
2025-04-01T06:40:07.676600
{ "authors": [ "Manthan03583", "brainwane" ], "repo": "pypa/warehouse", "url": "https://github.com/pypa/warehouse/issues/8031", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
684225500
Add help for pasting password in Windows Following up on https://github.com/pypa/twine/issues/671#issuecomment-674510081 and the subsequent discussion. In short, I think tokens are causing more folks to copy/paste instead of typing by hand, but that doesn't work great in the Windows Command Prompt. It seemed worth documenting, and this felt like an appropriate location. @di Can you take a look at this, both for the content, and the pending Travis check?
gharchive/pull-request
2020-08-23T17:41:05
2025-04-01T06:40:07.678194
{ "authors": [ "bhrutledge" ], "repo": "pypa/warehouse", "url": "https://github.com/pypa/warehouse/pull/8463", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
929447555
pyproj ImportError : undefined symbol: proj_context_set_ca_bundle_path Code Sample, a copy-pastable example if possible A "Minimal, Complete and Verifiable Example" will make it much easier for maintainers to help you: http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports jldz9@r10smithryang:~/InSAR_Sentinel_3/Mississippi$ save_qgis.py Traceback (most recent call last): File "/usr/local/home/jldz9/tools/MintPy/mintpy/save_qgis.py", line 17, in <module> from mintpy.utils import ptime, readfile, utils as ut File "/usr/local/home/jldz9/tools/MintPy/mintpy/utils/utils.py", line 24, in <module> from mintpy.utils.utils0 import * File "/usr/local/home/jldz9/tools/MintPy/mintpy/utils/utils0.py", line 23, in <module> from pyproj import Proj, Transformer File "/usr/local/home/jldz9/anaconda3/envs/mintpy/lib/python3.8/site-packages/pyproj/__init__.py", line 49, in <module> import pyproj.network File "/usr/local/home/jldz9/anaconda3/envs/mintpy/lib/python3.8/site-packages/pyproj/network.py", line 10, in <module> from pyproj._network import ( # noqa: F401 ImportError: /usr/local/home/jldz9/anaconda3/envs/mintpy/lib/python3.8/site-packages/pyproj/_network.cpython-38-x86_64-linux-gnu.so: undefined symbol: proj_context_set_ca_bundle_path Problem description Pyproj is imported by mintpy function save_qgis.py When I tried it out it it simply give me the import error, I opened the issue there but the author told me it might be the pyproj install issue. Expected Output expect to output help page if I just simply input save_qgis.py without any parameters Environment Information Output from: pyproj -v pyproj info: pyproj: 3.1.0 PROJ: 7.2.0 data dir: /usr/local/home/jldz9/anaconda3/envs/mintpy/share/proj user_data_dir: /usr/local/home/jldz9/.local/share/proj System: python: 3.8.10 | packaged by conda-forge | (default, May 11 2021, 07:01:05) [GCC 9.3.0] executable: /usr/local/home/jldz9/anaconda3/envs/mintpy/bin/python machine: Linux-5.4.0-73-generic-x86_64-with-glibc2.10 Python deps: certifi: 2021.05.30 pip: 21.1.1 setuptools: 52.0.0.post20210125 Cython: None Output from: python -m pyproj -v Output from: python -c "import pyproj; pyproj.show_versions()" pyproj version (python -c "import pyproj; print(pyproj.__version__)") PROJ version (python -c "import pyproj; print(pyproj.proj_version_str)") PROJ data directory (python -c "import pyproj; print(pyproj.datadir.get_data_dir())") Python version (python -c "import sys; print(sys.version.replace('\n', ' '))") Operation System Information (python -c "import platform; print(platform.platform())") Installation method conda, pip wheel, from source, etc... 
conda Conda environment information (if you installed with conda): Environment (conda list): $ conda list proj # packages in environment at /usr/local/home/jldz9/anaconda3/envs/mintpy: # # Name Version Build Channel proj 7.2.0 h277dcde_2 conda-forge pyproj 3.1.0 py38h53229fd_3 conda-forge $conda list (mintpy) jldz9@r10smithryang:~/InSAR_Sentinel_3/Mississippi$ conda list # packages in environment at /usr/local/home/jldz9/anaconda3/envs/mintpy: # # Name Version Build Channel _libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 1_gnu conda-forge alsa-lib 1.2.3 h516909a_0 conda-forge appdirs 1.4.4 pyh9f0ad1d_0 conda-forge asciitree 0.3.3 py_2 conda-forge blosc 1.21.0 h9c3ff4c_0 conda-forge bokeh 2.3.2 py38h578d9bd_0 conda-forge boost-cpp 1.74.0 hc6e9bd1_3 conda-forge brotli 1.0.9 h9c3ff4c_4 conda-forge brotlipy 0.7.0 py38h497a2fe_1001 conda-forge brunsli 0.1 h9c3ff4c_0 conda-forge bzip2 1.0.8 h7f98852_4 conda-forge c-ares 1.17.1 h7f98852_1 conda-forge ca-certificates 2021.5.30 ha878542_0 conda-forge cairo 1.16.0 h6cf1ce9_1008 conda-forge cartopy 0.19.0.post1 py38hc9c980b_0 conda-forge cdsapi 0.5.1 pyhd8ed1ab_0 conda-forge certifi 2021.5.30 py38h578d9bd_0 conda-forge cffi 1.14.5 py38ha65f79e_0 conda-forge cfitsio 3.470 hb418390_7 conda-forge cftime 1.5.0 py38hb5d20a5_0 conda-forge chardet 4.0.0 py38h578d9bd_1 conda-forge charls 2.2.0 h9c3ff4c_0 conda-forge click 8.0.1 py38h578d9bd_0 conda-forge cloudpickle 1.6.0 py_0 conda-forge configobj 5.0.6 py_0 conda-forge cryptography 3.4.7 py38ha5dfef3_0 conda-forge curl 7.77.0 hea6ffbf_0 conda-forge cvxopt 1.2.6 py38h55e5319_0 conda-forge cycler 0.10.0 py_2 conda-forge cytoolz 0.11.0 py38h497a2fe_3 conda-forge dask 2021.6.2 pyhd8ed1ab_0 conda-forge dask-core 2021.6.2 pyhd8ed1ab_0 conda-forge dask-jobqueue 0.7.2 pyhd8ed1ab_1 conda-forge dbus 1.13.6 h48d8840_2 conda-forge decorator 5.0.9 pyhd8ed1ab_0 conda-forge defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge distributed 2021.6.2 py38h578d9bd_0 conda-forge dsdp 5.8 hd9d9efa_1203 conda-forge eccodes 2.21.0 ha0e6eb6_0 conda-forge expat 2.4.1 h9c3ff4c_0 conda-forge fasteners 0.16 pyhd8ed1ab_0 conda-forge fftw 3.3.9 nompi_h74d3f13_101 conda-forge fontconfig 2.13.1 hba837de_1005 conda-forge freetype 2.10.4 h0708190_1 conda-forge freexl 1.0.6 h7f98852_0 conda-forge fsspec 2021.6.1 pyhd8ed1ab_0 conda-forge gdal 3.2.1 py38hc0b2d6b_7 conda-forge geos 3.9.1 h9c3ff4c_2 conda-forge geotiff 1.6.0 h2b14fbe_4 conda-forge gettext 0.19.8.1 h0b5b191_1005 conda-forge giflib 5.2.1 h36c2ea0_2 conda-forge glib 2.68.3 h9c3ff4c_0 conda-forge glib-tools 2.68.3 h9c3ff4c_0 conda-forge glpk 4.65 h9202a9a_1004 conda-forge gmp 6.2.1 h58526e2_0 conda-forge gsl 2.6 he838d99_2 conda-forge gst-plugins-base 1.18.4 hf529b03_2 conda-forge gstreamer 1.18.4 h76c114f_2 conda-forge h5py 2.10.0 nompi_py38h9915d05_106 conda-forge hdf4 4.2.15 h10796ff_3 conda-forge hdf5 1.10.6 nompi_h6a2412b_1114 conda-forge heapdict 1.0.1 py_0 conda-forge icu 68.1 h58526e2_0 conda-forge idna 2.10 pyh9f0ad1d_0 conda-forge imagecodecs 2021.3.31 py38h1455ab2_0 conda-forge imageio 2.9.0 py_0 conda-forge jasper 1.900.1 h07fcdf6_1006 conda-forge jinja2 3.0.1 pyhd8ed1ab_0 conda-forge jpeg 9d h36c2ea0_0 conda-forge json-c 0.15 h98cffda_0 conda-forge jxrlib 1.1 h7f98852_2 conda-forge kealib 1.4.14 hcc255d8_2 conda-forge kiwisolver 1.3.1 py38h1fd1430_1 conda-forge krb5 1.19.1 hcc1bbae_0 conda-forge lcms2 2.12 hddcbb42_0 conda-forge ld_impl_linux-64 2.35.1 hea4e1c9_2 conda-forge lerc 2.2.1 h9c3ff4c_0 conda-forge libaec 1.0.5 h9c3ff4c_0 conda-forge libblas 3.9.0 9_openblas 
conda-forge libcblas 3.9.0 9_openblas conda-forge libclang 11.1.0 default_ha53f305_1 conda-forge libcurl 7.77.0 h2574ce0_0 conda-forge libdap4 3.20.6 hd7c4107_2 conda-forge libdeflate 1.7 h7f98852_5 conda-forge libedit 3.1.20191231 he28a2e2_2 conda-forge libev 4.33 h516909a_1 conda-forge libevent 2.1.10 hcdb4288_3 conda-forge libffi 3.3 h58526e2_2 conda-forge libgcc-ng 9.3.0 h2828fa1_19 conda-forge libgdal 3.2.1 h38ff51b_7 conda-forge libgfortran-ng 9.3.0 hff62375_19 conda-forge libgfortran5 9.3.0 hff62375_19 conda-forge libglib 2.68.3 h3e27bee_0 conda-forge libgomp 9.3.0 h2828fa1_19 conda-forge libiconv 1.16 h516909a_0 conda-forge libkml 1.3.0 h238a007_1013 conda-forge liblapack 3.9.0 9_openblas conda-forge libllvm11 11.1.0 hf817b99_2 conda-forge libnetcdf 4.7.4 nompi_h56d31a8_107 conda-forge libnghttp2 1.43.0 h812cca2_0 conda-forge libogg 1.3.4 h7f98852_1 conda-forge libopenblas 0.3.15 pthreads_h8fe5266_1 conda-forge libopus 1.3.1 h7f98852_1 conda-forge libpng 1.6.37 h21135ba_2 conda-forge libpq 13.3 hd57d9b9_0 conda-forge librttopo 1.1.0 h1185371_6 conda-forge libspatialite 5.0.1 he52d314_3 conda-forge libssh2 1.9.0 ha56f1ee_6 conda-forge libstdcxx-ng 9.3.0 h6de172a_19 conda-forge libtiff 4.2.0 hbd63e13_2 conda-forge libuuid 2.32.1 h7f98852_1000 conda-forge libvorbis 1.3.7 h9c3ff4c_0 conda-forge libwebp-base 1.2.0 h7f98852_2 conda-forge libxcb 1.13 h7f98852_1003 conda-forge libxkbcommon 1.0.3 he3ba5ed_0 conda-forge libxml2 2.9.12 h72842e0_0 conda-forge libxslt 1.1.33 h15afd5d_2 conda-forge libzopfli 1.0.3 h9c3ff4c_0 conda-forge llvm-openmp 8.0.1 hc9558a2_0 conda-forge locket 0.2.0 py_2 conda-forge lxml 4.6.3 py38hf1fe3a4_0 conda-forge lz4-c 1.9.3 h9c3ff4c_0 conda-forge markupsafe 2.0.1 py38h497a2fe_0 conda-forge matplotlib 3.4.2 py38h578d9bd_0 conda-forge matplotlib-base 3.4.2 py38hcc49a3a_0 conda-forge metis 5.1.0 h58526e2_1006 conda-forge monotonic 1.5 py_0 conda-forge mpfr 4.0.2 he80fd80_1 conda-forge msgpack-python 1.0.2 py38h1fd1430_1 conda-forge mysql-common 8.0.25 ha770c72_0 conda-forge mysql-libs 8.0.25 h935591d_0 conda-forge ncurses 6.2 h58526e2_4 conda-forge netcdf4 1.5.6 nompi_py38hf887595_102 conda-forge networkx 2.5 py_0 conda-forge nspr 4.30 h9c3ff4c_0 conda-forge nss 3.67 hb5efdd6_0 conda-forge numcodecs 0.7.3 py38h709712a_0 conda-forge numpy 1.21.0 py38h9894fe3_0 conda-forge olefile 0.46 pyh9f0ad1d_1 conda-forge openjpeg 2.4.0 hb52868f_1 conda-forge openmp 8.0.1 0 conda-forge openssl 1.1.1k h7f98852_0 conda-forge packaging 20.9 pyh44b312d_0 conda-forge pandas 1.2.5 py38h1abd341_0 conda-forge partd 1.2.0 pyhd8ed1ab_0 conda-forge pcre 8.45 h9c3ff4c_0 conda-forge pillow 8.2.0 py38ha0e1e83_1 conda-forge pip 21.1.2 pyhd8ed1ab_0 conda-forge pixman 0.40.0 h36c2ea0_0 conda-forge pooch 1.4.0 pyhd8ed1ab_0 conda-forge poppler 0.89.0 h2de54a5_5 conda-forge poppler-data 0.4.10 0 conda-forge postgresql 13.3 h2510834_0 conda-forge proj 7.2.0 h277dcde_2 conda-forge psutil 5.8.0 py38h497a2fe_1 conda-forge pthread-stubs 0.4 h36c2ea0_1001 conda-forge pycparser 2.20 pyh9f0ad1d_2 conda-forge pygrib 2.1.3 py38h549f6ee_0 conda-forge pyhdf 0.10.3 py38he643327_0 conda-forge pykdtree 1.3.4 py38h0b5ebd8_0 conda-forge pykml 0.2.0 pypi_0 pypi pyopenssl 20.0.1 pyhd8ed1ab_0 conda-forge pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge pyproj 3.1.0 py38h53229fd_3 conda-forge pyqt 5.12.3 py38h578d9bd_7 conda-forge pyqt-impl 5.12.3 py38h7400c14_7 conda-forge pyqt5-sip 4.19.18 py38h709712a_7 conda-forge pyqtchart 5.12 py38h7400c14_7 conda-forge pyqtwebengine 5.12.1 py38h7400c14_7 conda-forge pyresample 1.20.0 
py38h1abd341_0 conda-forge pyshp 2.1.3 pyh44b312d_0 conda-forge pysocks 1.7.1 py38h578d9bd_3 conda-forge pysolid 0.1.2 pypi_0 pypi python 3.8.10 h49503c6_1_cpython conda-forge python-dateutil 2.8.1 py_0 conda-forge python_abi 3.8 2_cp38 conda-forge pytz 2021.1 pyhd8ed1ab_0 conda-forge pywavelets 1.1.1 py38h5c078b8_3 conda-forge pyyaml 5.4.1 py38h497a2fe_0 conda-forge qt 5.12.9 hda022c4_4 conda-forge readline 8.1 h46c0cb4_0 conda-forge requests 2.25.1 pyhd3deb0d_0 conda-forge scikit-image 0.18.1 py38h51da96c_0 conda-forge scipy 1.6.3 py38h7b17777_0 conda-forge setuptools 49.6.0 py38h578d9bd_3 conda-forge shapely 1.7.1 py38haeee4fe_5 conda-forge six 1.16.0 pyh6c4a22f_0 conda-forge snappy 1.1.8 he1b5a44_3 conda-forge sortedcontainers 2.4.0 pyhd8ed1ab_0 conda-forge sqlite 3.36.0 h9cd32fc_0 conda-forge suitesparse 5.10.1 hd8046ac_0 conda-forge tbb 2020.2 h4bd325d_4 conda-forge tblib 1.7.0 pyhd8ed1ab_0 conda-forge tifffile 2021.4.8 pyhd8ed1ab_0 conda-forge tiledb 2.2.9 h91fcb0e_0 conda-forge tk 8.6.10 h21135ba_1 conda-forge toolz 0.11.1 py_0 conda-forge tornado 6.1 py38h497a2fe_1 conda-forge tqdm 4.61.1 pyhd8ed1ab_0 conda-forge typing_extensions 3.10.0.0 pyha770c72_0 conda-forge tzcode 2021a h7f98852_1 conda-forge tzdata 2021a he74cb21_0 conda-forge urllib3 1.26.5 pyhd8ed1ab_0 conda-forge wheel 0.36.2 pyhd3deb0d_0 conda-forge xarray 0.18.2 pyhd8ed1ab_0 conda-forge xerces-c 3.2.3 h9d8b166_2 conda-forge xorg-kbproto 1.0.7 h7f98852_1002 conda-forge xorg-libice 1.0.10 h7f98852_0 conda-forge xorg-libsm 1.2.3 hd9c2040_1000 conda-forge xorg-libx11 1.7.2 h7f98852_0 conda-forge xorg-libxau 1.0.9 h7f98852_0 conda-forge xorg-libxdmcp 1.1.3 h7f98852_0 conda-forge xorg-libxext 1.3.4 h7f98852_1 conda-forge xorg-libxrender 0.9.10 h7f98852_1003 conda-forge xorg-renderproto 0.11.1 h7f98852_1002 conda-forge xorg-xextproto 7.3.0 h7f98852_1002 conda-forge xorg-xproto 7.0.31 h7f98852_1007 conda-forge xz 5.2.5 h516909a_1 conda-forge yaml 0.2.5 h516909a_0 conda-forge zarr 2.8.3 pyhd8ed1ab_0 conda-forge zfp 0.5.5 h9c3ff4c_5 conda-forge zict 2.0.0 py_0 conda-forge zlib 1.2.11 h516909a_1010 conda-forge zstd 1.4.9 ha95c52a_0 conda-forge Details about conda and system ( conda info ): $ conda info (mintpy) jldz9@r10smithryang:~/InSAR_Sentinel_3/Mississippi$ conda info active environment : mintpy active env location : /usr/local/home/jldz9/anaconda3/envs/mintpy shell level : 2 user config file : /usr/local/home/jldz9/.condarc populated config files : conda version : 4.10.1 conda-build version : 3.21.4 python version : 3.8.8.final.0 virtual packages : __cuda=9.1=0 __linux=5.4.0=0 __glibc=2.31=0 __unix=0=0 __archspec=1=x86_64 base environment : /usr/local/home/jldz9/anaconda3 (writable) conda av data dir : /usr/local/home/jldz9/anaconda3/etc/conda conda av metadata url : https://repo.anaconda.com/pkgs/main channel URLs : https://repo.anaconda.com/pkgs/main/linux-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/linux-64 https://repo.anaconda.com/pkgs/r/noarch package cache : /usr/local/home/jldz9/anaconda3/pkgs /usr/local/home/jldz9/.conda/pkgs envs directories : /usr/local/home/jldz9/anaconda3/envs /usr/local/home/jldz9/.conda/envs platform : linux-64 user-agent : conda/4.10.1 requests/2.25.1 CPython/3.8.8 Linux/5.4.0-73-generic ubuntu/20.04.2 glibc/2.31 UID:GID : 73268:5513 netrc file : /usr/local/home/jldz9/.netrc offline mode : False @jldz9 I would recommend re-creating your conda environment. 
Whenever I see the undefined symbol error, it usually indicates that pyproj was built against a different version of PROJ than it is running against. I am going to recommend re-creating your environment and ensuring that you use conda activate <environment name> before use. If that does not solve your issue, I recommend raising an issue here: https://github.com/conda-forge/pyproj-feedstock/
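One quick way to sanity-check the recreated environment is pyproj's own diagnostics, the same calls used in the issue template above:

```python
import pyproj

# Prints pyproj, PROJ and data-directory details, like the report above.
pyproj.show_versions()
print(pyproj.proj_version_str)  # PROJ version the bindings are running against
```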
gharchive/issue
2021-06-24T17:21:27
2025-04-01T06:40:07.724539
{ "authors": [ "jldz9", "snowman2" ], "repo": "pyproj4/pyproj", "url": "https://github.com/pyproj4/pyproj/issues/864", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1552058659
Binaries for v13.3.1 Node 16 (v93) Hi, looking at https://supabase-public-artifacts-bucket.s3.amazonaws.com/ the last version uploaded is v13.3.0. Would you mind adding v13.3.1 for Node 16 v93, both linux x64 and arm64? Also v14.0.0 and v15.0.0 would be ideal. This is preventing me from upgrading some dependencies. Thanks in advance 13.3.1 has been uploaded. The pre-built binaries are an optimization, and should not be a blocker for upgrades--it'll fall back to compiling from source if needed.
gharchive/issue
2023-01-22T09:43:37
2025-04-01T06:40:07.744630
{ "authors": [ "darora", "ruggi99" ], "repo": "pyramation/libpg-query-node", "url": "https://github.com/pyramation/libpg-query-node/issues/28", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
759873225
fix: Fixed media content upload to bucket As witnessed over the past day, we've had some troubles with media upload. Some were fortunately fixed in #95 and #96, but two issues remained: (1) adding the subfolder in the bucket name was not supported by the CSP client, and (2) an empty file could be uploaded if the byte position of a file were to be offset, for instance. This PR fixes both issues by adding a generalized bucket key resolution and resetting the byte position after hashing to avoid uploading empty files. I tried it on my end by deploying the feature branch on Heroku, and it works: Feel free to try it by yourself before the next deployment :ok_hand: Any feedback is welcome! Added a unittest for bucket key resolution (after all, I introduced a feature, so this should be tested) And additionally, I think I found the problem with ConnectionError: the tests were run right after the docker started, while the FastAPI server needs some time to run and become reachable :sweat_smile: I added a workflow fix (moving the docker start as early as possible, and adding a sleep before running the tests) and it seems to be doing the trick :raised_hands:
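As a rough sketch of the byte-position fix described in this PR (the bucket client and its upload call are placeholders, not the project's actual API):

```python
import hashlib


def hash_then_upload(bucket_client, bucket_key: str, file_obj) -> str:
    """Hash a file-like object, then rewind it so the upload is not empty."""
    sha256 = hashlib.sha256()
    for chunk in iter(lambda: file_obj.read(8192), b""):
        sha256.update(chunk)
    # Hashing consumed the stream; reset the byte position before uploading.
    file_obj.seek(0)
    bucket_client.upload_file(bucket_key, file_obj)  # placeholder client call
    return sha256.hexdigest()
```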
gharchive/pull-request
2020-12-08T23:50:53
2025-04-01T06:40:07.748038
{ "authors": [ "frgfm" ], "repo": "pyronear/pyro-api", "url": "https://github.com/pyronear/pyro-api/pull/100", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1085195439
Replace log4j with another logger We don't use logging that extensively in our agent, but due to recent events it seems as though it may be worthwhile to just switch from log4j to another logger. Would be open to any other suitable logger here. Considering the amount of attention from security researchers that log4j has been enjoying recently, it probably can be considered quite secure now :) @ivanyu fair enough... We'll consider them on strike two. One more and I think we will have to strongly consider the switch :)
gharchive/issue
2021-12-20T20:57:06
2025-04-01T06:40:07.749606
{ "authors": [ "Rperry2174", "ivanyu" ], "repo": "pyroscope-io/pyroscope-java", "url": "https://github.com/pyroscope-io/pyroscope-java/issues/7", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1945302099
To evaluate or not to evaluate ... that's the question! Background This module is based on coincident orchestration and a great achievement of that module, despite requiring inevitably special headers to allow SharedArrayBuffer, is that no code evaluation ever happens: it's all about proxies, Atomics or standard Web features, without "eval" like security concerns ever. At the same time, because of those required special headers and to make this module as portable as possible when it comes to workers support, without needing to rely on foreign headers at all when that's desired, this module pre-bundles some file before publishing, also providing a nonce that 3rd party bundlers can use to eventually allow the xworker Blob code whenever CSP rules are both present and rightly strict / paranoid in terms of which code is trusted and which one isn't. Such nonce is indeed embedded in the package.json as sha-256 value and, for whoever is wondering what is that about indeed, that's the checksum for the ./worker/xworker.js file that the build artifact creates before this module reaches npm (or just to even pass integration tests). Current State I had strong discussions against evaluating any code in the worker space because of CSP constraints and possible attached shenanigans, but while I am still fully convinced not evaluating anything in the worker space is actually desirable, automatically defeating tons of different classes of potential attacks to the code, this choice, and my position, is very likely limiting the DX related to hooks and/or the ability to actually pass callbacks to the worker as JS things to do and execute at the right time, like it is already for the main thread story, helped by the fact in there there's no need to survive postMessage related dances, so it's easier to add callbacks to onBeforeRun and friends without ever worry about CSP contraints. However, while working, thinking, refactoring, and working on other things around this PR, I've wondered myself why shouldn't developers be able to define for either main or workers the very same list of hooks, also providing a way in the worker to deal with the current interpreter before (or after) any code runs at all, enabling tons of features that otherwise need to branch between main and workers all the time ... heck, I even think that PR is wrong in somehow branching main VS thread, as in an ideal world we want to just define these hooks, and stop caring about which one is running where, as long as there is any mean/way to disambiguate the hook is running in a main thread, or a worker one: onReady onBeforeRun onBeforeRunAsync onAfterRun onAfterRunAsync codeBeforeRun codeBeforeRunAsync codeAfterRun codeAfterRunAsync If we had these normalized no matter where, simply adding some detail to their callbacks when possible (I still think code is about python or other interpreters code, and these already provide a way to disambiguate, as it s for pyscript module, but other hooks can also provide as global IS_WORKER flag if needed), the only hook that'd be left over to date is the onWorkerReady one, because this is a hook that makes sense only on the main thread, as that provides the xworker referece to eventually attach utilities before the worker even runs via sync or anything else we want or need in the future. 
In short, I wonder if just having all hooks normalized for all worlds makes sense, but then again, callbacks posted as strings to workers won't be able to carry their own outer scope like it is for main, so that it is still somehow desirable to disambiguate hooks by main one and thread one, where callbacks without outer scope dependencies can work regardless for both cases, but it's still possible to fine-tune those callbacks that wouldn't. The code story would be the same for both main and thread, so that even in main it would be possible to add code related hooks, instead of just JS one, but basically anyone would be free to actually hook their own code or plugin at any time around the lifecycle of a polyscript script enabled script or custom type ... after all, for workers that would be just enabling extra trust to an already inevitably evaluated part of the project. What do you you think? /cc @antocuni @fpliger @ntoll @JeffersGlass @bugzpodder I've ready 3 times what I wrote and I know some part might feel confusing ... I'll try to hopefully explain some part: No outer scope in Workers import { hooks } from '@pyscript/core'; let i = 0; // this hooks works only in *main* hooks.onBeforeRun(() => { console.log(i++) }); // in workers, this can be serialized only as fn.toString() // producing just "() => { console.log(i++) }" as string. // that `i++` trusts an outer scope entry or a global one // failing miserably, so that simple hook is not // cross main/worker usable/reliable Disambiguation in JS code To solve previous caveat, we can pass a worker boolean (or any other name) as wrapper utility, asking users to branch out their intents: import { hooks } from '@pyscript/core'; let i = 0; hooks.onBeforeRun(({ worker }) => { if (worker) { // possible workaround if ('i' in globalThis) globalThis.i = 0; // do we really go down this branching path // in user provided hooks, or even our own hooks? } console.log(i++); }); The issue here is that the branching burden would be all over the place, for both main or worker only meant code (see py-terminal plugin as example) ... and I personally don't fully like it, but in a world where some project prefer worker or main first, I think this might be a path forward, as it incentives a default choice yet allowing exception to be handled ... I'm a bit torn about this approach though ... Disambiguation in Python code The pyscript module, as example, provides a RUNNING_IN_WORKER export boolean value so that it's easy to do different things in there when such value is True or False, and the current py-terminal plugin uses that detail to behave differently ... I felt like that was OK and it solved also the stdlib bundling branching behavior, where we can with ease provide either different exports, with same names, or different behavior, erroring when desired in either main or worker scenarios ... I am also not sure about this, but it worked well to me to date. That's it, these are the things that make me wonder if my idea makes any sense, if you understand its constraints and/or caveats, and if it would be a better way forward in general, not just for PyScript, but for the sake of anyone using even just polyscript to create amizing things out there! Thank you for reasoning with me, happy to answer any outstanding question or concern, or happy to listen to even better ideas than this one! @WebReflection I glanced over the comments and it deserves more time/attention. Just wanted to acknowledge that. 
I'll try to do that tomorrow before schedule gets too hectic Hi, just catching up with all the discussions after last week's travels... :airplane: :earth_africa: :+1: Here's the key quote from your initial comment @WebReflection: in an ideal world we want to just define these hooks, and stop caring about which one is running where. This certainly reduces the cognitive burden on developers, in the same way they always have a windows or document reference, that works in the same way, no matter if they're in the main thread or on a worker. But I also hear you when say that callbacks run on workers won't have their outer context available to them, because they're serialized to strings via the postMessage to the worker. This is clearly a difference between running on the main thread or on a worker. So we can only go so far to help folks stop caring about where their code is run. I believe, for most people, most of the time, and in most situations, this won't be a problem. However, for a significant minority of (likely highly technical) users who are expecting an outer context, we should very clearly flag this difference. That such behaviour is imposed upon us by the browser, for very good security reasons, means we can say something along the lines of, "that's how browsers work, and all we can do is just flag the side effects of this on your code". Given what I hope will be clear documentation and guidance on our part, I hope such technical users will be able to adjust their code to "fit". so, if we agree on normalizing and branching instead of going full (repeated) separation such as main and worker namespaces, I need to "rewrite" and change a lot of things, but I think whatever decision we make will be the right one, as long as we don't change mind and we are OK in allowing code evaluation in the worker by default ... if anything, this caveat is the only one that keeps me with my feet on the ground believing that maybe @antocuni sugggestion was the best one to reason about, otherwise we also risk to evaluate a lot of code maybe meant to work on main only or on worker only ... As summary, I do want to allow evaluation on workers too, but maybe we're better off making it explicit as proposed in this MR which can be updated as follow up after this discussion: https://github.com/pyscript/polyscript/pull/58 I do like that MR already, it's surely verbose but it helps separating all concerns and logic around the two very different worlds. FYI I've updated both description and code, after rebase, for the current proposed MR: https://github.com/pyscript/polyscript/pull/58#issue-1930242841 The more I think about it, the more it feels like the right way to move forward, as it's never ambiguous, it can avoid repeated entries by spreading objects, whenever that's needed, it clearly define a path for callbacks that work on main and those that don't + it allows different Python code too to run in main (in the future, see future proof section of the description). We've moved the idea forward with latest polyscript so we can close this.
gharchive/issue
2023-10-16T14:07:01
2025-04-01T06:40:07.796107
{ "authors": [ "WebReflection", "fpliger", "ntoll" ], "repo": "pyscript/polyscript", "url": "https://github.com/pyscript/polyscript/issues/61", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1056892479
Build better website using github page Hello, I like the idea of the Python IDE PyScripter. When I search for the project home page, I find a blog post with the release announcement. I would like to help build a simple website for PyScripter using a static site generator like Hugo, hosted on GitHub Pages. Any interest in this project? I will try to build a sample site and make a PR. (Sorry if my English is not good, I am still learning.) Thanks Thanks for the offer! If you have a go and I like the results I will accept your PR.
gharchive/issue
2021-11-18T03:24:19
2025-04-01T06:40:07.801150
{ "authors": [ "gunungpw", "pyscripter" ], "repo": "pyscripter/pyscripter.github.io", "url": "https://github.com/pyscripter/pyscripter.github.io/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
290915347
Increase test coverage for requests by checking input params Given the nature of the package, it is not currently possible to test all the different request methods through automation, since testing responses requires valid Amazon Seller credentials to make a connection. Even if we were able to use Seller credentials to perform an automated test, there is no (last I heard) testing environment on the MWS side, so any request made has potential to wreak havoc in that Seller's production environment. Rather than try to check MWS's response, we can make testing simpler by stopping short of actually sending a request. All that is really needed is to get either the parameter dict or the request string that a given request method constructs, testing that against pre-set data. I am thinking the best option is to have an attribute on the base MWS class, something like debug_params, as a boolean. When testing, we just switch it to True, then make_request would be set to return the full param set instead of sending a request. Our test cases for these requests then need to initialize the APIs and switch on this "debug mode" in order to proceed. This does mean that some code would not be covered, specifically the latter half of make_request that builds a DictWrapper or DataWrapper from the response. We can find some ways to cover that separately later, though, since right now I think covering the request methods is more important. Framework for this test coverage added by updates ala #54.
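A hedged sketch of the debug_params idea described above, with illustrative names rather than the package's real signatures:

```python
class MWS:
    """Simplified stand-in for the base API class (names are illustrative)."""

    def __init__(self, access_key, secret_key, account_id):
        self.access_key = access_key
        self.secret_key = secret_key
        self.account_id = account_id
        self.debug_params = False  # switch on in tests to skip the network

    def make_request(self, extra_data, method="GET"):
        params = {
            "AWSAccessKeyId": self.access_key,
            "SellerId": self.account_id,
            **extra_data,
        }
        if self.debug_params:
            # Tests compare the constructed parameters against pre-set data.
            return params
        raise NotImplementedError("real signing/request path omitted in this sketch")
```

A test case could then instantiate an API section, set debug_params = True, and assert on the returned dict without needing Seller credentials or touching MWS at all.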
gharchive/issue
2018-01-23T16:59:11
2025-04-01T06:40:07.836122
{ "authors": [ "GriceTurrble" ], "repo": "python-amazon-mws/python-amazon-mws", "url": "https://github.com/python-amazon-mws/python-amazon-mws/issues/38", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
410807532
[WIP] Remove Pipfiles and add pyproject.toml This PR removes the files required by pipenv and adds poetry's requirements instead. Using poetry will decrease our build time significantly, and make it much easier to publish projects to PyPi. What is keeping this WIP? @sco1 I need to add the appropriate changes to azure-pipelines.yml. I'll see if I can get this done tonight. Any ideas on this? Why are we changing the pip cache directory anyway? These warnings also appear on master. The directory '/home/vsts/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/home/vsts/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. As it is, the docker image builds and runs just fine. One minor thing is a suggestion from fuzzywuzzy to install python-Levenshtein. Don't know if we care enough to bother. If we do, we should consider making it extra (optional) dependency. I've experimented with building a wheel using poetry build -f wheel and then installing it in the docker image. However, when installing the wheel, it doesn't respect VCS dependencies and fetches discord.py from PyPI. The same behaviour is exhibited if sdist is used (and it takes way longer to install anyway). Apart from this dependency issue, it's pretty easy to use a wheel in the image. If it worked, I think it would speed up building the image since poetry wouldn't have to be installed. The time saved doesn't seem to be huge though. Levenshtein is really slow to install, which is why we decided against it. build times are significantly increased by its inclusion, and its only used for an incredibly rarely used snake feature. I have some concerns regarding dependencies. We don't actually run the bot in the Lint & Test job; we only run flake8. How many of the base packages are actually needed to install the Python dependencies? Some examples which don't require them to build: PIL will install fine but the bot will crash if ImageFont is imported and libfreetype isn't installed. lxml (along with its dependencies libxml2 and xlst) is an extra dependency of bs4. Our code base only seems to make use of html.parser so the dependencies are not needed. They aren't in the Dockerfile anyway. And there are some other packages which I can't figure out the purpose of: curl zlib Definitely ditch lxml, it's not being used anywhere. You can probably remove git as well, since I think discord.py is installed via http. Git is needed for git dependencies, Poetry farms out to git with a subprocess call I'm taking a stab at updating the readme, but Poetry has some differences from pipenv that make this problematic: Installation of Poetry The documentation for installation recommends using their custom installer, which is retrieved via curl. Windows users don't have curl by default. There is an alternate installation section which installs it with pip install --user poetry, but it doesn't explain what a user install is. The concern here is that the user site may not be on the PATH. Run Scripts Poetry's scripts section isn't the same as pipenv's. Rather, it's more akin to setup.py's scripts. This isn't very useful for us; I don't think it can even run modules directly. 
A suggestion here https://github.com/sdispater/poetry/issues/241#issuecomment-470200353 is to use a makefile. A makefile is an interesting suggestion, but it'd add make as yet another thing contributors have to install (more of an annoyance on Windows if anything). Another idea pointed out by Joseph was https://github.com/sdispater/poetry/issues/241#issuecomment-445434646. .env Files Poetry does not support .env files. There's some some stuff with bash or env that can take care of this, but it's quite cumbersome. We'd need a cross-platform solution anyway. One idea is to have __main__.py try to load the .env file if it exists using python-dotenv. No PyCharm Support PyCharm will not automatically create a virtual environment from the Poetry files. However, I believe it will automatically detect an existing virtual environment and automatically add it as an interpreter. It probably only looks in the project's directory so poetry config settings.virtualenvs.in-project true is needed. The guide would need to be shuffled around a bit so that the virtual environment is made before the project is opened in PyCharm. I understand the logic for not recommending the pip install but ultimately it boils down to the same pitfall as using the wrong Python environment's pip version to install any other module. I don't know if there's a scenario where a successful pip install --user install would fail to provide the correct entry point. Regardless, I've non-user installed it and it works just fine. Run scripts are definitely an annoying one, for the sake of making folks' lives easier we'll just have to toy around with the proposed alternatives until the point where Poetry provides first class support (if it happens). Loss of .env support seems well mitigated by python-dotenv. From seeing some recent issues folks have been having with PyCharm's pipenv support I don't know if the loss of support here is too big of a deal. I don't use PyCharm so I can't attest but it seems like its support can be spotty. Since Poetry does not install to the typical environments folder, users will have to pay attention to the settings.virtualenvs.in-project configuration option, which introduces a pain point, but it shouldn't be too terrible to resolve if someone isn't paying attention and runs into it. Unless poetry respects a project-specific config.toml? The docs are unclear. I don't know if there's a scenario where a successful pip install --user install would fail to provide the correct entry point. I had to add the user site to PATH on my machine. The path of least resistance for less experience contributors would be a non-user pip install, but I think that's worth coming to a consensus on before moving forward. Unless poetry respects a project-specific config.toml It doesn't seem to, unfortunately. We need to come to some decisions so this PR can move along. My proposals: Create a Python script to delegate launch options. It's technically abuse of the scripts feature of Poetry but screw it. Use python-dotenv to load a .env file if one exists. Still undecided on how to install Poetry. Sorry, my first paragraph was bad and wrong Sorry this half turned into a CI revamp PR. Updates on the situation: Azure Pipelines At this point, I think I've done all I can to clean things up. The Lint & Test job has been sped up by almost 40 seconds. Both jobs in total, without actually building the image, have been sped up by over 1.5 minutes. 
Unfortunately, this has all been due to clean up, not because of Poetry; Poetry is actually about twice as slow to install dependencies compared to pipenv. All apt-get packages were removed. They were either not used or already came with the Ubuntu agent from MS. Removing the redundant docker especially saved time. setuptools and salt-pepper were removed, the former of which took especially long to install. They were remnants of a legacy deployment system. Docker Image The image's size has been reduced by over 200 MB; it's currently at about 140 MB. This is largely thanks to disabling caches and removing build dependencies once done building. There are some important changes to note, however: At least Docker Engine 18.06 is required now. The base image was updated to python:3.7.2-alpine3.9. There's more info on these changes in the commit messages. These two changes are actually for quite minor reasons, so I am not against rolling them back if someone has a problem with them. The downside to going with a non-user install is it could potentially require sudo/admin permissions. @sco1 Good point. Maybe we should go with a user install and leave a "note" about adding the user site to PATH if poetry isn't a recognised command. If so, how extensive would the instructions in this "note" be? We could just model it off pipenv's docs. After all, that's what we were relying on before. I think it may have been a mistake to cut down on the intermediate layers so much. It looks like docker push/pull only download layers that have changed rather than the whole image. Will have to see what the size of a layer for just poetry installing the dependencies is. We've decided to stick with pipenv after all.
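For reference, the opt-in .env loading mentioned above could look roughly like this in __main__.py (a sketch using python-dotenv, not the change that was actually merged):

```python
# __main__.py (sketch): load a local .env if present, then start the bot.
from pathlib import Path

from dotenv import load_dotenv

env_file = Path(".env")
if env_file.is_file():
    # Populate os.environ before any configuration constants are read.
    load_dotenv(dotenv_path=env_file)

# ...import and start the bot as usual once the environment is populated.
```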
gharchive/pull-request
2019-02-15T15:01:10
2025-04-01T06:40:07.856339
{ "authors": [ "AnonGuy", "GhostofGoes", "MarkKoz", "heavysaturn", "sco1" ], "repo": "python-discord/seasonalbot", "url": "https://github.com/python-discord/seasonalbot/pull/116", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1006355349
Add a 5 minute cooldown to the topic command Relevant Issues Closes #868 Description Using the command while it's on cooldown will hit the error handler, which sends an error message showing how long is left on the cooldown; the message is deleted after 7.5 seconds. Did you: [x] Join the Python Discord Community? [x] Read all the comments in this template? [x] Ensure there is an issue open, or link relevant discord discussions? [x] Read and agree to the contributing guidelines? I'm not a fan of this solution. I think there are relevant reasons to use two topic commands in a five-minute span. I'd propose one of the following: reduce the cooldown to one minute, or add a re-roll reaction that can only be used by the command runner.
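For reference, a per-user cooldown on a discord.py command looks roughly like this (a sketch with a hypothetical cog name, not the exact diff in this PR):

```python
from discord.ext import commands


class Topics(commands.Cog):  # hypothetical cog name
    @commands.command(name="topic")
    @commands.cooldown(1, 300, commands.BucketType.user)  # one use per user per 5 minutes
    async def topic(self, ctx: commands.Context) -> None:
        """Send a random conversation topic (real body omitted)."""
        await ctx.send("topic goes here")
```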
gharchive/pull-request
2021-09-24T11:14:31
2025-04-01T06:40:07.861406
{ "authors": [ "Akarys42", "ChrisLovering" ], "repo": "python-discord/sir-lancebot", "url": "https://github.com/python-discord/sir-lancebot/pull/880", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1392833327
automate security headers and add them in google sheets All sites have security headers and we have to keep track of them, so I made a script to automate that work. Hey! I'm just starting to get involved in open source and, based on the tags, thought this might be a good place to start. I'm still a bit confused on what needs to be done though. Do you mind elaborating on the task? Hey @avyayjain, can you please explain it a little bit more? So there is a site called securityheaders.com where you can check how secure a site is and where the website is lacking, so I made a script where you just have to enter the website URL and you can check the tags. Can I make a PR for it? Cool, go ahead Please check my PR, as it is showing flake8 and greeting errors; my code passes flake8, and the errors are for other code.
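As a rough illustration of the kind of script being discussed (the header list and function names are assumptions, not the contributor's actual code):

```python
import requests

SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Referrer-Policy",
    "Permissions-Policy",
]


def check_security_headers(url: str) -> dict:
    """Report which common security headers the site does and does not send."""
    response = requests.get(url, timeout=10)
    return {name: response.headers.get(name, "MISSING") for name in SECURITY_HEADERS}


if __name__ == "__main__":
    for header, value in check_security_headers("https://example.com").items():
        print(f"{header}: {value}")
```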
gharchive/issue
2022-09-30T18:49:38
2025-04-01T06:40:07.864235
{ "authors": [ "avyayjain", "ehildebrandtrojo", "pawangeek" ], "repo": "python-geeks/Automation-scripts", "url": "https://github.com/python-geeks/Automation-scripts/issues/830", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2123450562
Are relative JSON pointers supported? Sorry it's me again :smile: I'm making some progress on my "schema walker" experiments, and wanted to see how relative references using relative JSON pointers were handled. The docstring of the Resolver class states: https://github.com/python-jsonschema/referencing/blob/7d6069a7dfa76be679b02587b1ba89a0bfe7348d/referencing/_core.py#L615-L621 However, trying with the following example (and making use of the jsonschema library for demonstration purposes), I get the following: from jsonschema.validators import Draft7Validator schema = { "$id": "my_schema", "$schema": "http://json-schema.org/draft-07/schema#", "type": "object", "properties": { "first_email": { "type": "string", "format": "email" }, "second_email": { "$ref": "1/first_email" } } } Draft7Validator(alt).validate({"first_email": "a@example.com", "second_email": "b@example.com"}) I couldn't find anything in the codebase that would handle these relative JSON pointers. I'm also unsure when this was introduced in the json schema spec(s), I couldn't find anything in the changelogs. Thanks in advance! Hi! That isn't a valid JSON Schema (and this library of course only deals with references which are valid JSON Schema). "Relative" in that docstring meant "relative URI" not "relative JSON Pointer" which is what you have there, albeit of course it'd be nice to clarify that there. More specifically the $ref keyword in all current dialects of JSON Schema does not take a relative JSON pointer. The JSON Schema spec does reference the relative JSON pointer spec, but IIRC just to allow keywords to define things in terms of them, and to have a format: relative-json-pointer format. It does not define any of its own "official dialect keywords" to take relative pointers. It probably would be nice to have such referencing support here at some point but the initial focus was certainly only on existing JSON Schema dialects. Interesting! Got misled by this example, reading more about it Opis seems to extand the json schema specification to allow relative pointers. Thanks again
gharchive/issue
2024-02-07T16:43:17
2025-04-01T06:40:07.869946
{ "authors": [ "Julian", "Viicos" ], "repo": "python-jsonschema/referencing", "url": "https://github.com/python-jsonschema/referencing/issues/125", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2104193013
handle deprecation of poetry.core.masonry.builder.Builder fixing deprecation warnings in the unit tests I think this breaks the Yocto Project usage, where we do not want to build the full poetry but DO want to support the poetry-core PEP-517 backend. https://git.yoctoproject.org/poky/tree/meta/classes-recipe/python_poetry_core.bbclass https://git.yoctoproject.org/poky/tree/scripts/lib/recipetool/create_buildsys_python.py#n738 Nevermind. A colleague pointed out that the poetry.core.masonry.builders.wheel is still there and I noticed nothing seems to have changed the "poetry.core.masonry.api" build-backend callout. I tested a build of tomlkit (and ran it's tests) and it was JustFine(TM).
gharchive/pull-request
2024-01-28T14:34:55
2025-04-01T06:40:07.895637
{ "authors": [ "dimbleby", "moto-timo" ], "repo": "python-poetry/poetry-core", "url": "https://github.com/python-poetry/poetry-core/pull/692", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1666977520
Add a CloseCode Enum I was looking for an Enum containing definitions of the standard close codes, and couldn't find one. Is this something that could be provided to users? I think an IntEnum would be a good way to implement this. It seemed like this library would be the correct place to implement such an Enum for Python users. References https://www.rfc-editor.org/rfc/rfc6455.html#section-7.4.1 https://developer.mozilla.org/en-US/docs/Web/API/WebSocket/close https://github.com/Luka967/websocket-close-codes https://learn.microsoft.com/en-us/dotnet/api/system.net.websockets.websocketclosestatus?view=net-8.0 In practice, this would mean turning https://github.com/python-websockets/websockets/blob/main/src/websockets/frames.py#L55-L73 into an Enum and making it a public API, similar to Opcode. There's a problem, though: extensions can use arbitrary close codes in the 2xxx range and applications can use arbitrary close codes in the 4xxx range, as explained in https://www.rfc-editor.org/rfc/rfc6455.html#section-7.4.2. This cannot be represented with an Enum. How did you plan to handle these cases? I was thinking about these as similar in use to http.HTTPStatus in the standard library. They don't cover all possible cases, but they provide self-describing names for the standard codes so that user code is clearer and has few magic numbers that can be prone to typo errors. So in this case, the actual close() methods would still all take int, but if you as a user explicitly want to use one of these standard codes or compare against them, you have a nice clear, safe way to do that with an IntEnum. For example: # before: close(1002) # after: close(WebsocketCloseStatus.ProtocolError) # before if status == 1002: # after if status == WebsocketCloseStatus.ProtocolError: For example, in the Django Channels library, I saw a magic 1000 that could be replaced with a more self-describing IntEnum: https://github.com/django/channels/blob/7e7b71a405db1dbc7509f290b1b158bd07b74c1b/channels/worker.py#L12 User applications can also have these kinds of situations. These applications might also define Enums for custom errors in the 4xxx, but that is out of the scope of something standardized. In these cases they might mix usages of the standard websocket codes with their own Enums. The difference is that HTTP error codes cover all possible cases and HTTP libraries crash if they get a code that they don't recognize. In other words, I'm not convinced by: @property def close_code(self) -> CloseCode | int: """Return a CloseCode or, if not recognized, an int.""" This doesn't look like a convenient API for the caller. To be clear: I recognize that the problem is valid but I don't think that the proposed solution is an improvement over the current situation. In other words, I'm not convinced by: @aaugustin I also do not think any APIs should change and would not advocate for this change. 
I was thinking that this Enum/IntEnum would be used exclusively by end users when referencing these common codes as shown in https://github.com/python-websockets/websockets/issues/1335#issuecomment-1508392907 To me, this is similar to how most HTTP libraries still use an int for their API for status codes, here's an example from httpx: https://github.com/encode/httpx/blob/4b5a92e88e03443c2619f0905d756b159f9f0222/httpx/_models.py#L445-L448 But httpx still defines an IntEnum for end users to use when setting/checking the ints that used in the API: https://github.com/encode/httpx/blob/4b5a92e88e03443c2619f0905d756b159f9f0222/httpx/_status_codes.py#L4 But HTTP status codes are still useful for users of the library to avoid magic numbers when possible. So, to be clear I would only be advocating for defining an Enum/IntEnum (probably an IntEnum given this is explicitly not exhaustive) in this library. None of the APIs would need to change because, as you show, given that it's not exhaustive. I agree that CloseCode | int is not a useful type. The difference is that HTTP error codes cover all possible cases and HTTP libraries crash if they get a code that they don't recognize. Is this true? The HTTP libraries I've seen use int in their APIs and I've always assumed they would pass back the exact int sent over the wire. I suppose we could try sending something like 209 to httpx/requests/http.client. The difference is that HTTP error codes cover all possible cases and HTTP libraries crash if they get a code that they don't recognize. I tried out a quick Flask/HTTPX application and non exhaustive HTTP status codes are supported. # server.py from flask import Flask app = Flask(__name__) @app.route("/") def hello_world(): return "<p>Hello, World!</p>", 209 # client.py import httpx response = httpx.get("http://127.0.0.1:5000") print(response) print(response.status_code) flask --app server run py .\client.py <Response [209 UNKNOWN]> 209 I'm still skeptical about this. Overall, WebSocket close codes are much less important than HTTP status codes. In ten years of maintaining this library, I never came across any situation where the WebSocket close code matters 🤷 In practice, all you care about is that the connection dropped. Maybe you'll reopen it. The details are only for debugging. No problem, definitely understand. I trust your judgement. Thanks for considering. 😄 Maybe you could tell me a bit more about why you wanted this Enum? What's your use case? I sketched a pull request. Now the question is -- to what extent would it make sense to use this enum in the internal implementation? PR looks great! to what extent would it make sense to use this enum in the internal implementation? My personal opinion is to use them wherever they improve readability. I would personally default to using the Enums unless there was some compelling reason not to since to they express intent, are less prone to small typos, and are self documenting. But there is no reason you have to use them everywhere.
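Purely as an illustration of the shape such an enum could take, here is a minimal sketch populated with the registered close codes from RFC 6455 and the IANA registry; the class and member names are placeholders, not necessarily what the library would ship:
from enum import IntEnum

class CloseCode(IntEnum):
    """Registered WebSocket close codes (RFC 6455, section 7.4.1)."""
    NORMAL_CLOSURE = 1000
    GOING_AWAY = 1001
    PROTOCOL_ERROR = 1002
    UNSUPPORTED_DATA = 1003
    NO_STATUS_RCVD = 1005
    ABNORMAL_CLOSURE = 1006
    INVALID_DATA = 1007
    POLICY_VIOLATION = 1008
    MESSAGE_TOO_BIG = 1009
    MANDATORY_EXTENSION = 1010
    INTERNAL_ERROR = 1011
    SERVICE_RESTART = 1012
    TRY_AGAIN_LATER = 1013
    BAD_GATEWAY = 1014
    TLS_HANDSHAKE = 1015

# Because it is an IntEnum, plain integers keep working everywhere:
assert CloseCode.PROTOCOL_ERROR == 1002
# Unregistered extension (2xxx) and application (4xxx) codes simply stay ordinary ints.
Since the enum is not exhaustive by design, APIs can keep accepting and returning int while user code compares against the named members.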
gharchive/issue
2023-04-13T18:59:37
2025-04-01T06:40:08.005593
{ "authors": [ "aaugustin", "johnthagen" ], "repo": "python-websockets/websockets", "url": "https://github.com/python-websockets/websockets/issues/1335", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1993557281
New worker request: kushaldas-wasi Username kushaldas GitHub username No response Name No response Email address mail@kushaldas.in Password status I need a new owner password Processor architecture amd64 Operating System Ubuntu 20.04 Anything special about the worker? For WASI builds, fresh setup. Hey @kushaldas, thank you for your interest in setting up a new buildbot worker for CPython! I have updated the buildbot master configuration and I will be emailing you shortly on the email address you specified with your worker password. Note that you will need to follow up with a PR to add the specific worker!
gharchive/issue
2023-11-14T20:56:12
2025-04-01T06:40:08.013661
{ "authors": [ "itamaro", "kushaldas" ], "repo": "python/buildmaster-config", "url": "https://github.com/python/buildmaster-config/issues/442", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2609870982
🛑 Chat Api is down In 8dc9162, Chat Api (https://chat.roblox.com/) was down: HTTP code: 404 Response time: 121 ms Resolved: Chat Api is back up in d6070d2 after 12 minutes.
gharchive/issue
2024-10-23T21:18:22
2025-04-01T06:40:08.277149
{ "authors": [ "pythoniaweb" ], "repo": "pythoniaweb/roblox-status", "url": "https://github.com/pythoniaweb/roblox-status/issues/772", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2613069389
🛑 Account Settings is down In 325d4f0, Account Settings (https://accountsettings.roblox.com/) was down: HTTP code: 404 Response time: 67 ms Resolved: Account Settings is back up in 11bd704 after 1 hour, 6 minutes.
gharchive/issue
2024-10-25T04:51:39
2025-04-01T06:40:08.279562
{ "authors": [ "pythoniaweb" ], "repo": "pythoniaweb/roblox-status", "url": "https://github.com/pythoniaweb/roblox-status/issues/892", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2349490463
Docs should say what's the smallest model users will see a benefit for I was working on a minimal example to showcase the benefits of fp8 on an H100 without forcing users to download a chunky model like here https://github.com/pytorch-labs/float8_experimental/issues/279 I guess it's expected that fp8 will be slower for tiny models because of overhead, in which case we should say in the docs what's the minimal model size people should try.
Training time in FP16: 7.10 seconds
Training time in FP8: 9.80 seconds
import torch
import torch.nn as nn
import copy
from torch.cuda.amp import autocast
from float8_experimental.float8_linear_utils import swap_linear_with_float8_linear
from float8_experimental.float8_dynamic_linear import Float8DynamicLinear

torch.set_float32_matmul_precision('high')

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.layer1 = nn.Linear(32, 32)
        self.layer2 = nn.Linear(32, 32)

    def forward(self, x):
        x = torch.relu(self.layer1(x))
        x = self.layer2(x)
        return x

def train(model, data_loader):
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters())
    model.train()
    for data, target in data_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()

def benchmark_training(model, data_loader, iterations=100, warmup_iterations=10):
    # Warm-up phase: Run a few iterations to get the GPU to a steady state
    model = torch.compile(model)
    for _ in range(warmup_iterations):
        train(model, data_loader)
    # Timing phase
    start_event = torch.cuda.Event(enable_timing=True)
    end_event = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()  # Wait for all operations on the CUDA device to complete
    start_event.record()
    for _ in range(iterations):
        train(model, data_loader)
    end_event.record()
    torch.cuda.synchronize()  # Wait for the events to be recorded
    elapsed_time = start_event.elapsed_time(end_event) / 1000.0  # Convert milliseconds to seconds
    return elapsed_time

data_loader = [(torch.randn(32, 32, device="cuda"), torch.randn(32, 1, device="cuda")) for _ in range(110)]

# Initial model setup
base_model = Model().cuda()

# Training in fp16
model_fp16 = copy.deepcopy(base_model)
fp16_time = benchmark_training(model_fp16, data_loader)

# Training in fp8
model_fp8 = copy.deepcopy(base_model)
swap_linear_with_float8_linear(model_fp8, Float8DynamicLinear)
fp8_time = benchmark_training(model_fp8, data_loader)

print(f"Training time in FP16: {fp16_time:.2f} seconds")
print(f"Training time in FP8: {fp8_time:.2f} seconds")
Great idea, let's do it https://github.com/pytorch/ao/issues/572
gharchive/issue
2024-06-12T19:00:13
2025-04-01T06:40:08.287129
{ "authors": [ "msaroufim", "vkuzo" ], "repo": "pytorch-labs/float8_experimental", "url": "https://github.com/pytorch-labs/float8_experimental/issues/280", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1376595589
Add a "no_conversion" flow to torch-tensorrt Adds a "no_conversion" option to torch-tensorrt which when enabled will replace the standard conversion and engine insertion with an embedded function call for each convertible segment. This allows inspection of the partition without running conversion and the possibility to convert each engine individually in subsequent runs. Future work: Allow this flow to run without a GPU to enable TRT convertibility/partitioning linting flows on host machines Partition without running shape propagation when in the no-convert flow Fixes # (#1361) Please delete options that are not relevant and/or add your own. New feature (non-breaking change which adds functionality) [ ] My code follows the style guidelines of this project (You can use the linters) [ ] I have performed a self-review of my own code [ ] I have commented my code, particularly in hard-to-understand areas and hacks [ ] I have made corresponding changes to the documentation [ ] I have added tests to verify my fix or my feature [ ] New and existing unit tests pass locally with my changes [ ] I have added the relevant labels to my PR in so that relevant reviewers are notified @bowang007 Could something like this solve our graph stitching issues? Say if the way partitioning works is it creates a bunch of methods but a method is either 100% PyTorch or 100% TRT. May make things like collections way easier too.
gharchive/pull-request
2022-09-16T23:54:40
2025-04-01T06:40:08.294630
{ "authors": [ "mfeliz-cruise", "narendasan" ], "repo": "pytorch/TensorRT", "url": "https://github.com/pytorch/TensorRT/pull/1360", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2290335010
[Question] MBU in automated CI? Hi folks, thanks for the great work. With https://github.com/pytorch/ao/pull/135 merged, vLLM could see a benefit from the torch.compile backend, given the compiler-native integration with PagedAttention kernels. Is there an easy way to see what the latest/nightly MBU is for torch.compile on, say, H100 / Llama3 70B? Also interested in cold start compile time. cc @msaroufim @anijain2305, do we have any benchmark numbers for the cold start compile time? Related https://github.com/pytorch/pytorch/issues/125958
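For context, MBU (model bandwidth utilization) is usually computed as the memory bandwidth actually achieved during decoding divided by the accelerator's peak bandwidth. A rough back-of-the-envelope sketch, where the peak-bandwidth figure and model size are assumptions rather than measured values:
def model_bandwidth_utilization(param_bytes, tokens_per_s, peak_gb_s):
    """MBU = (bytes moved per generated token * tokens/s) / peak bandwidth.

    For memory-bound decoding each token reads roughly the whole set of
    weights once; KV-cache traffic is ignored in this simplification.
    """
    achieved_gb_s = param_bytes * tokens_per_s / 1e9
    return achieved_gb_s / peak_gb_s

# Hypothetical example: Llama3 70B in fp16 (~140 GB of weights),
# 20 tokens/s per request, H100 SXM with ~3350 GB/s peak HBM bandwidth.
print(model_bandwidth_utilization(140e9, 20, 3350))  # ~0.84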
gharchive/issue
2024-05-10T20:00:29
2025-04-01T06:40:08.297056
{ "authors": [ "cadedaniel", "msaroufim", "supriyar" ], "repo": "pytorch/ao", "url": "https://github.com/pytorch/ao/issues/237", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1553795567
torchaudio.transforms.Loudness is soooo slow on GPU 🐛 Describe the bug torchaudio.transforms.Loudness() is painfully slow on a GPU (A100) with, say, 200000 samples. On CPU it flies. If this is expected, at least it should be documented and a warning should be given the first time it is run on a GPU. Versions Collecting environment information... PyTorch version: 1.13.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.5 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29 Is CUDA available: True CUDA runtime version: 11.6.124 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB Nvidia driver version: 515.65.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] mypy==0.991 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.23.4 [pip3] pytorch-lightning==1.9.0 [pip3] torch==1.13.1 [pip3] torchaudio==0.13.1 [pip3] torchmetrics==0.11.0 [pip3] torchview==0.2.5 [pip3] torchvision==0.13.1 [conda] Could not collect Hi @turian — thanks for creating the issue. Would you mind posting a code snippet that would allow us to reproduce what you're seeing? import torchaudio import torch from tqdm.auto import tqdm l = torchaudio.transforms.Loudness(sample_rate=44100) l = l.to("cpu") x = torch.rand(1, 44100 * 10, device="cpu") for i in tqdm(list(range(10))): l(x) l = torchaudio.transforms.Loudness(sample_rate=44100) l = l.to("cuda") x = torch.rand(1, 44100 * 10, device="cuda") for i in tqdm(list(range(10))): l(x) The first loop takes under a second. The second one takes 45 seconds per audio. @turian thanks! Was able to reproduce on my side. It looks like the execution time is dominated by the for loop within lfilter. Coincidentally, there is a related issue open at https://github.com/pytorch/audio/issues/1408 with a solution landing imminently — perhaps you can follow that issue for updates. Running the given script with #3018 gives the following; 100%|██████████| 10/10 [00:00<00:00, 42.81it/s] 100%|██████████| 10/10 [00:02<00:00, 3.50it/s] while on the main branch, it indeed takes forever. 100%|██████████| 10/10 [00:00<00:00, 49.14it/s] cc @yoyololicon Thanks for your PR, it is very timely. @hwangjeff @mthrok shall we close?
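As a side note for anyone re-running the comparison above: CUDA kernel launches are asynchronous, so it is safer to synchronize around the measured region. A hedged restatement of the repro with explicit timing might look like this:
import time
import torch
import torchaudio

def time_loudness(device, n_iters=10):
    transform = torchaudio.transforms.Loudness(sample_rate=44100).to(device)
    waveform = torch.rand(1, 44100 * 10, device=device)
    transform(waveform)  # warm-up run
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        transform(waveform)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for all queued kernels to finish
    return (time.perf_counter() - start) / n_iters

print("cpu :", time_loudness("cpu"))
print("cuda:", time_loudness("cuda"))
The conclusion is unchanged (the GPU path is dominated by the sequential loop inside lfilter on older releases), but the per-iteration numbers are easier to trust.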
gharchive/issue
2023-01-23T20:52:14
2025-04-01T06:40:08.306569
{ "authors": [ "hwangjeff", "mthrok", "turian" ], "repo": "pytorch/audio", "url": "https://github.com/pytorch/audio/issues/3006", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1604892091
Mutating a forked element changes element value in all branches 🐛 Describe the bug When using Forker all child pipelines receive the same object. In case this object is mutable, e.g. a dictionary, modification of the object in one branch, modifies it in all. This leads to unexpected behavior. This is related to #1032, but probably affecting a lot more pipelines as it does not involve nesting. In the following example, only the dict1 dictionary is (explicitly) modified, but when printing the values of both branches have changed. import torchdata.datapipes as dp def to_dict(item): return { "value": item, "metadata": f"item value: {item}", } def add5(value): return value + 5 it = dp.iter.IterableWrapper(range(5)) dict1, dict2 = it.map(to_dict).fork(2) dict1 = dict1.map(add5, input_col="value", output_col="new_value") for d1, d2 in dict1.zip(dict2): print(f"{d1 is d2}, {d1}, {d2}") output True, {'value': 0, 'metadata': 'item value: 0', 'new_value': 5}, {'value': 0, 'metadata': 'item value: 0', 'new_value': 5} True, {'value': 1, 'metadata': 'item value: 1', 'new_value': 6}, {'value': 1, 'metadata': 'item value: 1', 'new_value': 6} True, {'value': 2, 'metadata': 'item value: 2', 'new_value': 7}, {'value': 2, 'metadata': 'item value: 2', 'new_value': 7} True, {'value': 3, 'metadata': 'item value: 3', 'new_value': 8}, {'value': 3, 'metadata': 'item value: 3', 'new_value': 8} True, {'value': 4, 'metadata': 'item value: 4', 'new_value': 9}, {'value': 4, 'metadata': 'item value: 4', 'new_value': 9} I would expect that the dict2 is not modified and that the d1 is d2 test fails. Versions PyTorch version: 1.13.1 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.35 Python version: 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:20:04) [GCC 11.3.0] (64-bit runtime) Python platform: Linux-5.15.0-1030-aws-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz CPU family: 6 Model: 106 Thread(s) per core: 2 Core(s) per socket: 4 Socket(s): 1 Stepping: 6 BogoMIPS: 5799.91 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd ida arat avx512vbmi pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear flush_l1d arch_capabilities Hypervisor vendor: KVM Virtualization type: full L1d cache: 192 KiB (4 instances) L1i cache: 128 KiB (4 instances) 
L2 cache: 5 MiB (4 instances) L3 cache: 54 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-7 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] mypy==1.0.1 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.23.5 [pip3] pytorch-lightning==1.9.2 [pip3] torch==1.13.1 [pip3] torchdata==0.5.1+59663c1 [pip3] torchmetrics==0.11.1 [pip3] torchvision==0.14.1a0+b69fce3 [conda] mkl 2022.2.1 h84fe81f_16997 conda-forge [conda] numpy 1.23.5 py310h53a5b5f_0 conda-forge [conda] pytorch 1.13.1 cpu_py310hd11e9c7_1 conda-forge [conda] pytorch-lightning 1.9.2 pyhd8ed1ab_0 conda-forge [conda] torchdata 0.5.1 py310h6e501d2_2 conda-forge [conda] torchmetrics 0.11.1 pyhd8ed1ab_0 conda-forge [conda] torchvision 0.14.1 cpu_py310hdfb2906_0 conda-forge I personally think it's the expected behavior of fork operation. We want to minimize the copy of objects within the pipeline. The existing solution would be creating different DataPipes rather than using fork. Since there are multiple users asking this feature, we should decide which option is better: Add a deepcopy argument to make sure fork doing copy on the provided object. Add a new DataPipe (tentatively names it as replicate) dedicatedly to do copy. For basic mutable objects, in particular dicts, a shallow copy would already eliminate a lot of pitfalls. To avoid excessive copying one could only do it for mutable objects. from collections.abc import MutableMapping if isinstance(item, MutableMapping): item = copy.copy(item) IMHO at least the behavior should be documented, such that everyone is aware of it. For basic mutable objects, in particular dicts, a shallow copy would already eliminate a lot of pitfalls. It depends. If the inner object is something like integer, then yes. Otherwise, it still needs deepcopy. IMHO at least the behavior should be documented, such that everyone is aware of it. Totally makes sense. The inline-doc does refer to copy which gives users confusion. Would you love to open a PR to fix it? Note that, this issue is mostly similar #1032 IMHO at least the behavior should be documented, such that everyone is aware of it. Totally makes sense. The inline-doc does refer to copy which gives users confusion. Would you love to open a PR to fix it? I can try to phrase a cautionary note and open an PR for it. Is there a general advice on how to proceed in those cases though? Then I could mention it directly. At the moment I am doing fork1, fork2 = dp.fork(2) fork1 = fork1.map(copy.copy) fork2 = fork2.map(copy.copy) which solves the problem, but does not read very well. How about adding a copy_fn to Forker? It can default to None and if needed the user can supply copy, deepcopy or even a custom copy function. How about adding a copy_fn to Forker? It can default to None and if needed the user can supply copy, deepcopy or even a custom copy function. If it's acceptable to use a copy_fn, you can do something like: dp = ... 
fork1 = pickle.loads(pickle.dumps(dp)) fork2 = dp Another option is to do a function to return DataPipe: def get_dp(): it = dp.iter.IterableWrapper(range(5)) dict_dp = it.map(to_dict) return dict_dp fork1 = get_dp() fork2 = get_dp() I was thinking that fork can have an additional argument copy_element=False or (deep_copy=False). If it is set to True, it can copy or deep copy the element before yielding it. Thoughts? I created a PR for pytorch pytorch/pytorch#96030 with a potential interface and a note highlighting the behaviour. Please let me know what you think. Coming from a cpp background, I'm strongly opposed to making premature copies just to be safe. In my opinion, the best course of action is to highlight this behavior in the documentation and let the user decide if and how he wants to copy objects. Otherwise we needless incur unnecessary and potentially unwanted overhead. Adding another map that does the copying into the pipeline is trivial and also more in-line with torchdata's modular approach IMHO. @sehoffmann I don't disagree with you about making premature copies and another map would do the work. But, generally speaking, users expect more syntax sugar when doing python functional programming. So, we end up with fork no-copy by default and detailed comment if copy strategy is provided. Let us know if you think the doc is not sufficient https://pytorch.org/data/main/generated/torchdata.datapipes.iter.Forker.html?highlight=fork#torchdata.datapipes.iter.Forker Closing this issue for now as https://github.com/pytorch/pytorch/pull/96030 has been merged. Pls feel free to re-open it if needed.
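Putting the workaround discussed above together, here is a minimal sketch based on the original repro; the only addition is the copy step after fork, which keeps in-place edits local to one branch:
import copy
import torchdata.datapipes as dp

def to_dict(item):
    return {"value": item, "metadata": f"item value: {item}"}

def add5(value):
    return value + 5

source = dp.iter.IterableWrapper(range(5)).map(to_dict)
dict1, dict2 = source.fork(2)

# Shallow-copy each forked element so the mutation below does not leak into dict2.
dict1 = dict1.map(copy.copy)
dict1 = dict1.map(add5, input_col="value", output_col="new_value")

for d1, d2 in dict1.zip(dict2):
    print(d1 is d2, "new_value" in d2)  # False, False
If the dictionaries hold nested mutable values, copy.deepcopy would be needed instead, at the cost of extra copying.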
gharchive/issue
2023-03-01T12:35:06
2025-04-01T06:40:08.352245
{ "authors": [ "NivekT", "ejguan", "falckt", "sehoffmann" ], "repo": "pytorch/data", "url": "https://github.com/pytorch/data/issues/1061", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
728335537
Error when trying to train with pipeline parallelism Hi guys, I was trying to train a transformer model with pipeline parallelism. Is this supposed to work already? The command i tried (following the translation example): fairseq-train data-bin/iwslt14.tokenized.de-en --arch transformer_iwslt_de_en_pipeline_parallel --share-decoder-input-output-embed --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 --dropout 0.3 --weight-decay 0.0001 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --max-tokens 4096 --eval-bleu --eval-bleu-args '{"beam": 5, "max_len_a": 1.2, "max_len_b": 10}' --eval-bleu-detok moses --eval-bleu-remove-bpe --eval-bleu-print-samples --best-checkpoint-metric bleu --maximize-best-checkpoint-metric --pipeline-model-parallel --pipeline-encoder-balance '[8]' --pipeline-encoder-devices '[0]' --pipeline-decoder-balance '[1,6,1]' --pipeline-decoder-devices '[0,1,0]' --pipeline-chunks 1 --distributed-world-size 2 error: 2020-10-23 17:17:08 | INFO | fairseq.tasks.translation | [de] dictionary: 8848 types 2020-10-23 17:17:08 | INFO | fairseq.tasks.translation | [en] dictionary: 6632 types 2020-10-23 17:17:08 | INFO | fairseq.data.data_utils | loaded 7283 examples from: data-bin/iwslt14.tokenized.de-en/valid.de-en.de 2020-10-23 17:17:08 | INFO | fairseq.data.data_utils | loaded 7283 examples from: data-bin/iwslt14.tokenized.de-en/valid.de-en.en 2020-10-23 17:17:08 | INFO | fairseq.tasks.translation | data-bin/iwslt14.tokenized.de-en valid de-en 7283 examples Traceback (most recent call last): File "/secondary/thies/.virtualenvs/pytorch-23102020/bin/fairseq-train", line 33, in <module> sys.exit(load_entry_point('fairseq', 'console_scripts', 'fairseq-train')()) File "/tertiary/thies/fairseq/fairseq_cli/train.py", line 352, in cli_main distributed_utils.call_main(cfg, main) File "/tertiary/thies/fairseq/fairseq/distributed_utils.py", line 301, in call_main cfg.distributed_training.distributed_world_size, File "/secondary/thies/.virtualenvs/pytorch-23102020/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 247, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/secondary/thies/.virtualenvs/pytorch-23102020/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 205, in start_processes while not context.join(): File "/secondary/thies/.virtualenvs/pytorch-23102020/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 166, in join raise ProcessRaisedException(msg, error_index, failed_process.pid) torch.multiprocessing.spawn.ProcessRaisedException: -- Process 0 terminated with the following error: Traceback (most recent call last): File "/secondary/thies/.virtualenvs/pytorch-23102020/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap fn(i, *args) File "/tertiary/thies/fairseq/fairseq/distributed_utils.py", line 283, in distributed_main main(cfg, **kwargs) File "/tertiary/thies/fairseq/fairseq_cli/train.py", line 74, in main model = task.build_model(cfg.model) File "/tertiary/thies/fairseq/fairseq/tasks/translation.py", line 327, in build_model model = super().build_model(args) File "/tertiary/thies/fairseq/fairseq/tasks/fairseq_task.py", line 548, in build_model model = models.build_model(args, self) File "/tertiary/thies/fairseq/fairseq/models/__init__.py", line 56, in build_model return ARCH_MODEL_REGISTRY[cfg.arch].build_model(cfg, task) File 
"/tertiary/thies/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/model.py", line 277, in build_model checkpoint=args.pipeline_checkpoint, File "/tertiary/thies/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/model.py", line 57, in __init__ + [encoder.final_layer_norm] File "/secondary/thies/.virtualenvs/pytorch-23102020/lib/python3.6/site-packages/torch/nn/modules/module.py", line 796, in __getattr__ type(self).__name__, name)) torch.nn.modules.module.ModuleAttributeError: 'TransformerEncoder' object has no attribute 'embedding_layer' For training, the a single Pipe() module needs to be created for the Transformer encoder-decoder model. So, you need to set --pipeline-balance and --pipeline-devices in the training command, instead of --pipeline-encoder-balance, --pipeline-encoder-devices, --pipeline-decoder-balance, --pipeline-decoder-devices. For inference/generation, two Pipe() modules are created, one for the encoder and one for the decoder, since the encoder and decoder are called separately during generation. So, in that case, you need to set --pipeline-encoder-balance, --pipeline-encoder-devices, --pipeline-decoder-balance, --pipeline-decoder-devices instead. Awesome, works now. Thank you very much.
gharchive/issue
2020-10-23T15:59:08
2025-04-01T06:40:08.372910
{ "authors": [ "shruti-bh", "thies1006" ], "repo": "pytorch/fairseq", "url": "https://github.com/pytorch/fairseq/issues/2782", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
925198912
RecursionError: maximum recursion depth exceeded in comparison I use ort like this: ... model = nn.SyncBatchNorm.convert_sync_batchnorm(model) model = ORTModule(model) model = nn.parallel.DistributedDataParallel(model, find_unused_parameters=True, device_ids=[device]) ... But found error: Traceback (most recent call last): File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap fn(i, *args) File "/home/users/min.du/hdlt/feature_j5fsd_configs/HDLT/hdlt/engine/ddp_trainer.py", line 156, in _main_func main_func(local_rank, *args) File "/home/users/min.du/hdlt/feature_j5fsd_configs/HDLT/tools/train.py", line 163, in train_entrance trainer.fit() File "/home/users/min.du/hdlt/feature_j5fsd_configs/HDLT/tools/trainer_wrapper.py", line 225, in fit self._trainer.fit() File "/home/users/min.du/hdlt/feature_j5fsd_configs/HDLT/hdlt/engine/trainer.py", line 298, in fit profiler=self.profiler, File "/home/users/min.du/hdlt/feature_j5fsd_configs/HDLT/hdlt/engine/processors/processor.py", line 265, in __call__ model_outs = model(*_as_list(batch_i)) File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 705, in forward output = self.module(*inputs[0], **kwargs[0]) File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/onnxruntime/training/ortmodule/ortmodule.py", line 41, in _forward return self._execution_manager(self._is_training()).forward(*inputs, **kwargs) File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/onnxruntime/training/ortmodule/_training_manager.py", line 67, in forward build_gradient_graph = self._export_model(*inputs, **kwargs) File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/onnxruntime/training/ortmodule/_graph_execution_manager.py", line 206, in _export_model schema = _io._extract_schema({'args': copy.copy(inputs), 'kwargs': copy.copy(kwargs)}) File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/onnxruntime/training/ortmodule/_io.py", line 300, in _extract_schema data[key] = _extract_schema(data[key]) File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/onnxruntime/training/ortmodule/_io.py", line 291, in _extract_schema data[idx] = _extract_schema(data[idx]) File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/onnxruntime/training/ortmodule/_io.py", line 291, in _extract_schema data[idx] = _extract_schema(data[idx]) File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/onnxruntime/training/ortmodule/_io.py", line 291, in _extract_schema data[idx] = _extract_schema(data[idx]) [Previous line repeated 949 more times] File "/home/users/min.du/venvs/pytorch1.8/lib/python3.6/site-packages/onnxruntime/training/ortmodule/_io.py", line 287, in _extract_schema if isinstance(data, abc.Sequence): File "/home/users/min.du/venvs/pytorch1.8/lib64/python3.6/abc.py", line 184, in __instancecheck__ if subclass in cls._abc_cache: File "/home/users/min.du/venvs/pytorch1.8/lib64/python3.6/_weakrefset.py", line 75, in __contains__ return wr in self.data RecursionError: maximum recursion depth exceeded in comparison Any suggestion? 
Any chance your input has strings in it? and such string would have 949 chars Can you retest? https://github.com/microsoft/onnxruntime/pull/8098 may have fixed your issue (merged the same day you created the issue) @DuinoDu - after the re-test, if the issue persists, can u pls provide re-producible steps/code scripts, and possibly along with the model? thx Hi, @DuinoDu without waiting, I cloned pytorch repo, and leverage the UTs in the file for the same: ~/pytorch/torch/testing/_internal/distributed/distributed_test.py However, I can't reproduce your error with it. (Although there's other error encountered, but it's unrelated. And the fix will be in future release.) Therefore it will be nice if you can provide more details, i.e., small reproducible case if possible, if you still see the issue with the fix (https://github.com/microsoft/onnxruntime/pull/8098). Thanks. @DuinoDu FYI - by reverting the fix, I can repro the same exception. FAILED orttraining_test_ortmodule_api.py::test_input_with_string_exception - RecursionError: maximum recursion depth exceeded in comparison Hi @DuinoDu, we are closing this issue now, as we believe we have resolved it. Please re-open or create a new issue if you need more assistance. Thank you! @DuinoDu - please feel free to re-open it if you still have the same issue. Thanks.
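For anyone curious why string inputs in particular blow the recursion limit: a str is itself a Sequence whose elements are one-character strings, so a naive recursive schema walker never bottoms out. A simplified illustration (not the actual ORTModule code) is shown below:
from collections import abc

def extract_schema(data):
    # Without this check, "abc" is a Sequence whose items are again strings,
    # so the recursion would never terminate for long string inputs.
    if isinstance(data, str):
        return str
    if isinstance(data, abc.Sequence):
        return [extract_schema(item) for item in data]
    if isinstance(data, abc.Mapping):
        return {key: extract_schema(value) for key, value in data.items()}
    return type(data)

print(extract_schema({"args": (1, "hello"), "kwargs": {}}))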
gharchive/issue
2021-06-18T20:49:38
2025-04-01T06:40:08.464206
{ "authors": [ "DuinoDu", "natke", "thiagocrepaldi", "ytaous" ], "repo": "pytorch/ort", "url": "https://github.com/pytorch/ort/issues/34", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2177384904
aarch64 build for AWS Linux - Failed to load image Python extension 🐛 Describe the bug Built Torch 2.1.2 and TorchVision 2.1.2 from source and running into the following problem: /home/ec2-user/conda/envs/textgen/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/ec2-user/conda/envs/textgen/lib/python3.10/site-packages/torchvision/image.so: undefined symbol: _ZNK3c1017SymbolicShapeMeta18init_is_contiguousEv'If you don't plan on using image functionality from torchvision.io, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have libjpeg or libpng installed before building torchvision from source? previously the error was about missing libs and not undefined symbol, so I believe the libs are correctly installed now. Building says: Compiling extensions with following flags: FORCE_CUDA: False FORCE_MPS: False DEBUG: False TORCHVISION_USE_PNG: True TORCHVISION_USE_JPEG: True TORCHVISION_USE_NVJPEG: True TORCHVISION_USE_FFMPEG: True TORCHVISION_USE_VIDEO_CODEC: True NVCC_FLAGS: Compiling with debug mode OFF Found PNG library Building torchvision with PNG image support libpng version: 1.6.37 libpng include path: /home/ec2-user/conda/envs/textgen/include/libpng16 Running build on conda-build: False Running build on conda: True Building torchvision with JPEG image support libjpeg include path: /home/ec2-user/conda/envs/textgen/include libjpeg lib path: /home/ec2-user/conda/envs/textgen/lib Building torchvision without NVJPEG image support Building torchvision with ffmpeg support ffmpeg version: b'ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers\nbuilt with gcc 10.2.0 (crosstool-NG 1.22.0.1750_510dbc6_dirty)\nconfiguration: --prefix=/opt/conda/conda-bld/ffmpeg_1622823166193/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeh --cc=/opt/conda/conda-bld/ffmpeg_1622823166193/_build_env/bin/aarch64-conda-linux-gnu-cc --disable-doc --enable-avresample --enable-gmp --enable-hardcoded-tables --enable-libfreetype --enable-libvpx --enable-pthreads --enable-libopus --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --disable-nonfree --enable-gpl --enable-gnutls --disable-openssl --enable-libopenh264 --enable-libx264\nlibavutil 56. 31.100 / 56. 31.100\nlibavcodec 58. 54.100 / 58. 54.100\nlibavformat 58. 29.100 / 58. 29.100\nlibavdevice 58. 8.100 / 58. 8.100\nlibavfilter 7. 57.100 / 7. 57.100\nlibavresample 4. 0. 0 / 4. 0. 0\nlibswscale 5. 5.100 / 5. 5.100\nlibswresample 3. 5.100 / 3. 5.100\nlibpostproc 55. 5.100 / 55. 5.100\n' ffmpeg include path: ['/home/ec2-user/conda/envs/textgen/include'] ffmpeg library_dir: ['/home/ec2-user/conda/envs/textgen/lib'] Building torchvision without video codec support So I believe I do have things set up correctly to be able to do image calls (I don't care about video). Any idea why I would still be getting the undefined symbol warning? Thanks! Versions Collecting environment information... 
PyTorch version: 2.1.2+cu121 Is debug build: False CUDA used to build PyTorch: 12.2 ROCM used to build PyTorch: N/A OS: Amazon Linux 2023.3.20240304 (aarch64) GCC version: (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2) Clang version: Could not collect CMake version: version 3.28.3 Libc version: glibc-2.34 Python version: 3.10.9 (main, Mar 8 2023, 10:41:45) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.1.79-99.164.amzn2023.aarch64-aarch64-with-glibc2.34 Is CUDA available: True CUDA runtime version: 12.2.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA T4G Nvidia driver version: 550.54.14 cuDNN version: Probably one of the following: /usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn.so.8.9.4 /usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_adv_infer.so.8.9.4 /usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_adv_train.so.8.9.4 /usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_cnn_infer.so.8.9.4 /usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_cnn_train.so.8.9.4 /usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_ops_infer.so.8.9.4 /usr/local/cuda-12.2/targets/sbsa-linux/lib/libcudnn_ops_train.so.8.9.4 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: aarch64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Vendor ID: ARM Model name: Neoverse-N1 Model: 1 Thread(s) per core: 1 Core(s) per socket: 4 Socket(s): 1 Stepping: r3p1 BogoMIPS: 243.75 Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs L1d cache: 256 KiB (4 instances) L1i cache: 256 KiB (4 instances) L2 cache: 4 MiB (4 instances) L3 cache: 32 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-3 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; __user pointer sanitization Vulnerability Spectre v2: Mitigation; CSV2, BHB Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] torch==2.1.2+cu121 [pip3] torchaudio==2.1.2 [pip3] torchvision==0.16.2+cu121 [pip3] triton==2.1.0 [conda] numpy 1.26.4 pypi_0 pypi [conda] torch 2.1.2+cu121 pypi_0 pypi [conda] torchaudio 2.1.2 pypi_0 pypi [conda] torchvision 0.16.2+cu121 pypi_0 pypi [conda] triton 2.1.0 pypi_0 pypi [pip3] torchvision==0.16.2+cu121 [conda] torchvision 0.16.2+cu121 pypi_0 pypi Try uninstalling these versions first? [pip3] torchvision==0.16.2+cu121 [conda] torchvision 0.16.2+cu121 pypi_0 pypi Try uninstalling these versions first? What would that accomplish? That's literally the package that I'm trying to use and that is throwing the error. Built Torch 2.1.2 and TorchVision 2.1.2 from source What version of torchvision are you building from source, exactly? There's no torchvision 2.x. The latest stable version is 0.17. The fact that there already is a stable 0.16.2 version installed while you're trying to build from source is very likely to be causing some issues. Built Torch 2.1.2 and TorchVision 2.1.2 from source What version of torchvision are you building from source, exactly? There's no torchvision 2.x. 
The latest stable version is 0.17. The fact that there already is a stable 0.16.2 version installed while you're trying to build from source is very likely to be causing some issues. Updated original post, torchvision version was a typo. I did finally get torchvision to build and be functional, but only by forcibly editing the build scripts to pull in my custom build of torch+cuda 2.1.2. The build scripts were importing a non-cuda build because there is no aarch64 torch+cuda out there for pip to pull down. So finally, after forcing my own torch+cuda 2.1.2 whl into the torchvision build, now my torchvision actually works. I need to say - it's been PAINFUL dealing with building anything that relies on torch because all the build scripts pull down the non-cuda version and mess up the builds. Every time I want to build something relying on torch, now I need to hack in pulling my own torch whl instead for them to work (this also resolved issues I was having building a few other things). I reaaaaaally hope official aarch64 torch+cuda builds start to be made available so I don't have to keep doing this hackjob. The box is shut down but I believe it was pyproject.toml that I had to update to point directly at my torch whl and the command I used was "python setup.py bdist_wheel". I had the same outcomes with "pip install -v ." to directly install, though.
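A hedged recap of the workaround described above, with paths and wheel names as placeholders: install the locally built torch wheel first, then build torchvision with pip's build isolation disabled so the build cannot silently pull a different, CPU-only torch. This is a commonly used alternative to hand-editing the build files:
pip install /path/to/torch-2.1.2+cu121-cp310-cp310-linux_aarch64.whl
git clone -b v0.16.2 https://github.com/pytorch/vision && cd vision
pip install --no-build-isolation -v .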
gharchive/issue
2024-03-09T20:13:46
2025-04-01T06:40:08.814741
{ "authors": [ "NicolasHug", "elkay" ], "repo": "pytorch/vision", "url": "https://github.com/pytorch/vision/issues/8305", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
449785749
Add S3 Bucket Dataset Download data from S3 and handle categories like ImageFolder Dataset I suppose the data format in S3 could be arbitrary? Like a zip with images, or a binary file, etc? So there is no single S3 bucket dataset that would apply, right? Instead, having a functionality that easily downloads from S3 might be the most generic approach, and I believe there are already libraries that do that. I recommend using alluxio and fuse to mount s3 buckets locally. This issue appears to be stale, so I'm closing it. If you think it should remain open, feel free to reopen it.
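For readers looking for the generic approach suggested above, here is a minimal sketch of an S3-backed dataset using boto3; the bucket name, prefix, and bucket/prefix/class_name/image.jpg label layout are assumptions, not something this issue prescribes:
import io
import boto3
from PIL import Image
from torch.utils.data import Dataset

class S3ImageDataset(Dataset):
    """Lists bucket/prefix/<class_name>/<image> objects, ImageFolder-style."""

    def __init__(self, bucket, prefix, transform=None):
        self.s3 = boto3.client("s3")
        self.bucket = bucket
        self.transform = transform
        paginator = self.s3.get_paginator("list_objects_v2")
        self.keys = [obj["Key"]
                     for page in paginator.paginate(Bucket=bucket, Prefix=prefix)
                     for obj in page.get("Contents", [])]
        classes = sorted({key.split("/")[-2] for key in self.keys})
        self.class_to_idx = {name: idx for idx, name in enumerate(classes)}

    def __len__(self):
        return len(self.keys)

    def __getitem__(self, index):
        key = self.keys[index]
        body = self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(body)).convert("RGB")
        label = self.class_to_idx[key.split("/")[-2]]
        if self.transform is not None:
            image = self.transform(image)
        return image, label
Mounting the bucket with alluxio or s3fs/fuse, as recommended above, lets the stock ImageFolder dataset be used unchanged instead.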
gharchive/issue
2019-05-29T12:43:32
2025-04-01T06:40:08.817808
{ "authors": [ "datumbox", "fmassa", "gm-spacagna", "rcourivaud" ], "repo": "pytorch/vision", "url": "https://github.com/pytorch/vision/issues/970", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
914383984
TorchVision cocoapods package Created Cocoapods package with libtorchvision_ops.a for iOS development. Cocoapods is created with similar fashion to LibTorch. Library is compiled using torch downloaded from LibTorch. (Provided build_podspec.sh to create archive) Currently uploaded test version of package to Cocoapods trunk with name LibTorch_vision_ops. Need to modify author and homepage on LibTorch_vision.ops.podspec file to main branch. (Currently on fork). Once PR Is accepted, I will pull package from Cocoapods. @husthyc can you have a look? @elb3k Thanks for drafting this PR. We are planning to do the Cocoapod release in recent days which aligns with the main PyTorch 1.9.0 release day. I think we cannot build the lib on a local machine and push to Cocoapod because if we do that, the debug symbols in the static lib will contain your personal machine's paths. Basically what we will do is we will trigger a job on the CI machine and upload the zip to AWS s3 bucket from there. Can you hold this PR for some days and see if our official release works? Thanks. @elb3k Thanks for drafting this PR. We are planning to do the Cocoapod release in recent days which aligns with the main PyTorch 1.9.0 release day. I think we cannot build the lib on a local machine and push to Cocoapod because if we do that, the debug symbols in the static lib will contain your personal machine's paths. Basically what we will do is we will trigger a job on the CI machine and upload the zip to AWS s3 bucket from there. Can you hold this PR for some days and see if our official release works? Thanks. @husthyc Yes. Hi, Thanks a lot for the PR! We have released a cocoapods package with the 0.10.0 release, and it has been merged in https://github.com/pytorch/vision/pull/4055 Thanks again for your help! Hi, Thanks a lot for the PR! We have released a cocoapods package with the 0.10.0 release, and it has been merged in #4055 Thanks again for your help! Hi, I am glad to hear that.
gharchive/pull-request
2021-06-08T05:44:02
2025-04-01T06:40:08.824776
{ "authors": [ "elb3k", "fmassa", "husthyc" ], "repo": "pytorch/vision", "url": "https://github.com/pytorch/vision/pull/3999", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
427832449
When persist is passed to process_intake, persist dask array Persist was already a valid argument - but was just being used to return a dask array. This PR makes it return a persisted array. @philippjfr ok with you if I merge this? Ok Philipp explained to me that it already worked. It just happened in a different spot than we were looking, so this PR isn't necessary.
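For readers unfamiliar with the distinction: a persisted dask array is the same lazy collection with its chunks computed and held in memory. The gist of what the (ultimately unnecessary) change would have done is roughly the following; this is illustrative only and not the actual hvplot source:
def maybe_persist(data, persist=False):
    # dask arrays and dataframes expose .persist(); it triggers computation
    # but keeps the result as a (now in-memory) dask collection.
    if persist and hasattr(data, "persist"):
        data = data.persist()
    return data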
gharchive/pull-request
2019-04-01T17:47:34
2025-04-01T06:40:08.881931
{ "authors": [ "jsignell" ], "repo": "pyviz/hvplot", "url": "https://github.com/pyviz/hvplot/pull/195", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
480724400
DiscreteSlider does not respect margin DiscreteSlider doesn't respect the margin attribute. It also has a default that overlaps the left of the viewport, cutting off part of the widget. import panel as pn import datetime as dt pn.extension() pn.Column( pn.widgets.DateSlider( name='Date1', start=dt.datetime(2019, 1, 1), end=dt.datetime(2019, 6, 1), value=dt.datetime(2019, 2, 8), margin=(0, 0, 0, 0) ), pn.widgets.DateSlider( name='Date1', start=dt.datetime(2019, 1, 1), end=dt.datetime(2019, 6, 1), value=dt.datetime(2019, 2, 8), margin=(0, 0, 0, 100) ), pn.widgets.DiscreteSlider( name='Hour', options=['7:00', '10:00', '15:00', '17:00', '19:00'], value='7:00', margin=(0, 0, 0, 0) ), pn.widgets.DiscreteSlider( name='Hour', options=['7:00', '10:00', '15:00', '17:00', '19:00'], value='7:00', margin=(0, 0, 0, 100) ), ) Python 3.7.3 Panel 0.6.2 Appears to be fixed.
gharchive/issue
2019-08-14T14:49:45
2025-04-01T06:40:08.884164
{ "authors": [ "SteveAlexander", "philippjfr" ], "repo": "pyviz/panel", "url": "https://github.com/pyviz/panel/issues/600", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
148294576
Cannot get descendants() for the dialog (notepad.exe) + incorrect application menu name Running the following code: import pywinauto app = pywinauto.Application(backend='uia').start_(r'notepad.exe') dlg = app.UntitledNotepad dlg.WrapperObject().descendants() Output: >>> dlg.WrapperObject().descendants() Traceback (most recent call last): File "<interactive input>", line 1, in <module> File "C:\OneDrive\python_students\pywinauto-64_github\pywinauto_fork\pywinauto\base_wrapper.py", line 384, in descendants desc_elements = self.element_info.descendants(proc_id=self.process_id()) File "C:\OneDrive\python_students\pywinauto-64_github\pywinauto_fork\pywinauto\UIAElementInfo.py", line 250, in descendants cond = IUIA().build_condition(**kwargs) TypeError: build_condition() got an unexpected keyword argument 'proc_id' Also dlg.MenuBar.WrapperObject().element_info.name returns "System" instead of "Application". BTW, dlg.WrapperObject().children()[-1].element_info.name returns "Application".
gharchive/issue
2016-04-14T08:51:47
2025-04-01T06:40:08.888133
{ "authors": [ "vasily-v-ryabov" ], "repo": "pywinauto/pywinauto", "url": "https://github.com/pywinauto/pywinauto/issues/173", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1490787110
chore: promote h52 to version 0.0.19 chore: promote h52 to version 0.0.19 this commit will trigger a pipeline to generate the actual kubernetes resources to perform the promotion which will create a second commit on this Pull Request before it can merge [APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: To complete the pull request process, please assign q1323829945 You can assign the PR to them by writing /assign @q1323829945 in a comment when ready. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files: OWNERS Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment
gharchive/pull-request
2022-12-12T02:43:08
2025-04-01T06:40:08.910392
{ "authors": [ "q1323829945" ], "repo": "q1323829945/jenkinX2", "url": "https://github.com/q1323829945/jenkinX2/pull/16", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2001962925
🛑 nitter.projectsegfau.lt is down In c1d9200, nitter.projectsegfau.lt (https://nitter.projectsegfau.lt) was down: HTTP code: 503 Response time: 337 ms Resolved: nitter.projectsegfau.lt is back up in fba99a4 after 16 minutes.
gharchive/issue
2023-11-20T11:20:42
2025-04-01T06:40:08.928207
{ "authors": [ "qallen028" ], "repo": "qallen028/nitter-instances", "url": "https://github.com/qallen028/nitter-instances/issues/10880", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2128746430
🛑 nitter.hostux.net is down In f005cbb, nitter.hostux.net (https://nitter.hostux.net) was down: HTTP code: 0 Response time: 0 ms Resolved: nitter.hostux.net is back up in 0399c2c after 12 hours, 19 minutes.
gharchive/issue
2024-02-10T23:03:10
2025-04-01T06:40:08.931383
{ "authors": [ "qallen028" ], "repo": "qallen028/nitter-instances", "url": "https://github.com/qallen028/nitter-instances/issues/16710", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1814474805
🛑 notabird.site is down In 9ad8cce, notabird.site (https://notabird.site) was down: HTTP code: 520 Response time: 77 ms Resolved: notabird.site is back up in 68031ae.
gharchive/issue
2023-07-20T17:25:50
2025-04-01T06:40:08.933990
{ "authors": [ "qallen028" ], "repo": "qallen028/nitter-instances", "url": "https://github.com/qallen028/nitter-instances/issues/1706", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1889112889
🛑 notabird.site is down In ff950c0, notabird.site (https://notabird.site) was down: HTTP code: 520 Response time: 81 ms Resolved: notabird.site is back up in 9d1c6d8 after 11 minutes.
gharchive/issue
2023-09-10T11:41:30
2025-04-01T06:40:08.936608
{ "authors": [ "qallen028" ], "repo": "qallen028/nitter-instances", "url": "https://github.com/qallen028/nitter-instances/issues/5545", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1911709235
🛑 nitter.projectsegfau.lt is down In 05a6a47, nitter.projectsegfau.lt (https://nitter.projectsegfau.lt) was down: HTTP code: 0 Response time: 0 ms Resolved: nitter.projectsegfau.lt is back up in 56c13e3 after 11 minutes.
gharchive/issue
2023-09-25T15:01:03
2025-04-01T06:40:08.939737
{ "authors": [ "qallen028" ], "repo": "qallen028/nitter-instances", "url": "https://github.com/qallen028/nitter-instances/issues/6560", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1962614098
🛑 nitter.kling.gg is down In 9185e4a, nitter.kling.gg (https://nitter.kling.gg) was down: HTTP code: 0 Response time: 0 ms Resolved: nitter.kling.gg is back up in 19a19ee after 10 minutes.
gharchive/issue
2023-10-26T03:03:54
2025-04-01T06:40:08.942810
{ "authors": [ "qallen028" ], "repo": "qallen028/nitter-instances", "url": "https://github.com/qallen028/nitter-instances/issues/8425", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1329431698
Is there support for relative formula updates in the duplicate row function Hi, I would like to know whether this feature is currently supported. What I have in mind is the following:
C1 = A1+B1
call duplicate row 1
C2 = A2+B2
but what the library actually produces is
C1 = A1+B1
call duplicate row 1
C2 = A1+B1
I have tried to find documentation on this, but found no answer in the library. Thanks for your issue. This library doesn't support this feature currently. I'll consider adding support for it later. Thank you! Same feature needed here, thank you! I wrote this utility function to support this feature:
func parseFormula(formula string, sourceRowID, targetRowID int) (newFormula string) {
    reg := regexp.MustCompile(fmt.Sprintf("([A-Z]{1,2}%d)", sourceRowID))
    cellSlice := reg.FindAllString(formula, -1)
    newFormula = formula
    for _, cell := range cellSlice {
        col, _, err := excelize.SplitCellName(cell)
        if err != nil {
            log.Fatalln(err)
        }
        targetCell, err := excelize.JoinCellName(col, targetRowID)
        if err != nil {
            log.Fatalln(err)
        }
        newFormula = strings.ReplaceAll(newFormula, cell, targetCell)
    }
    return
}
Call it like this: parseFormula("A1 + B1", 1, 2) Hi @bailantaotao, @ljyf5593, this library now supports this feature; please upgrade to the master branch code, and this feature will be released in the next version.
gharchive/issue
2022-08-05T04:28:16
2025-04-01T06:40:08.947107
{ "authors": [ "bailantaotao", "ljyf5593", "xuri" ], "repo": "qax-os/excelize", "url": "https://github.com/qax-os/excelize/issues/1306", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1811125711
Memory usage while reading workbook with large amount of data (大文件内存占用的问题) debian 12系统,excelize用的7.15号go get -u的版本,打开一个180M的xlsx(包200个sheet,其中2个sheet内容最多),设置比较大的UnzipXMLSizeLimit不解压到硬盘,在内存里处理,一开始内存占用近2G,逐渐增加到6G,持续一段时间后完成,耗时3分多钟。 主要代码如下: xl, err := excelize.OpenFile(fn, excelize.Options {UnzipXMLSizeLimit: 1e18}); if err == nil { for _, sheetName := range xl.GetSheetList() { if rows, err2 := xl.Rows(sheetName); err2 == nil { for rows.Next() { if row, err3 := rows.Columns(); err3 == nil { //proc } } } } } 尝试用rust的calamine库比较了一下,一开始占用50M,逐渐增加到1.3G就是最高峰了,比wps占用还少点,处理时间也是3分多钟,和go差别不大。主要代码如下: let mut wb: Xlsx<_> = open_workbook(fn).unwrap(); let sheets = wb.sheet_names().to_owned(); for sheet_name in sheets { if let Some(Ok(r)) = wb.worksheet_range(&sheet_name) { let mut cnt = 0; for row in r.rows() { cnt += 1; } println!("fffff={:?}, cnt={:?}", sheet_name, cnt); } } 用wps office表格打开占用1.9G. 感觉excelize内存占用偏大,不知道有没有改进的空间 Thanks for your issue. Could you provide a reproducible demo and your input file attachment without confidential info? This issue was similar to #1096. Please decrease the value of UnzipXMLSizeLimit to avoid high memory usage. 发现用wps表格保存过的大文件也存在同样的问题。这是测试代码: package main import ( "fmt" "flag" "time" "math/rand" "github.com/qax-os/excelize" ) //go build t.go && ./t -gen //生成一个测试文件/tmp/gen.xlsx //./t -name=/tmp/gen.xlsx //读取上面生成的文件,内存占用正常。若用wps表格打开gen.xlsx再保存,文件体积增大,再进行读测试,内存占用就会剧增。 var ( genXlsx = flag.Bool("gen", false, "genXlsx") name = flag.String("name", "", "filename") letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ") ) // func testRead(fn string) { if xl, err := excelize.OpenFile(fn, excelize.Options {UnzipXMLSizeLimit: 1e18}); err == nil { for _, sheetName := range xl.GetSheetList() { cnt := 0 if rows, err2 := xl.Rows(sheetName); err2 == nil { for rows.Next() { if _, err3 := rows.Columns(); err3 == nil { cnt += 1 } } } fmt.Println(sheetName, cnt) } } } // func GenXlsxTest(fn string) { rand.Seed(time.Now().UnixNano()) xl := excelize.NewFile() for i := 0; i <= 22; i ++ { sheetname := fmt.Sprintf("测试表%v", i) xl.NewSheet(sheetname) streamWriter, _ := xl.NewStreamWriter(sheetname) header := []interface{} {"序号", "名", "名2", "cc", "ccci", "zzz", "ccb", "hjjj", "jfjfj"} cell, _ := excelize.CoordinatesToCellName(1, 1) streamWriter.SetRow(cell, header) num := 125 switch i { case 3: num = 150000 case 7: num = 450000 } for rowi := 2; rowi <= num; rowi ++ { row := []interface{} {rowi-1, RandStr(5), RandStr(6), RandStr(7), RandStr(7), RandStr(7), RandStr(7), RandStr(8), RandStr(8), RandStr(8), RandStr(8), RandStr(9), RandStr(9), RandStr(9), RandStr(9), RandStr(10), RandStr(10), RandStr(10), RandStr(10)} cell, _ := excelize.CoordinatesToCellName(1, rowi) if err := streamWriter.SetRow(cell, row); err != nil { fmt.Println(rowi, err) } } if err := streamWriter.Flush(); err != nil { println(err) } } xl.SaveAs(fn) println("保存到", fn) } func RandStr(n int) string { b := make([]rune, n) for i := range b { b[i] = letterRunes[rand.Intn(len(letterRunes))] } return string(b) } func main() { flag.Parse() if *genXlsx { GenXlsxTest("/tmp/gen.xlsx") } else if *name != "" { testRead(*name) } } Thanks for your feedback. As the notice in the above reply, please decrease the value of UnzipXMLSizeLimit to avoid high memory usage. I have tested with following code with the workbook generated with your code. The RSS memory usage was about 254MB, so the performance was in expectation. 
package main import ( "fmt" "runtime" "syscall" "time" "github.com/xuri/excelize/v2" ) func main() { runtime.GC() startTime := time.Now() f, err := excelize.OpenFile("gen.xlsx") if err != nil { fmt.Println(err) return } defer func() { if err := f.Close(); err != nil { fmt.Println(err) } }() for _, sheetName := range f.GetSheetList() { cnt := 0 if rows, err2 := f.Rows(sheetName); err2 == nil { for rows.Next() { if _, err3 := rows.Columns(); err3 == nil { cnt += 1 } } } fmt.Println(sheetName, cnt) } printBenchmarkInfo("main", startTime) } func printBenchmarkInfo(fn string, startTime time.Time) { var memStats runtime.MemStats var rusage syscall.Rusage var bToMb = func(b uint64) uint64 { return b / 1024 / 1024 } runtime.ReadMemStats(&memStats) syscall.Getrusage(syscall.RUSAGE_SELF, &rusage) fmt.Printf("Func: %s \tRSS = %v MB\tAlloc = %v MB\tTotalAlloc = %v MB\tSys = %v MB\tNumGC = %v \tCost = %s\n", fn, bToMb(uint64(rusage.Maxrss)), bToMb(memStats.Alloc), bToMb(memStats.TotalAlloc), bToMb(memStats.Sys), memStats.NumGC, time.Since(startTime)) } Benchmark info: Func: main RSS = 254 MB Alloc = 6 MB TotalAlloc = 16921 MB Sys = 273 MB NumGC = 4383 Cost = 57.174368275s 需要用wps表格打开第一步生成的gen.xlsx,测试wps et保存后的文件,内存占用会增加很多。 UnzipXMLSizeLimit设得比较大,一方面是想测试不解压到硬盘的内存占用,另外我的系统/tmp是挂载到ram的,只分配了6G,如果解压到/tmp空间不够 Thanks for your feedback. The stream writer writes string cell value as an inline string (please also reference #1377), which storage cell value in the worksheet instead of SST (shared string table) for get better read performance with row iterator, but the workbook resaved after Kingsoft WPS, the cell value will be storage to the SST, so the library needs to parse SST internal parts of the workbook, that's will take more memory usage. The library can read, parse, and validate many internal workbook structures to get better capability, more sure the generated workbook is not corrupted, and more features and support, that will use more memory. I suggest decreasing the value of the UnzipXMLSizeLimit options for opening the workbook with the amount of data or increase your disk storage or memory resource to get better performance. I've closed this issue, and if you have any questions, please let me know and reopen this anytime.
gharchive/issue
2023-07-19T04:36:23
2025-04-01T06:40:08.954428
{ "authors": [ "aswjh", "xuri" ], "repo": "qax-os/excelize", "url": "https://github.com/qax-os/excelize/issues/1581", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2434323433
ProtonVPN features filters URL to the Wiki page https://github.com/qdm12/gluetun-wiki/blob/main/setup/providers/protonvpn.md What's missing? I see in the docs (https://github.com/qdm12/gluetun-wiki/blob/main/setup/providers/protonvpn.md) that issue #1582 (https://github.com/qdm12/gluetun/issues/1582) adds support for filtering servers by feature. Can you let me know what the relevant environment variables should be set to in order to filter to P2P servers? FYI, I use Docker Compose. Thanks! Done in 52c9b2ecf52e413eb4552345d7cffb41e6606806
gharchive/issue
2024-07-29T02:35:18
2025-04-01T06:40:09.032466
{ "authors": [ "blaisebrennan", "qdm12" ], "repo": "qdm12/gluetun-wiki", "url": "https://github.com/qdm12/gluetun-wiki/issues/87", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2587125609
PUBLICIP_PERIOD can't be disabled with 0 URL to the Wiki page https://github.com/qdm12/gluetun-wiki/blob/fbc589e8034f177d577547ea9e7fecbec85263b9/setup/options/others.md?plain=1#L14 What's incorrect? The docs state "Set to 0 to disable", but setting it to 0 results in this error: "2024-10-14T21:33:04Z ERROR public ip check settings: public IP address check period is too short: 0s must be at least 5s". This is probably just something that needs to be updated in the docs. Commit 03deb9a: feat(publicip): PUBLICIP_ENABLED replaces PUBLICIP_PERIOD. You should be able to achieve this by using PUBLICIP_ENABLED=no
gharchive/issue
2024-10-14T21:41:20
2025-04-01T06:40:09.035252
{ "authors": [ "RogueOneEcho", "essinghigh" ], "repo": "qdm12/gluetun-wiki", "url": "https://github.com/qdm12/gluetun-wiki/issues/98", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
187565558
Please help with AOT Hey guys! Thank you @qdouble for this great repo! One question though: http://stackoverflow.com/questions/40450604/aot-function-calls-are-not-supported. Could anyone help please? I don't get what it wants from me... I think it would be better for you to search for this issue on the Angular repo, as this error is a general AOT issue and not specific to this repo.
gharchive/issue
2016-11-06T14:46:26
2025-04-01T06:40:09.036927
{ "authors": [ "alvipeo", "qdouble" ], "repo": "qdouble/angular-webpack2-starter", "url": "https://github.com/qdouble/angular-webpack2-starter/issues/163", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
120105718
Tooltips flicker rapidly when they contain too much text. Once a tooltip's text exceeds 300 characters it starts to flicker; beyond 500 characters the flickering is so fast that the text shown on the tooltip can no longer be read. The problem has been fixed.
gharchive/issue
2015-12-03T06:44:30
2025-04-01T06:40:09.065576
{ "authors": [ "qdtroy", "xdxiaodong" ], "repo": "qdtroy/DuiLib_Ultimate", "url": "https://github.com/qdtroy/DuiLib_Ultimate/issues/14", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
516922528
Updated the SUSE / openSUSE repos: repos older than 15.0 were removed, and the other architectures available in the repos were added. @dassau can you confirm (and pull)? @badarotti I'm totally not in the SUSE infra, so I cannot check this. I will pull anyway if nobody responds (the next build of the site will be tomorrow anyway).
gharchive/pull-request
2019-11-04T01:21:20
2025-04-01T06:40:09.085203
{ "authors": [ "badarotti", "rduivenvoorde" ], "repo": "qgis/QGIS-Website", "url": "https://github.com/qgis/QGIS-Website/pull/701", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1065247932
The app crashes. There are no error logs on my side; I only know that it crashed. It crashes on its own after roughly 30 minutes to an hour of playback. At first I thought it had just frozen, but when I moved the mouse over the icon in the lower-right corner it simply disappeared, which left me baffled. This has happened several times on Windows. Back before I reported the bug I was still on 0.4.1, which had the same crash; I just switched to other software instead of reporting it. A while ago I saw there was an update, so I downloaded it to see how it was, and it still crashes the same way, so I came to see whether this can be fixed. The log shows only one yellow warning, which looks harmless. To be honest I don't really know how to use the logs; my English is too poor to understand any of it 😢 I stand corrected: this time it played for an hour with no problem. It was probably my own network fluctuations or some other cause, but I still hope the app can be made more robust.
gharchive/issue
2021-11-28T06:59:41
2025-04-01T06:40:09.315690
{ "authors": [ "bilibililaomoji" ], "repo": "qier222/YesPlayMusic", "url": "https://github.com/qier222/YesPlayMusic/issues/1066", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
974215478
In the Windows single-file (portable) build, clicking the X in the top-right corner pops up this window every time. Could a default setting be added, either to close the program by default or to minimize by default? I already reported this; it is indeed quite annoying.
gharchive/issue
2021-08-19T02:57:45
2025-04-01T06:40:09.317031
{ "authors": [ "China-Huanghe", "TotoWang-hhh" ], "repo": "qier222/YesPlayMusic", "url": "https://github.com/qier222/YesPlayMusic/issues/892", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
234998968
would a filter-fasta method be appropriate in this plugin? Improvement Description It would be useful to replicate the functionality of qiime1's filter_fasta.py — e.g., see this forum post. Comments I am not sure whether this would be at home in q2-quality-filter, demux, or some other plugin that has yet to be created (q2-filter-everything-under-the-sun). Whichever plugin would be appropriate (and perhaps this is an argument for making a new plugin, or for combining with the two demultiplexing plugins into a single fastq/fasta handling plugin), I could imagine other fasta/fastq functions that could accompany this, e.g., trimming, subsampling. References filter_fasta.py — e.g., see this forum post This functionality was implemented for FeatureData[Sequence] in QIIME 2 2017.9: https://forum.qiime2.org/t/qiime-2-2017-9-release-is-now-live/1160
gharchive/issue
2017-06-10T12:10:52
2025-04-01T06:40:09.320851
{ "authors": [ "nbokulich", "thermokarst" ], "repo": "qiime2/q2-quality-filter", "url": "https://github.com/qiime2/q2-quality-filter/issues/29", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }