Dataset schema:
- id: string (length 4 to 10)
- text: string (length 4 to 2.14M)
- source: string (2 classes)
- created: timestamp[s] date (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
162307551
QXcbConnection: Could not connect to display

Which version of PhantomJS are you using? Tip: run phantomjs --version.
2.1.1

What steps will reproduce the problem?
1. Take a working JavaScript file (e.g. rasterize.js)
2. Call the file from a script file (e.g. an executable file, with #!/bin/sh at the start)
3. Now execute the script file as a cron job

Which operating system are you using?
Ubuntu 16.04 LTS

Did you use binary PhantomJS or did you compile it from source?
Binary - installed it as part of Ubuntu (using apt).

Please provide any additional information below.
This is very similar to https://github.com/ariya/phantomjs/issues/14240, but I am using the stock version of PhantomJS (with Ubuntu). Below is the output from running the cron job (which calls the script file, to generate a PDF file once a day):

QXcbConnection: Could not connect to display
PhantomJS has crashed. Please read the bug reporting guide at http://phantomjs.org/bug-reporting.html and file a bug report.
Aborted (core dumped)

Thanks!

Please check your environment variable QT_QPA_PLATFORM. It must be empty.

Yep, it is, in both cases. Does this need to be manually forced for some reason? Thanks!

I would send a patch to reset all the troublesome environment variables before initializing Qt ... if I could find a list of them anywhere. But it sounds like this isn't actually the problem?

It doesn't sound like it, but I could definitely be wrong. Trying different things, no luck. Is there a setting of QT_QPA_PLATFORM that you believe may help? Thanks!

Hi, I'm having the same issue here. Working with 1.9 on my dev machine works fine, but on the live server I can't even download 1.9.x, so I'm stuck now. Where can I find these Qt settings, so I can check that on my side? Thanks!

The offscreen platform is a usable workaround for me with the stock Ubuntu package. This works for me:

QT_QPA_PLATFORM=offscreen phantomjs rasterize.js 2i8zaNg04d9B41Zir2kT3J/output.html 2i8zaNg04d9B41Zir2kT3J.png

Can confirm: export QT_QPA_PLATFORM=offscreen before Selenium scripts works well.

We finally got things working by downloading the binary at https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2 and extracting phantomjs to /usr/bin. I had a hard time with Ubuntu 16.04; they basically distribute a half-finished system with lots of broken packages.

I'm running on 16.04 too, but the problem with the repos is that they deliver a build with x11, which fails. So with the binary I don't have that issue, and it works perfectly now.

Works here now also - thanks!!!

Debian/Ubuntu has a modified version of PhantomJS that can work headlessly, hence the problem with QXcb. See https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=817277 for details. Unfortunately, this is not something that we (= the PhantomJS team) can fix.

@ricohumme Can you provide the steps (commands) to install it? I tried:

export PHANTOM_JS_VERSION=2.1.1
wget "https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-$PHANTOM_JS_VERSION-linux-x86_64.tar.bz2"
tar xvfj "./phantomjs-$PHANTOM_JS_VERSION-linux-x86_64.tar.bz2"
ln -sf "./phantomjs-$PHANTOM_JS_VERSION-linux-x86_64/bin/phantomjs" "/usr/bin"

but still couldn't get it working.

@3zzy the steps I used are as follows:

wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
bzip2 -d phantomjs-2.1.1-linux-x86_64.tar.bz2
tar -xvf phantomjs-2.1.1-linux-x86_64.tar
cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/bin/phantomjs

Does this help your case?
Thanks for the help, the solution above works well :)

I'm getting the same error, but in Odoo.

xvfb-run worked best for me, from Ariya's link posted 8/2016. xvfb provides "an unobtrusive way to run applications that don't really need an X server but insist on having one anyway."

@jglogan Thank you for the solution with QT_QPA_PLATFORM=offscreen. It let me run phantomjs itself, but it was failing for me with an EOFError: end of file reached message on certain pages. After some experiments with different platforms, I found out that it works like a charm with QT_QPA_PLATFORM=minimal. Hope this will help someone. Thanks a lot, guys!

QT_QPA_PLATFORM=offscreen works great on Debian 8 jessie amd64 with PhantomJS 2.1.1:

# QT_QPA_PLATFORM=offscreen phantomjs --version
2.1.1

"Debian/Ubuntu has a modified version of PhantomJS that can work headlessly, hence the problem with QXcb. See https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=817277 for details." This is quite the opposite: unfortunately it can not be fixed in Debian. To achieve headless-ness, upstream statically links with a customised Qt + WebKit. We don't want to ship forks of those projects. It would be great to eventually convince upstream to use standard libraries. There is also a suggestion to use xvfb-run, which is right.

Every time I try to plot anything with RStudio Cloud using the webshot package I get an error. An example:

# install webshot
library(webshot)
webshot::install_phantomjs()
# Make the graph
my_graph = wordcloud2(demoFreq, size = 1.5)
# save it in html
library("htmlwidgets")
saveWidget(my_graph, "tmp.html", selfcontained = F)
# and in png
webshot("tmp.html", "fig_1.png", delay = 5, vwidth = 480, vheight = 480)

The error says:

QXcbConnection: Could not connect to display :0
PhantomJS has crashed. Please read the bug reporting guide at <http://phantomjs.org/bug-reporting.html> and file a bug report.
Error in webshot("tmp.html", "fig_1.png", delay = 5, vwidth = 480, vheight = 480) :
  webshot.js returned failure value: -6

Any thoughts?
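For the cron case that started this thread, the offscreen workaround can also be applied from a wrapper script rather than a shell profile. Below is a minimal Node sketch, assuming phantomjs is on PATH; it reuses the rasterize.js name from the thread, but the input and output file names are illustrative:

// Sketch, not from the thread: run PhantomJS with the Qt "offscreen"
// platform so no X display is needed (e.g. when launched from cron).
const { execFile } = require('child_process');

execFile(
  'phantomjs',
  ['rasterize.js', 'input.html', 'output.pdf'],
  // Inherit the current environment but force the headless Qt platform.
  { env: { ...process.env, QT_QPA_PLATFORM: 'offscreen' } },
  (err, stdout, stderr) => {
    if (err) return console.error('phantomjs failed:', stderr || err);
    console.log(stdout);
  }
);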
gharchive/issue
2016-06-26T01:23:33
2025-04-01T06:37:57.300833
{ "authors": [ "3zzy", "Dingo64", "Vitallium", "ariya", "arrmo", "elhamdaoui", "ip1981", "istinspring", "jglogan", "jorgesinval", "michaelkl", "ricohumme", "seachanged", "wojciechmorawski", "zackw" ], "repo": "ariya/phantomjs", "url": "https://github.com/ariya/phantomjs/issues/14376", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
525335449
Feature Request: Add option for writing to index alias

Feature: Provide an option to allow the sessions2 index to write to an index alias.

Use Case: This will allow users to manage the sessions2 index through ILM, Curator, or other means in Elastic, rather than being forced to use the built-in Moloch time-based rotation. Arkime already supports ILM, just not auto rollover. With performance testing we've found that auto rollover increases search time.
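For context on what this request amounts to at the Elasticsearch level, here is a hedged sketch (illustrative index and alias names, not an existing Arkime option) of pointing a write alias at a concrete index via the standard _aliases API, which is what would let ILM or Curator manage rollover externally:

// Sketch only: create a write alias over a concrete sessions2 index.
// Assumes an async context and an Elasticsearch instance on :9200.
const res = await fetch('http://localhost:9200/_aliases', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    actions: [
      { add: { index: 'sessions2-000001', alias: 'sessions2-write', is_write_index: true } },
    ],
  }),
});
console.log(res.status); // 200 on success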
gharchive/issue
2019-11-19T23:51:28
2025-04-01T06:37:57.349280
{ "authors": [ "Prodian0013", "awick" ], "repo": "arkime/arkime", "url": "https://github.com/arkime/arkime/issues/1316", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2089288865
Fix symbolic links in Beta branch

- Fixed symbolic links with absolute paths, which had blocked portability
- Fixed all linting warnings (these were all pre-existing warnings)
- Fixed all formatting issues with prettier

The test suite still fails in a number of places, but I'm fairly certain that this PR doesn't introduce any problems that weren't already there.

Hi, thanks so much for this PR! There are some helpful changes in here for sure, and I'm not sure which build script I have is screwing with those symlinks; I had thought I fixed it. That said, there's a lot here, and most of it can be attributed to beta being a development branch. E.g. in the cases of those unused variables, most of them are unused because of something I still need to do before I merge the branch, before which time those lint errors would have to be addressed. If you want to create a PR that's not a straightforward bug fix (especially on a dev branch), I'd recommend running it by me first. Again, I really do appreciate the initiative to help out!

Sure, no problem; I needed to do most of this to get it to build locally and figured it might be useful as a PR. I will make a commit now fixing the symlink issue. If you run into anything else that blocks the build, let me know!
gharchive/pull-request
2024-01-18T23:43:45
2025-04-01T06:37:57.357999
{ "authors": [ "ssalbdivad", "yankeeinlondon" ], "repo": "arktypeio/arktype", "url": "https://github.com/arktypeio/arktype/pull/905", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
498799441
mkpr Applied mkpr on package.json sed s/"commander": "^.*"/"commander": "^3.0.2"/ :tada: This PR is included in version 8.5.3 :tada: The release is available on: npm package (@latest dist-tag) GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2019-09-26T10:28:14
2025-04-01T06:37:57.381933
{ "authors": [ "arlac77" ], "repo": "arlac77/npm-template-sync", "url": "https://github.com/arlac77/npm-template-sync/pull/784", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
319134934
merge package from arlac77/npm-package-template README.md docs(README): update from template Coverage remained the same at 97.143% when pulling 3f5e8b79e5609de01534b9d2ac83d3c036ece2d2 on template-sync-1 into 30199184205545a50622bc46bf6964de8f02b886 on master. :tada: This PR is included in version 1.0.6 :tada: The release is available on: npm package (@latest dist-tag) GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2018-05-01T07:19:39
2025-04-01T06:37:57.434085
{ "authors": [ "arlac77", "coveralls" ], "repo": "arlac77/timeseries-sqlite2leveldb", "url": "https://github.com/arlac77/timeseries-sqlite2leveldb/pull/98", "license": "0BSD", "license_type": "permissive", "license_source": "github-api" }
2384278789
🛑 miniTerm Sant Feliu is down In addab0f, miniTerm Sant Feliu ($SF_URL) was down: HTTP code: 0 Response time: 0 ms Resolved: miniTerm Sant Feliu is back up in 9aa4671 after 11 minutes.
gharchive/issue
2024-07-01T16:42:27
2025-04-01T06:37:57.445710
{ "authors": [ "armadillu" ], "repo": "armadillu/upptime", "url": "https://github.com/armadillu/upptime/issues/334", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
386532191
Prevent stopping at walls

We can make the player move in the general direction of the wall, instead of stopping, when the player hits a wall.

Yes, the only full stop should happen when the player hits perpendicular to the wall, and we also have to deal with edges (a sketch of the sliding idea follows below).

Sent a PR for this one.

Fixed in #73
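A hedged sketch of the sliding technique referenced above (vector names are illustrative, not the game's actual code): instead of zeroing velocity on contact, subtract the component pointing into the wall and keep the tangential part. A perfectly head-on hit still stops the player, matching the comment above, and corners/edges need separate handling since two normals can apply at once.

// Keep the player moving along the wall by removing only the
// velocity component that points into the wall.
function slideAlongWall(vel, normal) {
  // `normal` is assumed to be a unit vector pointing out of the wall.
  const dot = vel.x * normal.x + vel.y * normal.y;
  if (dot >= 0) return vel; // already moving away from the wall
  return {
    x: vel.x - dot * normal.x, // subtract the normal component
    y: vel.y - dot * normal.y, // what remains is the tangential slide
  };
}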
gharchive/issue
2018-12-02T06:32:26
2025-04-01T06:37:57.489132
{ "authors": [ "Ishan1742", "arnav-t", "mathrulestheworld" ], "repo": "arnav-t/Shooting-Game", "url": "https://github.com/arnav-t/Shooting-Game/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1839506869
🛑 NWTFV is down In 0395736, NWTFV (https://nwtfv.com) was down: HTTP code: 0 Response time: 0 ms Resolved: NWTFV is back up in ecf924b.
gharchive/issue
2023-08-07T13:53:32
2025-04-01T06:37:57.491705
{ "authors": [ "arnef" ], "repo": "arnef/status", "url": "https://github.com/arnef/status/issues/4247", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1863981974
🛑 NWTFV is down In 48cd33e, NWTFV (https://nwtfv.com) was down: HTTP code: 0 Response time: 0 ms Resolved: NWTFV is back up in d0c97d0 after 279 days, 14 hours, 31 minutes.
gharchive/issue
2023-08-23T20:41:45
2025-04-01T06:37:57.494121
{ "authors": [ "arnef" ], "repo": "arnef/status", "url": "https://github.com/arnef/status/issues/4478", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
79969195
MainApp().run() import module

To load the python code, the app MUST be using this format.

Works:

MainApp().run()

This example doesn't work:

app = MainApp()
app.run()

https://github.com/kivy/kivy-designer/blob/master/designer/project_loader.py#L994
gharchive/issue
2015-05-23T22:18:05
2025-04-01T06:37:57.516276
{ "authors": [ "aron-bordin" ], "repo": "aron-bordin/kivy-designer", "url": "https://github.com/aron-bordin/kivy-designer/issues/78", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
153773184
Adding embedded-dev build target Uses non-minified NGL code to aid debugging. thanks
gharchive/pull-request
2016-05-09T13:12:52
2025-04-01T06:37:57.517615
{ "authors": [ "arose", "sbliven" ], "repo": "arose/ngl", "url": "https://github.com/arose/ngl/pull/112", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
949321533
Password Generator

Issue Type: Feature Request

Describe the feature: I would like to add a Password Generator project.

Go ahead @jigar-sable
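As a rough illustration of the requested project (a hedged sketch, not code from this repository), a browser-side generator could start from the Web Crypto API:

// Simple password generator sketch. Note: using modulo introduces a
// slight bias; fine for a demo project, not for serious key material.
function generatePassword(length = 16) {
  const chars =
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*';
  const values = new Uint32Array(length);
  crypto.getRandomValues(values); // cryptographically strong randomness
  let out = '';
  for (const v of values) out += chars[v % chars.length];
  return out;
}
console.log(generatePassword()); // e.g. "fK2@xQ9zLw..."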
gharchive/issue
2021-07-21T04:51:54
2025-04-01T06:37:57.522964
{ "authors": [ "arpit456jain", "jigar-sable" ], "repo": "arpit456jain/Amazing-Js-Projects", "url": "https://github.com/arpit456jain/Amazing-Js-Projects/issues/45", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1978866591
Any plans to create a Metrics Plugin?

Hello. I was browsing this project and, since the repositories are based on the equivalents from RedHat AMQ Broker, I was wondering if there is any reason why the Metrics plugin still uses the one published by RedHat instead of having its own "mirrored" one in artemiscloud, just like the other repositories? Generally, I am asking if there are plans to create this plugin repository here. https://github.com/artemiscloud/activemq-artemis-broker-kubernetes-image/blob/89a5d7600961569c4e3eef0541d977e57d6d5777/modules/activemq-artemis-launch/added/launch.sh#L299

Hi @labmonkey. Currently it's the opposite: the RedHat AMQ Broker Operator is based on the ArtemisCloud project, and RedHat AMQ Broker is based on Apache ActiveMQ Artemis. Currently there is no plan to have the ArtemisPrometheusMetricsPlugin under the ArtemisCloud project, since the project from RedHat is open source, has an Apache license, and its artifacts are published in RedHat's public Maven repository.
gharchive/issue
2023-11-06T11:05:21
2025-04-01T06:37:57.566277
{ "authors": [ "labmonkey", "tlbueno" ], "repo": "artemiscloud/activemq-artemis-broker-kubernetes-image", "url": "https://github.com/artemiscloud/activemq-artemis-broker-kubernetes-image/issues/83", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2259758333
🛑 Matrix (client) is down In 485d7b1, Matrix (client) (https://matrix.artemislena.eu) was down: HTTP code: 502 Response time: 760 ms Resolved: Matrix (client) is back up in fdcc1fe after 16 minutes.
gharchive/issue
2024-04-23T21:00:30
2025-04-01T06:37:57.568794
{ "authors": [ "artemislena" ], "repo": "artemislena/upptime", "url": "https://github.com/artemislena/upptime/issues/2972", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
476726318
Rename artichoke-backend to artichoke-backend-mruby We are getting close to the point where we can start experimenting with additional interpreter implementations. Rename artichoke-backend to artichoke-backend-mruby to reflect this new multi-VM implementation state. "we are getting close" 😅
gharchive/issue
2019-08-05T08:33:26
2025-04-01T06:37:57.602279
{ "authors": [ "lopopolo" ], "repo": "artichoke/artichoke", "url": "https://github.com/artichoke/artichoke/issues/128", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
900339592
Fix update moving point on editors that don't work properly in tablets Update editors "moving point" when adding a new point. This is done because if the client doesn't have a mouse, the moving point doesn't update, which can cause multiple issues in creating map entities. Thank you very much 🙏🏻
gharchive/pull-request
2021-05-25T06:33:36
2025-04-01T06:37:57.603265
{ "authors": [ "shayb-datumate" ], "repo": "articodeltd/angular-cesium", "url": "https://github.com/articodeltd/angular-cesium/pull/385", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
98538771
Missing Podspec in repo Please add the Podspec to the repo (useful to fetch a different branch directly from the Podfile without using a local repo, e.g. the "swift-2" branch). Thank you. Thanks for noticing. Fixed.
gharchive/issue
2015-08-01T13:45:53
2025-04-01T06:37:57.619517
{ "authors": [ "alenofx", "artman" ], "repo": "artman/Signals", "url": "https://github.com/artman/Signals/issues/6", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
64261781
internal getDirection

function getDirection(x) {
    return x - stageLeft > (measures.w / 3);
}

Why "(measures.w / 3)"? Shouldn't it be "(measures.w / 2)"? tks

unanswered .. close
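The thread went unanswered, so this is only a hedged reading of the snippet, not a confirmed rationale: with w / 3 the tap zones are asymmetric, so taps on the right two thirds advance and only the left third goes back, which biases casual taps toward "next"; w / 2 would make the two zones symmetric.

// Annotated version of the snippet above (an interpretation, not a fix):
function getDirection(x) {
  // true  -> tap landed right of the first third -> navigate forward
  // false -> tap landed within the left third    -> navigate back
  return x - stageLeft > measures.w / 3;
}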
gharchive/issue
2015-03-25T12:53:27
2025-04-01T06:37:57.622087
{ "authors": [ "zecompadre" ], "repo": "artpolikarpov/fotorama", "url": "https://github.com/artpolikarpov/fotorama/issues/390", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
942288291
Config option to enable page numbers to be excluded from the title Added tests and documentation. All tests (render + Liquid + Nunjucks) are passing and I have tested the Nunjucks implementation on my own site as well using the global config and the front matter override. @aarongustafson I've reviewed your PR. Excellent work! 👍🏻 It's a really convenient and nice addition to the plugin. I'll have it merged and publish a new release of the plugin.
gharchive/pull-request
2021-07-12T17:35:00
2025-04-01T06:37:57.623589
{ "authors": [ "aarongustafson", "artstorm" ], "repo": "artstorm/eleventy-plugin-seo", "url": "https://github.com/artstorm/eleventy-plugin-seo/pull/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
54077228
@broskoski: Simplest possible swiping carousel There's still some issues with this I'm working out but it works alright. Would gladly swap this out with a library if I could find one that is halfway decent (suggestions?). :+1: looking good, can I use it in microgravity? This is interesting: http://dev.w3.org/csswg/css-snappoints/ There were some component-js swipe components that looked pretty good http://component.github.io/?q=swipe. Although I can't personally vouch for any of them. Maybe this is an opportunity to release our own open source swipe component :)
gharchive/pull-request
2015-01-12T16:51:39
2025-04-01T06:37:57.626595
{ "authors": [ "broskoski", "craigspaeth", "dzucconi" ], "repo": "artsy/2014.artsy.net", "url": "https://github.com/artsy/2014.artsy.net/pull/5", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
712996383
Adds article screen owner type This adds article and articles as a screen owner type. :rocket: PR was released in v1.40.0 :rocket:
gharchive/pull-request
2020-10-01T16:25:56
2025-04-01T06:37:57.629254
{ "authors": [ "abhitip", "artsyit" ], "repo": "artsy/cohesion", "url": "https://github.com/artsy/cohesion/pull/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2426225556
chore: update metaphysics graphql schema Greetings human :robot: this PR was automatically created as part of metaphysics' deploy process. #nochangelog Warnings :warning: ✅ No changelog changes Generated by :no_entry_sign: dangerJS against 89316bd2f23f778ebe74d35c04421c396a8f3573
gharchive/pull-request
2024-07-23T22:28:31
2025-04-01T06:37:57.631261
{ "authors": [ "ArtsyOpenSource", "artsyit" ], "repo": "artsy/eigen", "url": "https://github.com/artsy/eigen/pull/10511", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
207290331
[WIP] WorksForYou QA Items Closes https://github.com/artsy/eigen/issues/1572 [x] Use sale_message from gravity instead of native logic [ ] Address margins/layout issues [ ] Fix tests Regarding the sale message: institutional works don't have them in gravity, so the current pr work looks like this (note the Not For Sale work on the right and the price-label-less work on the left): This is what eigen currently shows (no label at all for sold or institutional works): I will add a 'Not For Sale' message to institutional works so they aren't left looking like something is missing from them for now, but perhaps it's something that would make sense in gravity (i.e. not ever leaving the sale_message empty and returning 'Not For Sale' if no info is provided) Fixes #1572 1 Warning :warning: PR is classed as Work in Progress Generated by :no_entry_sign: danger I can definitely address the single-artwork notification QA now, but for the masonry layout in general, I need to do a bigger refactor than I thought. @alloy looks good on-device 👍 Superkalifragilistic 👌
gharchive/pull-request
2017-02-13T17:47:11
2025-04-01T06:37:57.636523
{ "authors": [ "ArtsyOpenSource", "alloy", "sarahscott" ], "repo": "artsy/eigen", "url": "https://github.com/artsy/eigen/pull/2169", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1123115772
fix: match iOS sentry release name in fastfile #trivial The type of this PR is: fix Description I changed a '+' to a '-'. I'll fix this manually for the outgoing builds 🙄 PR Checklist (tick all before merging) [x] I have included screenshots or videos to illustrate my changes, or I have not changed anything that impacts the UI. [x] I have tested my changes on iOS and Android. [x] I have added tests/stories for my changes, or my changes don't require testing/stories, or I have included a link to a separate Jira ticket covering the tests. [x] I have added a feature flag, or my changes don't require a feature flag. (How do I add one?) [x] I have documented any follow-up work that this PR will require, or it does not require any. [x] I have added an app state migration, or my changes do not require one. (What are migrations?) [x] I have added a changelog entry below or my changes do not require one. To the reviewers 👀 [ ] I would like at least one of the reviewers to run this PR on the simulator or device. Changelog updates Changelog updates Cross-platform user-facing changes iOS user-facing changes Android user-facing changes Dev changes Fix sentry release name on iOS - Brian This PR contains the following changes: Dev changes (Fix sentry release name on iOS - Brian) Generated by :no_entry_sign: dangerJS against dec6f1478be64dc2fed650b5746d318cdb1c7a38
gharchive/pull-request
2022-02-03T13:55:29
2025-04-01T06:37:57.643989
{ "authors": [ "ArtsyOpenSource", "brainbicycle" ], "repo": "artsy/eigen", "url": "https://github.com/artsy/eigen/pull/6155", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1664829550
feat(CX-3598): add new onboarding cards to home screen This PR resolves CX-3598 Description https://user-images.githubusercontent.com/36167539/231513604-9d69b90b-cf7e-4707-8009-69b24ba3f216.mp4 PR Checklist [x] I have tested my changes on iOS and Android. [x] I hid my changes behind a feature flag, or they don't need one. [x] I have included screenshots or videos, or I have not changed the UI. [ ] I have added tests, or my changes don't require any. [ ] I added an app state migration, or my changes do not require one. [ ] I have documented any follow-up work that this PR will require, or it does not require any. [ ] I have added a changelog entry below, or my changes do not require one. To the reviewers 👀 [ ] I would like at least one of the reviewers to run this PR on the simulator or device. Changelog updates Changelog updates Cross-platform user-facing changes added new Artsy onboarding rail to the home screen -daria iOS user-facing changes Android user-facing changes Dev changes Need help with something? Have a look at our docs, or get in touch with us. @olerichter00 You mean the images themselves, not the way we display them in the component? Not sure, I downloaded them from figma and they have the same size as the ones on the old cards. You would suggest reducing the size? @olerichter00 You mean the images themselves, not the way we display them in the component? Not sure, I downloaded them from figma and they have the same size as the ones on the old cards. You would suggest reducing the size? Ok, let's add them as they are. I remember @MounirDhahri was working on the image sizes in Eigen a while ago. @MounirDhahri, do you think it is possible and needed to reduce the size of the images? @MounirDhahri @olerichter00 maybe we could document somewhere the optimal image sizes? Do we have a readme for similar purposes? I don't think it's possible to document that in numbers but it's probably a good idea to write this somewhere in our docs. The rationale behind it comes from some studies that emerged a few years ago about how app size matters. Overall, the smaller the app size, the more people convert into installing the app. In our case, since we have a fair amount of images in the app, our bundle size has been increasing quickly and it's important for us to try to cut the image size to avoid affecting our conversion
gharchive/pull-request
2023-04-12T15:54:54
2025-04-01T06:37:57.654843
{ "authors": [ "MounirDhahri", "dariakoko", "olerichter00" ], "repo": "artsy/eigen", "url": "https://github.com/artsy/eigen/pull/8498", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1819948048
build(deps): update flipper and dev deps This PR resolves [] Description Updates flipper dep and some other deps 🐙 PR Checklist [x] I have tested my changes on iOS and Android. [x] I hid my changes behind a feature flag, or they don't need one. [x] I have included screenshots or videos, or I have not changed the UI. [x] I have added tests, or my changes don't require any. [x] I added an app state migration, or my changes do not require one. [x] I have documented any follow-up work that this PR will require, or it does not require any. [x] I have added a changelog entry below, or my changes do not require one. To the reviewers 👀 [ ] I would like at least one of the reviewers to run this PR on the simulator or device. Changelog updates Changelog updates Cross-platform user-facing changes iOS user-facing changes Android user-facing changes Dev changes update flipper and dev deps - gkartalis Need help with something? Have a look at our docs, or get in touch with us. This PR contains the following changes: Dev changes (update flipper and dev deps - gkartalis) Generated by :no_entry_sign: dangerJS against 9a64039a79e01056bee93adab9616000291b91d8
gharchive/pull-request
2023-07-25T09:49:51
2025-04-01T06:37:57.663246
{ "authors": [ "ArtsyOpenSource", "gkartalis" ], "repo": "artsy/eigen", "url": "https://github.com/artsy/eigen/pull/9048", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1506939654
fix(offline): Add missing show queries to syncManager

Description

This adds a few missing queries to the syncManager:

- ShowArtworks, showArtworksQuery ($slug: String!, $imageSize: Int!)
- ShowInstalls, showInstallsQuery ($slug: String!, $imageSize: Int!)
- ShowDocuments, showDocumentsQuery ($slug: String!, $partnerID: String!)

cc @artsy/mobile-platform

PR Checklist
[ ] I tested my changes on iOS and Android.
[ ] I added screenshots or videos to illustrate my changes.
[ ] I added Tests and Stories for my changes.

To the reviewers 👀
[ ] I would like at least one of the reviewers to run this PR on the simulator or device.

Need help with something? Have a look at our docs, or get in touch with us.

This is pretty trivial, so I'm going to merge; if anyone has any feedback I can loop back tomorrow and address it.
gharchive/pull-request
2022-12-21T21:34:26
2025-04-01T06:37:57.667380
{ "authors": [ "damassi" ], "repo": "artsy/energy", "url": "https://github.com/artsy/energy/pull/395", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
446721057
[WIP] Use /auth2 for Consign signup/login flow Relates to work to add reCAPTCHA to all auth forms, replaces the one-off signup/login implementation for /consign with the new auth components. WIP while I confirm that analytics is working as expected. 😍 This looks so great!!! So happy to see all that deleted code.
gharchive/pull-request
2019-05-21T16:45:07
2025-04-01T06:37:57.668673
{ "authors": [ "eessex", "sweir27" ], "repo": "artsy/force", "url": "https://github.com/artsy/force/pull/4074", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
304880514
Adds March 13 notes, makes Show&Tell optional This adds the three things we saw today, as well as marks the meeting as optional by our engineers. 👍 thanks!
gharchive/pull-request
2018-03-13T18:18:10
2025-04-01T06:37:57.669718
{ "authors": [ "anandaroop", "ashfurrow" ], "repo": "artsy/meta", "url": "https://github.com/artsy/meta/pull/20", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1321762308
feat(scanner): filter config with env

Resolves #154. This is a temporary solution that fixes the bug for now; the Scanner still needs a refactor later to untangle the recursive process.

Codecov Report

Merging #155 (1addba3) into master (d9d5ee2) will increase coverage by 0.11%. The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master     #155      +/-   ##
==========================================
+ Coverage   88.82%   88.93%   +0.11%
==========================================
  Files          50       51       +1
  Lines        1083     1094      +11
  Branches      174      177       +3
==========================================
+ Hits          962      973      +11
  Misses        121      121

Impacted Files (Coverage Δ):
- src/loader/impl/config.ts: 88.88% <100.00%> (-1.81%) :arrow_down:
- src/loader/impl/framework_config.ts: 100.00% <100.00%> (ø)
- src/loader/impl/plugin_config.ts: 92.30% <100.00%> (+0.30%) :arrow_up:
- src/loader/utils/config_file_meta.ts: 100.00% <100.00%> (ø)
- src/scanner/scan.ts: 94.85% <100.00%> (+0.27%) :arrow_up:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update d9d5ee2...1addba3. Read the comment docs.
gharchive/pull-request
2022-07-29T05:00:25
2025-04-01T06:37:57.686748
{ "authors": [ "codecov-commenter", "noahziheng" ], "repo": "artusjs/core", "url": "https://github.com/artusjs/core/pull/155", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
592825489
USA only accepts "US"

None of these values work: USA, United States, United States of America.

Hi @jbleyleSF, I will add these as synonyms for "U.S" in the API.

I have added a synonyms system to the API. Please look at this gif. I will close this and the #4 issue.
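A hedged sketch of what such a synonyms system could look like (the map contents and function names are illustrative, not the API's actual code):

// Normalize common spellings to the country code the API expects.
const COUNTRY_SYNONYMS = {
  'usa': 'US',
  'united states': 'US',
  'united states of america': 'US',
};

function normalizeCountry(input) {
  const key = input.trim().toLowerCase();
  return COUNTRY_SYNONYMS[key] || input; // pass known codes through
}

normalizeCountry('United States of America'); // -> "US"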
gharchive/issue
2020-04-02T18:03:36
2025-04-01T06:37:57.709942
{ "authors": [ "arufian", "jbleyleSF" ], "repo": "arufian/LWC-Component-COVID19", "url": "https://github.com/arufian/LWC-Component-COVID19/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
63384690
Add suport for Debian 'the Debian way' This enables deployments using as much debian-vodo as possible and compatible with systemd +1 As a side note - as of 15.04 Ubuntu comes with systemd too. +1
gharchive/pull-request
2015-03-21T10:04:11
2025-04-01T06:37:57.714007
{ "authors": [ "marcdibold", "thinklinux", "xaiki" ], "repo": "arunoda/meteor-up", "url": "https://github.com/arunoda/meteor-up/pull/328", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
137684514
complete translation of UI

The UI should handle:

[ ] index.md in different languages
[ ] show only courses / lessons for selected language
[ ] save / restore state with localStorage (see the sketch below)
[ ] propose lesson in different language if languages is set in YAML:

---
title: Straffespark
languages:
  nn-NO: straffespark.nn.md
  en-GB: penalty.md
---

Ref: https://github.com/kodeklubben/oppgaver/issues/170

Will be fixed in codeclub-viewer
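For the localStorage item in the list above, a minimal sketch (the key name and fallback locale are illustrative, not from the codebase):

// Persist the selected language so it survives page reloads.
function saveLanguage(lang) {
  localStorage.setItem('selectedLanguage', lang);
}

// Restore it on startup, falling back to a default locale.
function restoreLanguage(fallback = 'nn-NO') {
  return localStorage.getItem('selectedLanguage') || fallback;
}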
gharchive/issue
2016-03-01T20:56:39
2025-04-01T06:37:57.716884
{ "authors": [ "arve0" ], "repo": "arve0/codeclub_lesson_builder", "url": "https://github.com/arve0/codeclub_lesson_builder/issues/201", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
498844239
Expand the card by default

As the name suggests, how would you keep a card expanded by default?

You can wrap your ExpandablePanel with an ExpandableNotifier and set initialExpanded to true:

ExpandableNotifier(
  initialExpanded: true,
  child: ExpandablePanel(
    ...
  )
)
gharchive/issue
2019-09-26T12:10:42
2025-04-01T06:37:57.725690
{ "authors": [ "aryzhov", "pspatil16" ], "repo": "aryzhov/flutter-expandable", "url": "https://github.com/aryzhov/flutter-expandable/issues/25", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
808517618
Getting "ERROR: 2 : Invalid sessionkey" when running ./zbx-hpmsa.py

Hello. When trying to test the connection to the MSA using the ./zbx-hpmsa.py lld 10.81.231.151 disks or ./zbx-hpmsa.py full 10.81.231.152 volumes command, I get the following error:

ERROR: 2 : Invalid sessionkey

Could you please help to resolve this? Thank you in advance, Spyros

Issue resolved by using a custom user/password.
gharchive/issue
2021-02-15T12:57:39
2025-04-01T06:37:57.734218
{ "authors": [ "sanemogi" ], "repo": "asand3r/zbx-hpmsa", "url": "https://github.com/asand3r/zbx-hpmsa/issues/38", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
117832534
Mc cleanup @DanielJMaher @birdage , minor fixes: Cleaned up the plot control buttons (got messed up a little on some css changes) Fixed the event class column on the plotting page (now shows up again) Added an info button next to the subscription button (per Eoin) Disabled the selection of different download options for now (per Eoin) 4 might cause us some issues. There are requirements for Json and CSV. But since they are going through ERDDAP at some point, we would be fine. I'm a little worried about doing this, but ithink it might be fine. @DanielJMaher great find that this works now for the download! Just re-enabled it! reviewed with Dan
gharchive/pull-request
2015-11-19T15:05:06
2025-04-01T06:37:57.736880
{ "authors": [ "DanielJMaher", "FBRTMaka" ], "repo": "asascience-open/ooi-ui", "url": "https://github.com/asascience-open/ooi-ui/pull/597", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
714404775
Version 2 unusable: App crashes when pressing +

Since version 2.0.0 the app crashes immediately when you click on + at tasks. I tried to wipe data; it didn't help.

Android version: 9.0
Model: Nokia 3 TA-1032
GMS: manually removed

I am terribly sorry for your experience. However, I cannot reproduce the issue you are having. Can you update to the 2.0.1 release?

2.0.1 has the same problem for me. I will try debugging tomorrow, maybe. By the way, maybe you could add an error catcher showing the exception in an extra activity.

Update: I have reproduced the issue; it seems that this is only present in Android 9.0 Pie. The TwoLineRadioButton component is throwing a NullPointerException.

Okay, is this fixable?

Yes, working on a fix right now. :) Update: will release the update with the hotfix later.

Hotfix is now live in the Releases section :)
gharchive/issue
2020-10-04T21:18:08
2025-04-01T06:37:57.740289
{ "authors": [ "Niwla23", "asayah-san" ], "repo": "asayah-san/fokus-android", "url": "https://github.com/asayah-san/fokus-android/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
230943088
UIActivityIndicatorView always on top left

UIActivityIndicatorView as loadingView is always on the top left: even when it's set with a larger frame, it is always small in the corner. Is there a workaround for this, since the insets method applies to all the state views?

Are you using the latest version from master? Can you post some example code?

3.0. Sure, in my test I am simply giving it a frame and assigning it to the loadingView:

let loading = UIActivityIndicatorView(frame: CGRect(x: 100, y: 100, width: 400, height: 400))
loading.startAnimating()
loadingView = loading

But regardless of the size of the view, it is always in the top left corner.

Hm, please try with the latest master branch; there are some changes in there that did not make it into an official release yet. Are you using a vanilla UIViewController or UITableViewController/UICollectionViewController?

It works perfectly with the master branch! Thank you very much aschuch!

@aschuch Can you release a new version from the master branch which will include these changes?
gharchive/issue
2017-05-24T07:19:27
2025-04-01T06:37:57.743386
{ "authors": [ "Baam25", "aschuch", "erickva" ], "repo": "aschuch/StatefulViewController", "url": "https://github.com/aschuch/StatefulViewController/issues/57", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
802836459
Inconsistent whitespace sensitivity with numbers

If you type the equivalent of "1.2 00" into the test environment, it renders as '1.200' (i.e. one number). This is consistent with doing "12 00", which renders as '1200'. However, if you try "1. 200", you instead get '1. 200' (i.e. two 'numbers'). This raises multiple questions regarding parsing of whitespace and numbers:

1. Is whitespace significant for numbers or not? The current situation appears to be 'depends', which is unhelpful.
2. Is it valid to have something like "12." parse as a number? I would argue that this doesn't really make sense.

Furthermore, even though some specification of the grammar exists, how numbers are parsed is not specified: I had to discover myself that, for example, insignificant leading or trailing zeroes remain. This should likely get fixed too, and it relates to this issue, which is why I mention it here.

The short answer is that whitespace breaks tokens. So "1200" becomes <mn>1200</mn>, while "12 00" becomes <mn>12</mn><mn>00</mn>. So in the MathML it is two numbers, but it's up to the renderer (Firefox, or MathJax if you're using that) how to display that, and it appears they display them without a space. That's not something AsciiMathML controls, so not anything to "fix" here.

Now, interestingly, it does appear "1.2" becomes <mn>1.2</mn> while "12." becomes <mn>12</mn><mo>.</mo>. So currently "12." does not get parsed as a number, but as a number and an operator, and the rendering engine seems to add a space after operators, which is why "1. 200" is appearing to you as two numbers.

I would argue that parsing is actually wrong, and "12." should get parsed as a single number, since an ending decimal point can be used to indicate trailing zeros are significant digits in science fields. So if anything needs fixing, it'd be getting "12." to get parsed as a single number.

@drlippman Thank you for that; it's very useful information. I feel it would help a lot if all this got added to the description of the grammar on the main AsciiMath site.

Feel free to make a PR to the website repo with suggested changes.
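To make the token distinction concrete, here is a hedged illustration in plain regexes (not AsciiMathML's actual grammar) of why "1.2" lexes as one number while "12." currently lexes as a number followed by an operator:

// Current behavior: the fraction part requires digits after the dot.
const strict = /^\d+(\.\d+)?/; // "12." -> matches "12"; "." is left over
// Proposed behavior: allow a bare trailing dot inside the number token.
const greedy = /^\d+\.?\d*/;   // "12." -> matches "12." as one token

console.log('12.'.match(strict)[0]); // "12"  -> <mn>12</mn><mo>.</mo>
console.log('12.'.match(greedy)[0]); // "12." -> a single <mn>
console.log('1.2'.match(strict)[0]); // "1.2" -> already one token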
gharchive/issue
2021-02-07T02:15:37
2025-04-01T06:37:57.768534
{ "authors": [ "drlippman", "kozross" ], "repo": "asciimath/asciimathml", "url": "https://github.com/asciimath/asciimathml/issues/124", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1412766503
Rename path classes

The current path classes (NioPath, RamPath, SubPath, etc.) are a bit ambiguous, as they may be implied to be types of java.nio paths. In reality, they are a wrapper around them instead.

Files currently exist at https://github.com/ascopes/java-compiler-testing/tree/bd99311b0de4508537b11708d5dc5ec6c6c12092/java-compiler-testing/src/main/java/io/github/ascopes/jct/paths

Instead, I'd like to rename them to something more meaningful. I haven't got a final set of names in mind just yet, but something like this might make sense:

PathLike -> PathHolder
NioPath -> PathPathHolder
SubPath -> NestedPathHolder
RamPath -> RamPathHolder

The package should also be renamed accordingly. Examples in the README will need updating as well when this is done.

Closed by f1437b9ff8775212f8af75861754663775b300c9
gharchive/issue
2022-10-18T08:10:18
2025-04-01T06:37:57.797592
{ "authors": [ "ascopes" ], "repo": "ascopes/java-compiler-testing", "url": "https://github.com/ascopes/java-compiler-testing/issues/100", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
224710477
Tutorial? It would be fantastic if a kind of tutorial could be provided in the README file. Currently, it is hard to grasp the expressiveness of RefDiff just by reading the README file Dear Bergel, I have added more info in the README, along with a first version of a tutorial: https://github.com/aserg-ufmg/RefDiff/blob/master/doc/Tutorial1.md We intend to keep improving the docs. Let me know if you have other suggestions. Thanks
gharchive/issue
2017-04-27T08:32:29
2025-04-01T06:37:57.813590
{ "authors": [ "bergel", "danilofes" ], "repo": "aserg-ufmg/RefDiff", "url": "https://github.com/aserg-ufmg/RefDiff/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
180107449
label text animation with animating gauge is not supported

When the gauge starts animating, the text inside the label shows the hard (final) value instead of the value at the current frame of the animation.

I am leaving this requirement altogether. I don't think providing an extra label anywhere in the gauge is useful. The users might put the gauge inside a div, where they have the liberty to put the label anywhere. If I add this feature, I can only support two places: above or below only.
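For anyone wanting the behavior the issue asked for, here is a hedged, framework-agnostic sketch (not angular-gauge's API) of animating a displayed value alongside the needle instead of jumping to the final number:

// Interpolate the label text between the old and new value per frame.
function animateLabel(el, from, to, durationMs = 400) {
  const start = performance.now();
  function frame(now) {
    const t = Math.min((now - start) / durationMs, 1); // progress 0..1
    el.textContent = (from + (to - from) * t).toFixed(1);
    if (t < 1) requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}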
gharchive/issue
2016-09-29T17:05:44
2025-04-01T06:37:57.826464
{ "authors": [ "ashish-chopra" ], "repo": "ashish-chopra/angular-gauge", "url": "https://github.com/ashish-chopra/angular-gauge/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
314732063
Add FSMonitor - macOS app that monitors all changes in the file system http://fsmonitor.com/ Thanks for the suggestion, why don't send a pull request?
gharchive/issue
2018-04-16T16:36:47
2025-04-01T06:37:57.827475
{ "authors": [ "alichtman", "ashishb" ], "repo": "ashishb/osx-and-ios-security-awesome", "url": "https://github.com/ashishb/osx-and-ios-security-awesome/issues/10", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
277534559
Issue when starting Gekko - Windows 10 Installed Gekko using this guide: https://gekko.wizb.it/docs/installation/installing_gekko_on_windows_with_bash_on_windows_10.html Install seems to work ok except the handle.js file is different. Line 53 is different so i modified WAL to DEL any (note the error is the same regardless of whether i modify handle.js or not) When i start using 👍 n**ode gekko --ui** I get the output below: **TAlib is enabled TULIP indicators is enabled Serving Gekko UI on http://localhost:3000/ (node:808) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Exited with code 3** When i then navigate with my browser it seems ot be working except as soon as i start a new live Gekk and then press start i get this: **_events.js:160 throw er; // Unhandled 'error' event ^ Error: SQLITE_IOERR: disk I/O error at Error (native) RECEIVED ERROR IN GEKKO 1943633284398145 Child process has died._** and a popup in my browser saying: GEKKO ERROR: Child process has died. Hi, the problem here is that your SQLITE3 plugin is using WAL journaling mode, which doesn't work well on windows 10 and creates this problem when you try to open the history database for reading. The develop branch has a fix to auto-select the DELETE journaling mode when windows is detected, alternatively change the 'WAL' entry in sqlite.journalMode in your config file to 'DELETE', you will need to delete and reimport your history. However there are other problems you will experience on Windows 10, even with Delete mode the write to the databse will be very slow and data may become corrupt during an import (data will be missing from the end of the import our the journal is corrupted which can use your import to be broken into multiple date ranges). There is a pull request oustanding to address this, #1369 @askmike I think the PR is going to be important for Windows users on BASH. Any chance of prioritizing a review? I did modify WAL to DEL on line 53 of handle.js. Should I have made that DELETE instead? I amended line 53 to read: db.run('PRAGMA journal_mode = ' + config.sqlite.journalMode||'DELETE'); but it is still getting errors. The full output is below: TAlib is enabled TULIP indicators is enabled Serving Gekko UI on http://localhost:3000/ (node:18) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Exited with code 3 <-- GET /favicon.ico --> GET /favicon.ico 404 4ms - <-- GET /api/imports --> GET /api/imports 200 1ms 2b <-- GET /api/gekkos --> GET /api/gekkos 200 1ms 2b <-- GET /api/gekkos --> GET /api/gekkos 200 0ms 2b <-- GET /api/apiKeys --> GET /api/apiKeys 200 0ms 2b <-- GET /api/exchanges --> GET /api/exchanges 200 6,170ms 60.95kb <-- GET /api/configPart/candleWriter <-- GET /api/configPart/performanceAnalyzer <-- GET /api/strategies --> GET /api/configPart/candleWriter 200 11ms 47b --> GET /api/configPart/performanceAnalyzer 200 10ms 144b <-- GET /api/configPart/paperTrader --> GET /api/configPart/paperTrader 200 4ms 132b --> GET /api/strategies 200 46ms 1.76kb <-- POST /api/startGekko Gekko 129180573268402 started --> POST /api/startGekko 200 18ms 143b <-- POST /api/startGekko Gekko 627610306949613 started --> POST /api/startGekko 200 10ms 486b events.js:160 throw er; // Unhandled 'error' event ^ Error: SQLITE_IOERR: disk I/O error at Error (native) RECEIVED ERROR IN GEKKO 627610306949613 Child process has died. 
"disk I/O error" has to do with your harddrive, not much Gekko (or SQLITE) can do about it, try moving Gekko to another disk or partition. On Wed, Nov 29, 2017 at 1:01 PM, welly59 notifications@github.com wrote: I amended line 53 to read: db.run('PRAGMA journal_mode = ' + config.sqlite.journalMode||'DELETE'); but it is still getting errors. The full output is below: TAlib is enabled TULIP indicators is enabled Serving Gekko UI on http://localhost:3000/ (node:18) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Exited with code 3 <-- GET /favicon.ico --> GET /favicon.ico 404 4ms - <-- GET /api/imports --> GET /api/imports 200 1ms 2b <-- GET /api/gekkos --> GET /api/gekkos 200 1ms 2b <-- GET /api/gekkos --> GET /api/gekkos 200 0ms 2b <-- GET /api/apiKeys --> GET /api/apiKeys 200 0ms 2b <-- GET /api/exchanges --> GET /api/exchanges 200 6,170ms 60.95kb <-- GET /api/configPart/candleWriter <-- GET /api/configPart/performanceAnalyzer <-- GET /api/strategies --> GET /api/configPart/candleWriter 200 11ms 47b --> GET /api/configPart/performanceAnalyzer 200 10ms 144b <-- GET /api/configPart/paperTrader --> GET /api/configPart/paperTrader 200 4ms 132b --> GET /api/strategies 200 46ms 1.76kb <-- POST /api/startGekko Gekko 129180573268402 started --> POST /api/startGekko 200 18ms 143b <-- POST /api/startGekko Gekko 627610306949613 started --> POST /api/startGekko 200 10ms 486b events.js:160 throw er; // Unhandled 'error' event ^ Error: SQLITE_IOERR: disk I/O error at Error (native) RECEIVED ERROR IN GEKKO 627610306949613 Child process has died. — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/askmike/gekko/issues/1375#issuecomment-347761666, or mute the thread https://github.com/notifications/unsubscribe-auth/AA7MD77M442fCeVe1Y3dAMUu7iwrCyVgks5s7PM0gaJpZM4Qt5Y5 . -- PGP key at keybase.io/mikevanrossum https://keybase.io/mikevanrossum/key.asc @askmike This is one of the problems I received when trying to sort out corruption problems of my SQLITE3 setup on Windows on BASH. It may not be the disk in this case. @welly59 Delete your history files, then merge the PR #1369 and let me know if that addresses your issue. cmroche, would you mind providing instructions on how to merge the PR? @welly59 If you are using GitKraken I think you can right click on the PR (in the PR view) and it will give you an option to merge it to your current branch. If you sync to the develop branch you should be able to do this without any conflicts. I will give that a try. I installed using git via bash so a cmd line method would be ideal ok i think i have done correctly. I am not getting disk i/o error anymore and have set up 2 paper traders. I'll report back once they have run for a bit How long does a typical import take? I've tried various imports for just the last day or two, and none successfully go through. What's the best way to troubleshoot? ok i/o error solved but now failing to load getCandles. It keeps spawning node processes until my machine dies. Ive installed ubuntu in a VM instead which is working fine Got this same error running gekko on bash on windows, on two different computers. Switching to the develop branch seems to work for now.
gharchive/issue
2017-11-28T20:54:22
2025-04-01T06:37:57.869358
{ "authors": [ "OterLabb", "askmike", "cmroche", "patjk", "welly59" ], "repo": "askmike/gekko", "url": "https://github.com/askmike/gekko/issues/1375", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
378408761
Unable to Backtest on Ubuntu

Note: this is the technical bug tracker; please use other platforms for getting support and starting a (non-technical) discussion. See the getting help page for details.

I'm submitting a ...
[X] bug report
[ ] question about the decisions made in the repository

Action taken: tried backtesting a dataset.
Expected result: would have shown the backtest results.
Actual result: TypeError: Cannot read property 'to' of undefined.

Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, e.g. stackoverflow, etc.)

Error:

from: r.to,
        ^
TypeError: Cannot read property 'to' of undefined
at _.map.r (/root/gekko/core/tools/dateRangeScanner.js:117:25)
at Function.map (/root/gekko/node_modules/lodash/dist/lodash.js:3509:27)
at async.whilst (/root/gekko/core/tools/dateRangeScanner.js:115:15)
at /root/gekko/node_modules/async/dist/async.js:988:16
at Object.whilst (/root/gekko/node_modules/async/dist/async.js:5092:25)
at async.parallel (/root/gekko/core/tools/dateRangeScanner.js:68:13)
at /root/gekko/node_modules/async/dist/async.js:3853:9
at /root/gekko/node_modules/async/dist/async.js:484:16
at iterateeCallback (/root/gekko/node_modules/async/dist/async.js:1013:24)
at /root/gekko/node_modules/async/dist/async.js:988:16
at /root/gekko/node_modules/async/dist/async.js:3850:13
at apply (/root/gekko/node_modules/async/dist/async.js:41:25)
at /root/gekko/node_modules/async/dist/async.js:76:12
at Statement. (/root/gekko/plugins/sqlite/reader.js:135:5)

Tried on Gekko UI. Haven't tried the command line.

More findings: I retried backtesting and it worked once, and then when I tried again it gave the error below:

Error: non-error thrown: Child process has died.
at Object.onerror (/root/gekko/node_modules/koa/lib/context.js:105:40)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:189:7)

Please note I am using the default RSI strategy to backtest, with only modifications in the TOML file. I restarted my bot, and after 2 days it crashed again with the same error. It happens all the time.

I'm having the same issue running on a Raspberry Pi 3 with npm v6.4.1 and Node.js 8.12.0. Here's the error:

<-- POST /api/scansets
/home/qwe/gekko/core/tools/dateRangeScanner.js:117
from: r.to,
        ^
TypeError: Cannot read property 'to' of undefined
at _.map.r (/home/qwe/gekko/core/tools/dateRangeScanner.js:117:25)
at Function.map (/home/qwe/gekko/node_modules/lodash/dist/lodash.js:3509:27)
at async.whilst (/home/qwe/gekko/core/tools/dateRangeScanner.js:115:15)
at /home/qwe/gekko/node_modules/async/dist/async.js:988:16
at Object.whilst (/home/qwe/gekko/node_modules/async/dist/async.js:5092:25)
at async.parallel (/home/qwe/gekko/core/tools/dateRangeScanner.js:68:13)
at /home/qwe/gekko/node_modules/async/dist/async.js:3853:9
at /home/qwe/gekko/node_modules/async/dist/async.js:484:16
at iterateeCallback (/home/qwe/gekko/node_modules/async/dist/async.js:1013:24)
at /home/qwe/gekko/node_modules/async/dist/async.js:988:16
at /home/qwe/gekko/node_modules/async/dist/async.js:3850:13
at apply (/home/qwe/gekko/node_modules/async/dist/async.js:41:25)
at /home/qwe/gekko/node_modules/async/dist/async.js:76:12
at Statement. (/home/qwe/gekko/plugins/sqlite/reader.js:135:5)
--> POST /api/scansets 200 6,668ms 254b

This happens as soon as I try to scan datasets.

Me too :(
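The thread never isolates a root cause (switching branches helped some users), but as a hedged sketch only, and not the project's actual fix: the crash is a map over scan results at dateRangeScanner.js:117 where an entry is undefined, so a defensive filter before building the ranges would at least avoid the TypeError, though not the underlying cause:

// Sketch: skip undefined/incomplete scan results before mapping.
const ranges = _.map(
  _.filter(results, (r) => r && r.to !== undefined),
  (r) => ({ from: r.to /* ... */ })
);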
gharchive/issue
2018-11-07T18:14:39
2025-04-01T06:37:57.881031
{ "authors": [ "pitbullgti", "sahni619", "thecccut" ], "repo": "askmike/gekko", "url": "https://github.com/askmike/gekko/issues/2646", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
351899830
Stable

What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)
Feature.

What is the current behavior? (You can also link to an open issue here)
When the gekko service is restarted (manually), we lose the live gekkos (they come back empty).

What is the new behavior (if this is a feature change)?
When the service is stopped (i.e. for a git pull) and then restarted, Gekko recovers all running gekkos and starts running them again.

Other information:

Hey, it seems something went wrong with this PR. Just to be sure: a PR is a pull request which has some new code that you'd like to add to the project.

"When the service is stopped (i.e. for a git pull) and then restarted, Gekko recovers all running gekkos and starts running them." I can't find code for this in the changes, but I do find a lot of other stuff. Could you clarify what you were trying to do?

Closing this; feel free to comment and answer the questions above if you want to reopen.
gharchive/pull-request
2018-08-19T12:38:46
2025-04-01T06:37:57.884886
{ "authors": [ "askmike", "mariodantas" ], "repo": "askmike/gekko", "url": "https://github.com/askmike/gekko/pull/2460", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1594809031
[Confluence Plugin] Add info endpoint in plugin

This change adds a new /info endpoint that can be called by any logged in user. By default it returns the following fields:

- User info: returns information regarding the logged in user, such as name, key, email, isAdmin, etc.
- Instance info: returns information regarding the Confluence Server instance, such as version, baseUrl, etc.
- Plugin info: returns information regarding the Scio plugin, such as target, version, lastResponseCode, lastSuccessTime, lastFailureTime, etc.

[x] Test locally
[x] Test on actual instance

Latest changes LGTM.
gharchive/pull-request
2023-02-22T10:02:51
2025-04-01T06:37:57.887379
{ "authors": [ "bharath-bhushan-glean", "sriram-vudayagiri-glean" ], "repo": "askscio/atlassian-plugins", "url": "https://github.com/askscio/atlassian-plugins/pull/19", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
63552877
Allow more expressive functions to be given as input in operator tables for buildExpressionParser

Currently, the Operator constructors used for building expression parsers are required to be pure functions wrapped in the ParsecT monad, i.e.:

Infix (ParsecT s u m (a -> a -> a)) Assoc
Prefix (ParsecT s u m (a -> a))
Postfix (ParsecT s u m (a -> a))

This however limits what can be done with the operators; for example, it is not possible to perform additional checks of the parsed argument under the ParsecT monad and then fail using fail or unexpected, or to build up new information in the complete term using the parsed sub-terms.

I would therefore like to ask for a feature request that allows taking more expressive functions, where only the resulting term needs to be in the ParsecT monad. In order to not break backwards compatibility, one could imagine having an M postfix on each new constructor, so the following constructors are added:

InfixM (a -> a -> ParsecT s u m a) Assoc
PrefixM (a -> ParsecT s u m a)
PostfixM (a -> ParsecT s u m a)

I would happily submit a PR if the feature request is considered.

@ahmadsalim, this is interesting. We're considering replacing the old constructors with something like this in Megaparsec. You can discuss it here: mrkkrp/megaparsec#22.
gharchive/issue
2015-03-22T16:57:16
2025-04-01T06:37:57.890372
{ "authors": [ "ahmadsalim", "mrkkrp" ], "repo": "aslatter/parsec", "url": "https://github.com/aslatter/parsec/issues/32", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1369637455
js_binary + ESM + hermeticity

Assuming something like this in a BUILD file:

load("@aspect_rules_ts//ts:defs.bzl", "ts_project")
load("@aspect_rules_js//js:defs.bzl", "js_binary")
load("@aspect_rules_swc//swc:defs.bzl", "swc_transpiler")
load("@bazel_skylib//lib:partial.bzl", "partial")

ts_project(
    name = "src",
    srcs = [
        "src.ts",
    ],
    declaration = True,
    transpiler = partial.make(
        swc_transpiler,
        source_maps = "true",
        swcrc = "//:swcrc_esm",
    ),
    tsconfig = "//:tsconfig",
    deps = [
        "//:node_modules/@types/node",
    ],
)

js_binary(
    name = "src_bin",
    entry_point = ":src.js",
)

And assuming that the transpilation of src.ts makes an ESM bundle in bazel-dist, running the js_binary fails because the generated file is src.js rather than src.mjs, and Node doesn't know to run it as "type": "module". In other words, bazel run //src:src_bin will fail:

(node:92736) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.

However, if in the source tree I update the package.json file to have "type": "module", the js_binary will now run.

I'm not sure if this is a bug or a question, but I'm surprised that a change to the package.json file in the source tree affects the js_binary. I assumed (perhaps incorrectly) that there would be a sandbox for the node process underpinning js_binary, and that it wouldn't have access to a package.json in the source tree. Am I missing something?

Ah, I see. Happy to close this if it's a dupe... wdyt?
gharchive/issue
2022-09-12T10:22:26
2025-04-01T06:37:57.900848
{ "authors": [ "paullewis" ], "repo": "aspect-build/rules_js", "url": "https://github.com/aspect-build/rules_js/issues/446", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
261877945
Update descriptions for API docs Fixes #2197 This has the smell :nose: of an auto-generated document. If so, plz close. If not, this fills in the MIA descriptions. @guardrex, Thanks for having already signed the Contribution License Agreement. Your agreement was validated by .NET Foundation. We will now review your pull request. Thanks, .NET Foundation Pull Request Bot Looks like it's covered by https://github.com/aspnet/ApiDocs/pull/23.
gharchive/pull-request
2017-09-30T22:51:59
2025-04-01T06:37:57.911049
{ "authors": [ "dnfclas", "guardrex" ], "repo": "aspnet/ApiDocs", "url": "https://github.com/aspnet/ApiDocs/pull/29", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
523012456
Hosted services package reference Fixes #15649 I think we can merely call out that the package is added/present for the Worker SDK and implicit for the Web SDK. Strip out the bits on the Web Host and Generic Host pair of samples. Those went away recently (3.0 updates era), so those bits no longer apply. How about that :point_up:?
gharchive/pull-request
2019-11-14T17:50:57
2025-04-01T06:37:57.912214
{ "authors": [ "guardrex" ], "repo": "aspnet/AspNetCore.Docs", "url": "https://github.com/aspnet/AspNetCore.Docs/pull/15692", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
464854984
Blazor Server-Side: API controllers from another assembly do not get mapped I'm running a Blazor server-side application on the latest .NET Core preview 6. When I move the API controllers to another referenced assembly, the controllers do not get mapped to any route at runtime. It works when I move them back to the host, or when I remove all the Blazor server hosting pieces. Thank you for filing this issue. In order for us to investigate this issue, please provide a minimalistic repro project that illustrates the problem. I have provided a sample here: https://1drv.ms/u/s!AvfU-VGBgXuLg-o4LRpQynSJvInKcA?e=Pb00ls There is an API project with the moved controller. By loading the FetchData page you'll see that the controller is not found. By removing the external controller and activating the original one in the server project, you'll see that the FetchData page works. We're closing this as this seems to be a dupe of https://github.com/aspnet/AspNetCore/issues/11921
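For anyone landing here before reading the linked dupe: the usual workaround is to register the referenced assembly as an application part so MVC discovers its controllers. A minimal sketch, assuming a host-project Startup and a hypothetical Api.SomeApiController type in the external assembly (the names are illustrative, not from this issue):
public void ConfigureServices(IServiceCollection services)
{
    // Explicitly add the referenced assembly as an application part
    // so its controllers are included in controller discovery.
    services.AddMvc()
        .AddApplicationPart(typeof(Api.SomeApiController).Assembly);
}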
gharchive/issue
2019-07-06T13:34:50
2025-04-01T06:37:57.914990
{ "authors": [ "endeffects", "mkArtakMSFT" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/11937", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
467609947
StreamId map should be cleared when connection is stopped Currently we don't clear the internal dictionary mapping stream IDs to active streams when the client disconnects. It's a silent issue because we will overwrite any stale values with new streams, but regardless, we should be clearing out the map on connection close. And removing on stream closes? We could. We'd have to have another map though, correlating invocation IDs to their stream IDs. So we'd have to push the invocation ID into the checkUploadStream logic where we allocate the stream IDs so we could group them together. I don't feel too strongly about this tbh. Sure, but it's technically a memory leak on the client. *sighs in agreement* Consider for preview 8, relatively low priority.
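To illustrate the shape of the fix being discussed (a hypothetical sketch, not the actual client internals), the idea is simply that whatever holds the stream-ID map also clears it when the connection closes:
// Hypothetical sketch of the connection internals under discussion.
private readonly Dictionary<string, object> _streams = new Dictionary<string, object>();
private readonly object _streamLock = new object();

private void HandleConnectionClose()
{
    lock (_streamLock)
    {
        // Drop all streamId -> stream mappings so a stopped connection
        // does not keep completed upload streams alive.
        _streams.Clear();
    }
}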
gharchive/issue
2019-07-12T21:36:49
2025-04-01T06:37:57.917029
{ "authors": [ "BrennanConroy", "anurse", "mikaelm12" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/12135", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
478427422
Blazor - JavaScript interop in a thread started from Invoke() breaks. Hi, if you try to change the page's HTML from any function/thread started from Invoke(), it breaks the page with no errors. e.g.
public void ButtonClicked()
{
    // do stuff...
    Invoke(() =>
    {
        this.StateHasChanged();
        UpdateHTMLDOM(); // <---------- this breaks the entire page.
    });
}

private async void UpdateHTMLDOM()
{
    await JSRuntime.InvokeAsync<string>("CreateImage", "myImageString.png");
}
Can you provide a minimal example that reproduces the problem? Given the default project template, what is the minimal set of steps needed to update it to show the problem you're describing? Using the default "Server-side Razor" project template from the Visual Studio 2019 preview version, with .NET Core SDK version "3.0.0-preview6-27804-01". Replace the counter.razor code with the following:
@page "/counter"
@inject IJSRuntime JSRuntime
<h1>Counter</h1>
<p>Current count: @currentCount</p>
<button @onclick="@ButtonClicked">Increment Counter.</button>
@code {
    private int currentCount = 0;
    private string Title = "";
    bool PreventRenderLoop = false;

    protected override void OnAfterRender()
    {
        if (PreventRenderLoop) return; // stop the page from entering a render loop.
        Task getDataTask = Task.Run(GetData);
        getDataTask.Wait();
    }

    public async void GetData()
    {
        // demo: just load some variables.
        Title = "Hello World"; // e.g. data = MyService.LoadData();
        PreventRenderLoop = true;
        // refresh the page after loading new values
        Invoke(() =>
        {
            this.StateHasChanged();
            // now the page has the new values loaded from the service
            // we want to display the DOM which is relevant
            Task.Run(DisplayDOM).Wait();
        });
    }

    public async Task DisplayDOM()
    {
        currentCount = -1;
        await JSRuntime.InvokeAsync<string>("Notify", "Successfully displayed dom!", "success");
    }

    public void ButtonClicked()
    {
        currentCount++;
    }
}
And the code for the JavaScript notification (which goes in wwwroot/js and is referenced in _Host.cshtml) is:
function Notify(pStrMessage, pStrMode) {
    alert(pStrMessage + " _ " + pStrMode);
}
If you run the above code, when you click the button to increment the counter, nothing happens. I resolved this by changing it to this:
@page "/counter"
@inject IJSRuntime JSRuntime
<h1>Counter</h1>
<p>Current count: @currentCount</p>
<button @onclick="@ButtonClicked">Increment Counter.</button>
@code {
    private int currentCount = 0;
    private string Title = "";
    bool PreventRenderLoop = false;

    protected override void OnAfterRender()
    {
        if (PreventRenderLoop) return; // stop the page from entering a render loop.
        Task getDataTask = Task.Run(GetData);
        getDataTask.Wait();
    }

    public async void GetData()
    {
        // demo: just load some variables.
        Title = "Hello World"; // e.g. data = MyService.LoadData();
        PreventRenderLoop = true;
        // refresh the page after loading new values
        await Invoke(() =>
        {
            this.StateHasChanged();
        });
        // now the page has the new values loaded from the service
        // we want to display the DOM which is relevant
        Task domTask = Task.Run(DisplayDOM);
        await domTask;
    }

    public async Task DisplayDOM()
    {
        currentCount = -1;
        await JSRuntime.InvokeAsync<string>("Notify", "Successfully displayed dom!", "success");
    }

    public void ButtonClicked()
    {
        currentCount++;
    }
}
I notice now that I set the PreventRenderLoop boolean and never unset it, so the OnAfterRender() function is constantly blocked. Could that cause the rest of the program to stop working? If so, why does the above code allow the counter to work? Cheers! Richard Thanks for contacting us, @Bambofy.
It's not clear what you're trying to do, as it can most probably be done more simply than this. Give us some more context so that we are able to guide you better. I'm trying to load data from a service in the OnAfterRender function and then refreshing the page to update the values. Why use OnAfterRender? Typically the pattern for loading data asynchronously in Blazor looks like this:
@if (myData == null)
{
    <div>something to show while data is loading</div>
}
else
{
    <div>show your data here</div>
}
@code {
    private MyData myData;

    protected override async Task OnInitializedAsync()
    {
        myData = await LoadMyData();
    }
}
This pattern will render twice, once to show a "loading" UI and then again after the LoadMyData() method completes. Do you have something special that you're trying to do here? Hey, I did not find the root cause of the original problem, to be honest! The way I'm making it work at the moment is like this:
private static Type AccountType = null;
private static string EmailHash = "";
private static string PasswordHash = "";

protected override async Task OnAfterRenderAsync()
{
    // Ensure user is authenticated.
    if (AccountType == null)
    {
        await Authenticate();
        // Show new data.
        await Invoke(() =>
        {
            this.StateHasChanged();
        });
    }
    await JSRuntime.InvokeAsync<string>("WebpageReady");
}

protected async Task Authenticate()
{
    /*
     * This authorization check can potentially be overridden, since this code is client-side.
     * Therefore we only use this client-side authorization check to render the page's DOM.
     */
    EmailHash = await JSRuntime.InvokeAsync<string>("GetSessionStorage", "EMAIL");
    PasswordHash = await JSRuntime.InvokeAsync<string>("GetSessionStorage", "PASSWORD");
    // Get the account type.
    AccountType = AccountService.GetAccountType(EmailHash, PasswordHash);
}
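For later readers, a minimal sketch of the guarded load-then-refresh pattern discussed above, written against the preview-era Invoke API used in this thread (later renamed InvokeAsync); LoadDataAsync is a hypothetical placeholder for the actual service call:
private bool _loaded;

protected override void OnAfterRender()
{
    if (_loaded) return; // prevent a render loop
    _loaded = true;

    _ = Task.Run(async () =>
    {
        await LoadDataAsync();         // hypothetical data call
        await Invoke(StateHasChanged); // marshal the re-render back to the renderer
    });
}
The key difference from the original repro is that the flag is set immediately and only guards the reload; it never blocks other work happening in OnAfterRender.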
gharchive/issue
2019-08-08T12:01:53
2025-04-01T06:37:57.926123
{ "authors": [ "Bambofy", "SteveSandersonMS", "mkArtakMSFT", "rynowak" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/12969", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
499257727
Visual Studio 2019 Preview missing Blazor template Clean new install of Visual Studio 2019 Preview with all updates. Create a Blazor Server app; .razor files are uncolored in the editor. The Visual Studio Installer dialog comes up saying it is already running; after closing the dialog, nothing happens. Starting the project gives "Missing ASP.NET Core module for hosting … IIS Express". On a second machine everything works fine. Thanks for contacting us, @hannespreishuber. This is not something runtime related. Please use VS Feedback to report this issue so the appropriate team can investigate this further.
gharchive/issue
2019-09-27T06:21:39
2025-04-01T06:37:57.929122
{ "authors": [ "hannespreishuber", "mkArtakMSFT" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/14498", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
392384106
Don't swallow IHostedService.StartAsync exceptions As discussed briefly in https://github.com/aspnet/Hosting/issues/1085#issuecomment-305717425 I propose that the HostedServiceExecutor should not catch exceptions in StartAsync. By swallowing exceptions, the hosting layer forces all services to be optional, which is undesired IMO. By not swallowing exceptions we give each service the option to decide this on its own. If a service is truly optional, it can swallow exceptions in its own StartAsync code. I also think that StartAsync exceptions should not be combined into an AggregateException. Instead, it should just stop on the first exception. One thing to consider is that right now this code is executed AFTER the server has been started. I'm not sure what effects this has when exceptions are no longer caught. I still hold the opinion that hosted services should be started before the server is started (a service could always decide to kick off another task in its StartAsync if it wants to be delayed), but this has also been discussed briefly in #1085 and declined. Workaround for the 2.x version:
public static class ServiceCollectionUtils
{
    public static void AddCriticalHostedService<TService, TImplementation>(this IServiceCollection services)
        where TService : class
        where TImplementation : class, IHostedService, TService
    {
        services.AddSingleton<TService, TImplementation>();
        services.AddHostedService<CriticalHostedServiceWrapper<TService>>();
    }

    private class CriticalHostedServiceWrapper<TService> : IHostedService
    {
        private readonly IHostedService _hostedService;
        private readonly ILogger _logger;

        public CriticalHostedServiceWrapper(TService hostedService, ILoggerFactory loggerFactory)
        {
            _hostedService = (IHostedService)hostedService;
            _logger = loggerFactory.CreateLogger(_hostedService.GetType());
        }

        public async Task StartAsync(CancellationToken cancellationToken)
        {
            try
            {
                await _hostedService.StartAsync(cancellationToken);
            }
            catch (Exception e)
            {
                _logger.LogCritical(e, "Cannot start critical service. The application will exit.");
                Environment.Exit(1);
            }
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            return _hostedService.StopAsync(cancellationToken);
        }
    }
}
Looks like the behavior is still the same in 3.0. Update to use the generic host? It works with the generic host. Damn, I'm so far behind...
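To make the resolution concrete, here is a small sketch of the generic-host setup where a throwing StartAsync is no longer swallowed; ThrowingService is a hypothetical service, not from this thread:
public class ThrowingService : IHostedService
{
    public Task StartAsync(CancellationToken cancellationToken)
        => throw new InvalidOperationException("Required dependency missing.");

    public Task StopAsync(CancellationToken cancellationToken)
        => Task.CompletedTask;
}

public static async Task Main(string[] args)
{
    // With the generic host, the StartAsync exception propagates out of
    // RunAsync instead of being silently logged and ignored.
    await new HostBuilder()
        .ConfigureServices(services => services.AddHostedService<ThrowingService>())
        .Build()
        .RunAsync();
}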
gharchive/issue
2017-07-31T08:09:51
2025-04-01T06:37:57.933306
{ "authors": [ "AsValeO", "cwe1ss", "davidfowl", "srollinet" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/5900", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
409134326
Remove CodeGenerator metadata in default service reference experience The aim here is to remove the current requirement to specify the CodeGenerator metadata on <Service...Reference> (soon <OpenApiReference> and <ProtobufReference>) items in a project file. Some metadata must remain to say "give me TypeScript" (instead of the project's default language), but this should be the exception and not the rule. Fortunately, the type of document is going to be explicit (again, <OpenApiReference> and <ProtobufReference>) and we won't have to choose among multiple tools. Probably best to make TargetLanguage the main override we provide in this area. (@glennc agreed?) This change will require our Open API code generation partners (NSwag, AutoRest, ...) to either all implement the same target name or include a well-known property specifying their target's prefix. Either way, the last file that declares a target or updates a property wins. So, unless our partners object, let's avoid the indirection and go with a single Open API target name. (gRPC doesn't have multiple partners for the foreseeable future and should be fine with a single target for that scenario.) @glennc I put this in Preview4. Does that make sense to you? @mkArtakMSFT the "Small" cost here doesn't mean calendar time, due to the need for NSwag changes. From discussion today: the Generator tag isn't required by default. There will be a language property a customer can add, with the values "Default" and "TypeScript", where Default means whatever extension/language the project is for and TypeScript means TypeScript. Partially addressed in ce8f053af7e9. Will add language metadata soon. After offline discussion with @glennc, we've decided to close this issue and await customer input on the feature's overall usability
gharchive/issue
2019-02-12T06:01:27
2025-04-01T06:37:57.937688
{ "authors": [ "dougbu", "glennc" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/7491", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
414167433
Assembly Load Context throws errors in Blazor App 0.8 Describe the bug When trying to load an assembly using AssemblyLoadContext.Default, a NotImplementedException is thrown. The static property Default and its other static methods don't seem to have any implementations. To Reproduce Steps to reproduce the behavior:
Download the sample code from: fred-perkins/BlazorLoading
Run the application
Select Fetch Assembly in the left panel
Click the "Fetch Assembly using AssemblyLoadContext" button
Observe the error in the console: WASM: [System.NotImplementedException] The method or operation is not implemented.
Expected behavior AssemblyLoadContext should attempt to load the DLL in the default context. Additional context We need to be able to dynamically load libraries one at a time within Blazor. Using Assembly.Load to read from bytes presents a problem where assemblies can be loaded several times, which causes an InvalidCastException to be thrown when resolving a type from one of those libraries. Using AssemblyLoadContext it is possible to avoid this; however, it seems unsupported when running in WebAssembly on Blazor, although it functions within a console application. This is likely a Mono issue: looking at Mono, it seems there are two implementations of the class: the AssemblyLoadContext facade and AssemblyLoadContext itself. @danroth27 - We spoke about this a week ago (part of a Derivco call). This is blocking us from migrating our framework to Blazor for our POC, as we need to be able to dynamically load libraries in the background. Is there a known alternative we could use other than Assembly.Load, which doesn't solve our problem? @fred-perkins Thanks for sharing this feedback with us! @SteveSandersonMS @lewing Thoughts on this one? Do we have any support currently for dynamically loading .NET assemblies in a Blazor app? Is this really all part of supporting lazy loading of application areas? Could you clarify what the problem with Assembly.Load is? If your code caches the results of loading the assembly, is that not sufficient to ensure you only load each one once? @SteveSandersonMS / @danroth27 So after a bit of testing I got Assembly.Load(byte[]) to work for me as I wanted. I think there was a misunderstanding about how Assembly.Load varies compared to the API on .NET Core, where a byte-array-loaded assembly is anonymous and causes issues where assemblies would be loaded twice. This occurred when one assembly was loaded manually and a second time when another assembly used a type from it as a dependency, which results in an InvalidCastException from the duplicate types. This means that to work around the issue I've had to use AssemblyLoadContext in .NET Core and Assembly.Load for the WebAssembly portion. Ideally I'd prefer to use the AssemblyLoadContext API across the board so I don't have to write fallback code in my .NET Standard libraries like the following:
public Assembly LoadFromStream(byte[] assemblySource)
{
    if (AssemblyLoadContext.Default != null)
    {
        return AssemblyLoadContext.Default.LoadFromStream(new MemoryStream(assemblySource));
    }
    return Assembly.Load(assemblySource);
}
I did have a dig through the Mono source code in the end. It looks like they don't fully support the AssemblyLoadContext API, and where they have partial support it only works when you specify the assembly name or file path; the LoadFromStream overload throws a not-supported exception as mentioned above. Thanks for the update, @fred-perkins! I'm glad you have a workable solution for now.
Longer term, any inconsistencies between .NET Core and Mono are definitely worth reporting. In this specific case it sounds like Mono ideally would support more of AssemblyLoadContext, and might even want tweaks to ensure that Assembly.Load behaves the same as it does on .NET Core (even if that behavior isn't what you wanted in your scenario, it should still be consistent). If you were able to report any such API gaps or inconsistencies to http://github.com/mono/mono that would be really helpful. I'll close this as external since it's really about the underlying runtime, not about Blazor. @SteveSandersonMS Awesome, thanks for the reply. I'll get a ticket raised with the Mono guys :) @fred-perkins Can you link to the related mono issue? I'm struggling to properly resolve a type existing in Assembly A that depends on Assembly B. Both are loaded into the default context already, but I'm looking to ensure A uses the type from B, rather than what's already loaded.
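Steve's caching suggestion, sketched out for reference (the string key is an assumption here; a real implementation might key on the assembly name parsed from the bytes instead):
private static readonly ConcurrentDictionary<string, Assembly> _assemblyCache
    = new ConcurrentDictionary<string, Assembly>();

public static Assembly LoadOnce(string key, byte[] assemblyBytes)
{
    // Loading the same bytes twice yields two distinct Assembly instances,
    // which is what produces the InvalidCastException on shared types.
    // Caching ensures each logical assembly is loaded exactly once.
    return _assemblyCache.GetOrAdd(key, _ => Assembly.Load(assemblyBytes));
}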
Longer term, any inconsistencies between .NET Core and Mono are definitely worth reporting. In this specific case it sounds like Mono ideally would support more of AssemblyLoadContext, and might even want tweaks to ensure that Assembly.Load behaves the same as it does on .NET Core (even if that behavior isn't what you wanted in your scenario, it should still be consistent). If you were able to report any such API gaps or inconsistencies to http://github.com/mono/mono that would be really helpful. I'll close this as external since it's really about the underlying runtime, not about Blazor. @SteveSandersonMS Awesome, thanks for the reply. I'll get a ticket raised with the Mono guys :) @fred-perkins Can you link to the related mono issue? I'm struggling to properly resolve a type existing in Assembly A, that depends on Assembly B. Both are loaded into the default context already, but I'm looking to ensure A uses the type from B, rather than what's already loaded.
gharchive/issue
2019-02-25T15:41:01
2025-04-01T06:37:57.949558
{ "authors": [ "SteveSandersonMS", "danroth27", "fred-perkins", "tylerhartwig" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/7917", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
486612056
Add a FetchHttpClient Fixes https://github.com/aspnet/AspNetCore/issues/9444 Taking over PR #12599, I'll try to keep the two original commits when merging. Bueller
gharchive/pull-request
2019-08-28T21:13:11
2025-04-01T06:37:57.951457
{ "authors": [ "BrennanConroy" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/pull/13524", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
493980448
Remove Newtonsoft JSON.NET from Blazor WASM template Addresses #13422 @adrianwright109 could you cherry-pick your commit onto the release/3.1 branch? Closing this PR in favour of #14061, which correctly targets the release/3.1 branch.
gharchive/pull-request
2019-09-16T11:00:41
2025-04-01T06:37:57.953028
{ "authors": [ "adrianwright109", "pranavkm" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/pull/14028", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
42945280
Harden trigger and callback registrations in CacheEntry There are possible race conditions in CacheEntry between AttachTriggers and DetachTriggers. The same goes for the callback APIs. These need to be hardened to prevent potential stress/race crashes. Every call to InvokeEvictionCallbacks is wrapped in the ReaderWriterLock in MemoryCache, so it looks impossible to invoke it more than once. #126 and #132
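A generic illustration of the hardening pattern being referenced, not the actual CacheEntry code: attach/detach share one lock, and detach swaps the registration list before disposing so a concurrent attach never observes a half-cleared state.
private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
private List<IDisposable> _triggerRegistrations = new List<IDisposable>();

public void DetachTriggers()
{
    _lock.EnterWriteLock();
    try
    {
        // Swap-and-dispose under the write lock so races with attach
        // cannot double-dispose or leak a registration.
        var registrations = _triggerRegistrations;
        _triggerRegistrations = new List<IDisposable>();
        foreach (var registration in registrations)
        {
            registration.Dispose();
        }
    }
    finally
    {
        _lock.ExitWriteLock();
    }
}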
gharchive/issue
2014-09-16T23:18:14
2025-04-01T06:37:57.957907
{ "authors": [ "BrennanConroy", "Tratcher" ], "repo": "aspnet/Caching", "url": "https://github.com/aspnet/Caching/issues/17", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
159244617
new topic RUNNING ASP.NET CORE APPLICATIONS WITH IIS AND ANTARES (AZURE WEBSITES) [ ] New article focused on Azure websites. Get info from RUNNING ASP.NET CORE APPLICATIONS WITH IIS AND ANTARES (AZURE WEBSITES) https://docs.asp.net/en/latest/publishing/iis.html could also include some of these details. @Tratcher There are a few nice troubleshooting adds for sure. I'll open an issue and make a list, including talking about the publish-iis tooling, which still isn't in that doc. Given that people assume it actually publishes apps to IIS, yeah, it probably should get a few words.
gharchive/issue
2016-06-08T18:55:59
2025-04-01T06:37:57.960384
{ "authors": [ "GuardRex", "Rick-Anderson", "Tratcher" ], "repo": "aspnet/Docs", "url": "https://github.com/aspnet/Docs/issues/1358", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
206393452
aspnetcore/hosting/directory-structure.md https://docs.microsoft.com/en-us/aspnet/core/hosting/directory-structure fixed by https://github.com/aspnet/Docs/commit/b6ff637c34bca1eba67c5a1cb7a4892ca201edc9
gharchive/issue
2017-02-09T03:24:41
2025-04-01T06:37:57.961771
{ "authors": [ "Rick-Anderson" ], "repo": "aspnet/Docs", "url": "https://github.com/aspnet/Docs/issues/2709", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
333556073
Missing "Select Department" line of code, or, code description is inaccurate. There appears to be one line of code missing in the following section: https://docs.microsoft.com/en-us/aspnet/core/data/ef-rp/update-related-data?view=aspnetcore-2.1#update-the-courses-edit-page Under the section that instructs us to "Update Pages/Courses/Edit.cshtml" the instructions state that the "preceding markup" has added a "Select Department" statement to the pull-down list. However, it appears that the line of code to add the statement is left out in the example. On the other hand, it may have been intentionally left out for editing section as a Department would have already been selected when the Course was created. If this is the case, then the statement describing what the preceding code has added should not include the statement: "Adds the "Select Department" option. This change renders "Select Department" rather than the first department. Removing the "Select Department" option would also seem to render the validation statement unnecessary, as there would be no way to unselect a department before saving changes. Once again, thank you for the great tutorials. Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: 140ce637-4d58-995f-c69e-b1de51f1dbb5 Version Independent ID: 9bc46ffa-60db-bdee-0984-fab183c8f503 Content: Razor Pages with EF Core in ASP.NET Core - Update Related Data - 7 of 8 Content Source: aspnetcore/data/ef-rp/update-related-data.md Product: asp.net-core GitHub Login: @Rick-Anderson Microsoft Alias: riande @Rick-Anderson can you assign this issue to me so I can fix it up and submit a PR? @m-henderson sorry missed this. I got it fixed. Thanks for the offer.
gharchive/issue
2018-06-19T07:49:11
2025-04-01T06:37:57.967918
{ "authors": [ "Rick-Anderson", "m-henderson", "yamazaroon" ], "repo": "aspnet/Docs", "url": "https://github.com/aspnet/Docs/issues/7132", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
387865396
"Add providers" example doesn't match current ASP.NET Core Template None of the configuration examples matches the code found in the current ASP.NET Core template used by Visual Studio. public static class Program { /// &lt;summary&gt;Creates the web host builder.&lt;/summary&gt; /// &lt;param name="args"&gt;The arguments.&lt;/param&gt; /// &lt;returns&gt;&lt;/returns&gt; public static IWebHostBuilder CreateWebHostBuilder(string[] args) =&gt; WebHost.CreateDefaultBuilder(args) .UseStartup&lt;Startup&gt;(); /// &lt;summary&gt;Defines the entry point of the application.&lt;/summary&gt; /// &lt;param name="args"&gt;The arguments.&lt;/param&gt; public static void Main(string[] args) { CreateWebHostBuilder(args) .Build().Run(); } } Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: 7184a308-a254-9350-a5cd-5bced1f369ae Version Independent ID: 726e3bf1-f367-d733-8933-bccc04da0e16 Content: Logging in ASP.NET Core Content Source: aspnetcore/fundamentals/logging/index.md Product: aspnet-core GitHub Login: @tdykstra Microsoft Alias: tdykstra @Grauenwolf ... It's somewhat common across the repo for a few reasons. For one thing, the sample apps trail the framework releases. Another is that samples concentrate on the concepts that the topics cover, which often don't strictly require the app to follow the template conventions. In this case (logging), the common thread is just hanging the ConfigureLogging method calls off of the host builder ... that's the same pattern regardless of the 2.x flavor. I'll mark this issue on the sample update tracking issue. The doc author who takes on the 2.2 updates for this sample will see this at that time. Thanks for commenting on the topic.
gharchive/issue
2018-12-05T17:28:55
2025-04-01T06:37:57.973670
{ "authors": [ "Grauenwolf", "guardrex" ], "repo": "aspnet/Docs", "url": "https://github.com/aspnet/Docs/issues/9844", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
138772615
Update dependency-injection.rst ASP.NET 5 to ASP.NET Core 1 Thanks for your interest. Closing as this will be updated in the next sweep.
gharchive/pull-request
2016-03-06T09:15:12
2025-04-01T06:37:57.975150
{ "authors": [ "Rick-Anderson", "SychevIgor" ], "repo": "aspnet/Docs", "url": "https://github.com/aspnet/Docs/pull/1066", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
116860777
Fixed instructions for configuring social logins Fixes #648 Fixes instructions on how to dnvm use 1.0.0-beta 8 Fixes instructions on how to install user-secrets Will have to update these instructions for RC too, but can deal with that after it's released. Hi @bitcrazed, I'm your friendly neighborhood .NET Foundation Pull Request Bot (You can call me DNFBOT). Thanks for your contribution! This seems like a small (but important) contribution, so no Contribution License Agreement is required at this point. Real humans will now evaluate your PR. TTYL, DNFBOT; @rustd @Rick-Anderson Please review Looks good :shipit: Thanks! 05fc5fd2b5d92dc48f89cb696e1adedc522dad76
gharchive/pull-request
2015-11-13T21:52:24
2025-04-01T06:37:57.978521
{ "authors": [ "Rick-Anderson", "bitcrazed", "danroth27", "dnfclas" ], "repo": "aspnet/Docs", "url": "https://github.com/aspnet/Docs/pull/649", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
186028179
Migration QA - index/TOC fixes I also removed the _Files. Hi @v-anpasi, I'm your friendly neighborhood .NET Foundation Pull Request Bot (You can call me DNFBOT). Thanks for your contribution! In order for us to evaluate and accept your PR, we ask that you sign a contribution license agreement. It's all electronic and will take just minutes. I promise there's no faxing. https://cla2.dotnetfoundation.org. TTYL, DNFBOT;
gharchive/pull-request
2016-10-28T21:38:05
2025-04-01T06:37:57.981018
{ "authors": [ "dnfclas", "v-anpasi" ], "repo": "aspnet/EntityFramework.Docs", "url": "https://github.com/aspnet/EntityFramework.Docs/pull/286", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
147483762
Include() ThenInclude() throws “Sequence contains more than one matching element” exception I have a model that involves parent-child relations on 3 levels:
Corporations have companies
Companies belong to a corporation and have factories
Factories belong to a company
Since these 3 entities share a lot in common, they all inherit from an abstract BaseOrganization entity. When I try to list all the factories, including their mother companies, and then including their mother corporations, I have these two different scenarios:
Without including BaseOrganization in the context, code-first creates three tables (TPC). Include() and ThenInclude() work fine, and I can list factories and traverse relations as expected.
Including BaseOrganization in the context, code-first creates one table (TPH) with a discriminator field. Include() and ThenInclude() throw a "Sequence contains more than one matching element" exception.
This issue (without the inheritance pattern) was already addressed here (https://github.com/aspnet/EntityFramework/issues/1460). So I think there is an issue with multi-level relation Includes when these relations involve the same base types AND when the tables are generated along the Table Per Hierarchy strategy. Below is the full reproduction code:
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Data.Entity;

namespace MultiLevelTest
{
    // All places share name and Id
    public abstract class BaseOrganization
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // a corporation (eg: Airbus Group)
    public class Corporation : BaseOrganization
    {
        public virtual ICollection<Company> Companies { get; set; } = new List<Company>();
    }

    // a company (eg: Airbus, Airbus Helicopters, Arianespace)
    public class Company : BaseOrganization
    {
        public virtual Corporation Corporation { get; set; }
        public virtual ICollection<Factory> Factories { get; set; } = new List<Factory>();
    }

    // a factory of a company (Airbus Toulouse, Airbus US...)
    public class Factory : BaseOrganization
    {
        public virtual Company Company { get; set; }
    }

    // setup DbContext
    public class MyContext : DbContext
    {
        // if this line is commented out, then code first creates 3 tables instead of one, and everything works fine.
public DbSet<BaseOrganization> BaseOrganizationCollection { get; set; } public DbSet<Corporation> Corporations { get; set; } public DbSet<Company> Companies { get; set; } public DbSet<Factory> Factories { get; set; } protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) { optionsBuilder.UseSqlServer( @"Server=(localdb)\mssqllocaldb;Database=MultiLevelTest;Trusted_Connection=True;MultipleActiveResultSets=true"); } protected override void OnModelCreating(ModelBuilder modelBuilder) { base.OnModelCreating(modelBuilder); modelBuilder.Entity<Corporation>().HasMany(c => c.Companies).WithOne(c => c.Corporation); modelBuilder.Entity<Company>().HasMany(c => c.Factories).WithOne(c => c.Company); modelBuilder.Entity<Factory>().HasOne(f => f.Company); } } public class Program { public static void Main(string[] args) { using (var ctx = new MyContext()) { ctx.Database.EnsureDeleted(); ctx.Database.EnsureCreated(); // Add a corporation with companies then factories (this works fine) if (!ctx.Corporations.Any()) CreateOrganizationGraph(ctx); // Get all the factories without including anything (this is still working fine) var simpleFactories = ctx.Factories.ToList(); foreach(var f in simpleFactories) Console.WriteLine(f.Name); // Get all the factories including their mother company, then their mother corporation var fullFactories = ctx.Factories .Include(f => f.Company) .ThenInclude(c => c.Corporation) .ToList(); foreach (var f in fullFactories) Console.WriteLine($"{f.Company.Corporation.Name} > {f.Company.Name} > {f.Name}"); } } public static void CreateOrganizationGraph(MyContext ctx) { var airbusCorp = new Corporation() { Name = "Airbus Group", Companies = new List<Company>() { new Company { Name = "Airbus", Factories = new List<Factory>() { new Factory {Name = "Airbus Toulouse (FR)"}, new Factory {Name = "Airbus Hambourg (DE)"} } }, new Company { Name = "Airbus Helicopters", Factories = new List<Factory>() { new Factory {Name = "Eurocopter Marignane (FR)"}, new Factory {Name = "Eurocopter Deutschland (DE)"} } } } }; ctx.Corporations.Add(airbusCorp); ctx.SaveChanges(); } } } Has anyone had the opportunity to reproduce, and state if that is an actual bug, or an improper approach on my own ? @kall2sollies I ran the scenario on our current bits and it seems to be working. I guess it must have been addressed in one of the Include improvements that we did after RC1 was released. Repro produces the following output: Airbus Toulouse (FR) Airbus Hambourg (DE) Eurocopter Marignane (FR) Eurocopter Deutschland (DE) Airbus Group > Airbus > Airbus Toulouse (FR) Airbus Group > Airbus > Airbus Hambourg (DE) Airbus Group > Airbus Helicopters > Eurocopter Marignane (FR) Airbus Group > Airbus Helicopters > Eurocopter Deutschland (DE) Hello, It must be as you say, since my code was run against rc1, and also ef7rc1. So the next release will probably solve that. Thanks for the reply. Calendau GUQUET Via mon iPhone Le 20 avr. 2016 à 23:57, Maurycy Markowski notifications@github.com a écrit : @kall2sollies I ran the scenario on our current bits and it seems to be working. I guess it must have been addressed in one of the Include improvements that we did after RC1 was released. 
Repro produces the following output: Airbus Toulouse (FR) Airbus Hambourg (DE) Eurocopter Marignane (FR) Eurocopter Deutschland (DE) Airbus Group > Airbus > Airbus Toulouse (FR) Airbus Group > Airbus > Airbus Hambourg (DE) Airbus Group > Airbus Helicopters > Eurocopter Marignane (FR) Airbus Group > Airbus Helicopters > Eurocopter Deutschland (DE) — You are receiving this because you were mentioned. Reply to this email directly or view it on GitHub I am unable to migrate my test project to RC2 (ou RC3) builds of EF. Here's what I did: Added rererence to the beta nuget channel (https://www.myget.org/F/aspnetvnext/api/v3/index.json) Changed package names to reflect the new naming and version reset Here's my project.json file: { "version": "1.0.0-*", "description": "LinqKitIssue Console Application", "authors": [ "cguquet" ], "tags": [ "" ], "projectUrl": "", "licenseUrl": "", "compilationOptions": { "emitEntryPoint": true }, "dependencies": { "LinqKit": "1.1.3.1", "Microsoft.EntityFrameworkCore": "1.0.0-rc2-*", "Microsoft.EntityFrameworkCore.Commands": "1.0.0-rc2-*", "Microsoft.EntityFrameworkCore.SqlServer": "1.0.0-rc2-*", "Microsoft.Extensions.Caching.Abstractions": "1.0.0-rc2-*" }, "commands": { "LinqKitIssue": "LinqKitIssue" }, "frameworks": { "dnx451": { } } } It results in having unresolved dependencies, because they are not compatible with DNX version 4.5.1. I have tried to change the framework to everything possible or impossible (net451, dnx46, net46, dnx462, net462) but the result would always be the same. Here's the output of dnu list: Microsoft .NET Development Utility Clr-x86-1.0.0-rc1-16609 Listing dependencies for LinqKitIssue (D:\Documents\Visual Studio 2015\Projects\LinqKitIssue\src\LinqKitIssue\project.json) [Target framework DNX,Version=v4.6.2 (dnx462)] Framework references: fx/Microsoft.CSharp fx/mscorlib fx/System fx/System.ComponentModel.DataAnnotations fx/System.Core fx/System.Data fx/System.Transactions Package references: EntityFramework 6.1.3 Ix-Async 1.2.5 * LinqKit 1.1.3.1 Microsoft.AspNetCore.Hosting.Abstractions 1.0.0-rc2-20466 Microsoft.AspNetCore.Hosting.Server.Abstractions 1.0.0-rc2-20466 Microsoft.AspNetCore.Http.Abstractions 1.0.0-rc2-20466 Microsoft.AspNetCore.Http.Features 1.0.0-rc2-20466 * Microsoft.EntityFrameworkCore 1.0.0-rc2-20466 * Microsoft.EntityFrameworkCore.Commands 1.0.0-rc2-20466 Microsoft.EntityFrameworkCore.Relational 1.0.0-rc2-20466 Microsoft.EntityFrameworkCore.Relational.Design 1.0.0-rc2-20466 * Microsoft.EntityFrameworkCore.SqlServer 1.0.0-rc2-20466 * Microsoft.Extensions.Caching.Abstractions 1.0.0-rc2-20466 - Unresolved Microsoft.Extensions.Caching.Memory 1.0.0-rc2-20466 Microsoft.Extensions.Configuration.Abstractions 1.0.0-rc2-20466 - Unresolved Microsoft.Extensions.DependencyInjection 1.0.0-rc2-20466 - Unresolved Microsoft.Extensions.DependencyInjection.Abstractions 1.0.0-rc2-20466 - Unresolved Microsoft.Extensions.FileProviders.Abstractions 1.0.0-rc2-20466 - Unresolved Microsoft.Extensions.Logging 1.0.0-rc2-20466 - Unresolved Microsoft.Extensions.Logging.Abstractions 1.0.0-rc2-20466 - Unresolved Microsoft.Extensions.Options 1.0.0-rc2-20466 - Unresolved Microsoft.Extensions.PlatformAbstractions 1.0.0-rc2-20466 Microsoft.Extensions.Primitives 1.0.0-rc2-20466 - Unresolved Remotion.Linq 2.0.2 System.Diagnostics.DiagnosticSource 4.0.0-rc2-23931 System.Text.Encodings.Web 4.0.0-rc2-23931 - Unresolved @kall2sollies Visual Studio is missing the tooling support for RC2 stuff, you can try using .NET cli which is the replacement for dnx, and a 
command line. Here is the link to the project, which contains some getting-started instructions: http://dotnet.github.io/getting-started/ Among all the things I thought I could try, it has been the only one I didn't, because I thought it was a lightweight version of the runtime and not the future runtime itself. So OK, tomorrow I'll go for it, thanks! OK, I have installed the redist installer and can now run the .NET CLI with dotnet commands (.NET Command Line Tools 1.0.0-beta-001598). No problem scaffolding a new project with the dotnet new command, but I am still unable to build or run this simple project mentioned above. Below is my current project.json file. The current framework moniker is set to net46, but I also tried net461 and netcore50 (as in the default CLI hello world project), and it still fails because it either cannot resolve dependencies (when targeting net46), or reports that the dependencies are not compatible with netcore50 (especially LinqKit, which brings EF6 with it).
{
  "version": "1.0.0-*",
  "description": "LinqKitIssue Console Application",
  "authors": [ "cguquet" ],
  "tags": [ "" ],
  "projectUrl": "",
  "licenseUrl": "",
  "compilationOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "NETStandard.Library": "1.0.0-rc2-23811",
    "LinqKit": "1.1.3.1",
    "Microsoft.EntityFrameworkCore": "1.0.0-rc3-20637",
    "Microsoft.EntityFrameworkCore.Commands": "1.0.0-rc3-20637",
    "Microsoft.EntityFrameworkCore.SqlServer": "1.0.0-rc3-20637",
    "Microsoft.Extensions.Caching.Abstractions": "1.0.0-rc3-20637"
  },
  "commands": {
    "LinqKitIssue": "LinqKitIssue"
  },
  "frameworks": {
    "net46": { }
  }
}
gharchive/issue
2016-04-11T16:32:55
2025-04-01T06:37:57.998706
{ "authors": [ "kall2sollies", "maumar" ], "repo": "aspnet/EntityFramework", "url": "https://github.com/aspnet/EntityFramework/issues/5033", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
198725330
Explicit loading for a list of entries: context.Entries(...)...Load() For explicit loading, EF Core offers (from the docs):
var blog = context.Blogs
    .Single(b => b.BlogId == 1);
context.Entry(blog)
    .Reference(b => b.Owner)
    .Load();
If I have, for example, a list of 10 entries, I need to call Reference(...).Load() 10 times in a foreach, which generates 10 SQL queries to the DB. How about an optimized method Entries():
var blogs = context.Blogs
    .Where(...)
    .ToList();
context.Entries(blogs)
    .Reference(b => b.Owner)
    .Load();
which makes a single SQL query like:
select .... where [BlogOwner].[BlogId] in (?, ?, ?, ?, ?, ?)
Sorry, but I have not found similar functionality. Thanks. Two things you can try:
Use eager loading, i.e. Include(b => b.Owner)
Write a query like this (I haven't actually tested it, but it should work):
var ids = context.ChangeTracker
    .Entries<Blog>()
    .Select(e => e.Property(b => b.BlogId).CurrentValue)
    .ToList();

context.Owner.Where(o => ids.Contains(o.BlogId)).Load();
@Remleo BTW, we would be interested in understanding your scenario better, e.g. why didn't you use eager loading in this case. Sometimes I may need to eagerly/explicitly load a relationship after the parent model has already been retrieved. For example, this may be useful if I need to dynamically decide whether to load related models:
// This code is responsible for retrieving specific blog entities,
// but it has no idea about the logic in BlackBox.SomeMethod().
// That is why it doesn't load navigations.
var neededBlogs = context.Blogs.Where(....).ToList();
....
class BlackBox
{
    public void SomeMethod(IEnumerable<Blog> blogs)
    {
        // There might be code here that ensures Owner is loaded,
        // because this method has no idea whether `blogs` was loaded with `Owners` or not.
        // Also the `Loader` should be intelligent enough to load only empty navigations,
        // so calling this method multiple times is safe.
        context.Entries(blogs).Reference(b => b.Owner).Load();

        foreach (var blog in blogs)
        {
            if (someDynamicCondition && blog.Owner != null)
            {
                SendNotificationToOwner(blog.Owner.email, "Alert!");
            }
        }
    }
}
This code works:
var ids = context.ChangeTracker
    .Entries<Blog>()
    .Select(e => e.Property(b => b.BlogId).CurrentValue)
    .ToList();

context.Owner.Where(o => ids.Contains(o.BlogId)).Load();
but the code is not DRY: the where clause needs to be hardcoded to match the FK for the navigation property. The context knows the FK, and it is its job to load navigation properties. Sorry for my English :( Reopening so we can visit this in triage. Closing, but we will reconsider if we see more requests. We would consider a PR with the feature. You could also look at implementing it as an extension method. +1 This is a totally valid example for any TPT inheritance. The base model does not contain navigation properties, and suppose we would like to display a list with all inherited types and a common property, e.g. "Name", loaded from different related entities of the derived models. We can't use Include because there is no navigation property to do so, and if we load directly, EF is not clever enough to just get the related entities' IDs by FK; it will fetch whole tables, and the performance is not acceptable.
In that case we have to change the whole structure to TPH or create a view and map it to a new model, and in the end we will end up with two different models describing exactly the same entity :/ Visualisation ;):
A (entity with all 3-digit numbers - like a dictionary of all of them)
B (base model to keep number assignments - abstract)
--> C (derived) --> User (additional relation to entity with Name)
--> D (derived) --> Company (additional relation to entity with Name)
--> etc.
I would like to see this, as there are many query methods that use the same query but don't use the same included references. If you have one method that returns the query and adds all the references used across all call sites, you end up with a method that works but is really inefficient. For example:
public IQueryable<Account> GetAccountsCreatedOnDate(DateTime time)
{
    return context.Account.Where(x => x.CreatedDateTime == time);
}

public IEnumerable<Post> GetPostsByUsersCreatedOnDate(DateTime time)
{
    var users = GetAccountsCreatedOnDate(time);
    context.Entries(users).Collection(b => b.Posts).Load();
    return users.SelectMany(x => x.Posts);
}
@KieranDevlinSycous - Just try this.
public IEnumerable<Post> GetPostsByUsersCreatedOnDate(DateTime time)
{
    return GetAccountsCreatedOnDate(time).SelectMany(x => x.Posts).AsEnumerable();
}
I find this feature very useful. For example, you may not want to use eager loading too much because it generates JOIN queries, and if some other query has already fetched the data, you are still stuck with a lot of joins instead of simple selects. For example, you have entity A with property C and entity B with property C; you eager-load A with C and then you still have to eager-load C with B instead of just selecting B. So you either have costly queries or you have many, many methods on your repositories. It would be much simpler to just load the necessary references in place. I would love this feature to be implemented. For now I use this instead:
public Query<TEntity> LoadBy<TForeignEntity>(
    IEnumerable<TForeignEntity> foreignEntities,
    Func<TForeignEntity, TEntity> entitySelector,
    Func<TForeignEntity, Guid?> entityIdSelector,
    bool unrestricted = false)
{
    var ids = foreignEntities.Where(e => entitySelector(e) == null).Select(e => entityIdSelector(e).ToOption());
    return Select(unrestricted).Where(e => e.Id, ids);
}
Our team would really need this feature. In its absence, we must manually fetch entries by the ID of the main entry and set its collection state to loaded (Context.Entry(obj).Collection(c => c.Collections).IsLoaded = true).
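Following the team's suggestion, here is a minimal extension-method sketch of the requested behavior for one concrete shape (MyContext, Owners, and the Blog/Owner types are placeholders; generalizing over arbitrary navigations would need metadata plumbing this sketch doesn't attempt):
public static class BatchLoadExtensions
{
    // Loads the Owner for every blog in the list with a single
    // Contains-based query instead of one query per entity; the
    // change tracker then fixes up the Owner navigations.
    public static void LoadOwners(this MyContext context, IEnumerable<Blog> blogs)
    {
        var ids = blogs.Select(b => b.BlogId).Distinct().ToList();
        context.Owners.Where(o => ids.Contains(o.BlogId)).Load();
    }
}
Usage would mirror the proposal above: fetch the blogs, then call context.LoadOwners(blogs) once before iterating.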
gharchive/issue
2017-01-04T14:52:32
2025-04-01T06:37:58.009444
{ "authors": [ "KieranDevlinSycous", "Remleo", "adduss", "ajcvickers", "divega", "romfir", "rowanmiller", "smitpatel", "starychfojtu" ], "repo": "aspnet/EntityFramework", "url": "https://github.com/aspnet/EntityFramework/issues/7350", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
179915199
Implement complex type usages and properties A ComplexTypeUsage represents the reference to a complex type on an entity type or on another complex type. It contains the ComplexProperty instances that represent the properties of the complex type and their facets in this usage. /cc @AndriySvyryd (But anyone can review...) Agreed. We probably should review it with others--I'll make a note to talk about it when we discuss in the design meeting.
gharchive/pull-request
2016-09-28T23:16:59
2025-04-01T06:37:58.011945
{ "authors": [ "ajcvickers" ], "repo": "aspnet/EntityFramework", "url": "https://github.com/aspnet/EntityFramework/pull/6632", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
354937307
Implementing IDataErrorInfo breaks the database creation When I try to implement the IDataErrorInfo interface in my base entity, the ORM fails to create the database due to a reflection TargetParameterCountException. Exception message: "Parameter count mismatch." Stack trace:
at System.Reflection.RuntimeMethodInfo.InvokeArgumentsCheck(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.Reflection.RuntimePropertyInfo.GetValue(Object obj, BindingFlags invokeAttr, Binder binder, Object[] index, CultureInfo culture)
at System.Reflection.RuntimePropertyInfo.GetValue(Object obj, Object[] index)
at Microsoft.EntityFrameworkCore.Metadata.Internal.EntityType.GetData(Boolean providerValues)
at Microsoft.EntityFrameworkCore.Infrastructure.ModelValidator.ValidateData(IModel model)
at Microsoft.EntityFrameworkCore.Infrastructure.ModelValidator.Validate(IModel model)
at Microsoft.EntityFrameworkCore.Infrastructure.RelationalModelValidator.Validate(IModel model)
at Microsoft.EntityFrameworkCore.Internal.SqlServerModelValidator.Validate(IModel model)
at Microsoft.EntityFrameworkCore.Infrastructure.ModelSource.CreateModel(DbContext context, IConventionSetBuilder conventionSetBuilder, IModelValidator validator)
at Microsoft.EntityFrameworkCore.Infrastructure.ModelSource.<>c__DisplayClass5_0.<GetModel>b__1()
at System.Lazy`1.CreateValue()
at System.Lazy`1.LazyInitValue()
at System.Lazy`1.get_Value()
at Microsoft.EntityFrameworkCore.Infrastructure.ModelSource.GetModel(DbContext context, IConventionSetBuilder conventionSetBuilder, IModelValidator validator)
at Microsoft.EntityFrameworkCore.Internal.DbContextServices.CreateModel()
at Microsoft.EntityFrameworkCore.Internal.DbContextServices.get_Model()
at Microsoft.EntityFrameworkCore.Infrastructure.EntityFrameworkServicesBuilder.<>c.<TryAddCoreServices>b__7_1(IServiceProvider p)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitFactory(FactoryCallSite factoryCallSite, ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(IServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitScoped(ScopedCallSite scopedCallSite, ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(IServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitConstructor(ConstructorCallSite constructorCallSite, ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(IServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitScoped(ScopedCallSite scopedCallSite, ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(IServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.DynamicServiceProviderEngine.<>c__DisplayClass1_0.<RealizeService>b__0(ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngine.GetService(Type serviceType, ServiceProviderEngineScope serviceProviderEngineScope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngineScope.GetService(Type serviceType)
at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService(IServiceProvider provider, Type serviceType)
at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService[T](IServiceProvider provider)
at Microsoft.EntityFrameworkCore.DbContext.get_DbContextDependencies()
at Microsoft.EntityFrameworkCore.DbContext.get_InternalServiceProvider()
at Microsoft.EntityFrameworkCore.DbContext.Microsoft.EntityFrameworkCore.Infrastructure.IInfrastructure<System.IServiceProvider>.get_Instance()
at Microsoft.EntityFrameworkCore.Infrastructure.DatabaseFacade.Microsoft.EntityFrameworkCore.Infrastructure.IInfrastructure<System.IServiceProvider>.get_Instance()
at Microsoft.EntityFrameworkCore.Internal.InternalAccessorExtensions.GetService[TService](IInfrastructure`1 accessor)
at Microsoft.EntityFrameworkCore.Infrastructure.AccessorExtensions.GetService[TService](IInfrastructure`1 accessor)
at Microsoft.EntityFrameworkCore.Infrastructure.DatabaseFacade.get_DatabaseCreator()
at Microsoft.EntityFrameworkCore.Infrastructure.DatabaseFacade.EnsureDeleted()
at CM.DataContext.GetLogin(String userName, String password)
at CM.Contexts.Users.GetLogin(String userName, String password)
at CM.Contexts.Users.<>c__DisplayClass3_0.<GetLoginAsync>b__0()
at System.Threading.Tasks.Task`1.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
Steps to reproduce
Create a .NET Standard project and add the Microsoft.EntityFrameworkCore.SqlServer package.
Create a base entity class that implements the IDataErrorInfo interface.
Create an entity using your base entity.
Create your DbContext and expose your main entity in it (not the base entity).
Create another project to consume your library and you will get the exception.
public class BaseEntity : INotifyPropertyChanged, IDataErrorInfo { #region IDataErrorInfo [NotMapped] string IDataErrorInfo.this[string propertyName] => ""; [NotMapped] string IDataErrorInfo.Error => ""; #endregion #region INotifyPropertyChanged event PropertyChangedEventHandler PropertyChanged; event PropertyChangedEventHandler INotifyPropertyChanged.PropertyChanged { add => PropertyChanged += value; remove => PropertyChanged -= value; } protected virtual bool Set<TValue>(ref TValue oldValue, TValue newValue, string relatedProperty = null, [CallerMemberName]string propertyName = null) { if (Equals(oldValue, newValue)) return false; oldValue = newValue; NotifyPropertyChanged(propertyName); if (!string.IsNullOrWhiteSpace(relatedProperty)) NotifyPropertyChanged(relatedProperty); return true; } protected virtual bool Set<TValue>(ref TValue oldValue, TValue newValue, IEnumerable<string> relatedProperties, [CallerMemberName]string propertyName = null) { if (Equals(oldValue, newValue)) return false; oldValue = newValue; NotifyPropertyChanged(propertyName); foreach (var item in relatedProperties) if (!string.IsNullOrWhiteSpace(item)) NotifyPropertyChanged(item); return true; } protected virtual void NotifyPropertyChanged(string propertyName) => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); #endregion } public class User : BaseEntity { Guid _Id = Guid.NewGuid(); [Key] public Guid ID { get => _Id; set => Set(ref _Id, value); } string _Name; [StringLength(250, MinimumLength = 3), Required(AllowEmptyStrings = false)] public string Name { get => _Name; set => Set(ref _Name, value); } string _Password; [StringLength(250, MinimumLength = 3), Required(AllowEmptyStrings = false)] public string Password { get => _Password; set => Set(ref _Password, value); } string _DisplayName; [StringLength(250, MinimumLength = 3), Required(AllowEmptyStrings = false)] public string DisplayName { get => _DisplayName; set => Set(ref _DisplayName, value); } bool _IsEnabled = true; public bool IsEnabled { get => _IsEnabled; set => Set(ref _IsEnabled, value); } } class DataContext : DbContext { #region Configures protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) { optionsBuilder.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=CM;Trusted_Connection=True;"); //base.OnConfiguring(optionsBuilder); } protected override void OnModelCreating(ModelBuilder modelBuilder) { var users = modelBuilder.Entity<User>(); users.HasData(new User() { Name = "admin", Password = "1234", DisplayName = "Administrator", IsEnabled = true }); users.HasIndex(u => u.Name).IsUnique(); users.HasIndex(u => u.DisplayName).IsUnique(); base.OnModelCreating(modelBuilder); } #endregion #region Tables public DbSet<User> Users { get; set; } #endregion #region Static Methods public static User GetLogin(string userName, string password) { using (var cxt = new DataContext()) { cxt.Database.EnsureDeleted(); cxt.Database.EnsureCreated(); LoginUser = cxt.Users.AsNoTracking().FirstOrDefault(u => u.IsEnabled && u.Name == userName && u.Password == password); } return LoginUser; } #endregion } Further technical details EF Core version: Microsoft.EntityFrameworkCore (2.1.2) Database Provider: Microsoft.EntityFrameworkCore.SqlServer Operating system: Windows 10 IDE: Visual Studio 2017 15.8.1 Duplicate of #12605
gharchive/issue
2018-08-28T23:12:23
2025-04-01T06:37:58.018211
{ "authors": [ "ajcvickers", "eiadxp" ], "repo": "aspnet/EntityFrameworkCore", "url": "https://github.com/aspnet/EntityFrameworkCore/issues/13148", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
358318171
EntityQueryModelVisitor does not properly use CreateTransparentIdentifierType As a third-party EF Core provider maintainer, I need to be able to control the transparent identifier type used by EntityQueryModelVisitor. I can override CreateTransparentIdentifierType, but this has no effect on VisitAdditionalFromClause because it does not properly call that method: https://github.com/aspnet/EntityFrameworkCore/blob/146863a3bdc25c1ea1cb224ba0611594398c8459/src/EFCore/Query/EntityQueryModelVisitor.cs#L983 Instead, it directly references the internal TransparentIdentifier<,> type. This causes type mismatches when overriding CreateTransparentIdentifierType and attempting to use that type in other visitor methods. The net effect is having to override more methods than strictly necessary just to ensure that the correct type is used. Further technical details EF Core version: ALL historical versions Database Provider: Non-specific (problem is in EF Core) Tracking this for 3.0 as part of the larger query refactoring that is tracked by #12795
gharchive/issue
2018-09-08T18:33:24
2025-04-01T06:37:58.021249
{ "authors": [ "ajcvickers", "crhairr" ], "repo": "aspnet/EntityFrameworkCore", "url": "https://github.com/aspnet/EntityFrameworkCore/issues/13265", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
478358886
Getting a string[] from a UDF
I can't seem to find anything about table-valued functions in EF Core. Is this even possible? Here's a sample of what I would like to achieve ... I have a role class that looks like this ...

public class Role
{
    [Key]
    public Guid Id { get; set; }

    // this is a csv list of "privilege keys" that the role grants
    public string Privs { get; set; }
    ...
}

Add this to my DbContext ...

[DbFunction("[DMS].[GetFolderPrivList]")]
public static string[] GetFolderPrivList(string userId, Guid folderId)
{
    throw new Exception();
}

Then I have a UDF like this to compute a list of privs for a folder "path" (which recursively crawls the tree for inherited permissions) ...

CREATE FUNCTION [DMS].[GetFolderPrivList]
(
    @UserId nvarchar(450),
    @FolderId uniqueidentifier
)
RETURNS @PrivList TABLE (priv nvarchar(100) NOT NULL)
AS
BEGIN
    INSERT INTO @PrivList (priv)
    SELECT DISTINCT *
    FROM STRING_SPLIT((SELECT R.Privs
                       FROM DMS.Folders P
                       JOIN [Security].FolderRoles FR ON FR.FolderId = P.Id
                       JOIN [Security].Roles R ON R.Id = FR.RoleId
                       JOIN [Security].UserRoles UR ON UR.RoleId = R.Id
                       WHERE P.Id = @FolderId AND UR.UserId = @UserId), ',');

    IF (SELECT ParentId FROM [DMS].[Folders] WHERE Id = @FolderId) IS NOT NULL
    BEGIN
        INSERT INTO @PrivList (priv)
        SELECT priv
        FROM [DMS].[GetFolderPrivList](@UserId, (SELECT ParentId FROM [DMS].[Folders] WHERE Id = @FolderId))
        WHERE priv NOT IN (SELECT priv FROM @PrivList)
    END

    RETURN
END;

I can't get EF to accept this such that I can make the call in a LINQ query, for example ...

db.GetAll<Folder>().FirstOrDefault(f => f.AppId == app.Id
    && f.Path.ToLower() == path.Lowered
    && CoreDataContext.GetFolderPrivList(user.Id, f.Id).Contains(privKey))

Duplicate of #4319
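Since table-valued function mapping wasn't supported at the time (the duplicate #4319 tracks it), one hedged workaround is a scalar UDF that answers the containment question directly; scalar function mapping did work in EF Core 2.x. All names below (HasFolderPriv and its SQL counterpart) are hypothetical:

    // Assumes a matching scalar SQL function [DMS].[HasFolderPriv] exists in the database.
    [DbFunction("HasFolderPriv", "DMS")]
    public static bool HasFolderPriv(string userId, Guid folderId, string privKey)
        => throw new NotSupportedException(); // only ever translated to SQL, never run in .NET

    // Usage inside a LINQ query, mirroring the example above:
    // db.GetAll<Folder>().FirstOrDefault(f => f.AppId == app.Id
    //     && f.Path.ToLower() == path.Lowered
    //     && CoreDataContext.HasFolderPriv(user.Id, f.Id, privKey));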
gharchive/issue
2019-08-08T09:38:59
2025-04-01T06:37:58.023878
{ "authors": [ "TehWardy", "ajcvickers" ], "repo": "aspnet/EntityFrameworkCore", "url": "https://github.com/aspnet/EntityFrameworkCore/issues/17025", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
474681325
Strange DI behaviour with ASPNETCORE_ENVIRONMENT
Describe the bug
I have an ASP.NET Core application where a singleton type has a dependency on a scoped type. When ASPNETCORE_ENVIRONMENT=Development is set, I get an InvalidOperationException: Cannot consume scoped service 'B' from singleton 'A', but when ASPNETCORE_ENVIRONMENT is set to anything else, no exception is raised and the application starts.
To Reproduce
Steps to reproduce the behavior:
Using this version of ASP.NET Core: '2.2.6'
Run this code: https://gist.github.com/DenisPimenov/fbde9d1f91170d2423cee1089a29028c
With these arguments: ASPNETCORE_ENVIRONMENT="Production"
The application starts without exceptions.
Expected behavior
The application should be terminated with an InvalidOperationException.
Additional context
dotnet --info
.NET Core SDK (reflecting any global.json):
 Version: 2.2.301
 Commit: 70d6be0814
Runtime Environment:
 OS Name: Mac OS X
 OS Version: 10.14
 OS Platform: Darwin
 RID: osx.10.14-x64
 Base Path: /usr/local/share/dotnet/sdk/2.2.301/
In development mode we run some extra validations as the container is built. The fact that you're seeing an error means that something is wrong. In this case you're using a scoped service from a singleton - meaning that you're creating a long-lived instance of something that you expect to be scoped. We don't run this validation at startup time in production scenarios because it's slow.
Does this validation have a public API? @pakrym
You can use the UseDefaultServiceProvider method to configure the scope validation to always be on:

    public static IWebHostBuilder CreateHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseDefaultServiceProvider(options => options.ValidateScopes = true)
            .UseStartup<Startup>();

Thank you for your comments.
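For readers hitting the same message, a minimal sketch of the registration shape that triggers the validation (the type names A and B are just the placeholders from the exception text):

    public class B { }                    // registered as scoped
    public class A { public A(B b) { } }  // singleton that captures the scoped B

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddScoped<B>();
        services.AddSingleton<A>();
        // With ValidateScopes on, resolving A throws:
        // InvalidOperationException: Cannot consume scoped service 'B' from singleton 'A'.
    }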
gharchive/issue
2019-07-30T15:38:15
2025-04-01T06:37:58.030581
{ "authors": [ "DenisPimenov", "pakrym", "rynowak" ], "repo": "aspnet/Extensions", "url": "https://github.com/aspnet/Extensions/issues/2113", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
307646814
Unlisted Dependency
Hi ASPNet Team, I am trying to use the HttpSysServer in a project and I'm getting the error message below. It seems as though my project is missing a reference, but I'm noticing that System.Threading.Overlapped is not listed as a package dependency for this project. Are there any other unlisted dependencies I should add?

System.IO.FileNotFoundException: Could not load file or assembly 'System.Threading.Overlapped, Version=0.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.
File name: 'System.Threading.Overlapped, Version=0.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
   at Microsoft.AspNetCore.Server.HttpSys.RequestQueue..ctor(UrlGroup urlGroup, ILogger logger)
   at Microsoft.AspNetCore.Server.HttpSys.HttpSysListener..ctor(HttpSysOptions options, ILoggerFactory loggerFactory)
   at Microsoft.AspNetCore.Server.HttpSys.MessagePump..ctor(IOptions`1 options, ILoggerFactory loggerFactory, IAuthenticationSchemeProvider authentication)
   --- End of stack trace from previous location where exception was thrown ---
   at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitConstructor(ConstructorCallSite constructorCallSite, ServiceProvider provider)
   at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(IServiceCallSite callSite, TArgument argument)
   at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitScoped(ScopedCallSite scopedCallSite, ServiceProvider provider)
   at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitSingleton(SingletonCallSite singletonCallSite, ServiceProvider provider)
   at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(IServiceCallSite callSite, TArgument argument)
   at Microsoft.Extensions.DependencyInjection.ServiceProvider.<>c__DisplayClass22_0.<RealizeService>b__0(ServiceProvider provider)
   at Microsoft.Extensions.DependencyInjection.ServiceProvider.GetService(Type serviceType)
   at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService(IServiceProvider provider, Type serviceType)
   at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService[T](IServiceProvider provider)
   at Microsoft.AspNetCore.Hosting.Internal.WebHost.EnsureServer()
   at Microsoft.AspNetCore.Hosting.Internal.WebHost.BuildApplication()
   at Microsoft.AspNetCore.Hosting.WebHostBuilder.Build()
   at AlphaDrive.Services.AspNetCore.AspNetCoreExtension.<Start>d__14.MoveNext() in C:\Users\Tony Valenti\source\repos\AlphaDrive\AlphaDrive.Services.AspNetCore\AspNetCoreExtension.cs:line 60

What package version is this?
I am using this DLL: C:\Users\Tony Valenti\source\repos\AlphaDrive\packages\Microsoft.AspNetCore.Server.HttpSys.2.0.3\lib\netstandard2.0\Microsoft.AspNetCore.Server.HttpSys.dll and it has this version: 2.0.3.0
And you're running on .NET Core 2 or .NET 4.x?
I'm not sure how to answer that question. The library is a cross-platform library that I'm compiling for .NET 4.6.1. Oddly, my app works fine on Windows 10, but on Windows 7 I get the error mentioned.
Actually, I guess I see the error happening on Windows 10 too. Apparently it is just not happening on my dev box.
This seems like an issue specific to your project dependency setup. Can you share a dummy project on GitHub that reproduces it?
gharchive/issue
2018-03-22T13:38:49
2025-04-01T06:37:58.035327
{ "authors": [ "TonyValenti", "Tratcher" ], "repo": "aspnet/HttpSysServer", "url": "https://github.com/aspnet/HttpSysServer/issues/436", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
201606423
Webdeploy parameter file support/tool
Hello, I've been working on deploying an ASP.NET Core app to Azure via Web Deploy (in VSTS) recently, and I think the tooling could be improved here, in particular with respect to Web Deploy and its parameter file syntax. The current docs recommend simply zipping up the publish contents, and this works fine if you don't want to use the parameters file support provided by Web Deploy and the VSTS website tasks. However, a very nice feature of Web Deploy is being able to specify a SetParameters.xml file at release time for different environments. When building with MSBuild this file is generated by the publish target, but to my knowledge this is not supported with the dotnet CLI.
Proposal: Add a new tool to Microsoft.AspNetCore.Server.IISIntegration.Tools that generates a SetParameters.xml file
In addition to doing this, I think it would be nice to have tools for actually managing Web Deploy parameters, but that is kind of an aside. I went looking for the code for publish-iis and I did find it in the master branch in this repo, but not in the dev branch; has it moved somewhere else? (Also, do you think this is a worthwhile idea at all?) I know Web Deploy isn't very cutting edge, but it is a very useful tool, especially combined with the VSTS tooling.
/cc @vijayrkn
I think a better repo to report this is https://github.com/aspnet/websdk. This is also the new home for the publish-iis tool, which was converted to an MSBuild task (https://github.com/aspnet/websdk/tree/dev/src/Publish/Microsoft.NETCore.Sdk.Publish.Tasks)
We have added support for a parameters file for Web Deploy publish. https://github.com/aspnet/websdk/blob/dev/src/Publish/Microsoft.NET.Sdk.Publish.Targets/netstandard1.0/PublishTargets/Microsoft.NET.Sdk.Publish.MSDeploy.targets#L237 The next VS 2017 release should have this.
aha nice to see it coming! also I didn't know they opened up the websdk repo, that's great!
gharchive/issue
2017-01-18T15:31:56
2025-04-01T06:37:58.040754
{ "authors": [ "aL3891", "moozzyk", "vijayrkn" ], "repo": "aspnet/IISIntegration", "url": "https://github.com/aspnet/IISIntegration/issues/317", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
179913317
Make UseIISIntegration idempotent @Tratcher :shipit:
gharchive/pull-request
2016-09-28T23:03:50
2025-04-01T06:37:58.041741
{ "authors": [ "Tratcher", "pakrym" ], "repo": "aspnet/IISIntegration", "url": "https://github.com/aspnet/IISIntegration/pull/274", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
381789469
Upgrade projects to netcoreapp3.0
Changes:
Upgrade dependencies
Change TFM on Microsoft.AspNetCore.Identity to netcoreapp3.0
Remove .NET Framework tests
Part of https://github.com/aspnet/AspNetCore/issues/3754
Whoa, lots of test failures. Are these already known issues @HaoK?
That looks to be errors from the razor UI stuff, maybe the upgrade to 3.0 broke something there? @javiercn @pranavkm any ideas?
Can't find 'C:\projects\identity\test\Identity.FunctionalTests\bin\Release\netcoreapp3.0\Identity.DefaultUI.WebSite.deps.json'
Does the 3.0 Sdk no longer generate a deps file?
Does the 3.0 Sdk no longer generate a deps file?
This sounds like something @ryanbrandenburg was investigating. Is this the issue you were seeing, Ryan?
I don't recall having missing deps.json files. This might have been it: https://github.com/aspnet/Identity/pull/2072/commits/43bae5e91f15bffa63eadef179d7292ff7384a64
Tests pass locally now.
gharchive/pull-request
2018-11-16T22:55:49
2025-04-01T06:37:58.045918
{ "authors": [ "HaoK", "natemcmaster", "pranavkm", "ryanbrandenburg" ], "repo": "aspnet/Identity", "url": "https://github.com/aspnet/Identity/pull/2072", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
179559820
Patching of dictionaries should be case sensitive on keys
The current code handles all dictionaries which implement IDictionary<string, object> (dynamic or not; for example, ExpandoObject implements IDictionary<string, object>) in a case-insensitive way. This is incorrect behavior. For example:
When Json.NET deserializes data into a regular IDictionary<,>, it considers the keys case-sensitive.
When Json.NET deserializes data into a regular POCO type, it considers the property names case-insensitive.
Considering the above behavior, we should do the following in JSON Patch:
For dynamic types (like ExpandoObject, DynamicObject, etc.), the matching of keys should be case-insensitive. This keeps the behavior consistent with a POCO type, where the property names are considered case-insensitive.
For dictionaries, keep the default Json.NET behavior, i.e. keep them case-sensitive.
@dougbu @rynowak @kichalla can you link to the PR/commit where you previously fixed this and then close this? Thanks!
81931e75d48370cf9163254523b1f6c4bcc92acb
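A hedged sketch of the intended behavior after the fix, using the public Microsoft.AspNetCore.JsonPatch API (the exact failure mode in the dictionary case may vary by version, so treat the comments as the design intent rather than guaranteed output):

    var patch = new JsonPatchDocument();
    patch.Replace("/Name", "updated");

    // Dynamic target (ExpandoObject): "Name" should match "name" case-insensitively.
    dynamic expando = new ExpandoObject();
    ((IDictionary<string, object>)expando)["name"] = "original";
    patch.ApplyTo(expando);

    // Plain dictionary target: "Name" != "name", so the patch should not apply.
    var dict = new Dictionary<string, object> { ["name"] = "original" };
    patch.ApplyTo(dict);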
gharchive/issue
2016-09-27T17:29:24
2025-04-01T06:37:58.050301
{ "authors": [ "Eilon", "kichalla" ], "repo": "aspnet/JsonPatch", "url": "https://github.com/aspnet/JsonPatch/issues/37", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
157367719
Using command-line options in RC2?
My blog is currently running on ASP.NET 5 RC1. I have commands for live vs staging configured in project.json:

    "commands": {
      "web-live": "Microsoft.AspNet.Server.Kestrel --server.urls http://unix:/run/dan-live.sock",
      "web-staging": "Microsoft.AspNet.Server.Kestrel --server.urls http://unix:/run/dan-staging.sock"
    },

And then I run a command like this to actually run the site:

    /var/www/.dnx/runtimes/dnx-mono.1.0.0-rc1-final/bin/dnx --appbase /var/www/dan.cx/live/site/approot/packages/Daniel15.Web/1.0.0/root Microsoft.Dnx.ApplicationHost --configuration Release web-live

With RC2, it looks like my site compiles into a .exe file that I can execute directly via mono:

    % mono Daniel15.Web.exe
    Hosting environment: Production
    Content root path: /var/www/dan.cx/staging/site
    Now listening on: http://localhost:5000
    Application started. Press Ctrl+C to shut down.

However, I can't figure out how to specify the server.urls. Do I need to have separate config files for staging vs live? How do I specify which one to use? (note that my site is currently not compatible with .NET Core, so I am using Mono + .NET Framework 4.5.1)
https://github.com/aspnet/Hosting/issues/737
Thanks @davidfowl! Once I added the config:

    var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

    var host = new WebHostBuilder()
        .UseConfiguration(config)
        // ......
        .Build();

It worked as expected :smile:
gharchive/issue
2016-05-29T03:46:56
2025-04-01T06:37:58.054300
{ "authors": [ "Daniel15", "davidfowl" ], "repo": "aspnet/KestrelHttpServer", "url": "https://github.com/aspnet/KestrelHttpServer/issues/893", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
99711006
Gracefully handle exceptions thrown from OnStarting callbacks
If OnStarting is being called after the app func has completed, return a 500. If OnStarting is being called due to a call to write, throw from write.
Addresses #25
:shipit: :tanabata_tree:
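For context on what this change covers, a minimal sketch of app code that registers an OnStarting callback which throws (HttpResponse.OnStarting is the real API; the middleware itself is just illustrative):

    app.Use(async (context, next) =>
    {
        // OnStarting callbacks run right before response headers are sent.
        context.Response.OnStarting(() => throw new InvalidOperationException("boom"));
        await next();
    });

With this PR, if the callback throws after the application delegate has already completed, the server responds with a 500; if it throws because a write triggered header flushing, the exception surfaces from that write call.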
gharchive/pull-request
2015-08-07T19:18:43
2025-04-01T06:37:58.055737
{ "authors": [ "halter73", "lodejard" ], "repo": "aspnet/KestrelHttpServer", "url": "https://github.com/aspnet/KestrelHttpServer/pull/155", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
158601511
rc2 postback very slow on first execution
I recently upgraded my ASP.NET 5 application to ASP.NET Core 1.0. The postbacks are very slow: it takes 45 seconds for the postback object to be created and for the first breakpoint in the postback method to be reached. After that, the breakpoint is hit and the request is executed normally. If I execute the same method again, the postback is very fast. It only happens if I post back objects created by Entity Framework with dependencies on other objects. For instance:

    public class PostbackObject
    {
        public virtual Object1 Object1 { get; set; }
        public int Object1ID { get; set; }
    }

If I comment out Object1:

    public class PostbackObject
    {
        //public virtual Object1 Object1 { get; set; }
        public int Object1ID { get; set; }
    }

I don't have any issues.
See https://github.com/aspnet/Mvc/issues/4666
@rynowak does this sound the same as #4666, and is this a dup?
It is the same. The fix from #4666 worked for me. Thank you.
gharchive/issue
2016-06-06T05:08:43
2025-04-01T06:37:58.060371
{ "authors": [ "Eilon", "azimmerer", "davidfowl" ], "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/4814", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
180118758
Add an option to inline css file inside a Razor page
Hi, I just wrote a small TagHelper that inlines CSS files, as it's one of the recommendations of PageSpeed by Google:
https://developers.google.com/speed/docs/insights/InlineCSS
https://developers.google.com/speed/docs/insights/OptimizeCSSDelivery
It would be nice if ASP.NET had something like that baked into the framework's existing LinkTagHelper, which would surely be much more robust than my extremely simple, couple-of-minutes work:

    [HtmlTargetElement("embedCss")]
    public class EmbedCssTagHelper : TagHelper
    {
        private readonly IMemoryCache _memoryCache;
        private readonly IHostingEnvironment _hostingEnvironment;
        private const string CacheKeyPrefix = "EmbedCss_";

        public string Href { get; set; }

        public EmbedCssTagHelper(IMemoryCache memoryCache, IHostingEnvironment hostingEnvironment)
        {
            _memoryCache = memoryCache;
            _hostingEnvironment = hostingEnvironment;
        }

        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            var cacheKey = CacheKeyPrefix + Href;
            output.TagName = "style";
            var content = _memoryCache.GetOrCreate(cacheKey, entry =>
            {
                if (_hostingEnvironment.IsStaging() || _hostingEnvironment.IsProduction())
                {
                    Href = Href.Replace(".css", ".min.css");
                }
                var path = _hostingEnvironment.WebRootPath + Href;
                var result = File.ReadAllText(path);
                // Surely there are other ways, like watching the file system for changes...
                // If you could reference a resource that shows how to do that it would be great!
                entry.SetAbsoluteExpiration(TimeSpan.FromMinutes(1));
                entry.SetValue(result);
                return result;
            });
            output.Content.SetHtmlContent(content);
            output.TagMode = TagMode.StartTagAndEndTag;
        }
    }

What do you think about it @Eilon? BTW, while writing the TagHelper was easy, I couldn't find why my TagHelper didn't fire until I added [HtmlTargetElement("embedCss")]; what should be the element name in the HTML if it wasn't specified?
@DamianEdwards can you share thoughts on how important a feature like this is? Especially considering HTTP2, where making multiple requests shouldn't be as expensive as it is today from a latency perspective.
Still worthwhile. I did it on live.asp.net. HTTP2 is a while away from normal, especially given it requires Windows Server 2016 for IIS. We should have this built in to both the link and script tag helpers.
@DamianEdwards Awesome, thanks!
One of the big problems we're concerned about is that embedding CSS is tricky when you have URLs inside the CSS (e.g. to images). The URLs have to be re-pathed based on the current request's URL. The code would have to parse the CSS and re-path every URL inside it. This would affect the ability to cache the CSS because there can't be one canonical copy of it - there would have to be a copy for every URL where it's used (or at least for every "group" of similar URLs).
That's a very good point. We could simply say "not supported" in that case, but I agree it's less than ideal. There are many cases when this would still be useful, as plenty of CSS doesn't contain URLs to images or fonts, but it's not clear what the correct behavior here should be.
@Eilon @DamianEdwards It's easily solved by having only URI "absolute relative" (totally made this term up right now) paths in the CSS files:
Good:

    /static/images/kitchen.jpeg

    @font-face {
        font-family: 'MyWebFont';
        src: url('/static/fonts/webfont.woff2') format('woff2')
    }

Bad:

    ../images/kitchen.jpeg

    @font-face {
        font-family: 'MyWebFont';
        src: url('../fonts/webfont.woff2') format('woff2')
    }

In fact, this is how we currently use CSS on our website. We build the website in a very component-based fashion, like a SPA, with tons of partial views, and each partial view has its own independent SCSS file. So it's impossible to make tons of HTTP requests for CSS files, one for each and every partial view on a page (there can be a dozen per page). The only drawback is that on every request for the page, all the CSS needs to be included in the HTML and can't be cached locally in the client browser. I thought about a way to solve it: every page loads the global CSS file (all the SCSS files, combined) asynchronously and sets a cookie with the last time it was received / its ETag. When rendering the page, using <embedCss>, we'll check the cookie; if its ETag is the newest, we won't include inline CSS but rely on the external CSS file. We didn't implement it yet as we are busy with rewriting our entire client side from scratch, but that's the idea.
@gdoron using only absolute URLs is a "restriction," not a "solution" 😄 Using relative paths in CSS is extremely common because in "normal" CSS files it's the safest to use. It's quite normal for the author of the CSS file to require some of their own images, yet not know the final path where the CSS file and related images will be placed. For example, Bootstrap.css has tons of relative url() values. But anyway, we're definitely not saying this is a bad feature - as @DamianEdwards said it's quite interesting. The concern is that there's a lot of work required to call it "done". Not the least of which is doing actual browser perf measurements to see whether this feature even improves performance in several common scenarios. (We all know that just making various "site optimization" checklists happy is not necessarily actually good for the site.)
@Eilon @DamianEdwards I agree, using relative paths in CSS is extremely common and useful, BUT only for external resources (like the Bootstrap you mentioned). This is because the authors of a CSS framework can't enforce a specific website, application, and folder structure, so they must use relative paths to link the fonts and images. Inlining a large chunk of a CSS file is actually a bad pattern and is discouraged by PageSpeed:
Be careful when inlining data URIs in CSS files. While selective use of small data URIs in your CSS may make sense, inlining large data URIs can cause the size of your above-the-fold CSS to be larger, which will slow down page render time.
So it really only makes sense to inline homemade CSS, which doesn't benefit at all from using relative paths; sometimes it's even more resilient to use absolute paths, since you can move the CSS file around and it won't break anything (for example, serving from a CDN or moving the .min.css to a different folder).
Not the least of which is doing actual browser perf measurements to see whether this feature even improves performance in several common scenarios.
Make sure to test on mobile devices and slow networks; inline resources have a huge impact there because of browsers' parallel request limits and TCP (HTTP) slow start. See more on performance and mobile: Presentation and YouTube lecture, both by Ilya Grigorik, a web performance engineer at Google and co-chair of the W3C Web Performance Working Group.
@Eilon Curious to know whether you agree with the above or not.
@gdoron those are all good points, but still no plans to have this feature built-in. If you end up building this tag helper you can release a package on NuGet, and we'd be happy to link to it.
@Eilon Problem is, I think it shouldn't really be a new tag helper but a boolean property on LinkTagHelper. Should I really copy-paste the entire code (with great features) from LinkTagHelper? That's bad practice. Is there a DRY way of extending TagHelpers in MVC Core?
To include this in the framework I think we'd want to try resolving some of the issues mentioned earlier. As of right now I think some of these issues are a "pit of failure" where it's very easy to lead a user into doing things that won't work. As far as extending the current tag helpers, it's a matter of whether the tag helper in question was designed to support any particular extensibility, and in this case what's being discussed is a whole new feature, so I'm not surprised that it doesn't have that extensibility.
@Eilon So what do you suggest? It seems like the community is blocked by the framework, unless we are going to copy-paste the entire LinkTagHelper class (and its dependencies), which is really bad for maintenance. Thanks.
@gdoron I looked at LinkTagHelper and I'm not sure this is a good fit for the existing tag helper. The existing tag helper is all about doing file globbing (not very useful here) and resource/CDN fallback (clearly not applicable here). So this sounds to me like it fits best as a separate tag helper. If this tag helper requires some core pieces of functionality from some existing tag helper we would certainly be open to exposing some of that functionality somewhere - it's a matter of identifying what should be made reusable. Do you have thoughts on that?
@Eilon I think you're probably right regarding LinkTagHelper not fitting for embedding CSS, but that's (at least from my understanding) the only way we can still use the known HTML element <link> and just have a flag to embed the content. Regarding something common, I guess the only thing that might be common is the FileVersionProvider, which is in a separate class anyway. BTW, do you think it's a good idea to store the CSS file content in the MemoryCache and invalidate it when the content changes? Or is it better to read it from disk every time?
@gdoron you can have multiple tag helpers targeting the same element, though you're right that it wouldn't be the same as having just one extra switch. Regarding loading from disk vs. using a memory cache, the best way to answer that is to not implement caching, use a profiler to measure the behavior, and then optimize whatever the slowest part is, and then finally measure again to see that the behavior improved in the expected way. Then repeat.
Closing this as we have no plans to do this.
2016-09-29T17:56:36
2025-04-01T06:37:58.079884
{ "authors": [ "DamianEdwards", "Eilon", "gdoron", "mkArtakMSFT" ], "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/5340", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
214431086
JSonInputFormatter throws two exceptions when I receive invalid json as JObject Given the input { scheduleType: "simple", scheduleDetail: { RepeatInterval: "0.01:00:00.0000" }, arguments: { AgeHours: 7, TransStatus: a, PersonUserID: "CENTRALADMIN" }, description: 333 } I get this back { "arguments.TransStatus": [ "The input was not valid.", "The input was not valid." ] } In my model I receive arguments as JObject because it can be different with every request. I enabled logging in MVC and I see exception thrown twice. 2017-03-15 16:30:26.8313|DEBUG|Microsoft.AspNetCore.Mvc.Formatters.JsonInputFormatter|JSON input formatter threw an exception.|Newtonsoft.Json.JsonReaderException: Unexpected character encountered while parsing value: a. Path 'arguments.TransStatus', line 8, position 17. at Newtonsoft.Json.JsonTextReader.ParseValue() at Newtonsoft.Json.JsonWriter.WriteToken(JsonReader reader, Boolean writeChildren, Boolean writeDateConstructorAsDate, Boolean writeComments) at Newtonsoft.Json.JsonWriter.WriteToken(JsonReader reader, Boolean writeChildren) at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateJToken(JsonReader reader, JsonContract contract) at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target) at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) 2017-03-15 16:30:26.8453|DEBUG|Microsoft.AspNetCore.Mvc.Formatters.JsonInputFormatter|JSON input formatter threw an exception.|Newtonsoft.Json.JsonReaderException: Unexpected character encountered while parsing value: a. Path 'arguments.TransStatus', line 8, position 17. at Newtonsoft.Json.JsonTextReader.ParseValue() at Newtonsoft.Json.JsonReader.Skip() at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.HandleError(JsonReader reader, Boolean readPastError, Int32 initialDepth) at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue) at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent) When I change value of TransStatus to 2a it tries to parse it as number and throws only one exception and one "The input was not valid." message in response. Stack trace is: 2017-03-15 16:37:01.8970|DEBUG|Microsoft.AspNetCore.Mvc.Formatters.JsonInputFormatter|JSON input formatter threw an exception.|Newtonsoft.Json.JsonReaderException: Input string '2a' is not a valid number. Path 'arguments.TransStatus', line 8, position 19. 
at Newtonsoft.Json.JsonTextReader.ParseNumber(ReadType readType) at Newtonsoft.Json.JsonTextReader.ParseValue() at Newtonsoft.Json.JsonWriter.WriteToken(JsonReader reader, Boolean writeChildren, Boolean writeDateConstructorAsDate, Boolean writeComments) at Newtonsoft.Json.JsonWriter.WriteToken(JsonReader reader, Boolean writeChildren) at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateJToken(JsonReader reader, JsonContract contract) at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target) at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) I'm not sure if this is a bug or not. But I don't think returning same message twice is good. @kichalla Could you please try to reproduce this? @PiotrPerak I am unable to repro this issue in the 1.0.3, 1.1.2 and latest versions of MVC. Could you share a simple repro project? [HttpPost("{jobName}/trigger")] public async Task<IActionResult> Trigger(string jobName, [FromBody] JobArguments jobArgs) { if (!ModelState.IsValid || string.IsNullOrEmpty(jobName) || jobArgs == null) return BadRequest(ModelState); return Ok(); } public class JobArguments { public JObject Arguments { get; set; } }``` Thanks @PiotrPerak. I am able to repro it. We will get back to you regarding this. I see that this issue happens outside the context of ASP.NET, for example here the error would be printed 2 times. public class Program { public static void Main(string[] args) { EventHandler<Newtonsoft.Json.Serialization.ErrorEventArgs> errorHandler = (sender, eventArgs) => { Console.WriteLine(eventArgs.ErrorContext.Error.ToString()); Console.WriteLine("-----------------------------"); eventArgs.ErrorContext.Handled = true; }; using (var streamReader = new StreamReader(File.OpenRead("input.txt"))) { using (var jsonReader = new JsonTextReader(streamReader)) { var jsonSerializer = new JsonSerializer(); jsonSerializer.Error += errorHandler; var jobArgs = jsonSerializer.Deserialize<JobArguments>(jsonReader); } } } } public class JobArguments { public JObject Arguments { get; set; } } input above was: { "arguments": { personUserId: "CENTRALADMIN", ageHours: 2, transStatus: a, userType: 2 } } @rynowak @dougbu what do you suggest we should be doing here? @kichalla I suggest we treat this issue as External and ensure the correct issue is filed in https://github.com/JamesNK/Newtonsoft.Json. The JsonSerializerInternalReader.HandleError() should be better-protected to avoid repeating the Exception that method is handling. Thanks @dougbu ...I filed here: https://github.com/JamesNK/Newtonsoft.Json/issues/1262
gharchive/issue
2017-03-15T15:39:01
2025-04-01T06:37:58.088696
{ "authors": [ "PiotrPerak", "danroth27", "dougbu", "kichalla" ], "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/5970", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
279798454
Storing Empty list in TempData cookies results in a Jarray object when read I have code that may store an empty list object in the tempdata property of the controller. When using the default tempdata provider (cookies) after I redirect to the MVC action that consumes the data it will be read as a Jarray instead of the intended type List @Tarig0, can you please share a sample code to repro the issue? Just had a thought, could we add generics so we can get an empty list. TempData<List<Int>>("Test"); Key Exists Value Result No null Throw key not found Yes null null Yes [] Empty List Yes ["Hello","World"] Throw cast error Yes [1,2,3,4] List with values The problem is any change to the interface ITempDataDictionary would be a breaking change and we wouldn't be able to make it until the next major release. But yes, perhaps the idea that you'd pass in the type of the the value to the dictionary would be to make the type support more complex types among other things.
gharchive/issue
2017-12-06T15:38:52
2025-04-01T06:37:58.093703
{ "authors": [ "Tarig0", "mkArtakMSFT", "pranavkm" ], "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/7114", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
304443835
Format Date issue with querystring return get, change format From @shpsyte on February 2, 2018 16:6 Date format issue for GET METHOD ON CONTROLLER I'm developing an application and I get this "error", I have a data model and I have datetime properties set. In the first GET on the page, Date format is correct, eg: dd/mm/yyyy (as defined in the data model) But after the Second GET the date format is changed to mm/dd/yyyy (in back end) but in view is dd/mm/yyyy in post I receive mm/dd/yyy but in View I see dd/mm/yyyy It's vary crazy General I'm using .Net Core 2.0 + Bootstrap (No using any jquery, but i try too)... For some issues, chek and details i put on GIT for reproduce this issue REP ISSUE DATE Copied from original issue: dotnet/core#1252 From @shpsyte on February 3, 2018 13:37 any ? From @Petermarcu on March 11, 2018 16:24 @muratg @Eilon @ajcvickers Is this somewhere in MVC/EF? @shpsyte I suspect this is because of the following behavior in MVC: Model binding (the thing that generates the parameters for your action method) will use the Invariant Culture for parsing dates from the query string But rendering the output in <input type="text" asp-for="dueDateStart" class="form-control"> will use the current culture Because these two cultures are often different, this can create unexpected results. I think you can change the <input> tag to remove the type="text" and MVC will generate the correct date markup, which I think will work correctly. cc @dougbu @rynowak @shpsyte removing type="text" may allow the browser to use date widgets for your DateTime fields. Whether that happens depends on the browser. The widgets hide any formatting (while making the formatting more constrained due to browser restrictions). Either way, MVC will generate the correct HTML for those widgets without type="text". Also recommend using <form method="form"> to consistently use the current culture for all user input, not just DateTime fields. @dougbu what's <form method="form"> ? I'had the same problem. I've resolved with this code: I've created the model binder that parse correctly date time public class DateTimeModelBinder : IModelBinder { public Task BindModelAsync(ModelBindingContext bindingContext) { if (bindingContext == null) { throw new ArgumentNullException(nameof(bindingContext)); } if (bindingContext.ModelType != typeof(DateTime?)) { return Task.CompletedTask; } var modelName = GetModelName(bindingContext); var valueProviderResult = bindingContext.ValueProvider.GetValue(modelName); if (valueProviderResult == ValueProviderResult.None) { return Task.CompletedTask; } bindingContext.ModelState.SetModelValue(modelName, valueProviderResult); var dateToParse = valueProviderResult.FirstValue; if (string.IsNullOrEmpty(dateToParse)) { return Task.CompletedTask; } var dateTime = ParseDate(bindingContext, dateToParse); bindingContext.Result = ModelBindingResult.Success(dateTime); return Task.CompletedTask; } private DateTime? ParseDate(ModelBindingContext bindingContext, string dateToParse) { string[] dateFormats = new string[] { "dd/MM/yyyy", "dd/MM/yyyy HH\\:mm\\:ss" }; if (dateFormats == null) { return ParseDateTime(dateToParse); } return ParseDateTime(dateToParse, dateFormats ); } private string GetModelName(ModelBindingContext bindingContext) { // The "Name" property of the ModelBinder attribute can be used to specify the // route parameter name when the action parameter name is different from the route parameter name. // For instance, when the route is /api/{birthDate} and the action parameter name is "date". 
// We can add this attribute with a Name property [DateTimeModelBinder(Name ="birthDate")] // Now bindingContext.BinderModelName will be "birthDate" and bindingContext.ModelName will be "date" if (!string.IsNullOrEmpty(bindingContext.BinderModelName)) { return bindingContext.BinderModelName; } return bindingContext.ModelName; } public DateTime? ParseDateTime( string dateToParse, string[] formats = null, IFormatProvider provider = null, DateTimeStyles styles = DateTimeStyles.AssumeLocal) { var CUSTOM_DATE_FORMATS = new string[] { "dd/MM/yyyy", "dd/MM/yyyy HH\\:mm\\:ss" }; if (formats == null) { formats = CUSTOM_DATE_FORMATS; } DateTime validDate; foreach (var format in formats) { if (format.EndsWith("Z")) { if (DateTime.TryParseExact(dateToParse, format, provider, DateTimeStyles.AssumeUniversal, out validDate)) { return validDate; } } if (DateTime.TryParseExact(dateToParse, format, provider, styles, out validDate)) { return validDate; } } return null; } } Then i've created a modelbinder provider that use model binder. public class DateCultureItaliaModelBinderProvider : IModelBinderProvider { public IModelBinder GetBinder(ModelBinderProviderContext context) { if (context.Metadata.ModelType == typeof(DateTime?)) { return new DateTimeModelBinder(); } return null; } } Then i've configured my services: services.AddMvc(o => { // Date time format o.ModelBinderProviders.Insert(0, new DateCultureItaliaModelBinderProvider()); }) This way every date is well parsed in all my models in all actions. It works Thanks
gharchive/issue
2018-03-12T16:25:35
2025-04-01T06:37:58.103805
{ "authors": [ "Eilon", "alvisemion", "dougbu", "jigarce" ], "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/7471", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
213494838
Connection Timeout on packets loss Hello I'm using proxy on the azure VM connected to internal network via VPN. It works all right most of the time however periodically I experience TCP packet loss (with tcp retransmission) when sending POST request. This leads to TaskCancelledException (operation timed out) in the following line: HttpResponseMessage responseMessage = await _httpClient.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, context.RequestAborted); I set HttpClient Timeout period to 15 minutes (which is more than necessary) but it seems when I start getting packet send problems following with (RST, ACK), connection stuck forever and terminates on timeout. Sometimes I have the same Timeout error after 15-20 seconds no matter that I set ReceiveHeadersTimeout and SendTimeout to 900 seconds. I never see the same problem when I run proxy on local network. With VPN, about 2-5 POST requests of 100 are failed. I'm not sure if I will have the same problem on full .net 4.5. I experienced similar results with .net 4.5 proxy and direct website POST. I guess it is expected behavior to have timeout on DNS query (15 seconds) no matter what timeout is set for entire httpClient. Closing this issue.
gharchive/issue
2017-03-11T01:41:09
2025-04-01T06:37:58.107437
{ "authors": [ "SergeyRudakov" ], "repo": "aspnet/Proxy", "url": "https://github.com/aspnet/Proxy/issues/55", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2069062611
Cannot install openai~=1.3.3 and openai~=1.6.1 MacOS Sonoma 14.2.1 Apple M2 Pro Cant install the requirements, getting an error ERROR: Cannot install openai~=1.3.3 and openai~=1.6.1 because these package versions have conflicting dependencies. Full log: ✝  Documents/Github/gpt-researcher   master  pip install -r requirements.txt DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/commix-3.9.dev0-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/dnsgen-1.0.4-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/xnLinkFinder-4.1-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/ghauri-1.2.7-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/LinkFinder-1.0-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/urless-1.0-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/wafw00f-2.2.0-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/py_altdns-1.0.2-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/Interlace-1.9.8-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/cloud_enum-0.0.0-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. 
Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/DNSValidator-0.1-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/cmseek-1.1.3-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/waymore-1.28-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 Collecting asyncio==3.4.3 (from -r requirements.txt (line 2)) Using cached asyncio-3.4.3-py3-none-any.whl (101 kB) Collecting beautifulsoup4==4.12.2 (from -r requirements.txt (line 3)) Using cached beautifulsoup4-4.12.2-py3-none-any.whl (142 kB) Requirement already satisfied: colorama==0.4.6 in /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages (from -r requirements.txt (line 4)) (0.4.6) Collecting duckduckgo_search==4.1.1 (from -r requirements.txt (line 5)) Using cached duckduckgo_search-4.1.1-py3-none-any.whl.metadata (19 kB) Collecting md2pdf==1.0.1 (from -r requirements.txt (line 6)) Using cached md2pdf-1.0.1.tar.gz (6.4 kB) Preparing metadata (setup.py) ... done Collecting openai~=1.3.3 (from -r requirements.txt (line 7)) Using cached openai-1.3.9-py3-none-any.whl.metadata (17 kB) Collecting playwright==1.40.0 (from -r requirements.txt (line 8)) Using cached playwright-1.40.0-py3-none-macosx_11_0_arm64.whl.metadata (3.6 kB) ERROR: Cannot install openai~=1.3.3 and openai~=1.6.1 because these package versions have conflicting dependencies. The conflict is caused by: The user requested openai~=1.3.3 The user requested openai~=1.6.1 To fix this you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts Same Issue with Docker image ✘ ✝  Documents/Github/gpt-researcher   master  docker-compose up WARN[0000] The "TAVILY_API_KEY" variable is not set. Defaulting to a blank string. [+] Running 1/1 ! 
gpt-researcher Warning 2.0s [+] Building 53.7s (12/14) docker:desktop-linux => [gpt-researcher internal] load .dockerignore 0.0s => => transferring context: 81B 0.0s => [gpt-researcher internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 1.04kB 0.0s => [gpt-researcher internal] load metadata for docker.io/library/python:3.11.4-slim-bullseye 2.7s => [gpt-researcher auth] library/python:pull token for registry-1.docker.io 0.0s => [gpt-researcher install-browser 1/3] FROM docker.io/library/python:3.11.4-slim-bullseye@sha256:40319d0a897896e746edf877783ef39685d44e90e1e6de8 3.0s => => resolve docker.io/library/python:3.11.4-slim-bullseye@sha256:40319d0a897896e746edf877783ef39685d44e90e1e6de8d964d0382df0d4952 0.0s => => sha256:40589f858a36548f3a99431b7b0d983ba27e5350bd0a032910f956dc14bc77a3 1.06MB / 1.06MB 0.4s => => sha256:ebfe57ae3ec3cd387f0f58f3246abacb367501246ba38834c134db2e117a9922 12.03MB / 12.03MB 1.5s => => sha256:40319d0a897896e746edf877783ef39685d44e90e1e6de8d964d0382df0d4952 1.65kB / 1.65kB 0.0s => => sha256:295605814c6beef84ee8d2bc80e42348ba4c4d0bb01425c6d5262c3849d3ba48 1.37kB / 1.37kB 0.0s => => sha256:a52167001c4fe71875dd1c847ca252944583d73eec2fb93451a11b0024f5161e 6.94kB / 6.94kB 0.0s => => sha256:41f92d5a73b9bee296c7b4a3817b28098b22fb60112608b42bb03570ca296115 30.06MB / 30.06MB 1.5s => => sha256:0cffb447bcdc453d0e4f501f9b8a46080ccf47c1945e63b39801bdea23c38cdf 242B / 242B 0.8s => => sha256:f09dc7ca5e15b356f3b324046a9fac4bba7214fd246ff86d6be85d380930cffa 3.38MB / 3.38MB 1.4s => => extracting sha256:41f92d5a73b9bee296c7b4a3817b28098b22fb60112608b42bb03570ca296115 0.9s => => extracting sha256:40589f858a36548f3a99431b7b0d983ba27e5350bd0a032910f956dc14bc77a3 0.0s => => extracting sha256:ebfe57ae3ec3cd387f0f58f3246abacb367501246ba38834c134db2e117a9922 0.3s => => extracting sha256:0cffb447bcdc453d0e4f501f9b8a46080ccf47c1945e63b39801bdea23c38cdf 0.0s => => extracting sha256:f09dc7ca5e15b356f3b324046a9fac4bba7214fd246ff86d6be85d380930cffa 0.2s => [gpt-researcher internal] load build context 0.1s => => transferring context: 2.91MB 0.0s => [gpt-researcher install-browser 2/3] RUN apt-get update && apt-get satisfy -y "chromium, chromium-driver (>= 115.0)" && chromium 32.4s => [gpt-researcher install-browser 3/3] RUN apt-get install -y firefox-esr wget && wget https://github.com/mozilla/geckodriver/releases/down 10.4s => [gpt-researcher gpt-researcher-install 1/4] RUN mkdir /usr/src/app 0.1s => [gpt-researcher gpt-researcher-install 2/4] WORKDIR /usr/src/app 0.0s => [gpt-researcher gpt-researcher-install 3/4] COPY ./requirements.txt ./requirements.txt 0.0s => ERROR [gpt-researcher gpt-researcher-install 4/4] RUN pip install -r requirements.txt 5.0s ------ > [gpt-researcher gpt-researcher-install 4/4] RUN pip install -r requirements.txt: 1.027 Collecting asyncio==3.4.3 (from -r requirements.txt (line 2)) 1.365 Downloading asyncio-3.4.3-py3-none-any.whl (101 kB) 1.466 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 101.8/101.8 kB 973.0 kB/s eta 0:00:00 1.582 Collecting beautifulsoup4==4.12.2 (from -r requirements.txt (line 3)) 1.660 Downloading beautifulsoup4-4.12.2-py3-none-any.whl (142 kB) 1.694 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 143.0/143.0 kB 4.7 MB/s eta 0:00:00 1.803 Collecting colorama==0.4.6 (from -r requirements.txt (line 4)) 1.869 Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB) 1.999 Collecting duckduckgo_search==4.1.1 (from -r requirements.txt (line 5)) 2.069 Downloading duckduckgo_search-4.1.1-py3-none-any.whl (26 kB) 2.159 Collecting md2pdf==1.0.1 
(from -r requirements.txt (line 6)) 2.228 Downloading md2pdf-1.0.1.tar.gz (6.4 kB) 2.247 Preparing metadata (setup.py): started 3.211 Preparing metadata (setup.py): finished with status 'done' 3.313 Collecting openai~=1.3.3 (from -r requirements.txt (line 7)) 3.384 Downloading openai-1.3.9-py3-none-any.whl (221 kB) 3.414 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 221.4/221.4 kB 7.8 MB/s eta 0:00:00 3.570 Collecting playwright==1.40.0 (from -r requirements.txt (line 8)) 3.641 Downloading playwright-1.40.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (37.0 MB) 4.715 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 37.0/37.0 MB 31.5 MB/s eta 0:00:00 4.739 ERROR: Cannot install openai~=1.3.3 and openai~=1.6.1 because these package versions have conflicting dependencies. 4.739 4.739 The conflict is caused by: 4.739 The user requested openai~=1.3.3 4.739 The user requested openai~=1.6.1 4.739 4.739 To fix this you could try to: 4.739 1. loosen the range of package versions you've specified 4.739 2. remove package versions to allow pip attempt to solve the dependency conflict 4.739 4.739 ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts 4.984 4.984 [notice] A new release of pip is available: 23.1.2 -> 23.3.2 4.984 [notice] To update, run: pip install --upgrade pip ------ failed to solve: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 1 @maorkuriel Thanks it is resolved now. Issue was caused due to PR bot
gharchive/issue
2024-01-07T09:50:10
2025-04-01T06:37:58.119253
{ "authors": [ "assafelovic", "maorkuriel" ], "repo": "assafelovic/gpt-researcher", "url": "https://github.com/assafelovic/gpt-researcher/issues/326", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1801583358
openai.error.InvalidRequestError: The model: gpt-4 does not exist
I thought this would work with GPT-3.5, and I also thought that OpenAI made it possible for anyone to use the GPT-4 API, but after several hours tinkering with this I am not able to get it to work. I see the webpage, but it keeps throwing one error after another (I fixed them all), and I am still unable to make it work because of this error.
I have the same problem.
Hi! I'm one of the founders of Sweep, a GitHub app that solves issues (like small bugs) by writing pull requests. This looks like a good issue for Sweep https://github.com/sweepai/sweep to try. It might need more details from the maintainers though. We have onboarding instructions here, I'm also happy to help you onboard directly :)
I have the same error, "openai.error.InvalidRequestError: The model: gpt-4 does not exist"; it just stopped.
The model is configured here https://github.com/assafelovic/gpt-researcher/blob/master/config/config.py#L25 by accessing an environment variable:

    self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")

If you are having issues with access to gpt-4, you can set an environment variable with a model name that you do have access to, e.g. gpt-3.5-turbo. For example:

    export SMART_LLM_MODEL="gpt-3.5-turbo"

Hi! By making the following modifications, I was able to resolve the issue:
In gpt-researcher/config/config.py, substitute in line 25 "gpt-4" with "gpt-3.5-turbo-16k"
In gpt-researcher/config/config.py, substitute in line 27: 8000 with 4000
Upon executing this modified script, I encountered another error: "json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 58)". So, if you also get the same error, all you need to do is:
In gpt-researcher/agent/research_agent.py line 92, substitute "return json.loads(result)" with "return result". This avoids the double-parsing, as result itself is a JSON object.
Hope this helps!
@madiha1ahmed I did it your way and it worked. Thanks!
It works for me now. Thanks
gharchive/issue
2023-07-12T18:59:27
2025-04-01T06:37:58.128761
{ "authors": [ "OumYacout", "bcgit2023", "jortegac", "madiha1ahmed", "neoOpus", "thinkJD", "wwzeng1" ], "repo": "assafelovic/gpt-researcher", "url": "https://github.com/assafelovic/gpt-researcher/issues/34", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2150204341
Integrate Granary as Middleware for AT Protocol Interoperability
Summary
This proposal outlines a plan to integrate Granary as middleware within ActivityPods to enable interoperability with the AT Protocol. The goal is to bridge ActivityPods, which utilises ActivityPub and Solid standards, with the emerging AT Protocol, thereby enhancing cross-platform communication and user interaction across decentralised social networks.
https://github.com/snarfed/granary
Background
Granary is a library that converts data between different social web formats, including ActivityStreams, microformats, HTML, and JSON. Extending Granary to support AT Protocol (https://github.com/bluesky-social/atproto) and integrating it as middleware in ActivityPods can achieve seamless interoperability between ActivityPods-based applications and AT Protocol platforms.
Objectives
Enhance Interoperability: Enable ActivityPods applications to communicate with AT Protocol platforms, breaking down silos in the decentralised social web.
User Empowerment: Allow users to interact across different platforms without duplicating their social graph or content.
Developer Support: Provide developers with tools to build applications that can operate across different decentralised social networking protocols.
Proposed Approach
Phase 1: Planning and Analysis
Understand the protocols (ActivityPub, Solid, AT Protocol) and identify key integration points.
Map common entities and actions between ActivityStreams and AT Protocol lexicons.
Phase 2: Extension of Granary
Develop AT Protocol support in Granary, focusing on conversion logic for core entities like users and posts.
Handle extensions and custom fields specific to ActivityStreams and AT Protocol.
Phase 3: Middleware Integration
Design and implement middleware architecture within ActivityPods to utilise Granary for data conversion.
Adapt ActivityPods' API endpoints for AT Protocol communication, focusing on authentication and data flow.
Phase 4: Testing and Validation
Conduct comprehensive testing, including unit, functional, and interoperability tests.
Set up a test environment for real-world scenario testing.
Phase 5: Documentation and Deployment
Document the integration process, API usage, and contribution guidelines.
Develop a deployment strategy and monitor the integration's performance.
Request for Comments
I invite the ActivityPods community to discuss this proposal. Feedback on the approach, potential challenges, and additional benefits are welcome. Collaboration is key to achieving interoperability and enhancing the decentralised social web.
Conclusion
Integrating Granary as middleware to achieve interoperability with AT Protocol represents a significant step towards a more interconnected and user-centric decentralised social web. This proposal aims to start a collaborative effort towards this goal, leveraging the strengths of ActivityPods, Granary, and the broader open-source community.
Thank you for considering this proposal. I look forward to your feedback and the opportunity to work together on this exciting integration.
I understand this is a large undertaking. I don't expect this to be addressed or completed anytime soon, but the possibilities are incredible: ActivityPods would garner rapid adoption.
Granary is a library and REST API that fetches and converts between a wide variety of social data sources and formats:
- Facebook, Flickr, GitHub, Instagram, Mastodon, and Twitter native APIs
- Instagram and Facebook scraped HTML
- ActivityStreams 1.0 and 2.0 JSON, including ActivityPub
- HTML and JSON with microformats2
- Atom, RSS 2.0, JSON Feed
- Plain XML
- Bluesky/AT Protocol
- Nostr, with many NIPs
Free yourself from silo API chaff and expose the sweet social data foodstuff inside in standard formats and protocols!

Thanks for the idea and vote of support @outlaw-dame! Granary can indeed convert social data to Bluesky format, i.e. the app.bsky.* lexicons, which can then be served over AT Protocol. Actually supporting AT Protocol itself, i.e. acting as a full-fledged PDS, is a bigger lift: you need to store and serve MST nodes and a repo commit chain over websocket. You can do that with e.g. https://github.com/snarfed/arroba , which integrates well with granary, or you can write your data to a standalone PDS, either one you run (the official one is open source) or someone else's like the official https://bsky.social/ .

How would this be used to map the data/PDS to the Pod Store, which is the ActivityPods backend and where user data is housed?

You'd integrate arroba into the ActivityPods backend to create ATProto users and repos, and to create, update, and delete records in those repos when you do the corresponding things for native ActivityPods users. You could optionally also write a new implementation of arroba's Storage class that stores repo data in the ActivityPods data store. However, it looks like ActivityPods is JavaScript, and arroba is Python, so you'd have to use cross-language bindings, which can sometimes be awkward. The official PDS is JavaScript, so you might want to look at using that instead.
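To make the conversion step above concrete, here is a minimal Python sketch. It assumes granary.bluesky exposes from_as1(), following the naming convention of granary's other converter modules (as2.to_as1() is a documented granary function); the pod URL and note content are made up.

# Rough sketch of the conversion snarfed describes: take an ActivityStreams
# object (as an ActivityPods pod would emit) and turn it into an app.bsky.*
# record. Assumes granary.bluesky.from_as1() exists, matching the naming
# convention of granary's other converters.
from granary import as2, bluesky

as2_note = {
    "type": "Note",
    "content": "Hello from an ActivityPods pod!",
    "attributedTo": "https://example.pod/actors/alice",  # hypothetical pod URL
}

as1_note = as2.to_as1(as2_note)           # normalize AS2 to AS1, granary's common format
bsky_record = bluesky.from_as1(as1_note)  # AS1 to an app.bsky.feed.post record
print(bsky_record)
# The record would then be written into a repo, e.g. via arroba or a
# standalone PDS, to actually appear on the AT Protocol network.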
gharchive/issue
2024-02-23T01:04:04
2025-04-01T06:37:58.148432
{ "authors": [ "outlaw-dame", "snarfed" ], "repo": "assemblee-virtuelle/activitypods", "url": "https://github.com/assemblee-virtuelle/activitypods/issues/184", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2466847958
fix(filter): Take redactions into account for Array elides

Verified this against cargo in rewriting some tests. Fixes #352

Pull Request Test Coverage Report for Build 10395049619
Details:
- 7 of 7 (100.0%) changed or added relevant lines in 1 file are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage remained the same at 51.251%
Totals:
- Change from base Build 10303311497: 0.0%
- Covered Lines: 1393
- Relevant Lines: 2718
💛 - Coveralls
gharchive/pull-request
2024-08-14T21:09:55
2025-04-01T06:37:58.153733
{ "authors": [ "coveralls", "epage" ], "repo": "assert-rs/snapbox", "url": "https://github.com/assert-rs/snapbox/pull/358", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1118031191
Breaking change: Extracting within SoftAssertions now throws an assertion error if actual is null

Check List:
- Unit tests: YES
- Javadoc with a code example (on API only): NA
- PR meets the contributing guidelines: Should be, but feel free to notify me if not!

#2401 introduced extracting throwing an assertion error if actual is null, but it's not effective when using SoftAssertions, so I've fixed it. I think I should also look at the related #2411 and #2412; I will proceed if my approach is right.

Weirdly, the pull request list page doesn't show this. @scordio Can I get a review?

Sure @KENNYSOFT, I already had a first look. I would like to check our Byte Buddy configuration; it might be that we can solve this issue directly there. I'll play with BB during the weekend and get back to you.

Wow, sounds great! Looking forward to seeing the issue resolved in any form.

@KENNYSOFT I'm still working on this topic and I might need a few more days for it.

Thanks for the update, hope it goes well. By the way, I've now learned JUnit's @ParameterizedTest and it resolves my actual issue in a different way: I used SoftAssertions in a loop to check all violations within the given list, but now the loop is iterated by JUnit and I just use a simple assertion. 😉
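The pattern shift KENNYSOFT describes (a hand-written loop of soft assertions replaced by framework-driven parameterization, with one plain assertion per item) is not specific to JUnit. Here is a minimal sketch of the same idea in Python with pytest, using made-up violation data:

# Same idea as @ParameterizedTest, sketched with pytest: the framework
# iterates, so each case needs only one plain assertion instead of
# collecting soft-assertion failures inside a loop. Data is illustrative.
import pytest

violations = [
    {"field": "name", "message": "must not be blank"},
    {"field": "age", "message": "must be positive"},
]

@pytest.mark.parametrize("violation", violations)
def test_violation_has_message(violation):
    assert violation["message"]  # one simple assertion per item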
gharchive/pull-request
2022-01-29T05:00:35
2025-04-01T06:37:58.158496
{ "authors": [ "KENNYSOFT", "scordio" ], "repo": "assertj/assertj-core", "url": "https://github.com/assertj/assertj-core/pull/2476", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
830293041
Docs should recommend creating a new GCP project?

AFAICT a GCP project can only have one OAuth consent form. If another app is configured to use OAuth in a user's sole GCP project, it means Harvester would need to share that same form. This appears to be possible (i.e. you can add multiple Redirect URIs and Authorized domains), but I believe they would need to share the same user verification screen and related metadata about user policy, etc. Might be a good idea to instruct users to create a new GCP project specifically for Harvester.

Yes, this is a great point. The new version of the docs recommends creating a new project, so I'm going to close this: https://harvester.readthedocs.io/en/latest/google_credentials/#google-cloud-platform-project
gharchive/issue
2021-03-12T16:54:06
2025-04-01T06:37:58.181540
{ "authors": [ "andrewmilligan", "zstumgoren" ], "repo": "associatedpress/harvester", "url": "https://github.com/associatedpress/harvester/issues/8", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
2621117068
All types are inferred as string

Type inference error for request and response: I tried to build an OpenAPI using this library in combination with hono.js and Next.js. However, after defining the schema, I discovered that when calling the API from the Next.js client, type errors occur. All request parameters and response types have become string types, including date and number types.

Response type error:

export const postSchema = z
  .object({
    id: z.string(),
    title: z.string(),
    thumb: z.string(),
    summary: z.string().nullable().optional(),
    keywords: z.string().nullable().optional(),
    description: z.string().nullable().optional(),
    slug: z.string().nullable().optional(),
    body: z.string(),
    createdAt: z.coerce.date(),
    updatedAt: z.coerce.date(),
  })
  .strict();

app.openapi(
  createRoute({
    tags: ['Article operations'],
    method: 'get',
    path: '/:item',
    request: {
      params: z.object({
        item: z.string(),
      }),
    },
    responses: {
      200: {
        content: {
          'application/json': {
            schema: postSchema,
          },
        },
        description: 'Article query result',
      },
    },
  }),
  async (c) => {
    try {
      const { item } = c.req.param();
      const result = await queryPostItem(item);
      return c.json(result) as any;
    } catch (error) {
      return c.json({ error }, 500);
    }
  },
)

const result = await apiClient.api.posts[':item'].$get({ param: { item: params.item } });
if (!result.ok) return notFound();
const post = await result.json();
// ...
<div>
  <span>
    <AiOutlineCalendar />
  </span>
  <time className="tw-ellips">
    {!isNil(post.updatedAt)
      ? formatChineseTime(post.updatedAt)
      : formatChineseTime(post.createdAt)}
  </time>
</div>

Request type error:

export const postPaginateQuerySchema = z.object({
  page: z.coerce.number().optional().openapi({ type: 'number' }),
  limit: z.coerce.number().optional().openapi({ type: 'number' }),
  orderBy: z.enum(['asc', 'desc']).optional(),
});

export const postPaginateResultSchema = z.object({
  items: z.array(postSchema),
  meta: z.object({
    itemCount: z.coerce.number(),
    totalItems: z.coerce.number(),
    perPage: z.coerce.number(),
    totalPages: z.coerce.number(),
    currentPage: z.coerce.number(),
  }),
});

export type PostPaginate = z.infer<typeof postPaginateResultSchema>;

const res = await apiClient.api.posts.$get({
  query: { page, limit },
});
const { items, meta } = (await res.json()) as any as PostPaginate;
if (meta.totalPages && meta.totalPages > 0 && page > meta.totalPages) {
  return redirect('/');
}
// ...

@pincman this is not an issue with our library, given your integration. On the date: z.date() does validate dates, but in terms of the specification (OpenAPI) there is no date representation; there is only a string with some specific formatting, and it is still a string. The second screenshot with page and limit is probably correct; I think the documentation would say it is a number. However, in general, query parameters over HTTP requests are all strings. So I imagine what you are seeing is caused by the Next.js type system (or something else in your stack) and has nothing to do with our library. I am going to close this issue, but if you notice that something is wrongly generated, feel free to open up a new issue or reopen this one. Good luck!

I'm encountering the same issue. @pincman how did you go about resolving response values that are dates/numbers being converted to strings?
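The maintainer's point generalizes beyond zod: JSON itself has no date type, so whatever the schema validates, a date crosses the wire as a formatted string and the client has to re-parse it. A small Python round-trip sketch of this, using only the standard library:

# JSON has no date or datetime type: regardless of schema, a date travels
# as a string, so type information is gone after deserialization.
import json
from datetime import datetime, timezone

payload = {"createdAt": datetime(2024, 10, 29, tzinfo=timezone.utc).isoformat()}
wire = json.dumps(payload)            # '{"createdAt": "2024-10-29T00:00:00+00:00"}'

decoded = json.loads(wire)
print(type(decoded["createdAt"]))     # <class 'str'> -- the type info is gone
print(datetime.fromisoformat(decoded["createdAt"]))  # client-side re-parsing step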
gharchive/issue
2024-10-29T12:35:27
2025-04-01T06:37:58.190285
{ "authors": [ "AGalabov", "paulwongx", "pincman" ], "repo": "asteasolutions/zod-to-openapi", "url": "https://github.com/asteasolutions/zod-to-openapi/issues/267", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2031912531
Fix handling of trailing target comment

Summary
This PR fixes an issue where Ruff moved a trailing target comment past the statement end:

c = b[dddddd, aaaaaa] = (
    a[
        aaaaaaa,
        bbbbbbbbbbbbbbbbbbb
    ]  # comment 2
) = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Before:

c = b[dddddd, aaaaaa] = a[
    aaaaaaa, bbbbbbbbbbbbbbbbbbb
] = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  # comment 2

Now:

c = b[dddddd, aaaaaa] = (
    a[aaaaaaa, bbbbbbbbbbbbbbbbbbb]  # comment 2
) = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

which matches Black's formatting.

Test Plan
Added test.

Current dependencies on/for this PR:
- main
- PR #9051 👈
This stack of pull requests is managed by Graphite.
gharchive/pull-request
2023-12-08T04:34:49
2025-04-01T06:37:58.233472
{ "authors": [ "MichaReiser" ], "repo": "astral-sh/ruff", "url": "https://github.com/astral-sh/ruff/pull/9051", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2138293386
problems building wheel for dlib

(tmp) starenka /tmp % uv pip install 'dlib==19.24.2'
Resolved 1 package in 2ms
error: Failed to download distributions
  Caused by: Failed to fetch wheel: dlib==19.24.2
  Caused by: Failed to build: dlib==19.24.2
  Caused by: Build backend failed to build wheel through `build_wheel()`:
--- stdout:
running bdist_wheel
running build
running build_ext
Building extension for Python 3.12.2 (main, Feb 7 2024, 20:47:03) [GCC 13.2.0]
Invoking CMake setup: 'cmake /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/tools/python -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/build/lib.linux-x86_64-cpython-312 -DPYTHON_EXECUTABLE=/tmp/.tmpPJiyRY/.venv/bin/python -DCMAKE_BUILD_TYPE=Release'
-- pybind11 v2.10.0
-- Found PythonInterp: /tmp/.tmpPJiyRY/.venv/bin/python (found suitable version "3.12.2", minimum required is "3.6")
-- Found PythonLibs: python3.12
-- Using CMake version: 3.28.3
-- Compiling dlib version: 19.24.2
-- SSE4 instructions can be executed by the host processor.
-- AVX instructions can be executed by the host processor.
-- Enabling AVX instructions
-- Found system copy of libpng: /usr/lib/x86_64-linux-gnu/libpng.so;/usr/lib/x86_64-linux-gnu/libz.so
-- Found system copy of libjpeg: /usr/lib/x86_64-linux-gnu/libjpeg.so
-- Configuring done (0.8s)
-- Generating done (0.0s)
-- Build files have been written to: /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/build/temp.linux-x86_64-cpython-312
Invoking CMake build: 'cmake --build . --config Release -- -j8'
[  1%] Linking CXX static library libdlib.a
[ 70%] Built target dlib
[ 71%] Building CXX object CMakeFiles/_dlib_pybind11.dir/src/dlib.cpp.o
[ 72%] Building CXX object CMakeFiles/_dlib_pybind11.dir/src/matrix.cpp.o
[ 73%] Building CXX object CMakeFiles/_dlib_pybind11.dir/src/vector.cpp.o
[ 75%] Building CXX object CMakeFiles/_dlib_pybind11.dir/src/svm_c_trainer.cpp.o
[ 75%] Building CXX object CMakeFiles/_dlib_pybind11.dir/src/svm_rank_trainer.cpp.o
[ 76%] Building CXX object CMakeFiles/_dlib_pybind11.dir/src/decision_functions.cpp.o
[ 77%] Building CXX object CMakeFiles/_dlib_pybind11.dir/src/other.cpp.o
[ 78%] Building CXX object CMakeFiles/_dlib_pybind11.dir/src/basic.cpp.o
--- stderr:
<string>:208: SyntaxWarning: invalid escape sequence '\('
<string>:209: SyntaxWarning: invalid escape sequence '\('
<string>:210: SyntaxWarning: invalid escape sequence '\('
CMake Deprecation Warning at /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/external/pybind11/CMakeLists.txt:8 (cmake_minimum_required):
  Compatibility with CMake < 3.5 will be removed from a future version of CMake. Update the VERSION argument <min> value or use a ...<max> suffix to tell CMake that the project does not need compatibility with older versions.
CMake Warning (dev) at /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/external/pybind11/tools/FindPythonLibsNew.cmake:98 (find_package):
  Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules are removed. Run "cmake --help-policy CMP0148" for policy details. Use the cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
  /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/external/pybind11/tools/pybind11Tools.cmake:50 (find_package)
  /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/external/pybind11/tools/pybind11Common.cmake:180 (include)
  /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/external/pybind11/CMakeLists.txt:200 (include)
This warning is for project developers. Use -Wno-dev to suppress it.

In file included from /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/external/pybind11/include/pybind11/attr.h:13,
                 from /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/external/pybind11/include/pybind11/detail/class.h:12,
                 from /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/external/pybind11/include/pybind11/pybind11.h:13,
                 from /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/../dlib/python/pybind_utils.h:6,
                 from /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/../dlib/python.h:6,
                 from /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/tools/python/src/opaque_types.h:6,
                 from /home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/tools/python/src/decision_functions.cpp:4:
/home/starenka/.cache/uv/built-wheels-v0/pypi/dlib/19.24.2/E852f7Unij9FdSd1uh9s-/dlib-19.24.2.tar.gz/dlib/external/pybind11/include/pybind11/detail/common.h:212:10: fatal error: Python.h: No such file or directory
  212 | #include <Python.h>
      |          ^~~~~~~~~~
compilation terminated.

[The same "fatal error: Python.h: No such file or directory" block repeats, via the same include chain, for svm_rank_trainer.cpp, svm_c_trainer.cpp, vector.cpp, basic.cpp, other.cpp, matrix.cpp, and dlib.cpp.]

gmake[2]: *** [CMakeFiles/_dlib_pybind11.dir/build.make:132: CMakeFiles/_dlib_pybind11.dir/src/svm_rank_trainer.cpp.o] Error 1
gmake[2]: *** Waiting for unfinished jobs....
gmake[2]: *** [CMakeFiles/_dlib_pybind11.dir/build.make:104: CMakeFiles/_dlib_pybind11.dir/src/vector.cpp.o] Error 1
gmake[2]: *** [CMakeFiles/_dlib_pybind11.dir/build.make:118: CMakeFiles/_dlib_pybind11.dir/src/svm_c_trainer.cpp.o] Error 1
gmake[2]: *** [CMakeFiles/_dlib_pybind11.dir/build.make:146: CMakeFiles/_dlib_pybind11.dir/src/decision_functions.cpp.o] Error 1
gmake[2]: *** [CMakeFiles/_dlib_pybind11.dir/build.make:174: CMakeFiles/_dlib_pybind11.dir/src/basic.cpp.o] Error 1
gmake[2]: *** [CMakeFiles/_dlib_pybind11.dir/build.make:160: CMakeFiles/_dlib_pybind11.dir/src/other.cpp.o] Error 1
gmake[2]: *** [CMakeFiles/_dlib_pybind11.dir/build.make:90: CMakeFiles/_dlib_pybind11.dir/src/matrix.cpp.o] Error 1
gmake[2]: *** [CMakeFiles/_dlib_pybind11.dir/build.make:76: CMakeFiles/_dlib_pybind11.dir/src/dlib.cpp.o] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:118: CMakeFiles/_dlib_pybind11.dir/all] Error 2
gmake: *** [Makefile:91: all] Error 2
Traceback (most recent call last):
  File "<string>", line 6, in <module>
  [setuptools/distutils build chain: build_meta.build_wheel, run_setup, setup, run_commands, bdist_wheel, build, build_extension]
  File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'Release', '--', '-j8']' returned non-zero exit status 2.

Did you try the same build with a plain pip install 'dlib==19.24.2'? By the looks of it, you don't have the Python development files installed.

My bad, Debian seems to have sneakily uninstalled my python3.12-dev package, sorry. I hadn't tried plain pip before submitting, as I already had an env with dlib installed, so I knew it worked with pip back then.
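The root cause here, missing CPython development headers, can be checked before kicking off a long native build. A small diagnostic sketch, using only the standard library and nothing dlib- or uv-specific:

# Quick check for the CPython development headers that native builds like
# dlib need; sysconfig reports where Python.h should live for this interpreter.
import os
import sysconfig

include_dir = sysconfig.get_paths()["include"]
header = os.path.join(include_dir, "Python.h")

if os.path.exists(header):
    print(f"Python.h found at {header}; native extensions can compile.")
else:
    print(f"Python.h missing from {include_dir}.")
    print("On Debian/Ubuntu, install the matching pythonX.Y-dev package.")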
gharchive/issue
2024-02-16T10:22:50
2025-04-01T06:37:58.240226
{ "authors": [ "akx", "starenka" ], "repo": "astral-sh/uv", "url": "https://github.com/astral-sh/uv/issues/1475", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2239000643
Add a feature to install Python

I used to prefer miniconda for virtual environment management, mainly because it can create environments with other Python versions without me having to download a new Python installer. On Windows, updating Python means downloading a fresh installer, which is cumbersome, and updating Python on Linux also has side effects. I noticed uv doesn't have this feature yet. Since uv is implemented in Rust and no longer depends on Python, why can't it download a Python itself when it detects none on the system? Could uv install a Python even without an existing Python environment, and then create virtual environments with it?

You can do uv venv --python <path to executable> to create an environment with a specific Python interpreter.
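The requested flow (use a system interpreter if one exists, otherwise fetch a standalone build) can be sketched in outline. This is purely illustrative of the idea and not real uv code; uv has since grown a `uv python install` command for this, but none of the internals below reflect its implementation.

# Sketch of the requested flow: prefer a system interpreter, otherwise fall
# back to fetching a standalone build. Everything here is illustrative.
import shutil

def find_or_suggest_python(version: str = "3.12") -> str:
    for candidate in (f"python{version}", "python3", "python"):
        path = shutil.which(candidate)
        if path:
            return path
    # No system interpreter: a tool like uv could download a standalone
    # CPython build here (e.g. python-build-standalone) and return its path.
    raise FileNotFoundError(
        f"no Python {version} found on PATH; would download a standalone build"
    )

print(find_or_suggest_python())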
gharchive/issue
2024-04-12T03:50:07
2025-04-01T06:37:58.244313
{ "authors": [ "522247020", "zanieb" ], "repo": "astral-sh/uv", "url": "https://github.com/astral-sh/uv/issues/3006", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2450164897
PYTHONPATH support for uv Python Package

(Sort-of related to, and would fix: https://github.com/astral-sh/uv/issues/4450)

Python supports a myriad of methods to declare dependency locations, not all of which can be correctly identified by the uv Python package. For example, with PYTHONPATH:

❯ PYTHONPATH=$VENV_PATH/lib/python3.10/site-packages python
Python 3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 16.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import uv
>>> uv.find_uv_bin()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "$VENV_PATH/lib/python3.10/site-packages/uv/__init__.py", line 30, in find_uv_bin
    raise FileNotFoundError(path)
FileNotFoundError: /Users/(thats_me)/.local/bin/uv

which throws, even though the uv package is located in the environment.

Expected Behavior
uv.find_uv_bin() correctly identifies that its binary should be located in $VENV_PATH/bin and returns $VENV_PATH/bin/uv.

Reasonable Other Behavior
uv.find_uv_bin() falls back to fetching the binary location from $PATH, giving the responsibility of setting it correctly to the process that populated $PYTHONPATH.

Forgive my ignorance here, but isn't PYTHONPATH just a module search path? I don't quite see how we can "correctly" infer the virtual environment bin path from just the PYTHONPATH.
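The "Reasonable Other Behavior" above is straightforward to prototype. Here is a hedged sketch of such a PATH fallback; it spells out the reporter's proposal and is not uv's actual implementation (find_uv_bin's real internals are not shown in this thread):

# Illustrative fallback for locating the uv binary: check the interpreter's
# own scripts directory first, then fall back to whatever is on PATH.
import os
import shutil
import sysconfig

def find_uv_bin_with_fallback() -> str:
    scripts_dir = sysconfig.get_paths()["scripts"]
    candidate = os.path.join(scripts_dir, "uv")
    if os.path.isfile(candidate):
        return candidate
    on_path = shutil.which("uv")  # responsibility shifts to whoever set up the env
    if on_path:
        return on_path
    raise FileNotFoundError("uv binary not found in scripts dir or on PATH")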
gharchive/issue
2024-08-06T07:33:43
2025-04-01T06:37:58.248154
{ "authors": [ "ulucs", "zanieb" ], "repo": "astral-sh/uv", "url": "https://github.com/astral-sh/uv/issues/5808", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2479838038
Incorrect precedence of UV_INDEX_URL vs --index-url in reqs file

If an index url has been specified directly in the requirements file using -i or --index-url, it should take precedence over the environment variable. Currently uv uses UV_INDEX_URL if it's set and ignores the one in the requirements file.

Reproducer:

$ uv --version
uv 0.3.1 (be17d132a 2024-08-21)
$ python3 -VV
Python 3.12.4 (v3.12.4:8e8a4baf65, Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]
$ python3 -m venv .venv --without-pip
$ vi reqs.txt
$ cat reqs.txt
--index-url https://test.pypi.org/simple
uv-reqs-example==0.1
$ UV_INDEX_URL=https://pypi.org/simple uv pip install -r reqs.txt
  × No solution found when resolving dependencies:
  ╰─▶ Because uv-reqs-example was not found in the package registry and you require uv-reqs-example==0.1, we can conclude that your requirements are unsatisfiable.
$ UV_INDEX_URL=https://example.org/ uv pip install -r reqs.txt
⠴ Resolving dependencies...
error: HTTP status server error (500 Internal Server Error) for url (https://example.org/uv-reqs-example/)

This is in contrast to pip, where the index url from the environment has lower priority than the one given in the requirements file:

$ uv pip install pip
Resolved 1 package in 308ms
Prepared 1 package in 573ms
Installed 1 package in 11ms
 + pip==24.2
$ PIP_INDEX_URL=https://example.org/ .venv/bin/pip install -r reqs.txt
Looking in indexes: https://test.pypi.org/simple
Collecting uv-reqs-example==0.1 (from -r reqs.txt (line 2))
  Downloading https://test-files.pythonhosted.org/packages/78/20/c8431c9645d1390acce2f2da698539f473818e77a168d135c09279dd7bf5/uv_reqs_example-0.1-py3-none-any.whl.metadata (58 bytes)
Downloading https://test-files.pythonhosted.org/packages/78/20/c8431c9645d1390acce2f2da698539f473818e77a168d135c09279dd7bf5/uv_reqs_example-0.1-py3-none-any.whl (986 bytes)
Installing collected packages: uv-reqs-example
Successfully installed uv-reqs-example-0.1

This makes uv + UV_INDEX_URL somewhat dangerous to use as a drop-in replacement for pip + PIP_INDEX_URL. A failure to install is not the worst scenario; the worst is that you succeed in installing potentially totally different packages from a totally different index (note: checksums in the reqs file could protect you from that).

I'm intending to convince you that retaining compatibility with pip is a better choice than documenting this as an incompatibility.

Consider the case where a custom default index is set. This is pretty common in corporate environments, where PyPI packages may need to be cached, whitelisted, or shadowed with patched versions, e.g. by using an internal devpi-server. The custom index is set as the default index (for all users) by an /etc/pip.conf file, owned by root and placed by a sysadmin or config management. Now, uv doesn't look at /etc/pip.conf, nor does it allow a global default configuration; it only considers the user's ~/.config/uv/uv.toml. The next-best option for the sysadmin to set a sensible default would probably be UV_INDEX_URL (perhaps in /etc/environment, or by wrapping the uv executable). However, if UV_INDEX_URL takes precedence over the index specified by requirements files, that's an unfortunate situation: users would have to unset the env var (which they might not even know about) to allow using the index which the requirements file was attempting to provide. As I mentioned earlier, the simple failure mode is that packages are not found, but the more confusing and hard-to-debug failure mode is an installation that succeeds but uses the wrong packages (because it was using the wrong index).
Switching from pip to uv pip should not be so error-prone. The failure mode I describe might sound like a weird edge case, but it could actually be quite common, especially with devpi-server, where "team-specific" indices can inherit and mirror a default index to host a mix of their own/patched local versions layered over the default versions.

... and the one in a requirements.txt in a file was picked up (which could be from another project or similar).

I'm not sure what you mean here? Unless I'm mistaken, it is part of the requirements file format that the file can control the index-url. This is not dangerous, because requirements files and their custom indices cannot be used during dependency resolution from an index; the requirements file would be installed directly with -r. At least, it's not any more dangerous than a direct url requirement, as specified in PEP 508.
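The semantics the reporter asks for reduce to a precedence order. A hypothetical Python sketch of pip-compatible resolution follows; function and variable names are made up, and pip's real ordering between CLI and requirements-file options has more nuance than shown:

# Pip-compatible index precedence, highest first: CLI flag, requirements-file
# directive, environment variable, built-in default. Names are illustrative.
import os

DEFAULT_INDEX = "https://pypi.org/simple"

def resolve_index_url(cli_index: str | None, reqs_file_index: str | None) -> str:
    if cli_index:                      # explicit --index-url on the command line
        return cli_index
    if reqs_file_index:                # --index-url / -i inside the reqs file
        return reqs_file_index
    env_index = os.environ.get("UV_INDEX_URL")
    if env_index:                      # sysadmin-style default, e.g. /etc/environment
        return env_index
    return DEFAULT_INDEX

# e.g. resolve_index_url(None, "https://test.pypi.org/simple") ignores UV_INDEX_URL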
gharchive/issue
2024-08-22T04:55:34
2025-04-01T06:37:58.256043
{ "authors": [ "wimglenn" ], "repo": "astral-sh/uv", "url": "https://github.com/astral-sh/uv/issues/6407", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }