| id (string, lengths 4 to 10) | text (string, lengths 4 to 2.14M) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (timestamp, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
---|---|---|---|---|---|
630258918
|
Fix a typo in the email confirmation sash
addresses https://artsyproduct.atlassian.net/browse/AUCT-1072
Just fixing a typo in the email confirmation sash and adding a CHANGELOG entry.
The test failure was caused by the copy update; the test needs to be updated as well.
|
gharchive/pull-request
| 2020-06-03T19:20:08 |
2025-04-01T04:56:03.452435
|
{
"authors": [
"yuki24"
],
"repo": "artsy/eigen",
"url": "https://github.com/artsy/eigen/pull/3398",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1006214159
|
format(tslint): adding some rules
The type of this PR is: Enhancement
This PR resolves []
Description
I added three rules:
horizontal={true} becomes horizontal as we use it in most places
<Flex bla="wow"></Flex> becomes <Flex bla="wow" /> for empty components
<Text variant={"lg"}> becomes <Text variant="lg">, removing the unnecessary curlies.
PR Checklist (tick all before merging)
[x] I have included screenshots or videos to illustrate my changes, or I have not changed anything that impacts the UI.
[x] I have tested my changes on iOS and Android.
[x] I have added tests for my changes, or my changes don't require testing, or I have included a link to a separate Jira ticket covering the tests.
[x] I have added a feature flag, or my changes don't require a feature flag. (How do I add one?)
[x] I have documented any follow-up work that this PR will require, or it does not require any.
[x] I have added an app state migration, or my changes do not require one. (What are migrations?)
[x] I have added a changelog entry below or my changes do not require one.
Changelog updates
Cross-platform user-facing changes
iOS user-facing changes
Android user-facing changes
Dev changes
adding some tslint rules - pavlos
This PR contains the following changes:
Dev changes (adding some tslint rules - pavlos)
Generated by :no_entry_sign: dangerJS against f7863693f5841e013b3f183e23f51b3118b5e729
|
gharchive/pull-request
| 2021-09-24T08:27:00 |
2025-04-01T04:56:03.460009
|
{
"authors": [
"ArtsyOpenSource",
"pvinis"
],
"repo": "artsy/eigen",
"url": "https://github.com/artsy/eigen/pull/5513",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1633549871
|
feat: add relay dev tools on flipper
This PR resolves []
Description
Adds relay-dev-tools in flipper so we can debug what's going on with the relay store 🤩
PR Checklist
[x] I have tested my changes on iOS and Android.
[x] I hid my changes behind a feature flag, or they don't need one.
[x] I have included screenshots or videos, or I have not changed the UI.
[x] I have added tests, or my changes don't require any.
[x] I added an app state migration, or my changes do not require one.
[x] I have documented any follow-up work that this PR will require, or it does not require any.
[x] I have added a changelog entry below, or my changes do not require one.
To the reviewers 👀
[ ] I would like at least one of the reviewers to run this PR on the simulator or device.
Changelog updates
Cross-platform user-facing changes
iOS user-facing changes
Android user-facing changes
Dev changes
add relay dev tools on flipper - gkartalis
Need help with something? Have a look at our docs, or get in touch with us.
If someone can pull down the branch and check whether the plugin is getting automatically included in Flipper, that would be great.
If not, I will also update the docs to reflect the installation of the plugin.
@gkartalis Forgot to comment - this works for me 🙌 thanks for the addition
|
gharchive/pull-request
| 2023-03-21T09:51:00 |
2025-04-01T04:56:03.468437
|
{
"authors": [
"MounirDhahri",
"gkartalis"
],
"repo": "artsy/eigen",
"url": "https://github.com/artsy/eigen/pull/8392",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1890148775
|
feat: add create alert at end of artworks list
This PR resolves ONYX-295
Description
[This is already reviewable]
[Erin still needs to update the copy before merging this PR]
This PR adds support for creating an alert at the end of the list of artworks on the Artist page.
Tracking event
Create alert Message
PR Checklist
[x] I have tested my changes on iOS and Android.
[x] I hid my changes behind a feature flag, or they don't need one.
[x] I have included screenshots or videos, or I have not changed the UI.
[x] I have added tests, or my changes don't require any.
[x] I added an app state migration, or my changes do not require one.
[x] I have documented any follow-up work that this PR will require, or it does not require any.
[x] I have added a changelog entry below, or my changes do not require one.
To the reviewers 👀
[ ] I would like at least one of the reviewers to run this PR on the simulator or device.
Changelog updates
Cross-platform user-facing changes
add create alert at the end of artworks list grid - mounir
iOS user-facing changes
Android user-facing changes
Dev changes
Need help with something? Have a look at our docs, or get in touch with us.
Looks like the spacing is way too small. Could you, please, adjust it?
Updated the copy to what Erin asked; thanks all for reviewing. Will add #squashongreen
|
gharchive/pull-request
| 2023-09-11T10:05:01 |
2025-04-01T04:56:03.479069
|
{
"authors": [
"MounirDhahri",
"dariakoko"
],
"repo": "artsy/eigen",
"url": "https://github.com/artsy/eigen/pull/9239",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
335578113
|
Refresh status of registration when you return to ConfirmBid screen + some tiny fixes
The outcome of a pairing session with @orta!
This PR addresses: https://artsyproduct.atlassian.net/browse/PURCHASE-196
Previously, when you returned to the ConfirmBid screen after placing a bid, such as after finding out you'd been outbid (so after visiting SelectMaxBidScreen --> ConfirmBidScreen --> BidResult --> ConfirmBidScreen), you'd see the same options for registering/agreeing to the conditions of sale even though at that point you must be registered.
This PR updates the flow to explicitly refetch your bidder information right before navigating back to the ConfirmBid screen. I also added a similar refetch of the SaleArtwork information when revisiting the SelectMaxBid screen, as we found that the bid picker also became out of date.
(I just realized that we could do a lot of this by passing props around... i.e. after you bid, you must be registered so we could pass a prop to not show the register info. I think that solution is worse/a little confusing as it means we're not getting the most up-to-date truth, but it's an option.)
Fails
:no_entry_sign:
This PR includes changes to the Emission Pod's native code but does not have a package.json
change for the update to the "native-code-version". If this is fine, add #native_no_changes to your PR message.
Warnings
:warning:
This PR comes from a fork, and won't get JS generated for QA testing this PR inside the Emission Example app.
Generated by :no_entry_sign: dangerJS
Looks good - merge on green
|
gharchive/pull-request
| 2018-06-25T21:40:24 |
2025-04-01T04:56:03.485836
|
{
"authors": [
"DangerCI",
"orta",
"sweir27"
],
"repo": "artsy/emission",
"url": "https://github.com/artsy/emission/pull/1119",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
167126467
|
Convert to Flow
This does two major things:
[x] Passes ESLint
[x] Converts to using Flow
and one semi-major thing.
[x] Converts the app to use class properties.
So this would take these realllllllly weird bits of a constructor: this.func_name = this.func_name.bind(this) and turn them into normal functions using class instance fields. This comes in with React Native in the form of the class-properties transform. It means that you replace it with
func_name = () => {
...
}
whereas it used to be func_name(){ ... }. The => will rebind this for you. Seems to work; at least a bunch of functions (like tabbing) seem to have not broken. Blog post on the topic.
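A minimal sketch of the transform described above (the class and method names are illustrative, not taken from the actual codebase):

```javascript
// A class instance field holding an arrow function: `this` is captured
// lexically, so no constructor-time bind call is needed.
class TabBar {
  constructor(label) {
    this.label = label;
    // Old style would have required:
    //   this.onPress = this.onPress.bind(this);
  }

  // New style, enabled by the class-properties transform:
  onPress = () => {
    return `pressed ${this.label}`;
  };
}

// Even when detached from the instance (e.g. passed as a callback),
// the handler keeps the right `this`.
const bar = new TabBar('Artists');
const detached = bar.onPress;
console.log(detached()); // → pressed Artists
```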
TODO:
[x] Decide on what to do with ensuring RN / R are in the flow/typescript types database.
The decision here is that we would have to put a 5k file in that needs updating whenever react/react-native update.
As ever, I've been documenting all this stuff in the vscode.md file. You can read the version while this PR is open here: https://github.com/artsy/emission/blob/flow/docs/vscode.md
I’m very excited to have this in! 👍 💯
Feedback addressed, gonna see what happens WRT the typings - should make it automatically work for everyone rather than requiring some extra work
|
gharchive/pull-request
| 2016-07-22T20:33:34 |
2025-04-01T04:56:03.491248
|
{
"authors": [
"alloy",
"orta"
],
"repo": "artsy/emission",
"url": "https://github.com/artsy/emission/pull/220",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
150209371
|
Add support for displaying "unique" in basic artwork metadata
It's been flagged by a couple of liaisons and galleries that we don't surface whether a work (from mediums that usually include multiples, such as design, photography, fashion, etc.) is flagged as unique.
[ ] we should display "Unique" if applied to basic artwork metadata on the (1) artwork view and (2) in offer emails just below the dimension information
where it can be applied in volt (only on certain mediums per: https://github.com/artsy/volt/pull/1730)
where it displays in force:
@orta know this isn't a bug per se, but it would be great to take care of it after the bug fix list and before any additional features, given that it's small and has been flagged as an inconsistency by partners who find this field important
|
gharchive/issue
| 2016-04-21T22:30:30 |
2025-04-01T04:56:03.495095
|
{
"authors": [
"garrengotthardt"
],
"repo": "artsy/energy",
"url": "https://github.com/artsy/energy/issues/193",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1564387906
|
feat: app state migrations
This PR resolves MOPLAT-478
Description
Moving migrations from eigen to energy. I tried ezpz persist (doesn't include migrations) and redux-persist (doesn't work well with ezpz's computed values), so I took @ds300's work from eigen, which works very well.
PR Checklist
[ ] I tested my changes on iOS and Android.
[ ] I added screenshots or videos to illustrate my changes.
[ ] I added Tests and Stories for my changes.
To the reviewers 👀
[ ] I would like at least one of the reviewers to run this PR on the simulator or device.
Need help with something? Have a look at our docs, or get in touch with us.
Converting to draft because I see some funky things on login.
|
gharchive/pull-request
| 2023-01-31T14:11:26 |
2025-04-01T04:56:03.499235
|
{
"authors": [
"pvinis"
],
"repo": "artsy/energy",
"url": "https://github.com/artsy/energy/pull/482",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
410073255
|
Bump replicas to 10
PR-ing this temporary DDoS measure for visibility.
Closing in favor of https://github.com/artsy/force/pull/3545
|
gharchive/pull-request
| 2019-02-14T00:50:00 |
2025-04-01T04:56:03.506124
|
{
"authors": [
"joeyAghion"
],
"repo": "artsy/force",
"url": "https://github.com/artsy/force/pull/3535",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1185482487
|
more progress on testing, leaving a few todos
I'm pushing this up before I leave! There are still a few things to clean up in this module, they are all labelled with TODOs.
I'll unassign myself. Others can take over, if this repo is still useful.
|
gharchive/pull-request
| 2022-03-29T22:13:22 |
2025-04-01T04:56:03.508310
|
{
"authors": [
"pepopowitz",
"pvinis"
],
"repo": "artsy/relay-workshop",
"url": "https://github.com/artsy/relay-workshop/pull/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
161114824
|
JavaComplete2 : TextChangedI feature needs vim version >= 7.4.143
Actual behavior (Required!)
When I write a new Java file, it shows me:
JavaComplete2 : TextChangedI feature needs vim version >= 7.4.143
Press ENTER or type command to continue
Expected behavior (Required!)
write java code with vim without errors or warnings.
The steps to reproduce actual behavior (Required!)
use vundle to install this plugin
write "autocmd FileType java,jsp setlocal omnifunc=javacomplete#Complete" in .vimrc
use vim to open new file "test.java"
Environment (Required!)
OS: ubuntu14.04
Vim version: VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Jan 2 2014 19:39:47)
Included patches: 1-52
Neovim version:
Q&A
Yes, I tried a minimal .vimrc configuration.
Yes, I have enabled logs with JCdebugEnableLogs and can put the content of the JCdebugGetLogContent command here, if you need.
Even, if you wish, I can set g:JavaComplete_JavaviDebug to 1, then set g:JavaComplete_JavaviLogfileDirectory, and put the server logs here, too.
Screenshot (Optional)
The output of :redir and :message (Optional)
Fixed. Thanks.
|
gharchive/issue
| 2016-06-20T04:25:38 |
2025-04-01T04:56:03.515693
|
{
"authors": [
"crazy-canux"
],
"repo": "artur-shaik/vim-javacomplete2",
"url": "https://github.com/artur-shaik/vim-javacomplete2/issues/250",
"license": "Vim",
"license_type": "permissive",
"license_source": "github-api"
}
|
1194582077
|
feat: integrate the IoC Container into the Application base class
Previously, a single Container and Application were shared globally, and they were combined quite loosely, which made them hard to understand. This refactors them into an inheritance form, providing some convenience for upper-layer framework wrappers.
The Class (OOP) style for declaring Hooks has been adjusted; the relevant syntax now uses the reflect-metadata pattern, so the integration with the IoC model is more thorough.
@hyj1991 While changing the Loader, I found the Hook definition part had fairly big problems, so I made a fairly thorough revision; sorry to trouble you to take another look (doge
|
gharchive/pull-request
| 2022-04-06T13:00:11 |
2025-04-01T04:56:03.526445
|
{
"authors": [
"noahziheng"
],
"repo": "artusjs/core",
"url": "https://github.com/artusjs/core/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2568546381
|
Added single player mode with AI
Added additional features:
implemented single player mode
made it faster using the defer attribute
faster loading
@arujjval can you please check and assign for Hacktoberfest!
|
gharchive/pull-request
| 2024-10-06T08:52:46 |
2025-04-01T04:56:03.528198
|
{
"authors": [
"prishamehta01"
],
"repo": "arujjval/tic-tac-toe",
"url": "https://github.com/arujjval/tic-tac-toe/pull/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1441553748
|
🛑 SPOwnerBot is down
In be67617, SPOwnerBot (https://backend.isbotdown.com/bots/SPOwnerBot) was down:
HTTP code: 200
Response time: 153 ms
Resolved: SPOwnerBot is back up in 56aad0c.
|
gharchive/issue
| 2022-11-09T07:23:55 |
2025-04-01T04:56:03.584128
|
{
"authors": [
"arynyklas"
],
"repo": "arynyklas/uptime",
"url": "https://github.com/arynyklas/uptime/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
53982339
|
Scroll speed
Hello.
I am using your scrollbar implementation in my project. I have a question about scroll speed when scrolling with the mouse wheel. In your implementation, scroll speed depends on total content size.
How can I make a fixed scroll speed (and the scroll offset associated with one mouse wheel step)?
https://github.com/asafdav/ng-scrollbar/blob/master/src/ng-scrollbar.js#L86 seems to change the scroll speed.
The only problem is that the behavior is quite the opposite in Firefox and Chrome. A high value will make it fast in FF but very slow in Chrome, and vice versa.
@asafdav , any ideas?
You can add a flag that uses a constant value, we can set a different constant for each browser. deltaY will be updated based on the constant and that will give you a fixed scrolling speed. I think it should work, if you want to pull request I'll go over it and merge.
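A hedged sketch of that constant-value idea; the names (FIXED_STEP, fixedWheelDelta) are hypothetical and not part of ng-scrollbar's API:

```javascript
// Keep only the sign of the browser-reported delta, so Firefox's
// line-based values and Chrome's pixel-based values both map to the
// same fixed scroll step per wheel notch.
const FIXED_STEP = 40; // pixels per notch; could be set per browser

function fixedWheelDelta(rawDeltaY) {
  if (rawDeltaY === 0) return 0;
  return (rawDeltaY > 0 ? 1 : -1) * FIXED_STEP;
}

// A Chrome-style delta (~100 px) and a Firefox-style delta (~3 lines)
// both produce the same step:
console.log(fixedWheelDelta(100)); // → 40
console.log(fixedWheelDelta(-3)); // → -40
```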
|
gharchive/issue
| 2015-01-11T05:21:24 |
2025-04-01T04:56:03.608616
|
{
"authors": [
"asafdav",
"braincomb",
"softgears"
],
"repo": "asafdav/ng-scrollbar",
"url": "https://github.com/asafdav/ng-scrollbar/issues/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
108969771
|
tidy up status page, based on feedback
@DanielJMaher based on feedback
@DanielJMaher @FBRTMaka @Bobfrat review and merge please
added roll up for inactive/healthy
@DanielJMaher @Bobfrat ready
@birdage still a problem here.
Cool! Works!
|
gharchive/pull-request
| 2015-09-29T22:05:57 |
2025-04-01T04:56:03.673406
|
{
"authors": [
"FBRTMaka",
"birdage"
],
"repo": "asascience-open/ooi-ui",
"url": "https://github.com/asascience-open/ooi-ui/pull/509",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1122417217
|
Fixed broken links for Netflix Tech Blog after domain migration
Netflix Tech Blog recently moved from http://techblog.netflix.com to https://netflixtechblog.com/
Thanks, Vladyslav. I have a checker set up for broken links, but it missed those because they silently redirect.
|
gharchive/pull-request
| 2022-02-02T21:43:10 |
2025-04-01T04:56:03.680718
|
{
"authors": [
"asatarin",
"vladsydorenko"
],
"repo": "asatarin/testing-distributed-systems",
"url": "https://github.com/asatarin/testing-distributed-systems/pull/5",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
218552035
|
Playback failure
kodi-agile hash a29705b3b7
inputstream.adaptive hash 3afbfba31c8b
kodi.log :
`11:58:49.109 T:139837237405440 INFO: AddOnLog: InputStream Adaptive: SetVideoResolution (1920 x 1080)
11:58:49.109 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: Open()
11:58:49.109 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: found inputstream.adaptive.license_key: [not shown]
11:58:49.109 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: found inputstream.adaptive.license_type: com.widevine.alpha
11:58:49.109 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: found inputstream.adaptive.manifest_type: mpd
11:58:49.109 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: found inputstream.adaptive.server_certificate: [not shown]
11:58:49.109 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: Initial bandwidth: 0
11:58:49.109 T:139837237405440 DEBUG: kodi::General::get_setting - add-on 'InputStream Adaptive' requests setting 'MAXRESOLUTION'
11:58:49.110 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: MAXRESOLUTION selected: 0
11:58:49.110 T:139837237405440 DEBUG: kodi::General::get_setting - add-on 'InputStream Adaptive' requests setting 'MAXRESOLUTIONSECURE'
11:58:49.110 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: MAXRESOLUTIONSECURE selected: 1
11:58:49.110 T:139837237405440 DEBUG: kodi::General::get_setting - add-on 'InputStream Adaptive' requests setting 'STREAMSELECTION'
11:58:49.110 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: STREAMSELECTION selected: 0
11:58:49.110 T:139837237405440 DEBUG: kodi::General::get_setting - add-on 'InputStream Adaptive' requests setting 'MEDIATYPE'
11:58:49.110 T:139837237405440 DEBUG: kodi::General::get_setting - add-on 'InputStream Adaptive' requests setting 'DECRYPTERPATH'
11:58:49.110 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: Searching for decrypters in: /home/xbmc/.kodi/cdm
11:58:49.111 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: Supported URN:
11:58:49.111 T:139837237405440 DEBUG: CurlFile::Open(0x7f2e90652680) http://localhost:43661/manifest?id=80113577
....
11:58:49.486 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: Download http://localhost:43661/manifest?id=80113577 finished
11:58:49.486 T:139837237405440 INFO: AddOnLog: InputStream Adaptive: Successfully parsed .mpd file. #Streams: 6 Download speed: 0.0000 Bytes/s
11:58:49.486 T:139837237405440 ERROR: AddOnLog: InputStream Adaptive: Unable to handle decryption. Unsupported!
11:58:49.487 T:139837237405440 DEBUG: AddOnLog: InputStream Adaptive: GetStreamIds()
11:58:49.487 T:139837237405440 ERROR: CVideoPlayer::OpenInputStream - error opening [http://localhost:43661/manifest?id=80113577]
11:58:49.487 T:139837237405440 NOTICE: CVideoPlayer::OnExit()
`
I can play back Netflix from Chrome. libwidevinecdm.so is the same:
xbmc@mbhtpc:~$ md5sum /opt/google/chrome/libwidevinecdm.so
1270ddce835c71888457337ed1214c5a /opt/google/chrome/libwidevinecdm.so
xbmc@mbhtpc:~$ md5sum .kodi/cdm/libwidevinecdm.so
1270ddce835c71888457337ed1214c5a .kodi/cdm/libwidevinecdm.so
At least on Windows, I have to use the widevine 1.4.8.962 version. Go here, download the 56.0.2924.87 deb, and extract the libwidevinecdm.so included (md5 = a649358a9749c346c39f6bf0dec11337)
Unfortunately, same result with the same error.
This is my current hash:
xbmc@mbhtpc:~/.kodi/cdm$ md5sum /home/xbmc/.kodi/cdm/libwidevinecdm.so
a649358a9749c346c39f6bf0dec11337 /home/xbmc/.kodi/cdm/libwidevinecdm.so
@sirtow Might be a stupid question, but have you linked the libssd_wv.so from inputstream.adaptive in your .cdm folder as well?
@sirtow , what @asciidisco said. You have to compile https://github.com/liberty-developer/inputstream.adaptive/tree/agile/wvdecrypter
@asciidisco, @Uukrull Actually that was THE question... I didn't have such a file on my system. I went back to the InputStream.Adaptive git repo and it has the wvdecrypter folder. I did as the Readme says, compiled it, put it under .kodi/cdm, and it worked!
Thank you all for your time and support !
|
gharchive/issue
| 2017-03-31T16:23:29 |
2025-04-01T04:56:03.692865
|
{
"authors": [
"Uukrull",
"asciidisco",
"sirtow"
],
"repo": "asciidisco/plugin.video.netflix",
"url": "https://github.com/asciidisco/plugin.video.netflix/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
300397335
|
Fix HTML encoding
Sorry, guys, I broke the HTML encoding 😇
No worries, I should've seen it during the review...
Thank you for the fix :)
|
gharchive/pull-request
| 2018-02-26T21:11:16 |
2025-04-01T04:56:03.694020
|
{
"authors": [
"asciidisco",
"janettt"
],
"repo": "asciidisco/web-conferences-2018",
"url": "https://github.com/asciidisco/web-conferences-2018/pull/65",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
406357217
|
Umlaut issues when using Asciidoctor-pdf with ditaa
Hello folks,
I use Asciidoctor-Diagram with ditaa to draw diagrams for my documentation.
Preview with the Firefox plugin works well. However, when I create the diagram via asciidoctor-pdf, all Umlauts are replaced by squares.
In the rest of the documentation all Umlauts are shown correctly. Therefore I assume some message is missing in the diagram that tells the compiler to choose the correct encoding.
Thank you for your support.
Clemi
It is not only umlauts; it is generally all unicode text (I believe).
It really depends on your font setup. If you have a fallback font, and the fallback font has those characters, it will find the glyphs. If it can't find the glyph, you are going to get either a blank or something other than what you expect.
See https://github.com/asciidoctor/asciidoctor-pdf/blob/master/docs/theming-guide.adoc#fonts
Also, keep in mind that the default theme no longer provides a fallback font (because it turns out to really slow down conversion). But it's still available if you set the theme to default-with-fallback-font (e.g., -a pdf-theme=default-with-fallback-font).
Not really, fallback font is not the case (I have it properly setup).
The thing is that asciidoctor-diagram processes ditaa data in a manner that trashes unicode chars. It might be connected with how text is passed to a subprogram on Windows in some cp1251 encoding instead of UTF-8, or maybe something else.
And strangely ditaa with unicode works just fine in firefox asciidoctor.js.
So the issue is not only asciidoctor-pdf + asciidoctor-diagram. It is also a regular asciidoctor to html.
test data
== test 1
// 2019-04-22
Press the button kbd:[Alt+F4]
[ditaa, test-ditaa, png]
....
/--------\ +-------+
|cAAA +---+Version|
| прив | | V3 |
| Base | |cRED{d}|
| {s}| +-------+
\---+----/
....
PDF output
HTML output
asciidoctor.js output
Ah, I see.
Then this is an Asciidoctor Diagram issue, it seems. Asciidoctor PDF just inserts the diagram that Asciidoctor Diagram creates, so there's nothing really that Asciidoctor PDF is doing here. You could also check the cached file that Asciidoctor Diagram creates to verify it is already corrupt at that point.
(ditaa processing in Asciidoctor.js uses different stack, so it's not the same situation).
Thanks for verifying, @habamax.
(And keep in mind, diagrams in Asciidoctor.js are using a completely different library, so how it operates is not relevant in this context...aside from verifying that the diagram code itself is valid).
Perhaps this is already solved by this issue? https://github.com/asciidoctor/asciidoctor-diagram/issues/150
If not, feel free to reopen another in Asciidoctor Diagram with more information.
I have all gems updated, including asciidoctor-diagram, so it looks like it is not fixed, at least not for Windows. (I have version 1.5.18 of asciidoctor-diagram installed)
But anyway I don't use ditaa at all (I tried it but ended up with either plantuml or plain png/svg embedding), so for me it is only slightly relevant.
It would still be very useful if we let Pepijn know this is still an issue so it can be addressed, if possible / necessary. That way, we can close this issue here (since the relevant code is not in this repo).
made a comment there
Thanks! We'll continue the conversation there.
|
gharchive/issue
| 2019-02-04T14:35:03 |
2025-04-01T04:56:03.710415
|
{
"authors": [
"Clemi81",
"habamax",
"mojavelinux"
],
"repo": "asciidoctor/asciidoctor-pdf",
"url": "https://github.com/asciidoctor/asciidoctor-pdf/issues/1000",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
384778986
|
plantuml and data-uri don't work together?
Hi,
I'd like to inline plantuml diagrams in my html5 document, however I cannot seem to get it right.
Either I have difficulty to find the right way to use the data-uri scheme to inline images for images generated by asciidoctorj-diagram in plantuml format. And even using inline SVG don't results in inlined document.
:doctype: book
:icons: font
:source-highlighter: coderay
:toc: left
:toclevels: 4
:sectlinks:
:sectanchors:
:docinfo:
:nofooter:
:data-uri:
:description: documentation
:logo: images/xxxxxxxxxxx.png
image:{logo}[text]
[plantuml, "flow", svg, opts="inline", height="100%", width="100%"]
----
title flow
autonumber
user -> client : access
----
[plantuml, "flow2", png]
----
title flow
autonumber
user -> client : access
----
With :data-uri:
The real png image used in image:{logo} is correctly transformed as a data-uri. But the images generated for the plantuml diagram don't work. I got the following warnings.
/absolute/path/to/src/docs/asciidoc/index.adoc: SVG does not exist or cannot be read: /absolute/path/to/src/docs/asciidoc/flow.svg
image to embed not found or not readable: /absolute/path/to/src/docs/asciidoc/flow2.png
without :data-uri:
The inlined SVG doesn't work anyway; the generated HTML looks like this:
<div class="imageblock">
<div class="content">
<span class="alt">flow</span>
</div>
</div>
Versions
I'm generating the doc within a gradle build :
buildscript {
dependencies {
classpath 'org.asciidoctor:asciidoctor-gradle-plugin:1.5.8.1'
}
}
dependencies {
asciidoctor "org.asciidoctor:asciidoctorj-diagram:1.5.10"
}
asciidoctorj {
version = "1.5.7"
}
asciidoctor {
requires 'asciidoctor-diagram'
options doctype: 'book'
attributes = ['source-highlighter': 'coderay',
'encoding' : 'utf-8',
'version' : project.version,
'build-timestamp' : Instant.now().toString(),
'commit' : commit.id ]
sources {
include 'index.adoc'
}
// enforce asciidoc processing
// outputs.upToDateWhen { false }
}
References
https://github.com/asciidoctor/asciidoctor/issues/1301#issuecomment-222620678
https://github.com/asciidoctor/asciidoctor-diagram/issues/64
https://github.com/asciidoctor/asciidoctor-diagram/issues/110
It seems that this issue is similar to mine but it's related to the ruby version of asciidoctor-diagram : https://github.com/asciidoctor/asciidoctor-diagram/issues/198
It seems asciidoctorj-diagram has been moved from this repo to its own project, so I created a ticket hoping this will fix my issue : https://github.com/asciidoctor/asciidoctorj-diagram/issues/1
Because at this time it's a bit confusing.
Closing as asciidoctor/asciidoctorj-diagram#1 has been closed.
|
gharchive/issue
| 2018-11-27T12:47:31 |
2025-04-01T04:56:03.735105
|
{
"authors": [
"bric3",
"robertpanzer"
],
"repo": "asciidoctor/asciidoctorj",
"url": "https://github.com/asciidoctor/asciidoctorj/issues/738",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
910247429
|
Fix a missing character
What was fixed
There was a missing character in the "fetchResources function" in "Async Function and Iteration", so I fixed it.
Before: fetchResource function
After: fetchResources function
Thank you!
|
gharchive/pull-request
| 2021-06-03T08:23:28 |
2025-04-01T04:56:03.736785
|
{
"authors": [
"azu",
"perzikanz"
],
"repo": "asciidwango/js-primer",
"url": "https://github.com/asciidwango/js-primer/pull/1324",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1059248765
|
Update README examples with the options at the end
It seems to me that options are now passed as an object in the last argument to create, and so we need to update the examples in the README accordingly. That’s at least the only way it could work for me.
Thanks a lot for your work on this new version, I particularly love the fit option and the use of WASM.
Good catch, thanks. And thanks for the feedback!
|
gharchive/pull-request
| 2021-11-20T23:14:29 |
2025-04-01T04:56:03.738164
|
{
"authors": [
"cljoly",
"sickill"
],
"repo": "asciinema/asciinema-player",
"url": "https://github.com/asciinema/asciinema-player/pull/158",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2304444522
|
Unable to Upgrade from 2023-11-20 to 2023-12-16 (or later)
Describe the bug
Recorded data cannot be loaded.
To Reproduce
Steps to reproduce the behavior:
Upgrade Asciinema server from version 20231120 to 20231216.
Attempt to play any recorded session.
Observe that the data fails to load correctly.
Expected behavior
I should be able to play recorded *.cast files without any issues.
Screenshots
Versions:
OS: windows 11
Browser: edge 124.0.2478.105
Asciinema server: 20231120
Additional context
Hello,
I used a translation tool for this message. If the context seems off, please let me know, and I will provide a more detailed explanation.
For several months, there were issues with my server, preventing me from upgrading the Asciinema server. The repairs were completed yesterday, and I attempted to upgrade the Asciinema server. The upgrade guide in the package suggested simply updating the version information, indicating no additional steps were necessary.
also same issue localhost
Can you provide server logs of the 500 errors? This should direct us towards the root cause.
It was my mistake. I could immediately identify the error upon reviewing the log.
Since 20231216:
volumes:
- cache:/opt/app/cache
- - uploads:/opt/app/uploads
+ - uploads:/var/opt/asciinema/uploads
It seems the upload directory has changed with the version upgrade.
If I'm not mistaken, the initial Docker Compose setup was specified as /opt/app/uploads.
Shouldn't the upgrade guide include instructions to adjust the volume locations?
"The document has changed slightly from the original. There used to be guidance for /opt/app.cache in the Compose file, but it's no longer visible. Can the parts that have disappeared from the guidance be removed?"
Yeah, this path changed, sorry about the omission. I'll make sure it's included in the release notes of the version which changed it.
Release notes updated: https://github.com/asciinema/asciinema-server/releases/tag/v20231216
Thank you for addressing that.
Regarding the cache mapping I mentioned earlier, it seems that cache:/opt/app/cache has either been moved to /var/opt/asciinema or has been removed. Shouldn't this part also be mentioned?
Right, good catch again. The cache moved in 20231120 version from /opt/app/cache to /var/cache/asciinema. I'll add that to the 20231120 release notes. Thanks!
Updated: https://github.com/asciinema/asciinema-server/releases/tag/v20231120
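Putting both changes together, a minimal sketch of the adjusted volumes section (the two paths are taken from the release notes quoted above; the service name and surrounding layout are assumptions, not from the official Compose file):

```yaml
services:
  asciinema:
    # ...
    volumes:
      - cache:/var/cache/asciinema          # moved in 20231120 (was /opt/app/cache)
      - uploads:/var/opt/asciinema/uploads  # moved in 20231216 (was /opt/app/uploads)
```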
I am glad to have been of help. There is one more thing...
this self-hosted config document
This document does not mention the use of caching internally. We need information for new users.
True, good idea, will update the doc 👍
|
gharchive/issue
| 2024-05-19T05:49:14 |
2025-04-01T04:56:03.747816
|
{
"authors": [
"XIYO",
"ku1ik"
],
"repo": "asciinema/asciinema-server",
"url": "https://github.com/asciinema/asciinema-server/issues/444",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
995401780
|
Dev: Add new relationship for GitHub identifiers and papers that "mention" them
With ADS providing "mentioned" GitHub URLs, add an additional relationship for "referred" URLs.
Already exists within the asclepias framework.
|
gharchive/issue
| 2021-09-13T22:35:44 |
2025-04-01T04:56:03.749201
|
{
"authors": [
"mubdi"
],
"repo": "asclepias/asclepias-broker",
"url": "https://github.com/asclepias/asclepias-broker/issues/105",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2613357047
|
Problem with jvm plugin
I tried to write and use my own plugin. But the build process failed with:
common.proto: is a proto3 file that contains optional fields, but code generator protoc-gen-_X21147SuBNvl0GbuRV6D12RgwA-0 hasn't been updated to support optional fields in proto3. Please ask the owner of this code generator to support proto3 optional.
I think the problem lies in the auto-generated script which runs my JVM plugin. The plugin itself is just a class with a main method, so it can't have (or not have) any kind of "support" here.
NVM, found out about .setSupportedFeatures(CodeGeneratorResponse.Feature.FEATURE_PROTO3_OPTIONAL_VALUE)
Glad you found the solution!
|
gharchive/issue
| 2024-10-25T07:47:14 |
2025-04-01T04:56:03.751129
|
{
"authors": [
"ascopes",
"morvael"
],
"repo": "ascopes/protobuf-maven-plugin",
"url": "https://github.com/ascopes/protobuf-maven-plugin/issues/431",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
897691606
|
Fix error installing PHP 7.2.34 on macOS 11.3
> asdf install php 7.2.34
Determining configuration options...
Downloading source code...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 123 100 123 0 0 262 0 --:--:-- --:--:-- --:--:-- 262
100 16.9M 0 16.9M 0 0 1799k 0 --:--:-- 0:00:09 --:--:-- 2340k
Extracting source code...
Running buildconfig...
Forcing buildconf
Removing configure caches
Running ./configure --with-gmp=/usr/local/opt/gmp --with-sodium=/usr/local/opt/libsodium --with-freetype-dir=/usr/local/opt/freetype --with-gettext=/usr/local/opt/gettext --with-icu-dir=/usr/local/opt/icu4c --with-jpeg-dir=/usr/local/opt/jpeg --with-webp-dir=/usr/local/opt/webp --with-png-dir=/usr/local/opt/libpng --with-openssl=/usr/local/opt/openssl@1.1 --with-libxml-dir=/usr/local/opt/libxml2 --with-readline=/usr/local/opt/readline --prefix=/Users/Dylan/.asdf/installs/php/7.2.34 --enable-bcmath --enable-calendar --enable-dba --enable-exif --enable-fpm --enable-ftp --enable-gd --enable-gd-native-ttf --enable-intl --enable-mbregex --enable-mbstring --enable-mysqlnd --enable-pcntl --enable-shmop --enable-soap --enable-sockets --enable-sysvmsg --enable-sysvsem --enable-sysvshm --enable-wddx --enable-zip --sysconfdir=/Users/Dylan/.asdf/installs/php/7.2.34 --with-config-file-path=/Users/Dylan/.asdf/installs/php/7.2.34 --with-config-file-scan-dir=/Users/Dylan/.asdf/installs/php/7.2.34/conf.d --with-curl --with-external-gd --with-fpm-group=www-data --with-fpm-user=www-data --with-gd --with-mhash --with-mysql=mysqlnd --with-mysqli=mysqlnd --with-pdo-mysql=mysqlnd --with-xmlrpc --with-zip --with-zlib --without-snmp --with-pear --with-pdo-pgsql
configure: WARNING: unrecognized options: --enable-gd, --enable-gd-native-ttf, --with-external-gd, --with-mysql, --with-zip
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for a sed that does not truncate output... /usr/bin/sed
checking build system type... x86_64-apple-darwin20.4.0
checking host system type... x86_64-apple-darwin20.4.0
checking target system type... x86_64-apple-darwin20.4.0
checking for cc... cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
checking how to run the C preprocessor... cc -E
checking for icc... no
checking for suncc... no
checking whether cc understands -c and -o together... yes
checking how to run the C preprocessor... cc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking minix/config.h usability... no
checking minix/config.h presence... no
checking for minix/config.h... no
checking whether it is safe to define __EXTENSIONS__... yes
checking whether ln -s works... yes
checking for system library directory... lib
checking whether to enable runpaths... yes
checking if compiler supports -R... no
checking if compiler supports -Wl,-rpath,... yes
checking for gawk... no
checking for nawk... no
checking for awk... awk
checking if awk is broken... no
checking for bison... bison -y
checking for bison version... invalid
configure: WARNING: This bison version is not supported for regeneration of the Zend/PHP parsers (found: 2.3, min: 204, excluded: ).
checking for re2c... no
configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers.
configure: error: bison is required to build PHP/Zend when building a GIT checkout!
and other errors...
but I do encounter other errors
/bin/sh /var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/libtool --silent --preserve-dup-deps --mode=compile cc -I/usr/local/Cellar/icu4c/69.1/include -Wno-write-strings -D__STDC_LIMIT_MACROS -DZEND_ENABLE_STATIC_TSRMLS_CACHE=1 -Iext/intl/ -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/intl/ -DPHP_ATOM_INC -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/main -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33 -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/date/lib -I/usr/local/Cellar/libxml2/2.9.10_2/include/libxml2 -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/zlib/include -I/usr/local/opt/webp/include -I/usr/local/opt/jpeg/include -I/usr/local/opt/libpng/include -I/usr/local/opt/freetype/include/freetype2 -I/usr/local/opt/gettext/include -I/usr/local/opt/gmp/include -I/usr/local/opt/libiconv/include -I/usr/local/Cellar/icu4c/69.1/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/mbstring/oniguruma -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/mbstring/libmbfl -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/mbstring/libmbfl/mbfl -I/usr/local/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/sqlite3/libsqlite -I/usr/local/opt/readline/include -I/usr/local/opt/libsodium/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/zip/lib -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/TSRM -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/Zend -no-cpp-precomp -I/usr/local/opt/libiconv/include -g -O2 -fvisibility=hidden -DZEND_SIGNALS -c /var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/intl/collator/collator_class.c -o ext/intl/collator/collator_class.lo
/bin/sh /var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/libtool --silent --preserve-dup-deps --mode=compile cc -I/usr/local/Cellar/icu4c/69.1/include -Wno-write-strings -D__STDC_LIMIT_MACROS -DZEND_ENABLE_STATIC_TSRMLS_CACHE=1 -Iext/intl/ -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/intl/ -DPHP_ATOM_INC -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/main -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33 -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/date/lib -I/usr/local/Cellar/libxml2/2.9.10_2/include/libxml2 -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/zlib/include -I/usr/local/opt/webp/include -I/usr/local/opt/jpeg/include -I/usr/local/opt/libpng/include -I/usr/local/opt/freetype/include/freetype2 -I/usr/local/opt/gettext/include -I/usr/local/opt/gmp/include -I/usr/local/opt/libiconv/include -I/usr/local/Cellar/icu4c/69.1/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/mbstring/oniguruma -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/mbstring/libmbfl -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/mbstring/libmbfl/mbfl -I/usr/local/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/sqlite3/libsqlite -I/usr/local/opt/readline/include -I/usr/local/opt/libsodium/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/zip/lib -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/TSRM -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/Zend -no-cpp-precomp -I/usr/local/opt/libiconv/include -g -O2 -fvisibility=hidden -DZEND_SIGNALS -c /var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/intl/collator/collator_sort.c -o ext/intl/collator/collator_sort.lo
/bin/sh /var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/libtool --silent --preserve-dup-deps --mode=compile cc -I/usr/local/Cellar/icu4c/69.1/include -Wno-write-strings -D__STDC_LIMIT_MACROS -DZEND_ENABLE_STATIC_TSRMLS_CACHE=1 -Iext/intl/ -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/intl/ -DPHP_ATOM_INC -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/main -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33 -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/date/lib -I/usr/local/Cellar/libxml2/2.9.10_2/include/libxml2 -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/zlib/include -I/usr/local/opt/webp/include -I/usr/local/opt/jpeg/include -I/usr/local/opt/libpng/include -I/usr/local/opt/freetype/include/freetype2 -I/usr/local/opt/gettext/include -I/usr/local/opt/gmp/include -I/usr/local/opt/libiconv/include -I/usr/local/Cellar/icu4c/69.1/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/mbstring/oniguruma -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/mbstring/libmbfl -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/mbstring/libmbfl/mbfl -I/usr/local/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/sqlite3/libsqlite -I/usr/local/opt/readline/include -I/usr/local/opt/libsodium/include -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/zip/lib -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/TSRM -I/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/Zend -no-cpp-precomp -I/usr/local/opt/libiconv/include -g -O2 -fvisibility=hidden -DZEND_SIGNALS -c /var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/intl/collator/collator_convert.c -o ext/intl/collator/collator_convert.lo
/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/intl/collator/collator_sort.c:349:26: error: use of undeclared identifier 'TRUE'
collator_sort_internal( TRUE, INTERNAL_FUNCTION_PARAM_PASSTHRU );
^
/var/folders/kx/wsz1xqjj5cncxlwzwh1rh_4c0000gp/T/php-src-php-7.2.33/ext/intl/collator/collator_sort.c:543:26: error: use of undeclared identifier 'FALSE'
collator_sort_internal( FALSE, INTERNAL_FUNCTION_PARAM_PASSTHRU );
^
2 errors generated.
make: *** [ext/intl/collator/collator_sort.lo] Error 1
make: *** Waiting for unfinished jobs....
This issue has more information https://github.com/asdf-community/asdf-php/issues/61
Not sure why this is closed. I just came across the same problem. There is no fix mentioned.
Other places where the same bug is encountered.
https://github.com/henkrehorst/homebrew-php/issues/134
Apparently related to this bug
https://bugs.php.net/bug.php?id=80310
this is a dupe of #61 so please put any info here
Since that issue is closed and is really about a lack of documentation rather than the actual impossibility of installing php 7.2, I've created a new issue #142
|
gharchive/pull-request
| 2021-05-21T05:30:04 |
2025-04-01T04:56:03.758029
|
{
"authors": [
"daamsie",
"dylan-chong"
],
"repo": "asdf-community/asdf-php",
"url": "https://github.com/asdf-community/asdf-php/pull/89",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1396174762
|
Update readme file
📚 Documentation
bla bla bla
Add relevant screenshot or video (if any)
No response
Additional context
No response
Can you assign me this issue?
Ok @tokyo3001 assigned
|
gharchive/issue
| 2022-10-04T12:13:35 |
2025-04-01T04:56:03.811523
|
{
"authors": [
"ashavijit",
"tokyo3001",
"wasimreja"
],
"repo": "ashavijit/Meetify",
"url": "https://github.com/ashavijit/Meetify/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1456742871
|
🛑 Home Assistant is down
In 8e92223, Home Assistant (https://ha.ashhhleyyy.dev/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Home Assistant is back up in db464b5.
|
gharchive/issue
| 2022-11-20T02:42:06 |
2025-04-01T04:56:03.813921
|
{
"authors": [
"robot-ashley"
],
"repo": "ashhhleyyy/status-page",
"url": "https://github.com/ashhhleyyy/status-page/issues/406",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2567872870
|
npm run dev fails
Running npm run dev (npm=6.14.16) produces the following:
ERROR in ./src/client/create/scss/create.scss (./node_modules/css-loader/dist/cjs.js!./node_modules/sass-loader/dist/cjs.js!./src/client/create/scss/create.scss)
Module build failed (from ./node_modules/sass-loader/dist/cjs.js):
Error: Node Sass version 7.0.0 is incompatible with ^4.0.0.
at getSassImplementation (/home/.../repos/magic-maze/node_modules/sass-loader/dist/getSassImplementation.js:46:13)
at Object.loader (/home/.../repos/magic-maze/node_modules/sass-loader/dist/index.js:40:61)
@ ./src/client/create/scss/create.scss 2:12-147 9:17-24 13:15-29
ERROR in ./src/client/home/scss/home.scss (./node_modules/css-loader/dist/cjs.js!./node_modules/sass-loader/dist/cjs.js!./src/client/home/scss/home.scss)
Module build failed (from ./node_modules/sass-loader/dist/cjs.js):
Error: Node Sass version 7.0.0 is incompatible with ^4.0.0.
at getSassImplementation (/home/.../repos/magic-maze/node_modules/sass-loader/dist/getSassImplementation.js:46:13)
at Object.loader (/home/.../repos/magic-maze/node_modules/sass-loader/dist/index.js:40:61)
@ ./src/client/home/scss/home.scss 2:12-145 9:17-24 13:15-29
ERROR in ./src/client/play/scss/play.scss (./node_modules/css-loader/dist/cjs.js!./node_modules/sass-loader/dist/cjs.js!./src/client/play/scss/play.scss)
Module build failed (from ./node_modules/sass-loader/dist/cjs.js):
Error: Node Sass version 7.0.0 is incompatible with ^4.0.0.
at getSassImplementation (/home/.../repos/magic-maze/node_modules/sass-loader/dist/getSassImplementation.js:46:13)
at Object.loader (/home/.../repos/magic-maze/node_modules/sass-loader/dist/index.js:40:61)
@ ./src/client/play/scss/play.scss 2:12-145 9:17-24 13:15-29
Hi @pepe-bawagan. Thank you for reporting this, but I'm not maintaining this project anymore. Sorry!
alright, no worries. thanks! for transparency i got it to work using node@12.22.12 and npm@7.24.2 :)
|
gharchive/issue
| 2024-10-05T11:08:41 |
2025-04-01T04:56:03.826277
|
{
"authors": [
"ashugeo",
"pepe-bawagan"
],
"repo": "ashugeo/magic-maze",
"url": "https://github.com/ashugeo/magic-maze/issues/86",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
276624017
|
Wrapper for ccxt exchanges
What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)
Feature
What is the new behavior (if this is a feature change)?
Added ccxt wrapper that add all exchanges supported by ccxt (https://github.com/ccxt/ccxt)
Other information:
I've not been able to add the wrapper without modifying some core files but I kept the number of changes as low as possible.
As discussed in #1171, ccxt works in a different way. This is a wrapper to make it work with Gekko.
Finally :+1: I've been looking for a bot that can use ccxt for so long
Well this worked fine for me :) Just having some issues replicating the fork on my VPS. Something with deasync but it works fine locally.
@Yoyae this looks really interesting
maybe would be useful some doc to explain how to config it and make it working
Follow the Gekko docs. I just added the ccxt exchanges as if each were an additional exchange. Configuration and usage don't differ from any other (non-ccxt) exchange.
@Yoyae This is awesome! One thing I am noticing: Binance specifically is constantly throwing the "Insufficient funds" error. This seems to be caused by the rounding of assets and currencies. I am not sure about the best way to fix it, though I understand that within ccxt's exchange.js you can calculate the lot size and price precision using the following:
priceToPrecision (symbol, price) {
    return parseFloat (price).toFixed (this.markets[symbol].precision.price)
}

amountToPrecision (symbol, amount) {
    return this.truncate(amount, this.markets[symbol].precision.amount)
}

amountToLots (symbol, amount) {
    return this.amountToPrecision (symbol, Math.floor (amount / this.markets[symbol].lot) * this.markets[symbol].lot)
}
I would implement it but unfortunately I have the JS skills of a 5 year old.
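The helpers above can be sketched as self-contained functions to see the rounding behaviour in isolation. These are simplified stand-ins for ccxt's implementation (plain numeric decimal precisions, no market metadata), not its actual code:

```javascript
// Simplified stand-ins for ccxt's precision helpers, for illustration only.
function priceToPrecision(price, decimals) {
  // Round a price to a fixed number of decimal places.
  return parseFloat(parseFloat(price).toFixed(decimals));
}

function truncate(amount, decimals) {
  // Truncate (never round up), so we never order more than the balance allows.
  const factor = Math.pow(10, decimals);
  return Math.floor(amount * factor) / factor;
}

function amountToLots(amount, lot, decimals) {
  // Snap the amount down to a whole multiple of the exchange's lot size.
  return truncate(Math.floor(amount / lot) * lot, decimals);
}

console.log(priceToPrecision(16188.9271, 2)); // 16188.93
console.log(truncate(0.0026577023, 6));       // 0.002657
console.log(amountToLots(727.98, 1, 0));      // 727
```

Truncation (rather than rounding) matters here: rounding an amount up, even at the last decimal place, is exactly what produces the "insufficient balance" rejection discussed in this thread.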
@mattS00 The issue with Binance rounding should be fixed in the develop branch. If that is not the case, please open a new issue and ping me so we don't hijack the PR :)
@mattS00 I've added rounding on the amount before sending the order (buying or selling), if the precision exists.
Note that for Binance, ccxt was already doing some rounding using amountToPrecision. In this case, it should work with amountToLots as discussed in https://github.com/ccxt/ccxt/issues/663.
@Yoyae awesome! @cmroche I will check it out; I didn't even notice it was added as a normal exchange as well :O.
@Yoyae When I started the bot, it gave me an error after updating:
Error buy TypeError: Cannot read property 'lot' of undefined
at /root/gekko/exchanges/ccxt.js:153:38
at Trader.buy (/root/gekko/exchanges/ccxt.js:180:6)
at Trader.bound [as buy] (/root/gekko/node_modules/lodash/dist/lodash.js:729:21)
at Timeout._onTimeout (/root/gekko/exchanges/ccxt.js:69:25)
at ontimeout (timers.js:365:14)
at tryOnTimeout (timers.js:237:5)
at Timer.listOnTimeout (timers.js:207:5)
@mattS00 I fixed the coding issue, but that doesn't solve the "Insufficient funds" error.
It's coming from somewhere else. I'll look into it.
From my understanding, there is a price or amount issue. Here's an example :
2017-12-11 16:22:21 (INFO): USDT: 42.977811000000
2017-12-11 16:22:21 (INFO): BTC: x
2017-12-11 16:22:21 (INFO):
2017-12-11 16:22:21 (INFO): Trader Received advice to go long. Buying BTC
2017-12-11 16:22:23 (INFO): Attempting to BUY 0.0026577023493850735 BTC at Ccxt price: 16188.92
2017-12-11 16:22:23 (ERROR): Error buy { Error: binance POST https://api.binance.com/api/v3/order 400 Bad Request {"code":-2010,"msg":"Account has insufficient balance for requested action."}.....
The issue came from BUY 0.0026577023493850735 BTC x price 16188.92 = 43.0253307 USDT (too much). So either the price or the amount is not calculated/fetched in the correct way.
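The arithmetic in that log can be checked directly; the attempted order costs slightly more than the available balance, which is exactly what triggers the -2010 rejection:

```javascript
// Sanity check of the failing order from the log above.
const amount = 0.0026577023493850735; // BTC amount Gekko tried to buy
const price = 16188.92;               // quoted BTC price in USDT
const balance = 42.977811;            // available USDT balance from the log

const cost = amount * price;          // ≈ 43.03 USDT
console.log(cost > balance);          // true: the order exceeds the balance
```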
@Yoyae My understanding is that we also have to apply priceToPrecision. Playing around with it, it seems to work if I use parseFloat (price).toFixed (4); for ADX, for example.
priceToPrecision (symbol, price) {
    return parseFloat (price).toFixed (this.markets[symbol].precision.price)
}
@Yoyae I have been playing around with it and the following seems to fix the insufficient funds issues. Obviously we need to do the same for buy.
Trader.prototype.sell = function(amount, price, callback) {
    var args = _.toArray(arguments);
    var retFlag = false;
    (async () => {
        try {
            var roundAmount = 0;
            var priceAmount = 0;
            try {
                var lot = this.ccxt.markets[this.pair]['lot'];
            } catch(e) {
                var lot = undefined;
            }
            try {
                var precision = this.ccxt.markets[this.pair]['precision']['amount'];
            } catch(e) {
                var precision = undefined;
            }
            if (!_.isUndefined(lot)) {
                roundAmount = this.ccxt.amountToLots(this.pair, amount);
                priceAmount = priceToPrecision(this.pair, price);
            } else if (!_.isUndefined(precision)) {
                roundAmount = this.ccxt.amountToPrecision(this.pair, amount);
                priceAmount = priceToPrecision(this.pair, price);
            } else {
                roundAmount = amount;
                priceAmount = price;
            }
            data = await this.ccxt.createLimitSellOrder(this.pair, roundAmount, priceAmount);
            callback(null, data['id']);
        } catch(e) {
            log.error(e);
            retFlag = true;
            return this.retry(this.sell, args);
        }
        retFlag = true;
    })();
    deasync.loopWhile(function(){ return !retFlag; });
}
@mattS00 Thanks for your testing. I've made the change and it works well.
@Yoyae @mattS00 Please can you share the tested working code? I can try it here. Thanks.
@Shootle I usually use a test strategy like this
//====================================================================
// Initialization
//====================================================================
// Prepare everything our method needs
strat.init = function() {
    log.debug("init");
    this.test = 0;
}

//====================================================================
// Handle Routine
//====================================================================
strat.update = function(candle) {
    if (this.test == 0) {
        this.advice('long');
        this.test = 1;
    }
}
It buys the selected pair once. (Just replace 'long' with 'short' to sell.)
Hmm @Yoyae, after trying it with a couple of different assets, QSP/BNB throws the -2010 error. It seems to be something to do with the amount, not the price; the max amount should in fact be 700, not 727.98. Going to look into it now.
2017-12-14 11:32:43 (DEBUG): Reel price and amount are : 727.98 QSP at 0.03475 BNB
2017-12-14 11:32:44 (ERROR): Error buy { Error: binance POST https://api.binance.com/api/v3/order 400 Bad Request {"code":-2010,"msg":"Account has insufficient balance for requested action."} (possible reasons: invalid API keys, bad or old nonce, exchange is down or offline, on maintenance, DDoS protection, rate-limiting)
at binance.defaultErrorHandler (/root/gekko/node_modules/ccxt/js/base/Exchange.js:410:15)
at response.text.then.text (/root/gekko/node_modules/ccxt/js/base/Exchange.js:423:25)
at process._tickDomainCallback (internal/process/next_tick.js:129:7)
at Function.module.exports.loopWhile (/root/gekko/node_modules/deasync/index.js:71:11)
at Trader.buy (/root/gekko/exchanges/ccxt.js:189:11)
at Trader.bound [as buy] (/root/gekko/node_modules/lodash/dist/lodash.js:729:21)
at Manager.buy (/root/gekko/plugins/trader/portfolioManager.js:216:17)
at Manager.bound [as buy] (/root/gekko/node_modules/lodash/dist/lodash.js:729:21)
at Manager.act (/root/gekko/plugins/trader/portfolioManager.js:162:16)
at bound (/root/gekko/node_modules/lodash/dist/lodash.js:729:21) constructor: [Function: ExchangeNotAvailable] }
@mattS00 Thanks for looking at it. I will add some tracing to see what price and amount are used and when/why. I don't have time these days to look deeply into it :/
@mattS00 It seems the ask/bid price was not inverted as I previously thought. I fixed it. It should be good now.
@Yoyae Trading is working, although after buying the following happens:
2017-12-19 10:22:17 (ERROR): TypeError: Cannot read property 'balance' of undefined
at PerformanceAnalyzer.calculateReportStatistics (/root/gekko/plugins/performanceAnalyzer/performanceAnalyzer.js:134:37)
at PerformanceAnalyzer.bound [as calculateReportStatistics] (/root/gekko/node_modules/lodash/dist/lodash.js:729:21)
at PerformanceAnalyzer.processTrade (/root/gekko/plugins/performanceAnalyzer/performanceAnalyzer.js:69:23)
at Trader.bound (/root/gekko/node_modules/lodash/dist/lodash.js:729:21)
at emitOne (events.js:115:13)
at Trader.emit (events.js:210:7)
at Trader.bound [as emit] (/root/gekko/node_modules/lodash/dist/lodash.js:729:21)
at Manager.Trader.manager.on.trade (/root/gekko/plugins/trader/trader.js:24:10)
at emitOne (events.js:115:13)
at Manager.emit (events.js:210:7)
@mattS00 I fixed the error.
@WilhelmErasmus I changed a lot of things; it no longer relies on the deasync lib. That may solve your issue on the VPS.
@askmike Any idea when this will be merged?
|
gharchive/pull-request
| 2017-11-24T13:56:24 |
2025-04-01T04:56:03.844448
|
{
"authors": [
"Filoz",
"Shootle",
"WilhelmErasmus",
"Yoyae",
"cmroche",
"mattS00"
],
"repo": "askmike/gekko",
"url": "https://github.com/askmike/gekko/pull/1365",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
365218739
|
bugfix #2568 transforms commonjs modules to ES2015
What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)
Bugfix #2568
What is the current behavior? (You can also link to an open issue here)
see #2568
What is the new behavior (if this is a feature change)?
commonjs modules get transpiled with babel.
Other information:
This works great! Thanks a lot. I will update my stackoverflow question with your answer.
|
gharchive/pull-request
| 2018-09-30T11:51:39 |
2025-04-01T04:56:03.847684
|
{
"authors": [
"askmike",
"eusorov"
],
"repo": "askmike/gekko",
"url": "https://github.com/askmike/gekko/pull/2573",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1473207347
|
🛑 Quiver Site is down
In 105f1a2, Quiver Site (https://www.quivertheapp.com/) was down:
HTTP code: 403
Response time: 10 ms
Resolved: Quiver Site is back up in 35c84ed.
|
gharchive/issue
| 2022-12-02T17:34:40 |
2025-04-01T04:56:04.077592
|
{
"authors": [
"asmur"
],
"repo": "asmur/quiver",
"url": "https://github.com/asmur/quiver/issues/185",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
477607120
|
Enable Component parameter delegates to not require @() for coloring at design time.
The core issue is that the Razor parser splits attribute values based on whitespace. Therefore, when it encounters @onclick="() => Foo()" it breaks it into three different tokens based on the spaces involved. This separation results in multiple adjacent classified spans for C# which is currently unsupported by WTE due to multiple seams overlapping. All that being said we have the opportunity to be smarter when generating attribute values that we feel can be simplified or collapsed; because of this in this PR I changed the TagHelperBlockRewriter phase to understand "simple" collapsible blocks and to then collapse them. In the future a goal would be to take a collapsing approach to all potential attributes and then to re-inspect each token individually at higher layers in order to decouple our TagHelper phases from what the parser initially parses.
Added an integration and parser test to validate the new functionality. Most of the testing is from the fact that no other tests had to change because of this (it doesn't break anything).
Added a new SyntaxNode method GetTokens that flattens a node into only its token representation.
aspnet/AspNetCore#11826
@aspnet/build it looks like the CI has completed but the reporting from azure devops to GitHub is broken? Is this a known issue?
Is this a known issue?
I haven't heard anything. But, transient problems like this do happen occasionally. @MattGal?
BTW, you can still push this commit @NTaylorMullen
@dougbu this is a communication burp between Azure DevOps and Github, and are usually fairly rare (like < once / day). Given both pieces have their own concept of throttling / tarpitting, it may be related to this; If we can gather enough instances we may be able to get AzDO to investigate, until then I suggest just restarting checks unnecessarily or merging after checking in AzDO.
|
gharchive/pull-request
| 2019-08-06T21:40:23 |
2025-04-01T04:56:04.089908
|
{
"authors": [
"MattGal",
"NTaylorMullen",
"dougbu"
],
"repo": "aspnet/AspNetCore-Tooling",
"url": "https://github.com/aspnet/AspNetCore-Tooling/pull/942",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
472110900
|
DI in Startup constructor will throw exception in .NET Core 3.0 preview 7
public Startup(IConfiguration configuration, ILogger<Startup> logger)
This code no longer works. Same issue as https://github.com/aspnet/Extensions/issues/1096
Per Andrew Stanton-Nurse's comment under that issue. https://github.com/aspnet/Extensions/issues/1096#issuecomment-489378332
This document needs update.
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 7184a308-a254-9350-a5cd-5bced1f369ae
Version Independent ID: 726e3bf1-f367-d733-8933-bccc04da0e16
Content: Logging in ASP.NET Core
Content Source: aspnetcore/fundamentals/logging/index.md
Product: aspnet-core
Technology: aspnetcore-fundamentals
GitHub Login: @tdykstra
Microsoft Alias: tdykstra
Hello @EdiWang ... This is a duplicate of our issue to work the whole topic for 3.0. I'll move your remark to that issue.
|
gharchive/issue
| 2019-07-24T06:53:48 |
2025-04-01T04:56:04.096623
|
{
"authors": [
"EdiWang",
"guardrex"
],
"repo": "aspnet/AspNetCore.Docs",
"url": "https://github.com/aspnet/AspNetCore.Docs/issues/13469",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
493240581
|
Wrong Controller
In the HTML, you have set the tag:
asp-controller="Movies"
but no controller with that name has been created yet.
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 788d46c9-a91a-2788-8db7-b53c37b512f2
Version Independent ID: 00c2c01d-d235-2d2d-5c96-1f33a9314382
Content: Add a view to an ASP.NET Core MVC app
Content Source: aspnetcore/tutorials/first-mvc-app/adding-view.md
Product: aspnet-core
Technology: aspnetcore-tutorials
GitHub Login: @Rick-Anderson
Microsoft Alias: riande
Sorry, I didn't see the line:
Note: The Movies controller has not been implemented. At this point, the Movie app link does not work.
|
gharchive/issue
| 2019-09-13T10:02:04 |
2025-04-01T04:56:04.101176
|
{
"authors": [
"pampua84"
],
"repo": "aspnet/AspNetCore.Docs",
"url": "https://github.com/aspnet/AspNetCore.Docs/issues/14296",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
506120596
|
Format pipeline code
See https://github.com/dotnet/samples/issues/1630
Merged
|
gharchive/issue
| 2019-10-12T02:50:01 |
2025-04-01T04:56:04.102384
|
{
"authors": [
"Rick-Anderson"
],
"repo": "aspnet/AspNetCore.Docs",
"url": "https://github.com/aspnet/AspNetCore.Docs/issues/15038",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
481022540
|
When exception occurs during Startup.Configure propagate to webhost
Is your feature request related to a problem? Please describe.
In Configure(IApplicationBuilder app, IHostingEnvironment env) I am registering a method using the app.ApplicationServices instance. This method throws an exception when I can't make a socket connection (in my case I tried to connect to an AMQP client).
I noticed that if I move the registration inside ConfigureServices(IServiceCollection services) the exception bubbles up to my webhost. To be able to do this inside ConfigureServices I had to get an instance of the serviceProvider by using services.BuildServiceProvider() (which I am not sure is a good idea).
Describe the solution you'd like
I was wondering if it is possible to propagate exceptions that occur during Configure stage to webhost.
@Tratcher This option doesn't help with the scenario I mentioned.
If you throw an exception inside public void Configure(IApplicationBuilder app, IHostingEnvironment env), this will not stop the webHost from starting (regardless of the setting you mentioned above). And this is what I want to achieve with this feature request.
I can have a look at where in the pipeline the Configure method is called and see how hard it is to make the exception (when it occurs, of course) propagate to the webHost.
But I was wondering if I am the only one seeing value in this feature request?
Where are you running the app? In IIS (in proc or out of proc)? Or in a console app using Kestrel directly?
Here's a simple repro showing an exception from Configure propagating to Main when using Kestrel in a console app. This is the default behavior, no configuration is required.
public void Configure(IApplicationBuilder app)
{
app.Run((httpContext) =>
{
var payload = _helloWorldBytes;
var response = httpContext.Response;
response.StatusCode = 200;
response.ContentType = "text/plain";
response.ContentLength = payload.Length;
return response.BodyWriter.WriteAsync(payload).GetAsTask();
});
throw new System.Exception("Startup Exception");
}
Application startup exception: System.Exception: Startup Exception
at PlaintextApp.Startup.Configure(IApplicationBuilder app) in D:\github\AspNetCore\src\Servers\Kestrel\samples\PlaintextApp\Startup.cs:line 34
at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at Microsoft.AspNetCore.Hosting.MethodInfoExtensions.InvokeWithoutWrappingExceptions(MethodInfo methodInfo, Object obj, Object[] parameters) in D:\github\AspNetCore\src\Hosting\hosting\src\Internal\MethodInfoExtensions.cs:line 17
at Microsoft.AspNetCore.Hosting.ConfigureBuilder.Invoke(Object instance, IApplicationBuilder builder) in D:\github\AspNetCore\src\Hosting\hosting\src\Internal\ConfigureBuilder.cs:line 56
at Microsoft.AspNetCore.Hosting.ConfigureBuilder.<>c__DisplayClass4_0.<Build>b__0(IApplicationBuilder builder) in D:\github\AspNetCore\src\Hosting\hosting\src\Internal\ConfigureBuilder.cs:line 20
at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.Configure(IApplicationBuilder app) in D:\github\AspNetCore\src\Hosting\hosting\src\Startup\ConventionBasedStartup.cs:line 25
at Microsoft.AspNetCore.Hosting.WebHost.BuildApplication() in D:\github\AspNetCore\src\Hosting\hosting\src\Internal\WebHost.cs:line 228
Unhandled exception. System.Exception: Startup Exception
at PlaintextApp.Startup.Configure(IApplicationBuilder app) in D:\github\AspNetCore\src\Servers\Kestrel\samples\PlaintextApp\Startup.cs:line 34
at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at Microsoft.AspNetCore.Hosting.MethodInfoExtensions.InvokeWithoutWrappingExceptions(MethodInfo methodInfo, Object obj, Object[] parameters) in D:\github\AspNetCore\src\Hosting\hosting\src\Internal\MethodInfoExtensions.cs:line 17
at Microsoft.AspNetCore.Hosting.ConfigureBuilder.Invoke(Object instance, IApplicationBuilder builder) in D:\github\AspNetCore\src\Hosting\hosting\src\Internal\ConfigureBuilder.cs:line 56
at Microsoft.AspNetCore.Hosting.ConfigureBuilder.<>c__DisplayClass4_0.<Build>b__0(IApplicationBuilder builder) in D:\github\AspNetCore\src\Hosting\hosting\src\Internal\ConfigureBuilder.cs:line 20
at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.Configure(IApplicationBuilder app) in D:\github\AspNetCore\src\Hosting\hosting\src\Startup\ConventionBasedStartup.cs:line 25
at Microsoft.AspNetCore.Hosting.WebHost.BuildApplication() in D:\github\AspNetCore\src\Hosting\hosting\src\Internal\WebHost.cs:line 228
at Microsoft.AspNetCore.Hosting.WebHost.StartAsync(CancellationToken cancellationToken) in D:\github\AspNetCore\src\Hosting\hosting\src\Internal\WebHost.cs:line 148
at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token, String startupMessage) in D:\github\AspNetCore\src\Hosting\hosting\src\WebHostExtensions.cs:line 109
at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token, String startupMessage) in D:\github\AspNetCore\src\Hosting\hosting\src\WebHostExtensions.cs:line 147
at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token) in D:\github\AspNetCore\src\Hosting\hosting\src\WebHostExtensions.cs:line 94
at PlaintextApp.Startup.Main(String[] args) in D:\github\AspNetCore\src\Servers\Kestrel\samples\PlaintextApp\Startup.cs:line 48
at PlaintextApp.Startup.<Main>(String[] args)
D:\github\AspNetCore\artifacts\bin\PlaintextApp\Debug\netcoreapp3.0\PlaintextApp.exe (process 12656) exited with code -532462766.
Press any key to close this window . . .
IIS out-of-proc has different behavior because the console is not usually visible. It captures the errors and shows an error page. See https://docs.microsoft.com/en-us/aspnet/core/fundamentals/host/web-host?view=aspnetcore-2.2#detailed-errors
IIS in-proc is a bit different, and has changed significantly across versions. You're using 2.2, correct? @jkotalik what's in-proc's setting for CaptureStartupErrors in 2.2?
After some digging I noticed that having (I am using Microsoft.ApplicationInsights.AspNetCore 2.7.0)
services.AddApplicationInsightsTelemetry(); in ConfigureServices the exception is swallowed.
If I remove the applicationInsights telemetry configuration the exception bubbles up.
You can add that to your sample application, it is quite easy to reproduce.
Didn't expect that behavior.
But you are right, the feature is already there.
The problem is in combination with application insights configuration.
Closing as it sounds like the issue has been resolved and there is no action needed in ASP.NET Core.
But this is still an issue in combination with application insights - which I find very strange. Can we maybe reference this issue to application insights repository?
@jkotalik this looks like the issue you investigated that was resolved in a later AI version?
https://github.com/microsoft/ApplicationInsights-aspnetcore/issues/897
|
gharchive/issue
| 2019-08-15T07:03:04 |
2025-04-01T04:56:04.111723
|
{
"authors": [
"Tratcher",
"anurse",
"arabelaa",
"jkotalik"
],
"repo": "aspnet/AspNetCore",
"url": "https://github.com/aspnet/AspNetCore/issues/13157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
400962592
|
Unit testing HttpContext with HTTP2 features
Is there a simple way to create a HttpContext with HTTP2 features? I want to write some HTTP2-ish unit tests.
var context = new DefaultHttpContext();
context.Response.AppendTrailers("", "");
This throws an error because there is no IHttpResponseTrailersFeature.
Is there a simple way to make the context HTTP2-erized. If not, consider adding one.
Duplicate of #6880 ?
This issue is about using DefaultHttpContext with trailers independently.
#6880 is about using trailers inside TestHost.
Since trailers aren't as reliable a feature, I don't think we'd put this in DefaultHttpContext directly, we should look at an .EnableHttp2Goop() function of some kind to initialize default values for HTTP/2 expected features in DefaultHttpContext
We'll leave this in preview 7, but it's on the verge :). If you're passionate @JamesNK , we take PRs 🐴
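The pattern under discussion — a context object that only supports what its feature collection contains, plus an `.EnableHttp2Goop()`-style initializer for tests — can be sketched as follows. This is a hedged Java analogue with hypothetical names (`FeatureContext`, `TrailersFeature`, `enableHttp2Features`); none of these are the real ASP.NET Core types.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical analogue of the feature-collection pattern: the context only
// exposes behavior for features that were registered, so a unit test must
// register a trailers feature (or call a helper that does) before an
// AppendTrailers-style API works.
class FeatureContext {
    private final Map<Class<?>, Object> features = new HashMap<>();

    <T> void set(Class<T> type, T feature) { features.put(type, feature); }

    <T> T get(Class<T> type) { return type.cast(features.get(type)); }

    // Analogue of Response.AppendTrailers: fails fast without the feature.
    void appendTrailer(String name, String value) {
        TrailersFeature f = get(TrailersFeature.class);
        if (f == null) {
            throw new IllegalStateException("Trailers are not supported");
        }
        f.trailers.put(name, value);
    }

    // Analogue of the proposed test helper that pre-populates HTTP/2 features.
    void enableHttp2Features() {
        set(TrailersFeature.class, new TrailersFeature());
    }
}

class TrailersFeature {
    final Map<String, String> trailers = new HashMap<>();
}
```

With the helper called first, appending a trailer succeeds instead of throwing — which is the test-setup ergonomics the issue asks for.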
|
gharchive/issue
| 2019-01-19T05:15:45 |
2025-04-01T04:56:04.115152
|
{
"authors": [
"JamesNK",
"anurse"
],
"repo": "aspnet/AspNetCore",
"url": "https://github.com/aspnet/AspNetCore/issues/6871",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
423818294
|
Build C++ Client on macOS, Windows and Linux with CMake
Epic #5301
[ ] Works on Windows, Linux, and Mac
[ ] CI integration
[ ] Optional flag/define for building tests (off by default?)
[ ] ReadMe describing how to build/consume the client
Pre-triage notes:
Not necessary for preview 5, suggest moving to a later preview.
I'd argue this is good for preview4, I already have a draft PR 95% working for this, https://github.com/aspnet/SignalR-Client-Cpp/pull/4
Triage decision: We'll leave this in preview 5 but it's fine if it slips.
Let's see about getting this done in Preview 8 (though I think it's basically done?)
Almost, there was some feedback I'd like to react to.
Done via https://github.com/aspnet/SignalR-Client-Cpp/pull/4
|
gharchive/issue
| 2019-03-21T16:16:53 |
2025-04-01T04:56:04.118873
|
{
"authors": [
"BrennanConroy",
"anurse"
],
"repo": "aspnet/AspNetCore",
"url": "https://github.com/aspnet/AspNetCore/issues/8706",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
433941014
|
SignalR high memory consumption
Hi
We're seeing some weird behaviour in our application. We have at most 600 users online at a given time, and our memory seems to be rising constantly. Please see image below.
The drop in memory is because we restarted the pod.
We're running this in Kubernetes (linux container) with Istio and using a redis backplane.
Everything is running behind a Google Cloud Load Balancer.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using BulletLeague.Datastores.Friends.Interfaces;
using BulletLeague.Datastores.PlayerInformationStore.Interfaces;
using BulletLeague.Social.Models;
using BulletLeague.Social.Services.FriendsService;
using BulletLeague.Social.SignalR.Enums;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Logging;
namespace BulletLeague.Social.SignalR.Hubs.V1
{
[Authorize]
public class SocialHub : Hub<IClientOperations>
{
private readonly IFriendsStore friendsStore;
private readonly IPlayerInformationStore playerInfoStore;
private readonly IFriendsService friendsService;
private readonly ILogger<SocialHub> logger;
public SocialHub(
IFriendsStore friendsStore,
IPlayerInformationStore playerInfoStore,
IFriendsService friendsService,
ILogger<SocialHub> logger)
{
this.friendsStore = friendsStore;
this.playerInfoStore = playerInfoStore;
this.friendsService = friendsService;
this.logger = logger;
}
public override async Task OnConnectedAsync()
{
try
{
await base.OnConnectedAsync();
var playerInfo = await playerInfoStore.ReadAsync(Context.User.Identity.Name);
var username = playerInfo.IsDev ? $"[Dev] {playerInfo.Username}" : playerInfo.Username;
var player = new Player(Context.User.Identity.Name, username, Context.ConnectionId, State.Idle);
var friends = await friendsStore.ReadAsync(Context.User.Identity.Name);
if (friends == null)
{
friends = new Datastores.Friends.Models.Friends
{
Id = Context.User.Identity.Name,
FriendsList = new List<Datastores.Friends.Models.Friend>(),
Created = DateTime.UtcNow,
};
await friendsStore.UpsertAsync(friends);
}
player.FriendsList = friends.FriendsList;
await friendsService.AddOrUpdatePlayer(player);
var friendStates = new Dictionary<string, State>();
var shouldUpsert = false;
foreach (var friendItem in player.FriendsList)
{
switch (friendItem.State)
{
case Datastores.Friends.Enums.FriendState.Connected:
shouldUpsert = await HandleConnectedStatesAsync(player, friends, friendStates, shouldUpsert, friendItem);
break;
default:
friendStates.Add(friendItem.Id, State.Requested);
break;
}
}
if (shouldUpsert)
{
await friendsStore.UpsertAsync(friends);
}
await Clients.Caller.ConnectSuccessful(player.FriendsList, friendStates);
}
catch (Exception ex)
{
logger.LogError(nameof(OnConnectedAsync), ex);
}
}
private async Task<bool> HandleConnectedStatesAsync(Player player, Datastores.Friends.Models.Friends friends, Dictionary<string, State> friendStates, bool shouldUpsert, Datastores.Friends.Models.Friend friendItem)
{
var friend = await friendsService.GetPlayer(friendItem.Id);
if (friend != null)
{
await Clients.Client(friend.ConnectionId).FriendStateUpdated(player.Id, player.Username, player.State);
if (friendItem.Username != friend.Username)
{
var actualFriendItem = friends.FriendsList.FirstOrDefault(x => x.Id == friend.Id);
actualFriendItem.Username = friend.Username;
shouldUpsert = true;
}
friendItem.Username = friend.Username;
friendStates.Add(friendItem.Id, friend.State);
}
else
{
var playerInfo = await playerInfoStore.ReadAsync(friendItem.Id);
if (playerInfo != null && playerInfo.Username != friendItem.Username)
{
var actualFriendItem = friends.FriendsList.FirstOrDefault(x => x.Id == playerInfo.Id);
var isDevPrefixed = playerInfo.IsDev ? "[Dev] " : string.Empty;
actualFriendItem.Username = $"{isDevPrefixed}{playerInfo.Username}";
friendItem.Username = actualFriendItem.Username;
shouldUpsert = true;
}
friendStates.Add(friendItem.Id, State.Offline);
}
return shouldUpsert;
}
// TODO: Should be removed when removing SetOnline method in game client
public async Task SetOnline(string username)
{
username.IsNormalized();
await Task.CompletedTask;
}
public async Task SetState(State state)
{
try
{
var player = await friendsService.GetPlayer(Context.User.Identity.Name);
if (player == null)
{
return;
}
player.State = state;
await friendsService.AddOrUpdatePlayer(player);
foreach (var friendItem in player.FriendsList.Where(x => x.State == Datastores.Friends.Enums.FriendState.Connected))
{
var friend = await friendsService.GetPlayer(friendItem.Id);
if (friend != null)
{
await Clients.Client(friend.ConnectionId).FriendStateUpdated(player.Id, player.Username, player.State);
}
}
}
catch (Exception ex)
{
logger.LogError(nameof(SetState), ex);
}
}
public async Task UpdateUsername(string username)
{
try
{
var player = await friendsService.GetPlayer(Context.User.Identity.Name);
if (player == null)
{
return;
}
player.Username = username;
await friendsService.AddOrUpdatePlayer(player);
foreach (var friendItem in player.FriendsList.Where(x => x.State == Datastores.Friends.Enums.FriendState.Connected))
{
var friend = await friendsService.GetPlayer(friendItem.Id);
if (friend != null)
{
await Clients.Client(friend.ConnectionId).FriendStateUpdated(player.Id, player.Username, player.State);
}
}
}
catch (Exception ex)
{
logger.LogError(nameof(UpdateUsername), ex);
}
}
public async Task FriendSendRequest(string username)
{
try
{
if (!username.Contains("#", StringComparison.InvariantCulture))
{
await Clients.Caller.FriendRequestFailed("Username is missing #.");
return;
}
var player = await friendsService.GetPlayer(Context.User.Identity.Name);
if (player == null)
{
return;
}
var otherPlayerInfo = await playerInfoStore.ReadByUsernameAndUniqueIdAsync(username);
if (otherPlayerInfo == null)
{
await Clients.Caller.FriendRequestFailed("User not found.");
return;
}
if (player.FriendsList.Any(x => x.Id == otherPlayerInfo.Id))
{
var befriendedPlayer = player.FriendsList.FirstOrDefault(x => x.Id == otherPlayerInfo.Id);
if (befriendedPlayer.State == Datastores.Friends.Enums.FriendState.Requested)
{
await FriendAcceptRequest(otherPlayerInfo.Id);
await Clients.Caller.FriendRequestSuccess(befriendedPlayer);
return;
}
await Clients.Caller.FriendRequestFailed("User already befriended.");
return;
}
var friends = await friendsStore.ReadAsync(Context.User.Identity.Name);
var newFriend = new Datastores.Friends.Models.Friend
{
Id = otherPlayerInfo.Id,
Username = otherPlayerInfo.Username,
State = Datastores.Friends.Enums.FriendState.SentRequest,
};
friends.FriendsList.Add(newFriend);
player.FriendsList = friends.FriendsList;
var friend = await friendsStore.ReadAsync(otherPlayerInfo.Id);
if (friend == null)
{
friend = new Datastores.Friends.Models.Friends
{
Id = otherPlayerInfo.Id,
FriendsList = new List<Datastores.Friends.Models.Friend>(),
Created = DateTime.UtcNow,
};
}
var newFriendRequest = new Datastores.Friends.Models.Friend
{
Id = Context.User.Identity.Name,
Username = player.Username,
State = Datastores.Friends.Enums.FriendState.Requested,
};
friend.FriendsList.Add(newFriendRequest);
await friendsService.AddOrUpdatePlayer(player);
var otherPlayer = await friendsService.GetPlayer(otherPlayerInfo.Id);
if (otherPlayer != null)
{
otherPlayer.FriendsList = friend.FriendsList;
await friendsService.AddOrUpdatePlayer(otherPlayer);
await Clients.Client(otherPlayer.ConnectionId).FriendRequestReceived(newFriendRequest);
}
await friendsStore.UpsertAsync(friends);
await friendsStore.UpsertAsync(friend);
await Clients.Caller.FriendRequestSuccess(newFriend);
}
catch (Exception ex)
{
logger.LogError(nameof(FriendAcceptRequest), ex);
}
}
public async Task FriendAcceptRequest(string otherId)
{
try
{
var player = await friendsService.GetPlayer(Context.User.Identity.Name);
if (player == null)
{
return;
}
var otherPlayer = await friendsService.GetPlayer(otherId);
if (otherPlayer != null)
{
await Clients.Caller.FriendStateUpdated(otherPlayer.Id, otherPlayer.Username, otherPlayer.State);
await Clients.Client(otherPlayer.ConnectionId).FriendStateUpdated(player.Id, player.Username, player.State);
}
else
{
var f = player.FriendsList.FirstOrDefault(x => x.Id == otherId);
await Clients.Caller.FriendStateUpdated(f.Id, f.Username, State.Offline);
}
var friends = await friendsStore.ReadAsync(Context.User.Identity.Name);
var f1 = friends.FriendsList.FirstOrDefault(x => x.Id == otherId);
f1.State = Datastores.Friends.Enums.FriendState.Connected;
player.FriendsList = friends.FriendsList;
await friendsService.AddOrUpdatePlayer(player);
var friend = await friendsStore.ReadAsync(otherId);
var f2 = friend.FriendsList.FirstOrDefault(x => x.Id == Context.User.Identity.Name);
f2.State = Datastores.Friends.Enums.FriendState.Connected;
if (otherPlayer != null)
{
otherPlayer.FriendsList = friend.FriendsList;
await friendsService.AddOrUpdatePlayer(otherPlayer);
}
await friendsStore.UpsertAsync(friends);
await friendsStore.UpsertAsync(friend);
}
catch (Exception ex)
{
logger.LogError(nameof(FriendAcceptRequest), ex);
}
}
public async Task FriendDeclineRequest(string otherId)
{
try
{
var player = await friendsService.GetPlayer(Context.User.Identity.Name);
if (player == null)
{
return;
}
var otherPlayer = await friendsService.GetPlayer(otherId);
if (otherPlayer != null)
{
await Clients.Client(otherPlayer.ConnectionId).FriendDeclineRequest(player.Id);
}
var friends = await friendsStore.ReadAsync(Context.User.Identity.Name);
var f = friends.FriendsList.FirstOrDefault(x => x.Id == otherId);
friends.FriendsList.Remove(f);
player.FriendsList = friends.FriendsList;
await friendsService.AddOrUpdatePlayer(player);
var friend = await friendsStore.ReadAsync(otherId);
var f2 = friend.FriendsList.FirstOrDefault(x => x.Id == Context.User.Identity.Name);
friend.FriendsList.Remove(f2);
if (otherPlayer != null)
{
otherPlayer.FriendsList = friend.FriendsList;
await friendsService.AddOrUpdatePlayer(otherPlayer);
}
await friendsStore.UpsertAsync(friends);
await friendsStore.UpsertAsync(friend);
}
catch (Exception ex)
{
logger.LogError(nameof(FriendDeclineRequest), ex);
}
}
public async Task FriendRemove(string otherId)
{
try
{
var player = await friendsService.GetPlayer(Context.User.Identity.Name);
if (player == null)
{
return;
}
var otherPlayer = await friendsService.GetPlayer(otherId);
if (otherPlayer != null)
{
await Clients.Client(otherPlayer.ConnectionId).FriendRemove(player.Id);
}
var friends = await friendsStore.ReadAsync(Context.User.Identity.Name);
var f = friends.FriendsList.FirstOrDefault(x => x.Id == otherId);
friends.FriendsList.Remove(f);
player.FriendsList = friends.FriendsList;
await friendsService.AddOrUpdatePlayer(player);
var friend = await friendsStore.ReadAsync(otherId);
var f2 = friend.FriendsList.FirstOrDefault(x => x.Id == Context.User.Identity.Name);
friend.FriendsList.Remove(f2);
if (otherPlayer != null)
{
otherPlayer.FriendsList = friend.FriendsList;
await friendsService.AddOrUpdatePlayer(otherPlayer);
}
await friendsStore.UpsertAsync(friends);
await friendsStore.UpsertAsync(friend);
}
catch (Exception ex)
{
logger.LogError(nameof(FriendRemove), ex);
}
}
public async Task InviteToFriendBrawl(string otherId, string gameCode)
{
try
{
var otherPlayer = await friendsService.GetPlayer(otherId);
if (otherPlayer == null || otherPlayer.State != State.Idle)
{
await Clients.Caller.FriendNotAvailableForGameInvites();
return;
}
var player = await friendsService.GetPlayer(Context.User.Identity.Name);
await Clients.Client(otherPlayer.ConnectionId).FriendBrawlInvite(player.Id, gameCode);
await Clients.Caller.FriendBrawlInviteSuccess();
}
catch (Exception ex)
{
logger.LogError(nameof(InviteToFriendBrawl), ex);
}
}
public async Task FriendBrawlDecline(string otherId)
{
try
{
var otherPlayer = await friendsService.GetPlayer(otherId);
if (otherPlayer != null)
{
await Clients.Client(otherPlayer.ConnectionId).FriendBrawlDeclined(Context.User.Identity.Name);
}
}
catch (Exception ex)
{
logger.LogError(nameof(FriendBrawlDecline), ex);
}
}
public async Task InviteToDuo(string otherId, string region)
{
try
{
var otherPlayer = await friendsService.GetPlayer(otherId);
if (otherPlayer == null || otherPlayer.State != State.Idle)
{
await Clients.Caller.FriendNotAvailableForGameInvites();
return;
}
await Clients.Client(otherPlayer.ConnectionId).DuoInvite(Context.User.Identity.Name, region);
await Clients.Caller.DuoInviteSuccess();
}
catch (Exception ex)
{
logger.LogError(nameof(InviteToDuo), ex);
}
}
public async Task DuoInviteAccept(string inviterId)
{
try
{
var playerId = Context.User.Identity.Name;
if (string.IsNullOrWhiteSpace(playerId))
{
return;
}
var otherPlayer = await friendsService.GetPlayer(inviterId);
if (otherPlayer == null)
{
await Clients.Caller.DuoInviteMissingInviter();
}
await Clients.Client(otherPlayer.ConnectionId).DuoInviteAccepted(playerId);
}
catch (Exception ex)
{
logger.LogError(nameof(DuoInviteAccept), ex);
}
}
public async Task DuoInviteDecline(string inviterId)
{
try
{
var otherPlayer = await friendsService.GetPlayer(inviterId);
if (otherPlayer != null)
{
await Clients.Client(otherPlayer.ConnectionId).DuoInviteDeclined(Context.User.Identity.Name);
}
}
catch (Exception ex)
{
logger.LogError(nameof(DuoInviteDecline), ex);
}
}
public async Task DuoInviteCancel(string otherId)
{
try
{
var otherPlayer = await friendsService.GetPlayer(otherId);
if (otherPlayer != null)
{
await Clients.Client(otherPlayer.ConnectionId).DuoInviteCanceled(Context.User.Identity.Name);
}
}
catch (Exception ex)
{
logger.LogError(nameof(DuoInviteCancel), ex);
}
}
public override async Task OnDisconnectedAsync(Exception exception)
{
try
{
await base.OnDisconnectedAsync(exception);
var playerId = await friendsService.GetMapping(Context.ConnectionId);
if (!string.IsNullOrWhiteSpace(playerId))
{
var player = await friendsService.GetPlayer(playerId);
if (player != null)
{
foreach (var fId in player.FriendsList.Where(x => x.State == Datastores.Friends.Enums.FriendState.Connected))
{
var friend = await friendsService.GetPlayer(fId.Id);
if (friend != null)
{
await Clients.Client(friend.ConnectionId).FriendStateUpdated(player.Id, player.Username, State.Offline);
}
}
await friendsService.RemovePlayer(player.Id, Context.ConnectionId);
}
}
}
catch (Exception ex)
{
logger.LogError(nameof(OnDisconnectedAsync), ex);
}
}
}
}
Everything is wrapped in try/catch to see if we caught anything, but we didn't.
I have a feeling connections are not being closed correctly, or the GC is not cleaning up the ConcurrentDictionary that holds the connections. It also might be a code issue (I mean, it might be our own code that's faulty).
Any help appreciated.
It sounds like it would be best if you can do some initial investigation to see if it's your own objects that are leaking. If you are able to capture a memory dump showing the leaked objects, that would be helpful. Let us know if you have one (do NOT post it here, as memory dumps contain sensitive information) and we can give you an email address to send it to.
Could be the same issue as #8369, try taking a dependency on StackExchange.Redis 2.0.593 or later and see if it helps.
I've now tried setting a memory and CPU limit for the kubernetes deployment. We're now seeing a more stable memory consumption.
It seems when docker starts the container without a --memory parameter, that it just assumes it got all the resources it could ever need, and never does any proper GC (just an assumption on my part).
I might update the StackExchange.Redis if we see any issues again.
after setting a limit we've seen no issues with memory just rising. We also updated StackExchange.Redis for good measure as the issue in #8369 seems pretty nice to get rid of.
Thanks for the help!
|
gharchive/issue
| 2019-04-16T19:10:46 |
2025-04-01T04:56:04.130334
|
{
"authors": [
"anurse",
"marklonquist",
"sehra"
],
"repo": "aspnet/AspNetCore",
"url": "https://github.com/aspnet/AspNetCore/issues/9434",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
463463418
|
Await dispose for inner stream before disposing SslStream.
Part of many issues from https://github.com/dotnet/corefx/issues/38911#issuecomment-507818823
I think I know how it can happen.
I think I know how it can happen.
Don't leave us hanging lol
|
gharchive/pull-request
| 2019-07-02T22:29:36 |
2025-04-01T04:56:04.133651
|
{
"authors": [
"davidfowl",
"halter73",
"jkotalik"
],
"repo": "aspnet/AspNetCore",
"url": "https://github.com/aspnet/AspNetCore/pull/11819",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
387500597
|
Check HubConnection state before running invoke logic
Before calling methods that require the HubConnection to be connected we run a simple connection state check like
https://github.com/aspnet/AspNetCore/blob/e310ccac7b30e901f657f81c86c4048fecfedf7b/src/SignalR/clients/java/signalr/src/main/java/com/microsoft/signalr/HubConnection.java#L464-L466
Addresses #4393
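The linked guard can be illustrated with a minimal sketch: before doing any invoke/send work, check a connection-state field and fail fast when the connection is not active. `MiniHubConnection` and `HubConnectionState` below are hypothetical stand-ins, not the actual com.microsoft.signalr types.

```java
// Minimal sketch of a state guard before invoking a hub method.
enum HubConnectionState { DISCONNECTED, CONNECTED }

class MiniHubConnection {
    private HubConnectionState state = HubConnectionState.DISCONNECTED;

    void start() { state = HubConnectionState.CONNECTED; }

    // Mirrors the check in HubConnection.java: throw before doing any
    // serialization or transport work if the connection is not active.
    void send(String method) {
        if (state != HubConnectionState.CONNECTED) {
            throw new IllegalStateException(
                "The 'send' method cannot be called if the connection is not active");
        }
        // ... serialize and write the invocation message here ...
    }
}
```

The point of running the check first is that callers get a clear, immediate error instead of a confusing failure deep inside the send pipeline.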
@mikaelm12 can you also add the shiproom template? (You can find a copy from https://github.com/aspnet/AspNetCore/pull/4403)
Approved for 2.2.2
The failure was for an IIS Functional test. An unrelated test failure.
Starting test execution, please wait...
2018-12-04T22:55:50.8997274Z [xUnit.net 00:00:17.97] Microsoft.AspNetCore.Server.IIS.FunctionalTests.AppOfflineTests.AppOfflineDroppedWhileSiteFailedToStartInRequestHandler_SiteStops_InProcess [SKIP]
2018-12-04T22:55:51.8627524Z Skipped Microsoft.AspNetCore.Server.IIS.FunctionalTests.AppOfflineTests.AppOfflineDroppedWhileSiteFailedToStartInRequestHandler_SiteStops_InProcess
2018-12-04T22:57:20.4801986Z [xUnit.net 00:01:47.51] Microsoft.AspNetCore.Server.IIS.FunctionalTests.AppOfflineTests.AppOfflineDroppedWhileSiteStarting_SiteShutsDown_InProcess [FAIL]
2018-12-04T22:57:20.4848481Z Failed Microsoft.AspNetCore.Server.IIS.FunctionalTests.AppOfflineTests.AppOfflineDroppedWhileSiteStarting_SiteShutsDown_InProcess
2018-12-04T22:57:20.4848649Z Error Message:
2018-12-04T22:57:20.4848762Z Assert.Equal() Failure
2018-12-04T22:57:20.4848871Z Expected: 0
2018-12-04T22:57:20.4848956Z Actual: -1073740767
|
gharchive/pull-request
| 2018-12-04T22:05:29 |
2025-04-01T04:56:04.136343
|
{
"authors": [
"mikaelm12",
"muratg",
"vivmishra"
],
"repo": "aspnet/AspNetCore",
"url": "https://github.com/aspnet/AspNetCore/pull/4400",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
360500501
|
replacing string interpolation with proper logging syntax
Fixes #362.
I found some other places where it was used.
I'm not sure of the convention of the logging properties
I couldn't get the preview bits to build on my Mac or AppVeyor.
I'm not sure if the tests will pass
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. :x: daniel-white sign now. You have signed the CLA already but the status is still pending? Let us recheck it.
well your CI got it passed. awesome sauce.
Thanks
|
gharchive/pull-request
| 2018-09-15T02:49:24 |
2025-04-01T04:56:04.140347
|
{
"authors": [
"Tratcher",
"daniel-white",
"dnfclas"
],
"repo": "aspnet/BasicMiddleware",
"url": "https://github.com/aspnet/BasicMiddleware/pull/363",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
231779303
|
Suggestion: add an interface for clearing cache entries by key prefix
public static void KeyDeleteWithPrefix(this IDatabase database, string prefix)
{
if (database == null)
{
throw new ArgumentException("Database cannot be null", "database");
}
if (string.IsNullOrWhiteSpace(prefix))
{
throw new ArgumentException("Prefix cannot be empty", "database");
}
database.ScriptEvaluate(@"
local keys = redis.call('keys', ARGV[1])
for i=1,#keys,5000 do
redis.call('del', unpack(keys, i, math.min(i+4999, #keys)))
end", values: new RedisValue[] {prefix});
}
public void Clear()
{
Database.KeyDeleteWithPrefix(GetLocalizedKey("*"));
}
For list-style data, it is hard to target deletions with explicit keys; without an interface like this, the cache is incomplete.
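The Lua script in KeyDeleteWithPrefix above deletes the matched keys in chunks of 5,000 per DEL call (indices i through math.min(i+4999, #keys)), rather than passing every key to one unpack. The chunking arithmetic alone can be sketched in plain Java — this is just the batch-boundary computation, not a Redis call:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the chunking done by the Lua loop:
//   for i=1,#keys,5000 do del(unpack(keys, i, math.min(i+4999, #keys))) end
class PrefixDeleteBatcher {
    static final int BATCH = 5000;

    // Returns [startInclusive, endInclusive] index pairs (0-based),
    // one pair per DEL call the script would issue.
    static List<int[]> batches(int keyCount) {
        List<int[]> out = new ArrayList<>();
        for (int i = 0; i < keyCount; i += BATCH) {
            out.add(new int[] { i, Math.min(i + BATCH - 1, keyCount - 1) });
        }
        return out;
    }
}
```

Batching like this keeps each DEL within Lua's unpack argument limits while still removing every matched key.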
// Copyright (c) .NET Foundation. All rights reserved.
// Licensed under the Apache License, Version 2.0. See License.txt in the project root for license information.
using System.Threading.Tasks;
namespace Microsoft.Extensions.Caching.Distributed
{
public interface IDistributedCache
{
byte[] Get(string key);
Task<byte[]> GetAsync(string key);
void Set(string key, byte[] value, DistributedCacheEntryOptions options);
Task SetAsync(string key, byte[] value, DistributedCacheEntryOptions options);
void Refresh(string key);
Task RefreshAsync(string key);
void Remove(string key);
Task RemoveAsync(string key);
}
}
Alternatively, expose the underlying connection so developers can use it more freely.
We periodically close 'discussion' issues that have not been updated in a long period of time.
We apologize if this causes any inconvenience. We ask that if you are still encountering an issue, please log a new issue with updated information and we will investigate.
|
gharchive/issue
| 2017-05-27T06:14:42 |
2025-04-01T04:56:04.142696
|
{
"authors": [
"aspnet-hello",
"molinjinyi"
],
"repo": "aspnet/Caching",
"url": "https://github.com/aspnet/Caching/issues/311",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
154514669
|
Should default(StringSegment) != default(StringSegment)?
I've noticed that default(StringSegment) == default(StringSegment) evaluates to false, which seems to be done for a reason in the source code, but I'm not sure what that reason is. I was just wondering if someone could fill me in, I'm just curious.
Seems like an oversight. cc @javiercn
Yup this just seems like a bug.
Yep, @rynowak is there any perf reason to not do string.Compare directly? I've done a quick test of the API on the interactive window in VS and it works fine for null values. (We will want a theory test here, though)
@jbagga please enjoy this bug 😄
cc @pranavkm for FYI information.
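The behavior described above — two default instances of a value type comparing unequal — typically comes from an equality check that compares through a null backing buffer. A hedged sketch of the fix follows; `Segment` is a hypothetical Java analogue, not the actual Microsoft.Extensions.Primitives StringSegment source:

```java
// A default segment has a null buffer; equality must short-circuit on that
// case instead of comparing substrings, or default == default comes out false.
final class Segment {
    final String buffer; // null for a default segment
    final int offset;
    final int length;

    Segment(String buffer, int offset, int length) {
        this.buffer = buffer;
        this.offset = offset;
        this.length = length;
    }

    static final Segment DEFAULT = new Segment(null, 0, 0);

    boolean valueEquals(Segment other) {
        if (this.buffer == null || other.buffer == null) {
            // Two default segments are equal; default vs. non-default is not.
            return this.buffer == other.buffer;
        }
        return length == other.length
                && buffer.regionMatches(offset, other.buffer, other.offset, length);
    }
}
```

With the null check in place, `DEFAULT.valueEquals(DEFAULT)` holds, while comparisons between a default and a populated segment still return false.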
|
gharchive/issue
| 2016-05-12T15:39:27 |
2025-04-01T04:56:04.145608
|
{
"authors": [
"Eilon",
"javiercn",
"pranavkm",
"tuespetre"
],
"repo": "aspnet/Common",
"url": "https://github.com/aspnet/Common/issues/111",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
296239532
|
Add support for DataMember attribute to override JSON settings key
Addresses #774
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. :x: dodexahedron sign now. You have signed the CLA already but the status is still pending? Let us recheck it.
Note: The link to the CLA on the aspnet/Home repo is invalid.
Thanks for your PR, but the current thinking is that we are going to do something like https://github.com/aspnet/Configuration/pull/789 to address this
|
gharchive/pull-request
| 2018-02-12T00:00:02 |
2025-04-01T04:56:04.148871
|
{
"authors": [
"HaoK",
"dnfclas",
"dodexahedron"
],
"repo": "aspnet/Configuration",
"url": "https://github.com/aspnet/Configuration/pull/775",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
276273808
|
Update an-overview-of-project-katana.md
Quotes fix at owin.RequestProtocol
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. :x: sultanimbayev: sign now. You have signed the CLA already but the status is still pending? Let us recheck it.
@sultanimbayev looks like you have a commit with https://github.com/aspnet/Docs/pull/4859
I'm sorry. I didn't mention that I have opened two pull requests with the same changes.
Can you just drop the changes to sample4.cs?
I will open another pull request.
|
gharchive/pull-request
| 2017-11-23T06:12:29 |
2025-04-01T04:56:04.152615
|
{
"authors": [
"Rick-Anderson",
"dnfclas",
"sultanimbayev"
],
"repo": "aspnet/Docs",
"url": "https://github.com/aspnet/Docs/pull/4858",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
344047062
|
Update complex-data-model.md
Removed duplication in the Update the Enrollment entity header.
@GoFightNguyen Nice catch! Thank you for taking the time to fix it.
|
gharchive/pull-request
| 2018-07-24T13:52:22 |
2025-04-01T04:56:04.153699
|
{
"authors": [
"GoFightNguyen",
"scottaddie"
],
"repo": "aspnet/Docs",
"url": "https://github.com/aspnet/Docs/pull/7784",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
450266450
|
Can this be used to synchronize related table
Let's say we have a Student table and a StudentGrades table.
I don't want other requests to edit the grades at the same time.
can i do
BeginTransaction
var Student = db.Students.Find(...);
Student.LastModified = DateTimeOffset.Now;
db.SaveChanges();// this will update the rowversion
query StudentGrades;
edit StudentGrades;
db.SaveChanges();
Commit Transaction;
Or do I have to put a RowVersion on each StudentGrade?
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 649fbab8-b960-6873-14bf-188d14757fee
Version Independent ID: f5ee3135-54c8-689c-e28b-b203b376bd6f
Content: Handling Concurrency Conflicts - EF Core
Content Source: entity-framework/core/saving/concurrency.md
Product: entity-framework
Technology: entity-framework-core
GitHub Login: @rowanmiller
Microsoft Alias: divega
@hoksource you specify what properties act as row version in the model. No need to do it for each individual instance.
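A minimal sketch of what "specifying which property acts as a row version in the model" looks like with data annotations (names are illustrative, based on the example above):

```csharp
using System;
using System.ComponentModel.DataAnnotations;

public class Student
{
    public int Id { get; set; }
    public DateTimeOffset LastModified { get; set; }

    // Marks this property as a SQL Server rowversion; EF then uses it
    // automatically for optimistic concurrency checks on every update.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}
```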
|
gharchive/issue
| 2019-05-30T11:44:43 |
2025-04-01T04:56:04.159569
|
{
"authors": [
"divega",
"hoksource"
],
"repo": "aspnet/EntityFramework.Docs",
"url": "https://github.com/aspnet/EntityFramework.Docs/issues/1494",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
53749241
|
EF7 code-based model .. using a Data Dictionary approach ?
To explain what this is, consider a "Customer ID" property that is repeated some 40 times across 40 different model entities, all referring to the same physical meaning, i.e. sharing the same common attributes such as type, display format, display label, validation, etc. With a Data Dictionary approach, this property is defined once with all its related attributes and then referenced wherever required in the model entities, without the need to redefine the attributes in each entity model file.
As it is now, we copy the property with all its attributes to every model file that needs it, and this makes maintenance of the model a nightmare: each time we need to adjust, correct, or change an attribute of a property, we have to inspect all ViewModels and Models to see where else the property is mentioned.
The idea of Data Dictionary based development is not new; in the 1990s of object-oriented 4GL development it proved to make for very productive and clean development environments.
Just a humbled thought I wanted to share.
@awbmansour could you elaborate a little more on the code you want to write. Do you want to still have CLR classes for each entity and just apply the same configuration to every CustomerId property? Or do you not want strongly typed classes and have more of a property bag (Dictionary<string, object>) approach?
Dear Rowan,
Thanks a lot for entertaining my humbled suggestion.
My proposal is to have it in an object oriented way, with inheritance, so
on a context level we have overall definitions that are global to the
context, which can be used in any entity class, and optionally in an entity
class the attributes can be altered or redefined.
Assume that the dictionary associated with the context "mycontext" will be
defined in a mycontext_globaldefs.cs file :
mycontext_globaldefs.cs would contain the properties that would be used
commonly or repeatedly in a context with one or more of the attributes that
can be assigned to such properties :
[DisplayFormat(....)]
[DataType(....)]
[Display(...)]
Guid customer_id;
[DisplayFormat(....)]
[DataType(....)]
[Display(...)]
[StringLength(....)]
string customer_name;
In the individual entity classes, we will use these common properties along
with other singular properties which would not justify/warrant the use of a
common/global definition:
in customer.cs entity class file :
namespace MyApp.Models
{
public class customer
{
[Key]
[Required(......)]
public customer_id { get; set; }
[Required(......)]
public customer_name { get; set; }
}
}
In another class (eg, prioritycustomer.cs), where the property will be
used, except one attribute (in the example, the Display attribute) that
needs redefinition, and other attribute that is not needed (in the example,
Required not needed for Customer Name)
in prioritycustomer.cs entity class file :
namespace MyApp.Models
{
public class prioritycustomer
{
[Key]
[Required(......)]
[Display(...)]
public customer_id { get; set; }
public customer_name { get; set; }
}
}
I hope I answered your kind question,
Best Regards
Adel
@awbmansour - This can be achieved in EF6 using bulk configuration (a.k.a custom conventions).
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder
.Properties<string>()
.Where(p => p.Name == "customer_name")
.Configure(p => p.HasMaxLength(200).IsUnicode(true));
}
Of course, if you wanted to parse some other format (such as the globaldefs.cs class you mentioned) and dynamically create conventions based on that it would be possible.
We'll have something similar in EF7 too, tracked by https://github.com/aspnet/EntityFramework/issues/214.
Wow .. this is awesome. I will need to learn more on this.
Thanks a lot for your care and time during your busy schedule.
Closing out this issue now, feel free to re-open if you have further questions :smile:
|
gharchive/issue
| 2015-01-08T13:17:11 |
2025-04-01T04:56:04.177006
|
{
"authors": [
"awbmansour",
"rowanmiller"
],
"repo": "aspnet/EntityFramework",
"url": "https://github.com/aspnet/EntityFramework/issues/1370",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
113903074
|
Design meeting discussion: October 28, 2015
This issue is for discussion of the EF design meeting on October 28, 2015.
Items discussed were:
DNX Commands and Startup Projects
Please comment on this issue to provide feedback, ask questions, etc.
Just a note about the wiki page, the command :
dnx ef migrations add MyMigration --targetProject MyApp.Data
Should be : (on the latest version 1.0.0-rc2-16317)
dnx ef migrations add MyMigration --target-project MyApp.Data
@aconstant design meeting notes (and other design docs) are just a point in time thing and we don't keep them updated as the product evolves. Our docs for actually using EF are at docs.efproject.net and we keep those updated with each release.
We are closing this issue because no further action is planned for this issue. If you still have any issues or questions, please log a new issue with any additional details that you have.
|
gharchive/issue
| 2015-10-28T19:24:32 |
2025-04-01T04:56:04.180789
|
{
"authors": [
"aconstant",
"ajcvickers",
"bricelam",
"rowanmiller"
],
"repo": "aspnet/EntityFramework",
"url": "https://github.com/aspnet/EntityFramework/issues/3590",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
151524093
|
Query: Translate string.IsNullOrEmpty
I'm having a situation where I need to get data from a single table with 100k+ rows with a quite simple where clause. This query was originally created with EF6 and returned within 100ms.
The code is:
return await dbContext.Tees
.Where(w => w.CourseId.Equals(courseId) && !string.IsNullOrEmpty(w.Name) && w.IsActive.Equals(true) && w.IsDeleted.Equals(false))
.OrderBy(o => o.Color)
.Select(s => new TeeDto
{
CourseId = s.CourseId,
TeeId = s.TeeId,
FrontPar = s.ParFront,
BackPar = s.ParBack,
CoursePar = s.CoursePar,
DistanceBack = s.DistanceBack,
DistanceFront = s.DistanceFront,
DistanceTotal = s.DistanceTotal,
Name = s.Name,
Color = s.Color,
Rating = s.Rating,
Slope = s.Slope
}).ToListAsync();
The query above took more than 55 secs to come back. So I used Profiler to discover that the generated SQL did not include any of the where clause except for the last one (IsDeleted).
SELECT [w].[CourseId], [w].[Name], [w].[IsActive], [w].[TeeId], [w].[ParFront], [w].[ParBack], [w].[CoursePar], [w].[DistanceBack], [w].[DistanceFront], [w].[DistanceTotal], [w].[Color], [w].[Rating], [w].[Slope]
FROM [Tee] AS [w]
WHERE [w].[IsDeleted] = 0
ORDER BY [w].[Color]
I changed the clauses order and pushed the string.IsNullOrEmpty at the end, so the query looks like
return await dbContext.Tees
.Where(w => w.CourseId.Equals(courseId) && w.IsActive.Equals(true) && w.IsDeleted.Equals(false) && !string.IsNullOrEmpty(w.Name))
.OrderBy(o => o.Color)
.Select(s => new TeeDto
{
CourseId = s.CourseId,
TeeId = s.TeeId,
FrontPar = s.ParFront,
BackPar = s.ParBack,
CoursePar = s.CoursePar,
DistanceBack = s.DistanceBack,
DistanceFront = s.DistanceFront,
DistanceTotal = s.DistanceTotal,
Name = s.Name,
Color = s.Color,
Rating = s.Rating,
Slope = s.Slope
}).ToListAsync();
With the modified where clauses, the query came back under 100ms but the generated SQL was not exactly what I was hoping for; the string.IsNullOrEmpty was not translated at all:
exec sp_executesql N'SELECT [w].[Name], [w].[CourseId], [w].[TeeId], [w].[ParFront], [w].[ParBack], [w].[CoursePar], [w].[DistanceBack], [w].[DistanceFront], [w].[DistanceTotal], [w].[Color], [w].[Rating], [w].[Slope]
FROM [Tee] AS [w]
WHERE (([w].[CourseId] = @__courseId_0) AND ([w].[IsActive] = 1)) AND ([w].[IsDeleted] = 0)
ORDER BY [w].[Color]',N'@__courseId_0 bigint',@__courseId_0=3959
I had to manually recreate the string.IsNullOrEmpty method in order to get the results I needed, and thus the correct SQL.
return await dbContext.Tees
.Where(w => w.CourseId.Equals(courseId) && (w.Name != null || w.Name != "") && w.IsActive.Equals(true) && w.IsDeleted.Equals(false))
.OrderBy(o => o.Color)
.Select(s => new TeeDto
{
CourseId = s.CourseId,
TeeId = s.TeeId,
FrontPar = s.ParFront,
BackPar = s.ParBack,
CoursePar = s.CoursePar,
DistanceBack = s.DistanceBack,
DistanceFront = s.DistanceFront,
DistanceTotal = s.DistanceTotal,
Name = s.Name,
Color = s.Color,
Rating = s.Rating,
Slope = s.Slope
}).ToListAsync();
exec sp_executesql N'SELECT [w].[CourseId], [w].[TeeId], [w].[ParFront], [w].[ParBack], [w].[CoursePar], [w].[DistanceBack], [w].[DistanceFront], [w].[DistanceTotal], [w].[Name], [w].[Color], [w].[Rating], [w].[Slope]
FROM [Tee] AS [w]
WHERE ((([w].[CourseId] = @__courseId_0) AND ([w].[Name] IS NOT NULL OR (([w].[Name] <> '''') OR [w].[Name] IS NULL))) AND ([w].[IsActive] = 1)) AND ([w].[IsDeleted] = 0)
ORDER BY [w].[Color]',N'@__courseId_0 bigint',@__courseId_0=3959
Am I doing this correctly, or is translating string.IsNullOrEmpty into SQL simply not supported yet? I'm using EF 7.0.0-rc1-final.
Any chance you added support for String.IsNullOrWhiteSpace while you were at it?
@MrGadget1024 - Yes. string.IsNullOrWhiteSpace is already supported.
Test - https://github.com/aspnet/EntityFramework/blob/dev/test/Microsoft.EntityFrameworkCore.SqlServer.FunctionalTests/QuerySqlServerTest.cs#L5810
|
gharchive/issue
| 2016-04-28T00:42:57 |
2025-04-01T04:56:04.186194
|
{
"authors": [
"MrGadget1024",
"raphlo",
"smitpatel"
],
"repo": "aspnet/EntityFramework",
"url": "https://github.com/aspnet/EntityFramework/issues/5199",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
232811733
|
[DataType] annotation not working
Property
public DateTime? TimeCreated { get; set; }
is by default mapped to datetime2(7).
But if I want to map it to the older datetime type, it works when explicitly configured in the Fluent API but does not work (the column keeps the default type) when configured with an annotation:
[DataType("datetime")]
public DateTime? TimeCreated { get; set; }
Further technical details
EF Core version: 1.1.2
Database Provider: Microsoft.EntityFrameworkCore.SqlServer
Operating system: Windows 10
IDE: Visual Studio 2017
You should use the Column attribute, not DataType
[Column(TypeName = "datetime")]
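Put together with the property from the report, a hedged sketch of the fix (the entity name is illustrative):

```csharp
using System;
using System.ComponentModel.DataAnnotations.Schema;

public class AuditedEntity
{
    public int Id { get; set; }

    // Maps to the legacy SQL Server 'datetime' type
    // instead of the default datetime2(7).
    [Column(TypeName = "datetime")]
    public DateTime? TimeCreated { get; set; }
}
```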
Thx for that info.
|
gharchive/issue
| 2017-06-01T09:04:38 |
2025-04-01T04:56:04.189586
|
{
"authors": [
"ErikEJ",
"borisdj"
],
"repo": "aspnet/EntityFramework",
"url": "https://github.com/aspnet/EntityFramework/issues/8662",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
102842538
|
Migrations: Only create FKs once in a hierarchy
Fixes #2887
Looks good. :shipit:
|
gharchive/pull-request
| 2015-08-24T16:51:41 |
2025-04-01T04:56:04.190541
|
{
"authors": [
"bricelam",
"lajones"
],
"repo": "aspnet/EntityFramework",
"url": "https://github.com/aspnet/EntityFramework/pull/2910",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
190864814
|
Fix: #7048 - EF Core 1.1 .ToString() translation causes exception
Adding bad data error handling logic to materialization introduced the possibility
of hitting a known expression compiler limitation (as described here: https://github.com/dotnet/corefx/pull/13126).
This change works around the limitation by moving the read-value try-catch to a helper method, thus we no
longer insert a try-catch directly into the output ET.
:shipit: after changes in the ExpressionPrinter
|
gharchive/pull-request
| 2016-11-21T23:22:10 |
2025-04-01T04:56:04.192231
|
{
"authors": [
"anpete",
"maumar"
],
"repo": "aspnet/EntityFramework",
"url": "https://github.com/aspnet/EntityFramework/pull/7090",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
419960709
|
Migration issue with multiple nested "OwnsOne" after upgrading to v.2.2.2
I get an error message when I try to update the database after generating a migration. Before upgrading to 2.2.2 it worked fine.
Exception message: System.InvalidOperationException: The entity type 'EFError1.Models.SubAddress' cannot be added to the model because a weak entity type with the same name already exists.
Stack trace:System.InvalidOperationException: The entity type 'EFError1.Models.SubAddress' cannot be added to the model because a weak entity type with the same name already exists.
at Microsoft.EntityFrameworkCore.Metadata.Internal.Model.AddEntityType(EntityType entityType)
at Microsoft.EntityFrameworkCore.Metadata.Internal.Model.AddEntityType(String name, ConfigurationSource configurationSource)
at Microsoft.EntityFrameworkCore.Metadata.Internal.InternalModelBuilder.Entity(TypeIdentity& type, ConfigurationSource configurationSource, Boolean allowOwned, Boolean throwOnQuery)
at Microsoft.EntityFrameworkCore.Metadata.Internal.InternalModelBuilder.Entity(String name, ConfigurationSource configurationSource, Boolean allowOwned, Boolean throwOnQuery)
at Microsoft.EntityFrameworkCore.Metadata.Builders.ReferenceOwnershipBuilder.FindRelatedEntityType(String relatedTypeName, String navigationName)
at Microsoft.EntityFrameworkCore.Metadata.Builders.ReferenceOwnershipBuilder.HasOne(String relatedTypeName, String navigationName)
at EFError1.Migrations.Initial.<>c.<BuildTargetModel>b__2_7(ReferenceOwnershipBuilder b3) in D:\SandBox\EFIssues\EFError1\EFError1\Migrations\20190312102225_Initial.Designer.cs:line 128
at Microsoft.EntityFrameworkCore.Metadata.Builders.ReferenceOwnershipBuilder.OwnsOne(String ownedTypeName, String navigationName, Action`1 buildAction)
at EFError1.Migrations.Initial.<>c.<BuildTargetModel>b__2_6(ReferenceOwnershipBuilder b2) in D:\SandBox\EFIssues\EFError1\EFError1\Migrations\20190312102225_Initial.Designer.cs:line 118
at Microsoft.EntityFrameworkCore.Metadata.Builders.ReferenceOwnershipBuilder.OwnsOne(String ownedTypeName, String navigationName, Action`1 buildAction)
at EFError1.Migrations.Initial.<>c.<BuildTargetModel>b__2_3(ReferenceOwnershipBuilder b1) in D:\SandBox\EFIssues\EFError1\EFError1\Migrations\20190312102225_Initial.Designer.cs:line 101
at Microsoft.EntityFrameworkCore.Metadata.Builders.EntityTypeBuilder.OwnsOne(String ownedTypeName, String navigationName, Action`1 buildAction)
at EFError1.Migrations.Initial.<>c.<BuildTargetModel>b__2_1(EntityTypeBuilder b) in D:\SandBox\EFIssues\EFError1\EFError1\Migrations\20190312102225_Initial.Designer.cs:line 86
at Microsoft.EntityFrameworkCore.ModelBuilder.Entity(String name, Action`1 buildAction)
at EFError1.Migrations.Initial.BuildTargetModel(ModelBuilder modelBuilder) in D:\SandBox\EFIssues\EFError1\EFError1\Migrations\20190312102225_Initial.Designer.cs:line 34
at Microsoft.EntityFrameworkCore.Migrations.Migration.<.ctor>b__4_0()
at Microsoft.EntityFrameworkCore.Internal.LazyRef`1.get_Value()
at Microsoft.EntityFrameworkCore.Migrations.Migration.get_TargetModel()
at Microsoft.EntityFrameworkCore.Migrations.Internal.Migrator.GenerateUpSql(Migration migration)
at Microsoft.EntityFrameworkCore.Migrations.Internal.Migrator.<>c__DisplayClass13_2.<GetMigrationCommandLists>b__2()
at Microsoft.EntityFrameworkCore.Migrations.Internal.Migrator.Migrate(String targetMigration)
at Microsoft.EntityFrameworkCore.Design.Internal.MigrationsOperations.UpdateDatabase(String targetMigration, String contextType)
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.UpdateDatabase.<>c__DisplayClass0_1.<.ctor>b__0()
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.OperationBase.Execute(Action action)
The entity type 'EFError1.Models.SubAddress' cannot be added to the model because a weak entity type with the same name already exists.
Steps to reproduce
To reproduce this issue, create a test project, add EF Core 2.2.2, and perform the migration using the classes described below
Partial code listings:
public class Root
{
public int Id { get; set; }
public MainAddress Address1 { get; set; }
public MainAddress Address2 { get; set; }
}
public class MainAddress
{
public int Id { get; set; }
public string Name { get; set; }
public SubAddress SubAddress { get; set; }
}
public class SubAddress
{
public int Id { get; set; }
public string Name { get; set; }
public int Number { get; set; }
public SubSubAddress SubSubAddress { get; set; }
}
public class SubSubAddress
{
public int Id { get; set; }
public string Text { get; set; }
}
public class RootConfig : IEntityTypeConfiguration<Root>
{
public void Configure(EntityTypeBuilder<Root> builder)
{
builder.ToTable("Roots");
builder.HasKey(it => it.Id);
builder.OwnsOne(root => root.Address1, address =>
{
address.OwnsOne(a => a.SubAddress, subAddress =>
{
subAddress.OwnsOne(it => it.SubSubAddress);
});
});
builder.OwnsOne(root => root.Address2, address =>
{
address.OwnsOne(a => a.SubAddress, subAddress =>
{
subAddress.OwnsOne(it => it.SubSubAddress);
});
});
}
}
public class TestContext : DbContext
{
public TestContext(DbContextOptions<TestContext> context) : base(context)
{
}
public DbSet<Root> Roots { get; set; }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
base.OnModelCreating(modelBuilder);
modelBuilder.ApplyConfiguration(new RootConfig());
}
}
Further technical details
EF Core version: 2.2.2
Database Provider: Npgsql.EntityFrameworkCore.PostgreSQL 2.2.0
Operating system: Windows 10
IDE: Rider 2018.3
@ASBrattsev What version are you upgrading from?
2.1.8
The problem has a temporary workaround, but I don't like it. I changed the example a bit: it is necessary to comment out the first of the two nested "OwnsOne" configurations, as in the picture, and then everything works. (Sorry if this isn't clearly written; I don't know how to say it well in English.)
Duplicate of #18183
|
gharchive/issue
| 2019-03-12T12:31:54 |
2025-04-01T04:56:04.198003
|
{
"authors": [
"ASBrattsev",
"AndriySvyryd",
"ajcvickers"
],
"repo": "aspnet/EntityFrameworkCore",
"url": "https://github.com/aspnet/EntityFrameworkCore/issues/14994",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
523540475
|
EFCore 2.2.6 DBQuery.FromSql.Select error (Object reference not set to an instance of an object)
I ran tests using DbQuery and had issues when mapping the result of a FromSql() request in a Select().
I have an error: Object reference not set to an instance of an object.
Here is an example:
DbContext class:
public class MyDbContext : DbContext
{
public MyDbContext(DbContextOptions<MyDbContext> options)
: base(options) { }
public DbQuery<MyEntity> MyEntities { get; set; }
}
Here is the MyEntity class definition:
public class MyEntity
{
public int Id { get; set;}
}
If in my code I make the following call, it works fine (_dbContext is of type MyDbContext of course):
var result = await _dbContext.MyEntities
.FromSql("Exec MyStoredProc")
.Select(s => new { s.Id })
.ToListAsync();
The MyStoredProc stored procedure returns a column named Id.
But if I declare a class MyDTO like this :
public class MyDTO
{
public int MyId { get; set;}
}
And then make a call like this :
var result = await _dbContext.MyEntities
.FromSql("Exec MyStoredProc")
.Select(s => new MyDTO {MyId = s.Id })
.ToListAsync();
Then an exception is raised: Object reference not set to an instance of an object
If I rename the field MyId to Id in the MyDTO class definition, it works fine again. Any idea regarding this behavior? Why does an error occur when the field name doesn't match the SQL column name?
Here is the stack trace when the exception occurs:
at Microsoft.EntityFrameworkCore.Storage.TypedRelationalValueBufferFactoryFactory.CacheKey.<>c.b__6_0(Int32 t, TypeMaterializationInfo v)
at System.Linq.Enumerable.Aggregate[TSource,TAccumulate](IEnumerable`1 source, TAccumulate seed, Func`3 func)
at System.Collections.Generic.ObjectEqualityComparer`1.GetHashCode(T obj)
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at Microsoft.EntityFrameworkCore.Query.Sql.Internal.FromSqlNonComposedQuerySqlGenerator.CreateValueBufferFactory(IRelationalValueBufferFactoryFactory relationalValueBufferFactoryFactory, DbDataReader dataReader)
at Microsoft.EntityFrameworkCore.Internal.NonCapturingLazyInitializer.EnsureInitialized[TParam,TValue](TValue& target, TParam param, Func`2 valueFactory)
at Microsoft.EntityFrameworkCore.Query.Internal.ShaperCommandContext.NotifyReaderCreated(DbDataReader dataReader)
at Microsoft.EntityFrameworkCore.Query.Internal.AsyncQueryingEnumerable`1.AsyncEnumerator.<BufferlessMoveNext>d__12.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerExecutionStrategy.<ExecuteAsync>d__72.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.EntityFrameworkCore.Query.Internal.AsyncQueryingEnumerable`1.AsyncEnumerator.<MoveNext>d__11.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.EntityFrameworkCore.Query.Internal.AsyncLinqOperatorProvider.ExceptionInterceptor`1.EnumeratorExceptionInterceptor.<MoveNext>d__5.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at System.Linq.AsyncEnumerable.<Aggregate_>d__63.MoveNext() in D:\a\1\s\Ix.NET\Source\System.Interactive.Async\Aggregate.cs:line 120
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at ants.Services.CampaignService.<GetSmsStatusesAsync>d__14.MoveNext() in C:\Repositories\Professional Services\Gouv\ants\ants\Services\CampaignService.cs:line 546
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at ants.Controllers.SmsController.d__5.MoveNext() in C:\Repositories\Professional Services\Gouv\ants\ants\Controllers\SmsController.cs:line 91
EF Core version: 2.2.6
Database provider: Microsoft.EntityFrameworkCore.SqlServer
.NET Core 2.2
Operating system: Windows 10
IDE: Visual Studio 2019 16.2.5
@vbond007 EF does not compose SQL after execution of a stored procedure, so this would be evaluated on the client in EF Core 2.2.x. EF Core 3.0 doesn't support automatic client evaluation, so there is nothing to fix in 3.0. Instead be explicit about the client evaluation. For example:
var result = (await _dbContext.MyEntities
.FromSql("Exec MyStoredProc")
.ToListAsync())
.Select(s => new MyDTO {MyId = s.Id })
.ToList();
|
gharchive/issue
| 2019-11-15T15:39:01 |
2025-04-01T04:56:04.209074
|
{
"authors": [
"ajcvickers",
"vbond007"
],
"repo": "aspnet/EntityFrameworkCore",
"url": "https://github.com/aspnet/EntityFrameworkCore/issues/18933",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
425763604
|
Port fix for #1041 to 2.2
Impact
It's not possible to set the Period of the health check publisher hosted service without using reflection. The wrong backing field is set.
This isn't very discoverable because the bug manifests in a setting not being honoured. You might think you are publishing health data every 30 seconds, but it's going to always use the default period.
Workaround
The workaround is to use reflection.
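A sketch of that reflection workaround; the private field name "_period" is an assumption about the 2.2 implementation, so verify it against the actual source before relying on it:

```csharp
using System;
using System.Reflection;
using Microsoft.Extensions.Diagnostics.HealthChecks;

var options = new HealthCheckPublisherOptions();

// The Period setter in 2.2 writes the wrong backing field,
// so set the Period backing field directly.
typeof(HealthCheckPublisherOptions)
    .GetField("_period", BindingFlags.Instance | BindingFlags.NonPublic)
    ?.SetValue(options, TimeSpan.FromSeconds(30));
```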
Risk
The risk here is that someone is setting Period - which actually sets Delay and they are relying on this detail. If a user actually wanted to set Delay, they can already do that.
Approved for 2.2.5
Wait for branch to open.
PR needs at least two things:
approval from someone familiar with the changed code
updates to PatchConfig.props
@rynowak the deadline for merging 2.x fixes is currently the 15th. Please get this approved and in soon so that we can start working on official builds a bit early.
@dougbu @pranavkm - one of you want to mash the like button on this trivial change?
|
gharchive/pull-request
| 2019-03-27T04:52:03 |
2025-04-01T04:56:04.213164
|
{
"authors": [
"dougbu",
"rynowak",
"vivmishra"
],
"repo": "aspnet/Extensions",
"url": "https://github.com/aspnet/Extensions/pull/1312",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
363893129
|
Should we assume Internal Server Errors can be triggered by user input?
I've a question regarding error handling strategy with ASP.NET Core and MVC in particular. In a system we're building, we've been following the philosophy that we should return an HTTP 4xx result code for every error that can be triggered by the user/client. HTTP 5xx error codes should only be returned if we really have a server error: resource problems (e.g., DB connection down), programming errors, etc.
Therefore, whenever an HTTP 500 Internal Server Error is rendered to a user, we log a message that causes the operations and development teams to investigate.
We've observed that sometimes ASP.NET Core and MVC infrastructure will throw exceptions in situations that can be caused by the user, e.g., https://github.com/aspnet/Mvc/issues/5631.
While that issue is going to be fixed anyway, I'm wondering if in general we can assume that ASP.NET Core and MVC will not throw exceptions for specific inputs, e.g., malformed requests. Or should we assume that user input can result in unhandled exceptions while processing the request?
Specifically, @Tratcher said in https://github.com/aspnet/Mvc/issues/5631#issuecomment-419794212 that
[...] a 5xx response does not qualify as a server crash, the server is still operating fine, only the request was rejected. 4xx just means we understood what was wrong with the request and decided it was worth the effort to give you a more specific response. These only get added for common errors where it's obvious that the client was at fault and how.
Is this indeed the general ASP.NET Core policy? If so, we need to redefine our operations strategy; we don't want to raise an alert by design every time someone crafts a malformed request...
Indeed, you must understand the cause of an error to return a 4xx response. The server can do this when parsing the initial request headers as it's clear something is wrong with the message. However there are some errors that can't be raised until the app is processing the request and body. Unfortunately exceptions are sometimes used to communicate this and it's up to the app to catch them and decide if a 4xx is appropriate. Errors in antiforgery are one example. Errors in the request body format are another.
Okay, thanks for giving concrete examples.
exceptions are sometimes used to communicate this and it's up to the app to catch them and decide if a 4xx is appropriate
Is there a list of expected exceptions thrown by ASP.NET Core in reaction to request input so we can explicitly handle them? Or any other guidance on how we could differentiate system errors from request errors?
@fschmied unfortunately the list of exceptions is ever changing, and also varies based on 3rd party components, so providing such a list would be infeasible. However, in cases such as ASP.NET Core MVC, we've done work for API controllers to return better status codes by default, which would just avoid certain exceptions from being thrown in the first place.
But I'm not aware of any general purpose solution.
@fschmied I think in general yes that's the pattern. Another issue with having it be too "automatic" is that the right way to deal with an unhandled exception can vary dramatically between apps. Some apps might want to retry an operation, some might fall back to a default behavior, or who knows what.
I think what might be reasonable is that if you find an unhandled exception that you think that ASP.NET Core should definitely just handle in a definitive way, we could look at that and see if there should be a default behavior.
We periodically close 'discussion' issues that have not been updated in a long period of time.
We apologize if this causes any inconvenience. We ask that if you are still encountering an issue, please log a new issue with updated information and we will investigate.
|
gharchive/issue
| 2018-09-26T07:31:10 |
2025-04-01T04:56:04.222381
|
{
"authors": [
"Eilon",
"Tratcher",
"aspnet-hello",
"fschmied"
],
"repo": "aspnet/Home",
"url": "https://github.com/aspnet/Home/issues/3558",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
150197281
|
DNX branding shows up in the ErrorPage
From @rustd on April 21, 2016 19:21
Repro Steps:
Create an ASP.NET Core Web App
Introduce a compilation error
Run the app to see the error page
Expected:
DNX branding should not show up in the error page. It does at the bottom as shown in this screenshot
Copied from original issue: aspnet/Diagnostics#271
From @Tratcher on April 21, 2016 21:24
That's Hosting's startup exception page, not the developer error page:
https://github.com/aspnet/Hosting/blob/dev/src/Microsoft.AspNetCore.Hosting/Startup/StartupExceptionPage.cs
https://github.com/aspnet/Hosting/blob/dev/src/Microsoft.AspNetCore.Hosting/compiler/resources/GenericError_Footer.html
From @Eilon on April 21, 2016 21:11
Hmm I don't see this in the latest packages. I ran from source code and there's no footer at all. I also don't see this being generated by searching the repo. I also looked over the last month or two of changes and didn't see where it might have been removed.
@Tratcher @ryanbrandenburg do you guys recall removing this? Or is it coming from somewhere else?
From @rustd on April 21, 2016 21:36
I am seeing this with this version 1.0.0-rc2-20581
|
gharchive/issue
| 2016-04-21T21:37:39 |
2025-04-01T04:56:04.227923
|
{
"authors": [
"muratg"
],
"repo": "aspnet/Hosting",
"url": "https://github.com/aspnet/Hosting/issues/722",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
324961314
|
Identity errors not displayed properly when using UseStatusCodePagesWithReExecute
When using UseStatusCodePagesWithReExecute, custom error StatusCodes like those found in GenerateRecoveryCodes.cshtml.cs do not display using the route as defined by UseStatusCodePagesWithReExecute in the startup config.
Eg. return NotFound($"Unable to load user with ID '{_userManager.GetUserId(User)}'.");
Returning a StatusCode(404) without an object displays fine in the custom error layout, but when returned with an object like StatusCode(404, "Custom message..."), it's returned as a string result.
@adams-hub
Found this closed issue: https://github.com/aspnet/Mvc/issues/6717
Try using UseExceptionHandler.
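A likely explanation for the reported behavior (hedged — the exact middleware internals may differ): status-code-pages style middleware only re-executes the pipeline for error responses that have no body, and `StatusCode(404, "Custom message...")` writes the message as the body, so the middleware leaves it alone. A minimal framework-agnostic sketch of that rule, with all names hypothetical:

```javascript
// Sketch of the "re-execute only when the response has no body" rule used by
// status-code-pages style middleware. Not the actual ASP.NET Core implementation.
function statusCodePages(next, renderErrorPage) {
  return (req) => {
    const res = next(req);
    // Only take over for error status codes with an empty body.
    if (res.status >= 400 && res.status < 600 && res.body === undefined) {
      return { status: res.status, body: renderErrorPage(res.status) };
    }
    // A body was already written (e.g. NotFound("message")) -- pass it through.
    return res;
  };
}
```

Under this rule, a bare `NotFound()` gets the custom error page, while `NotFound("Unable to load user...")` is passed through unchanged, which matches the behavior described above.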
|
gharchive/issue
| 2018-05-21T15:36:19 |
2025-04-01T04:56:04.230406
|
{
"authors": [
"adams-hub",
"gberlanga"
],
"repo": "aspnet/Identity",
"url": "https://github.com/aspnet/Identity/issues/1800",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
168091470
|
If UseWebpackDevMiddleware gets back an error from Node during app startup, it fails to surface good error information
An exception occurs when adding app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions {HotModuleReplacement = true}); to Configure()
Node: 6.2.2
SpaServices: 1.0.0-beta-000009
at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters)
at Microsoft.AspNetCore.Hosting.Internal.ConfigureBuilder.Invoke(Object instance, IApplicationBuilder builder)
at Microsoft.AspNetCore.Hosting.Internal.ConfigureBuilder.<>c__DisplayClass4_0.<Build>b__0(IApplicationBuilder builder)
at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.Configure(IApplicationBuilder app)
at Microsoft.AspNetCore.Hosting.Internal.AutoRequestServicesStartupFilter.<>c__DisplayClass0_0.<Configure>b__0(IApplicationBuilder builder)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.BuildApplication()
at Microsoft.AspNetCore.Hosting.Internal.WebHost.Initialize()
at Microsoft.AspNetCore.Hosting.WebHostBuilder.Build()
at AurelaTestProject.Program.Main(String[] args) in D:\downloads\AurelaTestProject\src\AureliaTestProject\Program.cs:line 14
So i got this sorted. The exception is actually from here:
https://github.com/aspnet/JavaScriptServices/blob/dev/src/Microsoft.AspNetCore.NodeServices/HostingModels/HttpNodeInstance.cs#L66
But it did not bubble to the surface so I could actually read the actual message, so that might be something someone should look at.
Thanks for reporting that! I'll look into whether we can better report exceptions that occur in that location.
In the current version, we do surface such errors (at least in the cases I'm aware of). If anyone can still repro disappearing error messages, please let us know!
|
gharchive/issue
| 2016-07-28T13:09:56 |
2025-04-01T04:56:04.234524
|
{
"authors": [
"SteveSandersonMS",
"leak"
],
"repo": "aspnet/JavaScriptServices",
"url": "https://github.com/aspnet/JavaScriptServices/issues/226",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
212339990
|
Re-rendering with serialized data sent to browser
Angular Universal indicates it can be configured to send serialized cached data to the browser. I would love to take advantage of this feature, but cannot see where to start.
I'm assuming I understand this feature correctly. My hope is that if my first route requires DataA and DataB to be loaded, that these are queried for during the re-rendering on the server, and then somehow sent to the browser so that these do not have to be queried again.
Here's where I find the feature mentioned on Angular Universal.
https://universal.angular.io/overview/
In your boot-server.ts file, you can change the resolve call so that it also supplies a globals value, e.g:
const myData = { ... your stuff here ... };
resolve({
html: html,
globals: { myData }
});
Then, in boot-client.ts, you can access window.myData to get the data transferred from the server. You can use this to initialise your application however you want.
In the ReactRedux template, this is all set up for you, because there's an official "correct" way to do it. In the Angular template, it isn't set up for you, because there's no single official way to do it - the type of data you're transferring, and how to initialise your application with it, will depend on you. Hope that helps!
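The client-side half of the pattern described above can be sketched in a few lines. This is a hypothetical helper, not code from the template: it assumes the server supplied `globals: { myData }` as shown, so the value shows up as a property on `window`, and a stand-in `windowObj` parameter keeps the logic testable outside a browser.

```javascript
// Read a value the server transferred via `globals`, falling back to a default
// when rendering purely on the client (no server prerender occurred).
function readTransferredState(windowObj, key, fallback) {
  const value = windowObj[key];
  delete windowObj[key]; // optional: avoid reusing stale state on later navigations
  return value !== undefined ? value : fallback;
}
```

In a real boot-client.ts you would call something like `readTransferredState(window, 'myData', defaultState)` before bootstrapping the application, and feed the result into whatever state-initialisation mechanism your app uses.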
|
gharchive/issue
| 2017-03-07T06:10:35 |
2025-04-01T04:56:04.237613
|
{
"authors": [
"SteveSandersonMS",
"larstbone"
],
"repo": "aspnet/JavaScriptServices",
"url": "https://github.com/aspnet/JavaScriptServices/issues/740",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
320410542
|
Make some new Kestrel types internal
There were some new public types added to Kestrel between 2.0 and 2.1 that don’t need to be exposed. We should make them internal to alleviate back compat concerns going forward.
@muratg This is already approved by the ship room, right? If so, I can merge #2543 once @davidfowl approves the PR.
|
gharchive/issue
| 2018-05-04T20:04:24 |
2025-04-01T04:56:04.238770
|
{
"authors": [
"halter73"
],
"repo": "aspnet/KestrelHttpServer",
"url": "https://github.com/aspnet/KestrelHttpServer/issues/2544",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
224613097
|
Simplify connection lifetime control flow [dev]
Same as #1776 but for merging into dev.
Why not just merge back from rel/2.0.0-preview1?
@cesarbs That's exactly what I plan to do. I just wanted to see the tests passing since travis and appveyor don't run for PRs merged into non-dev branches.
AllowCertificateContinuesWhenNoCertificate failed
#1776 now targets the dev branch.
|
gharchive/pull-request
| 2017-04-26T22:04:14 |
2025-04-01T04:56:04.240701
|
{
"authors": [
"cesarbs",
"davidfowl",
"halter73"
],
"repo": "aspnet/KestrelHttpServer",
"url": "https://github.com/aspnet/KestrelHttpServer/pull/1777",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
240276400
|
Create common manifest file
Addresses: https://github.com/aspnet/MetaPackages/issues/172
This will trim additional packages such as runtime.win-arm64.runtime.native.system.data.sqlclient.sni
:up::date:
|
gharchive/pull-request
| 2017-07-03T22:40:55 |
2025-04-01T04:56:04.241982
|
{
"authors": [
"JunTaoLuo"
],
"repo": "aspnet/MetaPackages",
"url": "https://github.com/aspnet/MetaPackages/pull/188",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
80895000
|
Cannot use nameof(XController) in asp-controller tag helper
Similar to issue #2586
I'm speaking specifically to tag helpers here, though, and I think this is something developers might reasonably expect to be possible:
<a asp-controller="@nameof(HomeController)" asp-action="@nameof(HomeController.About)">About</a>
The workaround of not using the Controller suffix on HomeController doesn't work, it just throws a CompilationFailedException with The name 'HomeController' does not exist in the current context
It seems wrong that C# 6 adds the nameof operator specifically to avoid magic string problems just like this one, and MVC 6 fails to take advantage of it, for what would surely be a quick check to see whether or not a string ends with "Controller".
It is also inconsistent, because the asp-action="@nameof(HomeController.Action)" works fine. The reason why may be obvious to some developers, but to many it won't be.
Technically asp-action="@nameof(HomeController.Action)" is problematic as well because you might have [ActionName("Foo")] on your action.
Using nameof in this way may work in some cases (e.g., if you name your controller "Home" instead of "HomeController"), but nameof is not aware of the Application Model and so cannot work in all cases.
What we could do is consider providing an API that allows you to get the controller name or action name based on a type.
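The "quick check" mentioned earlier is, in the default case, just trimming the trailing "Controller" suffix from the type name. A minimal sketch of that default convention follows — this is not MVC's actual implementation, which consults the application model so attributes like [ActionName] can override names:

```javascript
// Derive the conventional route "controller name" from a type name by trimming
// the trailing "Controller" suffix. Covers only the default convention; real
// frameworks allow attribute-based overrides that this sketch ignores.
function controllerName(typeName) {
  const suffix = "Controller";
  return typeName.endsWith(suffix) && typeName.length > suffix.length
    ? typeName.slice(0, -suffix.length)
    : typeName;
}
```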
Another place where this would be really helpful is the asp-for attribute. For example <input asp-for="@nameof(Model.PropertyName)" />. As of rc1-final, I get the following exception when I try this.
System.InvalidOperationException: Templates can be used only with field access, property access, single-dimension array index, or single-parameter custom indexer expressions.
at Microsoft.AspNet.Mvc.ViewFeatures.ExpressionMetadataProvider.FromLambdaExpression[TModel,TResult](Expression`1 expression, ViewDataDictionary`1 viewData, IModelMetadataProvider metadataProvider)
at Microsoft.AspNet.Mvc.Razor.RazorPage`1.CreateModelExpression[TValue](Expression`1 expression)
at Asp.ASPV__Views_Xxx__YyyPartial_cshtml.<<ExecuteAsync>b__24_0>d.MoveNext() in /Views/Xxx/_YyyPartial.cshtml:line 4
--- End of stack trace from previous location where exception was thrown ---
Closing because we have no plans to support this.
|
gharchive/issue
| 2015-05-26T10:13:07 |
2025-04-01T04:56:04.246328
|
{
"authors": [
"Eilon",
"danroth27",
"markrendle",
"toddlucas"
],
"repo": "aspnet/Mvc",
"url": "https://github.com/aspnet/Mvc/issues/2608",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
89380018
|
Append file version does not work when href/src contains a url with query string
Example:
This works
<img src="images/sample.png" asp-file-version="true" alt="some text" />
Output:
<img src="images/sample.png?v=M9cy9a-9lV1-MFfOg2JDzIzBsWiFUVl17uHAR0cgpTE" alt="some text" />
This produces 500
<img src="images/sample.png?some=value" asp-file-version="true" alt="some text" />
Expected output:
<img src="images/sample.png?some=value&v=M9cy9a-9lV1-MFfOg2JDzIzBsWiFUVl17uHAR0cgpTE" alt="some text" />
Same applies for Link and Script tag helper
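The expected behavior amounts to choosing `?` or `&` depending on whether the URL already carries a query string. A sketch of that logic (a hypothetical helper, not the actual file-version provider code):

```javascript
// Append a file-version token to a URL, using '&' when a query string is
// already present and '?' otherwise.
function appendFileVersion(url, version) {
  const separator = url.includes("?") ? "&" : "?";
  return `${url}${separator}v=${version}`;
}
```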
6213354b85ee4b0e7c42d0eb64fc76447e28e9fb
|
gharchive/issue
| 2015-06-18T19:41:43 |
2025-04-01T04:56:04.248757
|
{
"authors": [
"ajaybhargavb"
],
"repo": "aspnet/Mvc",
"url": "https://github.com/aspnet/Mvc/issues/2719",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
130530511
|
Highlight sterling performance work
I'd just like to extend my personal thanks to @rynowak and team for the amazing performance strides being made in MVC.
cc @DamianEdwards @SajayAntony
:smiley_cat:
@danroth27 should we move this to the Backlog? :smile:
Oh, and where do I log the same bug but to thank @benaadams for all his hard work and contributions???
:blush:
:+1:
:fireworks: :+1: :heart_eyes:
@Eilon my great hope is one day people will move past the defeatist argument that it doesn't matter because "my database is slow". That's like saying I might as well use php in interpreted mode because "my database is slow". There is so much more potential to the platform!
We were going custom for the performance; but we've come back to aspnet because of the improvements in Kestrel; and now we are coming back to to the full stack for the great strides in performance and direction of MVC. So I just wanted to say thank you.
Also saves us a bunch of work :wink:
It begins... https://stevedesmond.ca/blog/performance-is-paramount /cc @stevedesmond-ca
:+1: :yum:
LOL @ the timing of this -- I was trying to find the time to finish that post for days!
But seriously, thanks! I wish I had the cycles to contribute more directly, but you're all doing great work!
@stevedesmond-ca great blog post!
:blush:
Closing because this is an old discussion. We always appreciate the kind words 😄
|
gharchive/issue
| 2016-02-02T00:42:46 |
2025-04-01T04:56:04.254523
|
{
"authors": [
"Eilon",
"MJomaa",
"PureKrome",
"SajayAntony",
"benaadams",
"ctolkien",
"rynowak",
"stevedesmond-ca"
],
"repo": "aspnet/Mvc",
"url": "https://github.com/aspnet/Mvc/issues/4030",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
376916525
|
BodyModelBinder should make use of model providers when binding models
Currently only ComplexModelBinder makes use of other model binders when binding model properties, while BodyModelBinder relies on the underlying formatter to convert values.
If we have a look at the example of a custom model binder described here, it will only work for route values, and not if we have a FromBody model with an Author-typed property which has to be bound from a data source.
We could probably pass a <Type,IModelBinder> dictionary in InputFormatterContext, which would keep backwards compatibility with IInputFormatters that haven't implemented support for this yet.
As for JSON, that would mean creating a JsonConverter that would attempt to read the value as a string and pass it to the corresponding IModelBinder, though preserving a correct ModelBindingContext could be an issue.
Also, Newtonsoft.Json does not support async deserialization in JsonConvert yet.
Thanks for contacting us, @lil-Toady.
@dougbu, can you please look into this? Thanks!
Hi, @dougbu !
I'll try to explain what I'm proposing in detail:
Currently model binding from route values, query string, headers goes by getting the values from IValueProvider which comes in ModelBindingContext and instantiating corresponding models. But that is not the case when we want to bind model from body.
Say we have our models:
public class Foo {
...
}
public class Bar {
public Foo Foo { get; set; }
}
and create our custom model binder FooBinder for Foo, which binds it in our own specific way (say retrieving the value from storage, as in the linked example). An action that accepts Bar will work great, as ComplexModelBinderProvider will give ComplexModelBinder all the binders required to bind properties of Bar, including our custom binder.
But that is not the case if we're trying to bind Bar from body, because BodyModelBinder will outsource all the instantiation to IInputFormatter, that will further pass it to JsonSerializer, XmlSerializer etc. as you mentioned.
Now, if I want the same Bar model to be bound from body, I need to dig into the depths of the IInputFormatter implementation. For example, if it's json, I would have to write a custom JsonConverter that would do the same thing as FooBinder, but now retrieving values from JsonReader; and the same for all the other input formatters supported.
What I'm suggesting is letting IInputFormatters feed values to corresponding IModelBinder during deserialization, instead of letting them do all the work. So in the end when JsonSerializer hits Foo type that we know we have a IModelBinder for, instead of attempting to completely deserialize it itself, it will create an IValueProvider and ModelBindingContext for model binders to take the values they need.
Of course I'm not suggesting to do that at the level of the deserializers themselves, but rather handle at IInputFormatter implementation level, which in case if json (just as an example, again) could work by adding a JsonConverter that will do all the work of providing the values back to instances of IModelBinder that we have.
if I want the same Bar model to be bound from body, …
As long as Bar is the topmost class that's bound from body, create a custom model binder for Bar that takes an IModelBinder and have the corresponding model binder provider pass a BodyModelBinder instance.
Instead of letting IInputFormatter implementations do the model binding themselves, they should rather work as value providers for the model binding infrastructure.
It's already possible to use [FromBody] and / or BodyModelBinder to bind (input format) properties of a higher-level model. This change would fundamentally alter what a value provider is and does. And, it is not necessary for your scenario.
We have no plans to revise the way model binding uses input formatting and input formatting uses framework-provided and external serializers.
Please let us know if you hit any issues using the BarBinder approach.
Thanks for contacting us. We believe that the question you've raised have been answered. If you still feel a need to continue the discussion, feel free to reopen it and add your comments.
|
gharchive/issue
| 2018-11-02T18:13:46 |
2025-04-01T04:56:04.263879
|
{
"authors": [
"dougbu",
"lil-Toady",
"mkArtakMSFT"
],
"repo": "aspnet/Mvc",
"url": "https://github.com/aspnet/Mvc/issues/8687",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
113873000
|
Simplify instrumentation confirmations in RazorPageExecutionInstrumentationTest
test class can now use the MvcTestFixture
#3139 part 3 of 3
dump instrumentation data at end of _Layout.cshtml
include FilePath in display
compare against new .html resource
nits:
normalize line endings to CRLF for consistency with other tests
add InstrumentionData to avoid Tuple
make TestPageExecutionContext class private
remove newly-unused actions and associated .cshtml files
/cc @pranavkm @rynowak
:shipit:
a69a7a6
|
gharchive/pull-request
| 2015-10-28T17:20:59 |
2025-04-01T04:56:04.268164
|
{
"authors": [
"dougbu",
"pranavkm"
],
"repo": "aspnet/Mvc",
"url": "https://github.com/aspnet/Mvc/pull/3431",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
117401524
|
Improve Angular sample project
Ideas:
Some useful 'gulp watch'-style auto-compilation
Karma-based test runner
I'll look into improving the build system after it's possible to update to a newer version of Angular 2, because then it might be possible to remove the inlineNg2Template step.
The auto-compilation is now handled via Webpack. It's not hugely useful to bring in Karma for the sample, but we should look into it for the templates instead.
|
gharchive/issue
| 2015-11-17T17:09:02 |
2025-04-01T04:56:04.269969
|
{
"authors": [
"SteveSandersonMS"
],
"repo": "aspnet/NodeServices",
"url": "https://github.com/aspnet/NodeServices/issues/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
379886466
|
chunkingcookiemanager leaks unused chunks
There seems to be a bug in Microsoft.AspNetCore.Authentication.Cookies.ChunkingCookieManager when used in conjunction with CookieAuthentication. What I've seen is that as the size of the Authentication cookie changes, the number of chunks needed to accommodate the cookie data will change as well. If the number of cookies needed decreases, then chunks that are no longer needed are not being cleaned up.
Normally this isn't a big deal, but I'm currently running in a system that has a hard limit on the header size of 10240 bytes. The specific scenario I'm running into has the Authentication cookies going from 2 chunks (3 total cookies) to 1 chunk (1 cookie). This has the added bonus of replacing the value in the cookie that previously stored the chunk count (really small) with one that now stores the entire Authentication Ticket (relatively large), in turn nearly doubling the amount of header space consumed by the Authentication cookie.
It seems like it could be beneficial to have ChunkingCookieManager expire these "vestigial" chunks as it's setting the new chunks on the response.
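The leak can be stated concretely: chunk cookies follow a naming scheme along the lines of `key`, `keyC1`, `keyC2`, …, so if a response writes fewer chunks than the request carried, the higher-numbered request chunks are never expired. A sketch of the set difference that identifies the stale chunks (names and scheme hypothetical, mirroring the convention described in this thread):

```javascript
// Given the chunk cookie names present on the request and those (re)written on
// the response, compute the stale chunks that should be expired.
function staleChunkNames(requestCookieNames, responseCookieNames, key) {
  const pattern = new RegExp(`^${key}(C\\d+)?$`);
  const written = new Set(responseCookieNames);
  return requestCookieNames.filter((n) => pattern.test(n) && !written.has(n));
}
```

Expiring exactly these names alongside the new chunks would avoid both the leak and the duplicate Set-Cookie problem discussed later in the thread.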
What triggers the resize, a direct call to SignIn? The workaround would be to preceed that with a call to SignOut.
I haven't actually been able to diagnose that yet, because it doesn't happen consistently. What I can say is that it's not tied to a SignIn or SignOut action, because the users are in the middle of an active session using the application. This leads me to believe/fear that something else in one of the middlewares is manipulating and dropping some of the claims in the Authentication Ticket, but I can't see where. If that's the case this becomes a different issue for me, though it doesn't necessarily take away from this issue.
I've figured out why the Authentication Cookie is changing size. We're using OIDC and OAuth in our application, OIDC to manage login, and OAuth to call services outside the application. For this to work, we had to implement code to perform sliding expiration on the Access Token used in the OAuth flow so that we would always have a valid access token during a valid application session.
The service we're calling to get and refresh the tokens doesn't always return tokens of the same size. Now, it's entirely possible that such behavior is invalid, but I don't have control over that service, so it is what it is at this time. This, combined with the fact that our Authentication cookie is right on the border of being 4K in size, means that as the size of the tokens changes it could push us above or below the threshold for needing to chunk the Authentication Cookie.
As far as I know SignIn doesn't come into play. The refresh logic happens in the OnValidatePrincipal event of the CookieAuthenticationOptions. The tokens are then updated on the AuthenticationProperties of the CookieValidatePrincipalContext using the StoreTokens method.
Ah, you set CookieValidatePrincipalContext.ShouldRenew?
No. We're not explicitly renewing the cookie in the event, because we set SlidingExpiration to true when we set up the Cookie Authentication Middleware. Keep in mind that we're using OIDC to log in via an identity provider, then using the AccessToken provided by that identity provider to call APIs that are external to the application but secured by the same identity provider. The application is not using the AccessToken in the cookie to secure itself, because it doesn't need to. We're only keeping it around to facilitate the API calls. As such I'm not aware of a way to tell the OIDC middleware to maintain an unexpired cookie, which is why we implemented our own code inside OnValidatePrincipal. Here's what that looks like:
//properties is of type Microsoft.AspNetCore.Authentication.AuthenticationProperties passed in from the HttpContext
var refreshToken = properties.GetTokenValue(OidcConstants.TokenTypes.RefreshToken);
var expiresAt = GetExpiresAtUtc(properties);
var utcNow = DateTime.UtcNow;
if (!string.IsNullOrWhiteSpace(refreshToken) &&
expiresAt.GetValueOrDefault(DateTime.MaxValue).Subtract(utcNow) < _renewTimespan)
{
var tokenResponse = await RefreshAccessToken(refreshToken);
//The initial token we get at login has an expiration that's precise to the second, so for consistency we'll do the same for renewals
var utcNowTrimmed = new DateTime(utcNow.Ticks - (utcNow.Ticks % TimeSpan.TicksPerSecond), utcNow.Kind);
var newExpiration = utcNowTrimmed.AddSeconds(tokenResponse.ExpiresIn).ToString(DateFormat);
properties.StoreTokens(new List<AuthenticationToken> {
new AuthenticationToken
{
Name = OidcConstants.TokenTypes.AccessToken,
Value = tokenResponse.AccessToken
},
new AuthenticationToken
{
Name = OidcConstants.TokenTypes.RefreshToken,
Value = tokenResponse.RefreshToken
},
new AuthenticationToken
{
Name = OidcConstants.TokenTypes.IdentityToken,
Value = tokenResponse.IdentityToken
},
new AuthenticationToken
{
Name = OidcConstants.TokenResponse.TokenType,
Value = tokenResponse.TokenType,
},
new AuthenticationToken
{
Name = ExpiresAtTokenName,
Value = newExpiration
}});
If we've gone off the rails on the implementation here, then we're open to ways to simplify the code as well.
Updating properties is fine, but those changes aren't saved anywhere unless you call SignIn or set ShouldRenew. You could get lucky and make this change on a request that naturally triggers a sliding refresh, but don't count on it.
Ah, I did find where we're calling ShouldRenew; it's just one layer up in the stack from where the code above was. Anyway, I think we've gotten a little sidetracked. Thinking about the SignOut + SignIn workaround you mentioned, I assume SignOut will expire the old cookie, followed by SignIn to issue a new one. Are there other side effects we should be aware of if we go that route?
Note SignOut will apply a redirect in some scenarios.
I guess you could also call DeleteCookie directly:
https://github.com/aspnet/Security/blob/42dd66647dd389c242ff796a530b880c7b9d7a90/src/Microsoft.AspNetCore.Authentication.Cookies/CookieAuthenticationHandler.cs#L363-L366
This certainly sounds like a bug. It would make sense for the cookie manager to clean up unnecessary cookies as part of setting the new cookies.
Quick follow-up. For a workaround I did try using DeleteCookie prior to calling AppendCookie. I actually ended up wrapping ChunkingCookieManager in a new implementation that does this inside the new class's AppendCookie method, and set the new class as the CookieManager in startup. Anyway, this worked great locally, but when we deployed to our AWS environments it caused issues. It appears the order in which the Set-Cookie headers are sent back may not be guaranteed, or was manipulated by the AWS infrastructure. So calling Delete followed by Append could create multiple Set-Cookie headers for the same cookie, leading to unexpected results.
So instead I created a helper method that would inspect the Request cookies and the Response cookies for a given key and remove any that existed in the Request but not the Response. I then called this helper method in the wrapper class after calling the base AppendCookie method. This worked across all our environments:
public class CleanChunkingCookieManager : ICookieManager
{
private const string ChunkSuffix = "C";
private readonly ChunkingCookieManager _innerCookieManager = new ChunkingCookieManager();
public void AppendResponseCookie(HttpContext context, string key, string value, CookieOptions options)
{
_innerCookieManager.AppendResponseCookie(context, key, value, options);
RemoveVestigialChunkCookies(context, key, options);
}
...
private static void RemoveVestigialChunkCookies(HttpContext context, string key, CookieOptions options)
{
var deleteOptions = new CookieOptions
{
Domain = options.Domain,
Path = options.Path,
HttpOnly = options.HttpOnly,
IsEssential = options.IsEssential,
Expires = DateTime.UnixEpoch, //Jan 1, 1970 forces deletion
MaxAge = TimeSpan.Zero, //Max age is zero. Secondary value to force deletion
SameSite = options.SameSite,
Secure = options.Secure
};
string searchString = $"^{key}({ChunkSuffix}\\d+)?";
var requestChunkNames = context.Request.Cookies.Keys.Where(k => Regex.IsMatch(k, searchString));
var responseCookies = context.Response.Headers["Set-Cookie"];
var responseChunkNames = responseCookies.Where(v => Regex.IsMatch(v, searchString)).Select(v => Regex.Match(v, searchString).Value);
var vestigialChunks = requestChunkNames.Except(responseChunkNames);
foreach (var chunkName in vestigialChunks)
{
context.Response.Cookies.Append(chunkName, string.Empty, deleteOptions);
}
}
}
|
gharchive/issue
| 2018-11-12T17:47:38 |
2025-04-01T04:56:04.282864
|
{
"authors": [
"Eilon",
"Tratcher",
"patrick-hampton-avalara"
],
"repo": "aspnet/Security",
"url": "https://github.com/aspnet/Security/issues/1909",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
121059325
|
Skip tests on Mono if KOREBUILD_TEST_SKIPMONO is set.
This change will enable us to run tests on CoreCLR on the CI.
cc @muratg @Eilon @victorhurdugaci
Why not use the FrameworkSkipConditionAttribute?
@victorhurdugaci Because I'd have to put it on every test in every repo. We want to be able to run tests on CoreCLR only across the board. The idea is to have two CIs per non-Windows system, one where we run tests on Mono and another where we run tests on CoreCLR.
Can you wait until I merge the dotnet support (1h or so) and then make the same change in _dotnet-test too?
Sure.
Ping.
:watch:
Checking for 1 or true per @Eilon's order :grin:
:shipit:
|
gharchive/pull-request
| 2015-12-08T17:38:09 |
2025-04-01T04:56:04.290005
|
{
"authors": [
"cesarbs",
"dougbu",
"victorhurdugaci"
],
"repo": "aspnet/Universe",
"url": "https://github.com/aspnet/Universe/pull/330",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1201877250
|
Update install section of documentation
This PR updates the install section of the documentation. The installation guide on the website should be directed to the documentation.
Questions:
Is the issue with M1-chipped Apple computers still there?
Linux is supported, but no installation guide for Linux users on the website.
Todo:
[ ] server installation docs
[ ] style troubleshooting section
Is the issue with M1-chipped Apple computers still there?
Yes. Not 100% sure, but I think there are still some issues.
Linux is supported, but no installation guide for Linux users on the website.
https://asreview.nl/download/ there is!
@J535D165 The installation guide for Linux is the same as that for macOS on the website, but Miniconda has Linux installers. Should we separate them?
Most Linux distros have Python installed. And the users are probably familiar with installing Python packages. I don't think we need any action for the website atm.
@J535D165 The issue #738 and the workaround are based on installing ASReview LAB in DEVELOPMENT mode. Is it an issue with doing pip install asreview?
Oef, yes this is super tricky now development is merged. This requires a hotfix to the docs and website indeed.
Oef, yes this is super tricky now development is merged. This requires a hotfix to the docs and website indeed.
I mean the issue states that the installation from the local git repo by pip install -e . will encounter the issue. A recent comment mentioned pip install asreview worked.
I mean the issue states that the installation from the local git repo by pip install -e . will encounter the issue.
Yes I know. That's not what we want.
I just tried to reproduce it on my M1, and still having the issue with sklearn (the main problem).
@J535D165 Another question: we instruct users to check the Python version by python --version, but Linux and macOS ship with Python 2 (if I am still right). In this case, python3 --version should be used to check the version. Also, pip3 is used. For example, see TensorFlow.
@J535D165 Is server installation a part of Install with pip?
I don't think so. True, it is usually installed with pip, but you don't want to bother 99% of the users with ports and hosts.
I don't think so. True, it is usually installed with pip, but you don't want to bother 99% of the users with ports and hosts.
Makes sense. It can also be installed with Docker on a server?
The installation guide on the website should link to the documentation showing Docker and server installation options.
Thank you for the reminder! After the release, I will update the website so that the hyperlinks do not get messed up.
Most of your changes are incorporated in #971 now. Thanks for all the valuable suggestions. If there is anything missing, please let me know.
|
gharchive/pull-request
| 2022-04-12T13:45:32 |
2025-04-01T04:56:04.324258
|
{
"authors": [
"J535D165",
"Rensvandeschoot",
"terrymyc"
],
"repo": "asreview/asreview",
"url": "https://github.com/asreview/asreview/pull/1035",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
851605232
|
Prevent compiler warning when using extracting with SoftAssertions
I'm not sure about the "fix".
However, it seems we use the same way in ObjectAssert class:
https://github.com/assertj/assertj-core/blob/14cd17293c62376abc8c762084819a1e08db1520/src/main/java/org/assertj/core/api/ObjectAssert.java#L43-L48
Because SoftAssertions (Java6StandardSoftAssertionsProvider) builds a ProxyableObjectAssert, I added an override of the extracting method so that the @SafeVarargs annotation can be used on it.
Check List:
Fixes #2161
Unit tests : YES
Javadoc with a code example (on API only) : NA
It seems that broke something 💥 . I will check that later.
We haven't found a good solution to get rid of these annoying warnings.
The soft assertions implementation is based on proxying and collecting the assertion errors; final methods can't be proxied because ByteBuddy (AFAIK) can't intercept final methods or classes. Now @SafeVarargs requires methods to be final, so we can't use it in the soft-assertion context; we used it for regular assertions, as you can see in ObjectAssert, but not for soft assertions, which use ProxyableObjectAssert.
One might think that extracting does not need to be proxied since it does not perform any assertions but it needs to, extracting returns an assertion instance (AbstractListAssert) that needs to be proxied so that the chained assertions error are captured.
If only @SafeVarargs were allowed on non-final methods, it would be usable for soft assertions and we wouldn't have had to introduce ProxyableObjectAssert and co; introducing those was a major pain in the ass just to get (partially) rid of these warnings.
Reading these comments I got a potential idea. What if we mark the method as public final and annotate it with @SafeVarargs in the AbstractListAssert and then delegate that to a new method:
protected AbstractListAssert<?, List<?>, Object, ObjectAssert<Object>> extractingForProxy(Function<? super ACTUAL, ?>... extractors) {
// Do the same logic as before
}
I think that ByteBuddy can then proxy the extractingForProxy method in the class that it creates. There will of course be warnings in the AssertJ codebase, but not in the user code.
We could even do it generically in the place where we create the proxies, by looking for a method named <methodName>ForProxy whenever <methodName> is annotated with @SafeVarargs.
That's an interesting idea, worth a shot! thanks @filiphr
I've tried to implement @filiphr's idea. It doesn't seem to work, but I'm not sure I understood everything 😅
I've extracted extractingForProxy protected method in AbstractObjectAssert to be able to mark the public method extracting as final.
However, the same test is failing 💥 I suppose it is related to ByteBuddy not being able to proxy the protected method 🤔
@jgiovaresco have a look in the SoftProxies here. I think you need to tell ByteBuddy not to proxy methods annotated with @SafeVarargs, or better, not to proxy final methods. Not sure whether ByteBuddy handles that automatically or not.
Actually, the problem is in something else. Look at this. There is something special happening for the extracting method. I think that you need to add the extractingForProxy method there as well, and also make sure that protected such methods are also handled.
You'll need to play with the ByteBuddy interceptors
I had some spare time, so I went ahead and did https://github.com/assertj/assertj-core/pull/2163. That should cover all possible compiler warnings in user code. Sorry for taking it over, @jgiovaresco. As I mentioned in the PR, I am completely fine with attributing the entire contribution to you.
|
gharchive/pull-request
| 2021-04-06T16:41:07 |
2025-04-01T04:56:04.334732
|
{
"authors": [
"filiphr",
"jgiovaresco",
"joel-costigliola"
],
"repo": "assertj/assertj-core",
"url": "https://github.com/assertj/assertj-core/pull/2162",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2640530662
|
DatePicker: value not updated when pressing 'enter'
Description
If a complete date has been typed with the keyboard but the field has not been blurred, pressing 'enter' submits the form with the previous value (no value, in our case).
How to reproduce
Put a date picker inside a v-form
type a date with the keyboard
without leaving the field, press 'enter'
the submitted value is not the one that was typed
Expected behavior
when 'enter' is pressed, the internal value must be updated before the form is submitted
Environment
Library: @cnamts/synapse-bridge
Version: 3.0.5
Browser: Chrome
Prioritization
High priority
Project
GEDI Pallier 1
Contacts
Benjamin Borlet
Thanks for your report. The sprint has already started, so we will include this issue in the sprint running from 26/11 to 16/12.
@bborlet Is this need urgent, and should it therefore be prioritized?
A Bridge release should be possible next week and, depending on the complexity involved, we can try to include this issue.
No priority over other issues. I'll let you check with Romain, who can act as the interface with the POs, as was explained to me.
@bborlet: To be tested on version 3.0.7 of the DS
|
gharchive/issue
| 2024-11-07T10:14:28 |
2025-04-01T04:56:04.346550
|
{
"authors": [
"DavidFyon",
"bborlet",
"valentinbecquet"
],
"repo": "assurance-maladie-digital/design-system",
"url": "https://github.com/assurance-maladie-digital/design-system/issues/3831",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1426863014
|
Default Support
Resolves #60
Thank you for the change!
Heyo @georgyangelov @AGalabov could we cut a release some time soon? Not urgent but I would love to get some of the new schema features into my repos 😃
@samchungy yup same. But having in mind it will be 3.0.0 we wanted to merge all our needs. We just need that last PR that is open.
We'll try to get it done by the end of today/week. By the way if you have any input there - please share it 😃
|
gharchive/pull-request
| 2022-10-28T08:07:08 |
2025-04-01T04:56:04.351957
|
{
"authors": [
"AGalabov",
"georgyangelov",
"samchungy"
],
"repo": "asteasolutions/zod-to-openapi",
"url": "https://github.com/asteasolutions/zod-to-openapi/pull/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
831928991
|
patterns returned by functions used in pattern-matching contexts
Asteroid should support something like the following,
function foo
with *g() do -- g returns a pattern that we use in the with clause.
<something>
end
where patterns returned from functions should be able to be used in pattern matching contexts.
Another example,
let v = x is *goo().
The variable v is either true or false depending if the term stored in x matches the pattern returned
by goo or not.
If we're planning on allowing this, would we also want:
let v = x is *my_structure @ a.
let w = x is *my_structure @ foo().
absolutely! Basically, anything that can hold or return a value should be able to be dereferenced because it could be a pattern!
|
gharchive/issue
| 2021-03-15T15:30:56 |
2025-04-01T04:56:04.371107
|
{
"authors": [
"lutzhamel",
"olwmc"
],
"repo": "asteroid-lang/asteroid",
"url": "https://github.com/asteroid-lang/asteroid/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
537963432
|
Is there any intention of implementing IMPALA?
FAIR has an implementation in Pytorch here if it would help -- https://github.com/facebookresearch/torchbeast/
rlpyt can only be used on a single machine; from my point of view, I don't think rlpyt's owner will make it run on multiple machines.
Compared to PolyBeast, which is "somewhat harder to install" according to FAIR, rlpyt is really easy to install/set up and much more friendly.
A single node implementation of IMPALA is also included, and could be helpful
Hi! It's right that rlpyt as it stands is single-node only, and there's only one set of neural net parameters. I'd have to go back and look more closely, but I think this means the importance sampling in IMPALA would all just go to 1? This could change if running in asynchronous mode, where there is one optimizer agent and one sampling agent, which can be slightly lagged. This can probably be handled by PPO. (Note: async policy gradient isn't currently implemented; it would need PPO to build a replay buffer and learn from it like DQN.)
Open to other thoughts/reasons, though!
@astooke I think the workflow would be to create asynchronous sampling for PG, then implement the V-trace algorithm and importance sampling. Totally fine if this is not on the roadmap, just wondering.
Ok good question! It's crossed my mind to implement it, but is not yet a priority.
Would need to find a compelling use case (i.e. the problem is big) in policy gradient where the time spent sampling is roughly equal to the time spent optimizing, so the asynchronous mode could give up to a 2x speedup. This was the case for the R2D2 re-implementation, where it brought experiment time down to ~140 hours. It might also take some effort to tune replay hyperparameters to keep good learning performance...something that had been done for us already in the R2D2 work.
Yeah I think Impala is stronger for the multi-node case
Right, no immediate plans for IMPALA, closing this for now but please reopen if a new use case comes up :)
|
gharchive/issue
| 2019-12-14T20:46:19 |
2025-04-01T04:56:04.375296
|
{
"authors": [
"astooke",
"codelast",
"tarungog"
],
"repo": "astooke/rlpyt",
"url": "https://github.com/astooke/rlpyt/issues/84",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
416006574
|
Cannot mount on macOS in normal mode
I'm using macOS 10.14.3 with google-drive-ocamlfuse version 0.7.3, installed by opam.
It installed normally without any error.
osxfuse is installed via MacPorts, version 3.8.3.
However, when I execute google-drive-ocamlfuse ~/G and then ls ~/G,
it shows ls: G: Device not configured.
So I use umount -f ~/G with no error, and try google-drive-ocamlfuse -verbose ~/G.
The ~/.gdfuse/default/curl.log showed nothing, but ~/.gdfuse/default/gdfuse.log showed this.
[0.001509] TID=0: Setting up default filesystem...
[0.001567] TID=0: BEGIN: Saving configuration in /Users/tonypottera/.gdfuse/default/config
[0.001911] TID=0: END: Saving configuration in /Users/tonypottera/.gdfuse/default/config
[0.001922] TID=0: Loading application state from /Users/tonypottera/.gdfuse/default/state...done
Current version: 0.7.3
Setting up cache db...done
Setting up CURL...done
[0.009140] TID=0: google-drive-ocamlfuse didn't shut down correctly.
Cleaning up cache...done
Setting up cache db...done
...[0.017965] TID=0: Starting flush DB thread (TID=1, interval=30s)
Refresh token already present.
[0.018016] TID=0: Starting filesystem /Users/tonypottera/G
[0.070632] TID=2: init_filesystem
[0.070806] TID=2: BEGIN: Getting root folder id (team drive id=, root folder=) from server
[0.071618] TID=2: BEGIN: Getting root resource from server
And nothing happened, no matter how long I waited.
ls ~/G also still showing ls: G: Device not configured.
So I use umount -f ~/G to unmount it, and try google-drive-ocamlfuse -debug ~/G.
It seems to work now.
I can use ls to see files and folders in my Google Drive.
But I can only see folders when I use the macOS built-in Finder.
Please help!
I have the same issue. Do you still have the problem?
Yes, I update it by opam today, and still in same condition.
|
gharchive/issue
| 2019-03-01T09:09:25 |
2025-04-01T04:56:04.383486
|
{
"authors": [
"Tony-HSU",
"barra51"
],
"repo": "astrada/google-drive-ocamlfuse",
"url": "https://github.com/astrada/google-drive-ocamlfuse/issues/531",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2218780536
|
False positive with PLE4703 and immediate return / break
Not the biggest deal, but I got a false positive on this with:
λ cat x.py
def foo():
s = {1, 2, 3}
for e in s:
if e >= 3:
s.remove(e)
return
λ ruff check --select PLE x.py --preview
x.py:3:5: PLE4703 Iterated set `s` is modified within the `for` loop
|
1 | def foo():
2 | s = {1, 2, 3}
3 | for e in s:
| _____^
4 | | if e >= 3:
5 | | s.remove(e)
6 | | return
| |__________________^ PLE4703
|
= help: Iterate over a copy of `s`
Found 1 error.
No fixes available (1 hidden fix can be enabled with the `--unsafe-fixes` option).
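As a side note, the safe pattern suggested by the rule's help text, iterating over a copy of s, can be sketched like this (a minimal illustration, not Ruff's own fix; the final return s is added here only so the result can be inspected):

```python
def foo():
    s = {1, 2, 3}
    # Iterating over a snapshot makes removing from `s` safe.
    for e in set(s):
        if e >= 3:
            s.remove(e)
    return s  # added for illustration; the original example just returned
```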
cc @boolean-light in case you're interested, and thanks for the new rule :-)
Thanks!
Struggling not to get sniped into fixing this.
Hmm that makes sense. It would be nice to use the control flow graph for this because I assume there are other situations where the loop exits early (continue, break, raise etc) that aren't covered and they might be more subtle.
Excuse me, I'm back :D Thank you, and let me have a look at this.
I'm working on this right now; should I exclude warnings for unreachable code like this?
s = {1, 2, 3}
for i in s:
break
s.discard(i)
Sorry, actually I don't think I can implement this one by myself. Is there some kind of support for control flow graphs in Ruff already?
No, there's not. That's what I meant by my comment. We probably want to wait for a more complete control flow analysis, or decide to only support a very small subset of control flow patterns for now.
|
gharchive/issue
| 2024-04-01T18:05:47 |
2025-04-01T04:56:04.401008
|
{
"authors": [
"MichaReiser",
"boolean-light",
"charliermarsh",
"hauntsaninja"
],
"repo": "astral-sh/ruff",
"url": "https://github.com/astral-sh/ruff/issues/10721",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2056109755
|
FURB152 should look at additional digits to reduce false positives
For example, it'll trigger on the following program
x = 2.7139914879495697
I think the rule should avoid erroring if there are additional decimal digits that do not match, so:
bad1 = 2.7182
bad2 = 2.7183
good1 = 2.71824
good2 = 2.71820001
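One hedged sketch of the proposed heuristic (this is not Ruff's implementation, and the min_digits threshold is an assumption): only flag a literal when it equals the constant rounded, or truncated, to the literal's own number of decimal places.

```python
import math

def looks_like_constant(literal: str, constant: float, min_digits: int = 4) -> bool:
    """Flag only literals whose shown digits match the constant at that precision."""
    if "." not in literal:
        return False
    decimals = len(literal.split(".")[1])
    if decimals < min_digits:  # too few digits to be a deliberate approximation
        return False
    scale = 10 ** decimals
    truncated = math.floor(constant * scale) / scale
    return float(literal) in (round(constant, decimals), truncated)
```

With math.e, this flags 2.7182 and 2.7183 but not 2.71824 or 2.71820001, matching the good/bad examples above.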
Trying to figure out what the best approach is here. I may need to look at what other tools do. E.g., what if the user rounds at some decimal point?
Thanks! Issue description mentions how I think rounding should be done; I opened a PR for this at https://github.com/astral-sh/ruff/pull/9290
|
gharchive/issue
| 2023-12-26T07:10:39 |
2025-04-01T04:56:04.403909
|
{
"authors": [
"charliermarsh",
"hauntsaninja"
],
"repo": "astral-sh/ruff",
"url": "https://github.com/astral-sh/ruff/issues/9281",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2157845870
|
Unable to activate VIRTUAL_ENV when CONDA_PREFIX exists
I'm trying out uv, together with hatch-pip-compile (and hatch). venv creation works fine, and speed is significantly faster, great!
Except when I try to run a command with the created virtual env, this error is raised:
(base) test-hatch-pip-compile$ hatch run default1:python
error: Both VIRTUAL_ENV and CONDA_PREFIX are set. Please unset one of them.
The error message is clean, that's great, and I can fix it on my computer in 5 seconds (conda deactivate).
I just wonder if there is any difficulty that makes this "user action" necessary?
Here is the hatch.toml (very likely unrelated, imo, since hatch is just an invoker, and after I deactivated the conda environment everything looks OK):
[env]
requires = [
"hatch-pip-compile",
]
[envs.default]
path = ".venv/default"
[envs.default1]
path = ".venv/default1"
type = "pip-compile"
pip-compile-installer = "uv"
pip-compile-resolver = "uv"
Thank you!
@konstin - Thoughts?
We could have VIRTUAL_ENV take precedence over CONDA_PREFIX if the python in PATH is from that venv; I'm not sure what behavior conda users would expect.
We could have VIRTUAL_ENV take precedence over CONDA_PREFIX if the python in PATH is from that venv; I'm not sure what behavior conda users would expect.
This'd be useful; the same error came up when using Rye when the global Python comes from conda.
Having a base conda/mamba env activated is useful for one-off scripts (Rye is a bit new to use everywhere).
This also happens if you try to have uv be installed globally with pixi via pixi global install uv.
The easy alternative is just to globally install uv itself, but it is nice to have pixi global be able to manage all your tools with a pixi global upgrade-all.
We could have VIRTUAL_ENV take precedence over CONDA_PREFIX if the python in PATH is from that venv; I'm not sure what behavior conda users would expect.
When conda is activated, it can be considered a globally available Python. Activating a virtual environment inside a conda environment is therefore equivalent to activating a virtual environment on top of a system install: the virtual environment should take precedence.
This has come up a few times on pip issues; I'll try to find the discussion.
But anecdotally when I was a heavy conda user this is what I would have expected.
|
gharchive/issue
| 2024-02-28T00:25:55 |
2025-04-01T04:56:04.409965
|
{
"authors": [
"YuShigurey",
"ananis25",
"charliermarsh",
"konstin",
"matthewfeickert",
"notatallshaw"
],
"repo": "astral-sh/uv",
"url": "https://github.com/astral-sh/uv/issues/2028",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2495877694
|
Add constraints.txt export to uv export
Constraints files are a little different... Specifically, every requirement has to be named, so we can't write relative paths. (You also can't include extras, but we already don't do that.)
Hm. An export of the pyproject.toml constraints?
No, it’s an export of the lockfile in constraints.txt format. requirements.txt files support things that constraints files do not. You can’t export in requirements.txt format and pass it as constraints.
I've had this line in every rye-based Dockerfile for the last few years to work around this.
RUN sed -i '/^-e ./d' requirements.txt
@carderne -- If you structure your project as a non-package (i.e., omit a [build-system] or set package = false under [tool.uv]), that line should be omitted from the export.
Sorry maybe I crossed a wire. To be super explicit, this works:
uv init --package foo
cd foo
uv sync
uv run python -c 'import foo'
But if I set tool.uv.package = false then I obviously can't import foo.
I want foo to be installed in the venv, but I don't want it in the constraints file, as pip rejects that.
(Happy to have a thumbs up and go on my way, I might be derailing the point of this thread.)
|
gharchive/issue
| 2024-08-30T00:19:23 |
2025-04-01T04:56:04.413721
|
{
"authors": [
"carderne",
"charliermarsh",
"zanieb"
],
"repo": "astral-sh/uv",
"url": "https://github.com/astral-sh/uv/issues/6843",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2552906071
|
uv equivalent to running "python -m"
I want to migrate to uv, and my actual project is under an "app/" folder, and we use python -m app.main to run it.
How can I achieve the same command using uv?
Generally, you'd use uv run python -m app.main. Does that work for you?
Related https://github.com/astral-sh/uv/issues/6638
It'd be the same, but with uv run instead of poetry run.
Yes it works! sorry for not seeing the related github issue before
|
gharchive/issue
| 2024-09-27T13:09:13 |
2025-04-01T04:56:04.416646
|
{
"authors": [
"tomaszbk",
"zanieb"
],
"repo": "astral-sh/uv",
"url": "https://github.com/astral-sh/uv/issues/7738",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2642890716
|
fail to run uvx --with="psycopg[binary,pool]" pgcli --help after upgrade to 0.5.0
❯ uvx --with="psycopg[binary,pool]" pgcli --help
error: Failed to parse: `psycopg[binary`
Caused by: Missing closing bracket (expected ']', found end of dependency specification)
psycopg[binary
❯ uv --version
uv 0.5.0 (Homebrew 2024-11-07)
it works at 0.4.x
Looks like a bug although I don't think it's new in v0.5. It works for me in v0.4.20, but not in any of the ten releases between then and v0.5.
Regressed in #7909 where we started to support --with "flask, anyio".
Oh that's a disappointing side-effect.
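For context on the regression: supporting --with "flask, anyio" means splitting the value on commas, and a naive split breaks bracketed extras like psycopg[binary,pool]. A bracket-aware split, sketched here in Python purely as an illustration (uv's actual parser is in Rust), could look like:

```python
def split_requirements(value: str) -> list[str]:
    """Split on commas, but ignore commas inside [...] extras."""
    parts: list[str] = []
    depth = 0
    current: list[str] = []
    for ch in value:
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth = max(0, depth - 1)
        if ch == "," and depth == 0:
            parts.append("".join(current).strip())
            current = []
        else:
            current.append(ch)
    if current:
        parts.append("".join(current).strip())
    return [p for p in parts if p]
```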
|
gharchive/issue
| 2024-11-08T05:22:35 |
2025-04-01T04:56:04.419182
|
{
"authors": [
"charliermarsh",
"zanieb",
"zuisong"
],
"repo": "astral-sh/uv",
"url": "https://github.com/astral-sh/uv/issues/8918",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2514257968
|
Prefer smaller wheels when priority is unset
Summary
Closes https://github.com/astral-sh/uv/issues/7216.
Files an issue upstream for the tensorflow problem: https://github.com/tensorflow/tensorflow/issues/75415
Imo this is blocking since it breaks tensorflow.
Is the new resolution any more wrong / right than the old?
The Windows package is a stub, just forwarding to tensorflow-intel, while the Unix package is a real tensorflow package that users want to install and whose deps users want to have, so I consider the Unix one more correct.
That makes sense, thanks @konstin. Do you think the heuristic is a lost cause? Or should we special-case TensorFlow in some way? Something else?
Either way we choose, I believe tensorflow should consolidate their metadata, since coherent metadata is a core assumption in universal resolvers such as Poetry. What we do with the PR depends on how many problems the current heuristic causes for our users: if the current behaviour causes trouble, add a workaround for tensorflow and merge. If it doesn't, I'd favor not adding package-specific workarounds and waiting until tensorflow has fixed their metadata and that has propagated through the ecosystem (and then doing this change).
I don't think it's necessarily causing problems. It can wait (at least for a response). But it will continue to be wrong for older versions no matter what.
|
gharchive/pull-request
| 2024-09-09T15:19:59 |
2025-04-01T04:56:04.422840
|
{
"authors": [
"charliermarsh",
"konstin"
],
"repo": "astral-sh/uv",
"url": "https://github.com/astral-sh/uv/pull/7220",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2413195030
|
Feature(cli): Add service versions to version command
The hard-coded service versions will now also be printed when running the astria-go dev version command.
Tested locally and verified that both astria-go dev version and astria-go dev versions work and return
astria-go dev versions
Default Service Versions:
cometbft: v0.38.8
astria-sequencer: v0.15.0
astria-composer: v0.8.1
astria-conductor: v0.19.0
|
gharchive/pull-request
| 2024-07-17T10:05:53 |
2025-04-01T04:56:04.424318
|
{
"authors": [
"jbowen93",
"sambukowski"
],
"repo": "astriaorg/astria-cli-go",
"url": "https://github.com/astriaorg/astria-cli-go/pull/136",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2297948072
|
Update CODEOWNERS
Update CODEOWNERS based on Astronomer organizational changes and new committer.
wondering why we have 2 codeowners files?
I think the original motivation was that changes to the CI that could incur a cost to Astronomer would have to be reviewed by someone from Astronomer.
|
gharchive/pull-request
| 2024-05-15T13:35:44 |
2025-04-01T04:56:04.426823
|
{
"authors": [
"tatiana"
],
"repo": "astronomer/astronomer-cosmos",
"url": "https://github.com/astronomer/astronomer-cosmos/pull/968",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
64124115
|
capitalization conventions
@duncandc - With the addition of the group aggregation calculation, it is finally an issue that we need to start using a consistent capitalization convention. The same convention should be adopted in the groups keys and the halos keys.
In particular, under what conditions should a key name contain a capitalized letter? I have been using no capital letters for any halo key, e.g., 'id' denotes the halo ID, and 'vmax' denotes V_{\rm max}.
I know this seems trivial, but this is the sort of thing we should decide on now rather than waiting until it becomes a bug, because it will become a bug with 100% certainty, and there will be far more code to write in the future than there is now.
We agreed to always use lowercase, this should now be considered resolved.
|
gharchive/issue
| 2015-03-24T23:11:30 |
2025-04-01T04:56:04.504933
|
{
"authors": [
"aphearin"
],
"repo": "astropy/halotools",
"url": "https://github.com/astropy/halotools/issues/37",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
242488350
|
Update Windows install
Check whether the Scripts folder is in the PATH on Windows; otherwise, use the full path.
when will this be merged?
|
gharchive/pull-request
| 2017-07-12T19:25:11 |
2025-04-01T04:56:04.521610
|
{
"authors": [
"gavinkalika",
"hugoesb"
],
"repo": "asweigart/pyautogui",
"url": "https://github.com/asweigart/pyautogui/pull/173",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
476884521
|
Fix #139 XLNet example with IMDB fails
This was due to a bug in the IMDB processor which resulted in an empty dataset being created, which caused the PyTorch built-in sampler to raise an exception.
Just a clarification: other processors' _create_examples(data_dir) methods (like Yelp's), which take a path as an argument, don't face this problem, right? Other than that, this looks good to me.
@AvinashBukkittu Yes, the IMDB processor is the only one that takes a directory path. Others should be fine.
|
gharchive/pull-request
| 2019-08-05T14:15:34 |
2025-04-01T04:56:04.522950
|
{
"authors": [
"AvinashBukkittu",
"huzecong"
],
"repo": "asyml/texar-pytorch",
"url": "https://github.com/asyml/texar-pytorch/pull/140",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2209466367
|
async-profiler not working with alpine based containers running on graviton instances
I am trying to profile an application from inside a Kubernetes pod.
The pod is running on AWS Graviton instances. We are using an Alpine base image for our containers.
/app # uname -r
5.10.210-201.852.amzn2.aarch64
/app #
/app # cat /etc/alpine-release
3.17.5
When I run it, I get the following error:
tmp/async-profiler-3.0-linux-arm64 # cd bin/
/tmp/async-profiler-3.0-linux-arm64/bin # ./asprof 1
Target JVM failed to load /tmp/async-profiler-3.0-linux-arm64/bin/../lib/libasyncProfiler.so
java -agentpath:/tmp/async-profiler-3.0-linux-arm64/lib/libasyncProfiler.so=start,summary,flat -version gives following output
/tmp/async-profiler-3.0-linux-arm64/bin # java -agentpath:/tmp/async-profiler-3.0-linux-arm64/lib/libasyncProfiler.so=start,summary,flat -version
Picked up JAVA_TOOL_OPTIONS: -XX:+ExplicitGCInvokesConcurrent -Xms4G -Dkotlinx.coroutines.scheduler=off
Error occurred during initialization of VM
Could not find agent library /tmp/async-profiler-3.0-linux-arm64/lib/libasyncProfiler.so in absolute path, with error: Error loading shared library libstdc++.so.6: No such file or directory (needed by /tmp/async-profiler-3.0-linux-arm64/lib/libasyncProfiler.so)
After adding libstdc++.so.6, I run into the following error:
/tmp/async-profiler-3.0-linux-arm64/bin # java -agentpath:/tmp/async-profiler-3.0-linux-arm64/lib/libasyncProfiler.so=start,summary,flat -version
Picked up JAVA_TOOL_OPTIONS: -XX:+ExplicitGCInvokesConcurrent -Xms4G -Dkotlinx.coroutines.scheduler=off
Error occurred during initialization of VM
Could not find agent library /tmp/async-profiler-3.0-linux-arm64/lib/libasyncProfiler.so in absolute path, with error: Error loading shared library ld-linux-aarch64.so.1: No such file or directory (needed by /tmp/async-profiler-3.0-linux-arm64/lib/libasyncProfiler.so)
/tmp/async-profiler-3.0-linux-arm64/bin # ls /lib/
apk/ ld-musl-aarch64.so.1 libc.musl-aarch64.so.1 libssl.so.3 libz.so.1.2.13 modules-load.d/
firmware/ libapk.so.3.12.0 libcrypto.so.3 libz.so.1 mdev/ sysctl.d/
/tmp/async-profiler-3.0-linux-arm64/bin # ls /lib/ld-musl-aarch64.so.1
/lib/ld-musl-aarch64.so.1
Tried adding gcompat, but it did not help.
/tmp/async-profiler-3.0-linux-arm64/bin # apk add gcompat
(1/3) Installing musl-obstack (1.2.3-r0)
(2/3) Installing libucontext (1.2-r0)
(3/3) Installing gcompat (1.1.0-r0)
OK: 318 MiB in 22 packages
/tmp/async-profiler-3.0-linux-arm64/bin # ls /lib/
apk ld-linux-aarch64.so.1 libapk.so.3.12.0 libcrypto.so.3 libssl.so.3 libucontext_posix.so.1 libz.so.1.2.13 modules-load.d
firmware ld-musl-aarch64.so.1 libc.musl-aarch64.so.1 libgcompat.so.0 libucontext.so.1 libz.so.1 mdev sysctl.d
/tmp/async-profiler-3.0-linux-arm64/bin # java -agentpath:/tmp/async-profiler-3.0-linux-arm64/lib/libasyncProfiler.so=start,summary,flat -version
Picked up JAVA_TOOL_OPTIONS: -XX:+ExplicitGCInvokesConcurrent -Xms4G -Dkotlinx.coroutines.scheduler=off
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000000000053a0, pid=279, tid=280
#
# JRE version: (11.0.20.1+9) (build )
# Java VM: OpenJDK 64-Bit Server VM (11.0.20.1+9-LTS, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-aarch64)
# Problematic frame:
# C 0x00000000000053a0
#
# Core dump will be written. Default location: /tmp/async-profiler-3.0-linux-arm64/bin/core.279
#
# An error report file with more information is saved as:
# /tmp/async-profiler-3.0-linux-arm64/bin/hs_err_pid279.log
#
#
Aborted (core dumped)
How can I fix this issue? Any help is much appreciated.
Alpine/AArch64 binaries are not provided, but you may build them yourself: install g++, make and openjdk packages, then run make in the project directory.
Thank you, it helped.
|
gharchive/issue
| 2024-03-26T22:36:44 |
2025-04-01T04:56:04.535455
|
{
"authors": [
"ajames-branch"
],
"repo": "async-profiler/async-profiler",
"url": "https://github.com/async-profiler/async-profiler/issues/908",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1320167377
|
Does not generate accurate id's for JSON Schema inputs with definitions
Describe the bug
Related to next branch.
Given this schema:
{
"properties": {
"browser_action": {
"$ref": "#/definitions/action",
"description": "Use browser actions to put icons in the main Google Chrome toolbar, to the right of the address bar. In addition to its icon, a browser action can also have a tooltip, a badge, and a popup."
},
"page_action": {
"$ref": "#/definitions/action",
"description": "Use the chrome.pageAction API to put icons inside the address bar. Page actions represent actions that can be taken on the current page, but that aren't applicable to all pages."
}
}
}
Two models will be created, BrowserAction and PageAction even though they should reference the same model which is whatever the model name is for "$ref": "#/definitions/action",
This problem occurs as part of draft-4 chrome-manifest.json blackbox test.
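A minimal sketch of the expected behavior (the `definitions` section below is assumed, since the excerpt above does not show it):

```json
{
  "definitions": {
    "action": { "type": "object" }
  },
  "properties": {
    "browser_action": { "$ref": "#/definitions/action" },
    "page_action": { "$ref": "#/definitions/action" }
  }
}
```

Here both properties resolve to the same `#/definitions/action` schema, so the generator should emit a single shared model (e.g. `Action`) referenced from both properties, rather than the duplicated `BrowserAction` and `PageAction` models it currently produces.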
Duplicate of https://github.com/asyncapi/modelina/issues/232
|
gharchive/issue
| 2022-07-27T22:05:07 |
2025-04-01T04:56:04.541817
|
{
"authors": [
"jonaslagoni"
],
"repo": "asyncapi/modelina",
"url": "https://github.com/asyncapi/modelina/issues/820",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1397877774
|
feat: integrate new ParserJS version
Description
Integrate new ParserJS version:
use new version of parser-js
update library tests and examples tests
Related issue(s)
Part of https://github.com/asyncapi/parser-js/issues/481
@jonaslagoni Done :)
|
gharchive/pull-request
| 2022-10-05T14:22:18 |
2025-04-01T04:56:04.544497
|
{
"authors": [
"magicmatatjahu"
],
"repo": "asyncapi/modelina",
"url": "https://github.com/asyncapi/modelina/pull/925",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1606968531
|
FileNotFoundException
Hello,
when I try to instantiate:
PianoAnalytics pa = PianoAnalytics.getInstance(getApplicationContext());
I get this error systematically:
error on ConfigStep.loadConfigurationFromLocalFile: java.io.FileNotFoundException: piano-analytics-config.json
I am using this documentation:
https://developers.atinternet-solutions.com/piano-analytics/data-collection/sdks/android-java#integrate-the-library
best regards,
Kais Jaafoura
Hi @JKaiss
This is a known issue, and we'll fix it later on.
Until then, you can create a piano-analytics-config.json file in app assets (app/src/main/assets/). Example file content:
{
"collectEndpoint": "logxxx.xiti.com",
"siteId": 123456789,
"pixelPath": "/event",
"offlineMode": "never",
"ignoreLimitedAdTracking": false,
"crashDetection": true,
"uuidExpirationMode": "fixed",
"uuidDuration": 365,
"sessionBackgroundDuration": 30,
"storeUsers": true,
"sendWhenOptOut": false,
"encryptionMode": "IF_COMPATIBLE",
"visitorIdType": "UUID"
}
Regards,
Ben
when I add the file I get a new error:
ConfigurationKeysEnum.fromString : requested value is unknown
|
gharchive/issue
| 2023-03-02T15:01:45 |
2025-04-01T04:56:04.563662
|
{
"authors": [
"BenDz",
"JKaiss"
],
"repo": "at-internet/piano-analytics-android",
"url": "https://github.com/at-internet/piano-analytics-android/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
997424269
|
🛑 Wedding HTTPS is down
In 531623f, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 7bafda4.
|
gharchive/issue
| 2021-09-15T19:15:31 |
2025-04-01T04:56:04.585281
|
{
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/1923",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1085224278
|
🛑 Wedding HTTPS is down
In 66045fe, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in eaf7f21.
|
gharchive/issue
| 2021-12-20T21:35:41 |
2025-04-01T04:56:04.587316
|
{
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/3820",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|