id (string, lengths 4 to 10) | text (string, lengths 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
1052722148
|
Docs: Add render performance measurement in test environments
We have a way to measure docs page render times now:
This is the baseline, showing that WASM render times are way too high while BSS times are acceptable:
BSS:
components/alert rendered in 80.573ms
components/appbar rendered in 58.1864ms
components/avatar rendered in 82.9861ms
components/badge rendered in 132.3375ms
components/breadcrumbs rendered in 52.4078ms
WASM:
components/alert rendered in 600ms
components/appbar rendered in 496ms
components/avatar rendered in 546ms
components/badge rendered in 571ms
components/breadcrumbs rendered in 515ms
For now, the measurement is shown right at the top of the page.
Note: to get a true measurement you need to do a warm-up - click back and forth between Alert and AppBar a few times to get a representative reading.
So far I have found this (the following measurements focus on WASM and only on the alert page):
baseline: ~600ms
w/o page nav menu: ~514ms (so the Menu on the right costs about 80ms)
w/o footer: ~495ms (footer costs about 20ms)
w/o any content: 11ms (the page content costs ~500ms)
I'll deal with the menu and footer later; let's first get the page content loading times down
new baseline without nav and footer: ~500ms
w/o API link: ~216ms (API link costs about 300ms)
w/o SEO tags: ~200ms (cheap)
w/o example source code: ~80ms (gains 120ms for alert)
I want to keep it in dev.mudblazor.com also, so I found a way to display it beneath the heading and description where it probably won't irritate anybody (hopefully).
yeah, it is an idea. we have it logging to console in all environments. the measurement is only displayed on localhost or dev.mudblazor.com. I'll leave dealing with it visually to Jonny if he wants to.
I'll merge this so others can try different strategies. Mike wants to try pre-rendering. I'll start chipping away at some of the most obvious problems like the API link and the in-page menu in separate PRs
Thanks for your input @JonBunator!
The change is online now: https://dev.mudblazor.com/components/alert
Very interesting: on dev.mudblazor.com the WASM time is half of what I measure on my machine, in both DEBUG and RELEASE!
Alert local DEBUG: 600ms
Alert local RELEASE: 600ms
Alert dev.mudblazor.com: 300ms
@mikes-gh can you explain this?
and don't say my machine is slow >:(
Can you also make it work for the API documentation, please? It doesn't refresh there. I had to go to some page in the "Components" menu and then go back to the API documentation page to refresh it - but that is tedious and I don't know if the value is correct.
https://user-images.githubusercontent.com/8080496/145002136-132f937c-a118-4c53-8eb4-616d82dc1af7.mp4
Sure, I know this bug. Will look at it today
I have one more issue. I was switching between the Bar chart and Line chart pages: Bar chart page, Line chart page, Bar chart page, Line chart page, Bar chart page. Both pages have no sections, so the page is displayed only after the whole content is ready to be shown. The Bar chart page opens in about 100 ms according to this timer, but Developer Tools shows that it is more like 300 ms, and I also perceive it to be closer to 300 ms than 100 ms. So why does it say "Rendered in 103 ms" - what does this timer measure?
It should measure the time it takes to render the first three sections. Maybe this is caused by the same bug. Try again after I've fixed it for the API pages. Of course, using developer tools to trace performance is the most reliable approach. This stopwatch is just an approximation: it measures the time from just before rendering the page until the timer item's OnAfterRender method is called.
It was very useful for me to get page load times down significantly.
ok. should be fixed now.
By the way, the measurement no longer covers the whole page rendering. Since my optimization in later PRs, where I implemented incremental rendering for more responsiveness, it measures only the initial render, which renders only a part of the page.
To be exact, the timer shows the time it takes until the user gets to see the first screen of the page (even if lower parts of it are still loading).
On Wasm it measures exactly the time between mouse clicks now :D
On Wasm it measures exactly the time between mouse clicks now :D i.e. in the https://dev.mudblazor.com/wasm/api/ menu :D
lol, what a fail. but now I think I nailed it.
Yes, now it clearly shows a correlation with the speed of pages in the API menu. For fast-loading pages it shows 50-100 ms. For slow pages (with many properties) it shows 200-275 ms. I will use it tomorrow to check if I can speed up slow pages. Thanks!
|
gharchive/pull-request
| 2021-11-13T16:50:17 |
2025-04-01T04:55:23.757618
|
{
"authors": [
"henon",
"iwis"
],
"repo": "MudBlazor/MudBlazor",
"url": "https://github.com/MudBlazor/MudBlazor/pull/3337",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1632857518
|
Kanban Board
https://github.com/Muhammad0602/MovieShows/projects/1
We are a team of 2.
Required Changes ♻️ ⚠️ 🚦
Hi @Muhammad0602,
Good job so far!
You still need to work on some issues to go to the next project, but you are almost there!
Good points 👍
Kanban cards created correctly. ✔️
Aspects to improve ♻️
[x] Check the comments under the review.
[ ] Kindly choose who will be student A and student B and assign the students to the cards like the example below:
Optional suggestions
Every comment with the [OPTIONAL] prefix is not crucial enough to stop the approval of this PR. However, I strongly recommend you take them into account as they can make your code better.
Cheers and Happy coding!👏👏👏
Feel free to leave any questions or comments in the PR thread if something is not 100% clear.
Please do not open a new Pull Request for re-reviews. You should use the same Pull Request submitted for the first review, whether it is valid or invalid, unless requested otherwise.
As described in the Code reviews limits policy you have a limited number of reviews per project (check the exact number in your Dashboard). If you think the code review was unfair, you can request a second opinion using this form.
|
gharchive/issue
| 2023-03-20T21:02:20 |
2025-04-01T04:55:23.789548
|
{
"authors": [
"Muhammad0602",
"leonardodiasb"
],
"repo": "Muhammad0602/MovieShows",
"url": "https://github.com/Muhammad0602/MovieShows/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
603435907
|
Lighting not loading in The End
Here's what The End looks like on my server:
Latest server log:
https://pastebin.com/niCmx9Uz
mv version -b output:
https://pastebin.com/yvM23PwN
worlds.yml:
https://pastebin.com/QfSMkefS
Multiverse-Core is the only plugin I have. Removing it restores lighting to The End, so I can confirm that it is the cause of the bug. I'm running version 2.5.0 (I've tried versions from 2.4 to 2.5) on a Thermos 1.7.10 server. I understand that it is an older build and I'm not sure if you guys can help with running Thermos, but I'd really appreciate any suggestions because I can't find any info on errors like this on the internet.
I read in #1649 that
If you load a world and it immediately unloads then something is probably wrong with the world.
and I noticed that DIM1 unloads as soon as the server starts in my case. I've created new End dimensions with /mv create, but they all are dark, too. I've tried using Cauldron, KCauldron, and Thermos, but all of them yield the same bug. I'm not running any mods yet either, so that shouldn't be the issue.
I also tried using Multiworld instead, and that plugin loads The End with lighting, but I think I'd prefer to use Multiverse, if at all possible, because it's just a much better plugin lol.
Unfortunately Thermos is not a version that we support as it is a very old version of Minecraft and is a bukkit+forge hack server. These cause various bugs to arise by the server software itself. I'd suggest you contact Thermos about this world lighting issue.
|
gharchive/issue
| 2020-04-20T18:16:43 |
2025-04-01T04:55:23.798030
|
{
"authors": [
"benwoo1110",
"etpof"
],
"repo": "Multiverse/Multiverse-Core",
"url": "https://github.com/Multiverse/Multiverse-Core/issues/2231",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
325642513
|
Feature 3
Added feature 3
Some error occurred
|
gharchive/pull-request
| 2018-05-23T10:22:43 |
2025-04-01T04:55:23.814608
|
{
"authors": [
"MustafaJamal"
],
"repo": "MustafaJamal/GitTest1",
"url": "https://github.com/MustafaJamal/GitTest1/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1984316235
|
fix(gux-listbox): fix keyboard accessibility issue for list boxes wit…
Fixes an accessibility issue for listboxes that contain dividers. When identifying the next or previous option in the list to select on arrow down or up, gux-listbox was selecting the next element sibling. If the list contained a divider or heading, it selected that element, preventing further arrow navigation to the rest of the list items.
See gux-time-zone-picker for a working example.
This fix is somewhat similar to the solution in place for gux-list, in that it identifies a set of valid selectable tag names.
ENGAGEUI-7811
@daragh-king-genesys Thanks for approving and merging! Should I also fix this in v4? This will be an issue there too.
|
gharchive/pull-request
| 2023-11-08T20:07:04 |
2025-04-01T04:55:23.844932
|
{
"authors": [
"caitlanspronk"
],
"repo": "MyPureCloud/genesys-spark",
"url": "https://github.com/MyPureCloud/genesys-spark/pull/102",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
138011655
|
SexyMotd console ?
Found another console spam... I think it's not that bad, but it spams my console with this:
http://pastebin.com/WgipsacF
Please use pastebin or something.
Sorry :S never used that
http://pastebin.com
This is nothing to do with this plugin.
ohh srry :s
|
gharchive/issue
| 2016-03-02T22:28:18 |
2025-04-01T04:55:23.869934
|
{
"authors": [
"MylesIsCool",
"Paulomart",
"timr2000"
],
"repo": "MylesIsCool/ViaVersion",
"url": "https://github.com/MylesIsCool/ViaVersion/issues/57",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2664239067
|
Is there a way to change the font of the text in sidebar?
It seems to be fixed (unchangeable)? I altered the main-font variable in page.typ and then ran serve, but nothing changed. (The original main-font has no influence on the sidebar after all, I think.)
The HTML part of the style is only affected by the "theme" you are using. We currently have only one theme, themes/mdbook. To customize a theme, there is an issue to discuss it: https://github.com/Myriad-Dreamin/shiroa/issues/89.
|
gharchive/issue
| 2024-11-16T11:37:51 |
2025-04-01T04:55:23.871781
|
{
"authors": [
"Myriad-Dreamin",
"shrike-505"
],
"repo": "Myriad-Dreamin/shiroa",
"url": "https://github.com/Myriad-Dreamin/shiroa/issues/90",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2741166186
|
Formatter entry commands in activity bar
@QuarticCat @Enter-tainer
Motivation
When I upgrade the integrated formatter, I have to Ctrl+S all files (about 50 files) to format and check them. I want to have formatting commands to automate that.
Description
It would be great to provide at least two commands:
A command that formats all files in workspace.
A command that formats all files depended on by the current document. This is suitable for people who maintain many documents in a workspace.
After providing commands, we can register them to the sidebar to tell people that they exist:
Alternatively, you can try the CLI. It comes with a format-all command. Just to let you know.
I know, but then it is not testing the functionality of the integrated formatter. That is the major motivation.
A command that formats all files depended on by the current document. This is suitable for people who maintain many documents in a workspace.
This feature seems like too much extra work, and I don't think it's that useful. Just formatting all files in the workspace (or a directory recursively) seems good enough for almost all cases.
Since Typst doesn't have a project structure, it would be hard to define "all files". For example, I don't want it to traverse node_modules (imagine a large project using Typst doc). In this sense, CLI would be much more flexible.
Since Typst doesn't have a project structure, it would be hard to define "all files". For example, I don't want it to traverse node_modules (imagine a large project using Typst doc).
Or we can only format dependencies, but that might not be sufficient (imagine we have multiple entries).
In either case, I won't use this command. CLI would be much more flexible.
I'm not sure what you are arguing about, but what I am sure of is that the mentioned problem is a common curse. All tools that scan input directories recursively run into it. For example, if you google "rust-analyzer node_modules" you'll see complaints that rust-analyzer hasn't solved it completely. So you'll face a similar situation when using the typstyle CLI.
It's easy to feed arguments to CLI. But not so for LSP commands.
|
gharchive/issue
| 2024-12-16T02:34:10 |
2025-04-01T04:55:23.877622
|
{
"authors": [
"Enter-tainer",
"Eric-Song-Nop",
"Myriad-Dreamin",
"QuarticCat"
],
"repo": "Myriad-Dreamin/tinymist",
"url": "https://github.com/Myriad-Dreamin/tinymist/issues/1006",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1357840649
|
Explorer: Filter fullnode map by date
We should be able to filter the fullnode map by how recently a fullnode was seen. This should involve keeping track of a timestamp in our DB to collect how recently a fullnode was seen.
@longbowlu That's going to be a separate issue, this is just for tracking by date.
|
gharchive/issue
| 2022-08-31T20:34:07 |
2025-04-01T04:55:23.878784
|
{
"authors": [
"Jordan-Mysten"
],
"repo": "MystenLabs/sui",
"url": "https://github.com/MystenLabs/sui/issues/4417",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2442001167
|
🛑 OARC Wiki is down
In b13d0d5, OARC Wiki (https://wiki.oarc.uk/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: OARC Wiki is back up in 243d349 after 3 minutes.
|
gharchive/issue
| 2024-08-01T09:46:04 |
2025-04-01T04:55:23.881400
|
{
"authors": [
"MysterAitch"
],
"repo": "MysterAitch/oarc-monitor",
"url": "https://github.com/MysterAitch/oarc-monitor/issues/158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2506234022
|
🛑 POST to DRM is down
In f40fe99, POST to DRM (https://mysteryexe.xyz/drm.php?) was down:
HTTP code: 403
Response time: 421 ms
Resolved: POST to DRM is back up in 6e4711b after 7 minutes.
|
gharchive/issue
| 2024-09-04T20:40:29 |
2025-04-01T04:55:23.883756
|
{
"authors": [
"Mysteryexe"
],
"repo": "Mysteryexe/uptime-checker",
"url": "https://github.com/Mysteryexe/uptime-checker/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2622470959
|
🛑 POST to DRM is down
In 4646bed, POST to DRM (https://mysteryexe.xyz/drm.php?) was down:
HTTP code: 403
Response time: 222 ms
Resolved: POST to DRM is back up in 07e0ed3 after 7 minutes.
|
gharchive/issue
| 2024-10-29T22:48:54 |
2025-04-01T04:55:23.886469
|
{
"authors": [
"Mysteryexe"
],
"repo": "Mysteryexe/uptime-checker",
"url": "https://github.com/Mysteryexe/uptime-checker/issues/906",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2642610245
|
Simulation result staleness checking
Tickets addressed: #1563
Review: By commit
Merge strategy: Merge (no squash)
Description
This PR implements a staleness check for the internal procedural scheduling implementations. The editable plan instance keeps track of sim results that it has produced using weak references, and can dynamically update their staleness if the plan is changed after it was simulated. The process is this:
InMemoryEditablePlan has a set of weak references to simulation results objects that are currently up-to-date. I used weak references because if the user can't access it anymore, staleness doesn't matter and we might as well let it get gc'ed.
When the user gets simulation results, either through simulation or by getting the latest, it always checks for plan equality between the returned results and the current plan, even if we just simulated. If it is up-to-date, a weak ref is added to the set.
When an edit is made, the sim results in the current set are marked stale; then the field is reset to a reference to a new, empty set.
When a commit is made, the commit object takes shared ownership of the set. If a new simulation is run (step 2) the plan can still add to the set while it is still jointly owned by the commit. Then when an edit is made (step 3) the commit will become the sole owner of the set.
When changes are rolled back, any sim results currently in the plan's set are marked stale, the previous commit's sim results are marked not stale, then the plan will resume joint ownership of the previous commit's set.
The joint ownership freaks me out a wee bit, but I think it's safe because the commits are only used to keep the previous sets from getting gc'ed in the event of a rollback. Only the plan object actually mutates the set.
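As a rough illustration only, here is a minimal Python sketch of the bookkeeping described above (not the actual Kotlin/Java implementation; the results' stale flag and plan_snapshot attribute are hypothetical stand-ins for the real API):
import weakref

class EditablePlanSketch:
    """Simplified sketch of the staleness tracking described above."""

    def __init__(self):
        # Weak refs: results the user no longer holds can be gc'ed freely.
        self._up_to_date = weakref.WeakSet()
        self._commit_sets = []  # commits keep earlier sets alive for rollback

    def record_results(self, results, current_plan):
        # Step 2: only track results whose plan matches the current plan.
        if results.plan_snapshot == current_plan:
            self._up_to_date.add(results)

    def on_edit(self):
        # Step 3: mark tracked results stale and start a fresh set.
        for results in self._up_to_date:
            results.stale = True
        self._up_to_date = weakref.WeakSet()

    def on_commit(self):
        # Step 4: the commit takes shared ownership of the current set.
        self._commit_sets.append(self._up_to_date)

    def on_rollback(self):
        # Step 5: current results go stale, the previous commit's become fresh,
        # and the plan resumes joint ownership of that commit's set.
        for results in self._up_to_date:
            results.stale = True
        previous = self._commit_sets[-1] if self._commit_sets else weakref.WeakSet()
        for results in previous:
            results.stale = False
        self._up_to_date = previous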
Verification
I added some unit tests to the scheduler-driver.
Documentation
This doesn't need to be explained to the user, but I've copied the above description into the doc comment on InMemoryEditablePlan.
Future work
The stateless scheduler will probably need something like this too.
I'm expecting that when I implement deletion, we will get false positives of staleness, because I've intentionally stayed away from constantly performing plan equality checks and storing a bunch of copies of the plan. Maybe that was premature optimization. The false positive would happen like this:
simulate
add activity; sim results marked stale
delete activity (not rollback); sim results not marked un-stale.
sim results now say they are stale when they actually aren't.
Does moving Commit inside of the InMemoryEditablePlan help ease any anxieties about the shared ownership?
Nope. I just did it because the Commit class isn't used anywhere else, so I didn't want to make it look like it was part of a public interface.
|
gharchive/pull-request
| 2024-11-08T01:48:15 |
2025-04-01T04:55:23.923580
|
{
"authors": [
"JoelCourtney"
],
"repo": "NASA-AMMOS/aerie",
"url": "https://github.com/NASA-AMMOS/aerie/pull/1595",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1901833303
|
ANMS_FUN_DAP_004 (Message Groups): Incomplete results
A time-based rule was created to generate bp full reports 1/second for 180 counts.
Agent 1.6 was selected to run this TBR starting at time 42:30. It was expected:
• For minute 42: 30 messages would be generated
• For minute 43: 60 messages would be generated
• For minute 44: 60 messages would be generated
• For minute 45: 30 messages would be generated
At time 43:33, the TBR was submitted for agent 2.6 and the same message distribution was expected.
At time 44:30, the TBR was submitted for agent 3.6 and the same message distribution was expected.
The Message Groups per Minute displayed reflected the expected distributions until 44:44. No additional points were displayed for agents after that time, the messages for minute 44 were as expected:
• 45 for agent 1.6,
• 44 for agent 2.6
• 13 for agent 3.6
The Received Reports panel also stopped at this time. It is a known problem with the Received Reports panel that not all reports will be displayed, generally stopping at about 98-99 reports.
At this point, no other commands could be submitted for the agents without receiving an index error.
An attempt to re-add agents was executed, with agents 1.6 and 2.6 being added; 3.6 was no longer available.
TBRs were submitted for agents 1.6 and 2.6 to run a report 1/sec for counts of 60, 120, and 180. Each TBR was run to completion before another TBR was started. The Message Groups per Minute panel reflected the messages at the 60 and 120 counts, but the display did not complete for 180 counts. A total of 134 messages was noted on the display.
Similar behavior was noted with the Agents tab Print report option and the Monitor tab Received reports. Refer to Issue 276.
fixed by #29
|
gharchive/issue
| 2023-09-18T22:17:51 |
2025-04-01T04:55:23.927847
|
{
"authors": [
"d-linko"
],
"repo": "NASA-AMMOS/anms",
"url": "https://github.com/NASA-AMMOS/anms/issues/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1313813864
|
Proof of concept with Cumulus on NGAP
💡 Description
@ramesh-maddegoda is deploying Cumulus on NGAP. There was an error using docker compose, so he is trying the terraform method, which worked about 80% of the way, but there are remaining failures.
Progress on-going through discussions w/ Jason W (NGAP)
Status: @ramesh-maddegoda Jason W. fixed an issue server side and is working on redeployment. may be delayed until next sprint.
Cumulus instance now available but not self-installed - needed to be installed by Jason.
Jason (NGAP) still working on Cumulus deployment.
Had a detailed discussion with a Unity project team that uses Cumulus, in order to understand the value additions and pain points related with Cumulus.
Some value additions and pain points shared by the Unity team that uses Cumulus.
Value Additions:
There are several lambda functions available from Cumulus that can reduce the development time related with moving files, archiving etc.
The documentation is well written
There are several existing integrations available with NASA systems such as Earthdata Login and ESDIS Metrics System (EMS)
Cumulus uses an AWS API Gateway integrated with Lambda to enable secure communication
Pain Points:
The data model of Cumulus cannot be directly reused for a new project (e.g. PDS) without updating the Cumulus data model
There is no multiple version support for data files
Processing very large files (50 GB+) with Cumulus lambda function can cause performance issues
The terraform state files should be shared between those who want to deploy a new workflow
Added more slides to the Trade Study slide deck based on recent findings.
https://docs.google.com/presentation/d/1xbaJQ6e9jg3XZn2VXsfygKlUMZ_gW4c_t65Vkq0daKI/edit#slide=id.g195b29ef5c2_1_144
|
gharchive/issue
| 2022-07-21T21:16:17 |
2025-04-01T04:55:23.945806
|
{
"authors": [
"jimmie",
"jordanpadams",
"ramesh-maddegoda",
"tloubrieu-jpl"
],
"repo": "NASA-PDS/nucleus",
"url": "https://github.com/NASA-PDS/nucleus/issues/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2047530953
|
Costing for Small Bodies - PostgreSQL Architecture in MCP
💡 Description
Option 1:
Cost high-availability PostgreSQL DB option (260 GB right now): https://docs.aws.amazon.com/prescriptive-guidance/latest/saas-multitenant-managed-postgresql/availability.html
Option 2:
◦ Sync secondary DB using logical replication.
◦ Develop new logical replication replicator app to export/import the appropriate subscriber states
Clients connecting into primary : ~20 clients
Download (Egress) : 500GB / month
Logical replication
Basic design :
On-prem primary node --> Route53 failover into cloud --> Multi-AZ DB PostgreSQL setup with logical replication
⚔️ Parent Epic / Related Tickets
No response
Closing as wontfix. The MPC folks have done their thing, and since we have not heard back from them, we will close this for now
|
gharchive/issue
| 2023-12-18T21:58:36 |
2025-04-01T04:55:23.949142
|
{
"authors": [
"jordanpadams",
"sjoshi-jpl"
],
"repo": "NASA-PDS/planetary-data-cloud",
"url": "https://github.com/NASA-PDS/planetary-data-cloud/issues/96",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
351830923
|
Node stops suddenly and synchronization process fails
Hi there!
I need your help with this.
I have restarted the daemon with -reindex and still the same.
************************ EXCEPTION: St13runtime_error CDB: Error -30974, can't open database ProcessMessages()
2018-08-18 15:52:29 ProcessMessages(block, 468 bytes) FAILED peer=2
2018-08-18 15:52:29 UpdateTip: new best=896561ee0a9b8641bb2b053f11d2b26bf036763762c5516a6ad402986118c94f height=90126 version=0x71000180 log2_work=75.056657 tx=180368 date='2018-08-14 08:15:12' progress=1.000000 cache=25.9MiB(94770tx)
2018-08-18 15:52:29 UpdateTip: new best=7ec337273e3d094dc1ec778ecf48e0bdfc0382de3e918c6423177090aa3ef985 height=90127 version=0x71000180 log2_work=75.056674 tx=180370 date='2018-08-14 08:15:28' progress=1.000000 cache=25.9MiB(94772tx)
2018-08-18 15:52:30 *** System error while flushing: CDB: Error -30974, can't open database
2018-08-18 15:52:30 Error: Error: A fatal internal error occurred, see debug.log for details
2018-08-18 15:52:30 ERROR: ProcessNewBlock: ActivateBestChain faile
@cjconstante what Nav version and OS you are running?
@red010b37 navcoin 4.2 and Ubuntu 16 LTS.
This is the only strange message I see in the log:
2018-08-27 11:27:36 Loaded 125350 blocks from external file in 39023ms
2018-08-27 11:27:36 Reindexing finished
2018-08-27 11:27:36 Pre-allocating up to position 0x100000 in rev00000.dat
2018-08-27 11:27:36 ConnectBlock: Failed to read previous block's logical timestamp
The rest of messages are:
UpdateTip....
AddToWallet...
And it's synchronizing well. I ran the daemon with the -salvagewallet parameter, but no, I don't see the transactions/coins in the wallets...
|
gharchive/issue
| 2018-08-18T15:57:21 |
2025-04-01T04:55:23.967900
|
{
"authors": [
"cjconstante",
"red010b37"
],
"repo": "NAVCoin/navcoin-core",
"url": "https://github.com/NAVCoin/navcoin-core/issues/278",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
669272964
|
Animate_1
closes #198
Original NCL Plot (1/30):
https://www.ncl.ucar.edu/Applications/Images/animate_1_1_lg.png
Generated Plot (1/30):
@erogluorhan I've been looking into how to display the gif in Sphinx this morning and haven't gotten too far. However, I have also been trying to at least display the first projection (day 1) like the NCL example has, but it appears Sphinx doesn't pick up the images for display from the code because (I'm assuming) they are being stored locally, even though I call plt.show() in the code, which implies I should see either all 30 of them (as I do in my Spyder kernel) or at least the last one generated. I've tried removing the plt.close() command, but then each projection becomes subset into the next and it's a mess. Do you have any insight on why this may be happening for just the images specifically? On that same subject, I wanted to also point out that all 30 png files are saving to the "Animations" directory along with saving in my local "Downloads" folder. Is this something I should be worried about for this PR?
|
gharchive/pull-request
| 2020-07-30T23:22:01 |
2025-04-01T04:55:23.972919
|
{
"authors": [
"michaelavs"
],
"repo": "NCAR/GeoCAT-examples",
"url": "https://github.com/NCAR/GeoCAT-examples/pull/201",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1174119565
|
Add Acknowledgements and Contributors sections in the Readme
Add a Contributors section in the Readme where all contributors from our group can be listed as co-authors. Also, an Acknowledgements section where Jamie's effort is acknowledged. It could be something like:
Jamie Bresch (@jamiebresch) for the obs2ioda converter (https://github.com/jamiebresch/obs2ioda) developments. In addition, the script GetRDAobs.csh in this workflow is based on rda_obs2ioda.csh from that repository;
Another acknowledgement for Craig Schwartz for the MPAS meanStateExe source code and build.
Craig Schwartz (@weather4evr) for providing the MPAS mean-state calculation program, which is used in verification and forecasting for ensemble cycling applications
Overall, we need to discuss internally in PARC as well. Let's do that before adding too many more items to this issue (my mistake). Co-contributors that submit PRs are given credit via GitHub PRs as well. My main motivation for these sections is to give credit to those who have contributed significantly but would otherwise go unnoticed.
|
gharchive/issue
| 2022-03-19T00:09:47 |
2025-04-01T04:55:24.016376
|
{
"authors": [
"ibanos90",
"jjguerrette"
],
"repo": "NCAR/MPAS-Workflow",
"url": "https://github.com/NCAR/MPAS-Workflow/issues/80",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2425764670
|
Variable extents fail to initialize on new variable
To reproduce:
Import the WRF-Chem file as WRF-ARW at /glade/campaign/cisl/vast/vapor/Bugs/3637
Create and enable a new Volume renderer
Change the variable to P to get the error shown below
In the Geometry tab, adjust the minimum and maximum Z slider knobs to acquire valid extents
Quick low hanging fruit might be to add "Try adjusting renderer region in the Geometry Tab" to the warning.
|
gharchive/issue
| 2024-07-23T17:45:58 |
2025-04-01T04:55:24.023195
|
{
"authors": [
"sgpearse"
],
"repo": "NCAR/VAPOR",
"url": "https://github.com/NCAR/VAPOR/issues/3637",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
378553536
|
Update documentation
I updated the documentation for the package, functions and tests, mostly formatting and typos. I added @noRd tags to non-exported functions and re-organized some of the code. I also updated the pkgdown site.
This closes #13 and closes #33. This is also a partial update for #109, but automation for updating the pkgdown site should still be implemented if possible.
some conflicts here, otherwise I can merge
|
gharchive/pull-request
| 2018-11-08T02:35:57 |
2025-04-01T04:55:24.032316
|
{
"authors": [
"drkrynstrng",
"jeanetteclark"
],
"repo": "NCEAS/arcticdatautils",
"url": "https://github.com/NCEAS/arcticdatautils/pull/114",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
319818604
|
Response headers defined in the API definition are missing from the mocked response data
Headers defined in the "response information" section of the API definition do not appear in the mocked response data.
Please try the online version: API Mock Online
|
gharchive/issue
| 2018-05-03T07:51:07 |
2025-04-01T04:55:24.095456
|
{
"authors": [
"corelchen",
"huntbao"
],
"repo": "NEYouFan/nei-toolkit",
"url": "https://github.com/NEYouFan/nei-toolkit/issues/80",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2704888955
|
Update license file to be properly detected by github
Hi Christian @SchmChris ,
I'm proposing to use a proper LICENSE file with the actual license content. This way, github recognizes the license and downstream tools can categorize the meta data of the repository correctly.
I also added the sentence you wrote to the readme.
Best,
Robert
After you merged this, the text in the orange rectangle will change to "CC-BY 4.0":
|
gharchive/pull-request
| 2024-11-29T11:37:24 |
2025-04-01T04:55:24.097244
|
{
"authors": [
"haesleinhuepf"
],
"repo": "NFDI4BIOIMAGE/Postcards-2024",
"url": "https://github.com/NFDI4BIOIMAGE/Postcards-2024/pull/2",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
579065833
|
add prefabs
Add prefabs (NCMBManager, NCMBSettings)
Some junk files got mixed in, so I will recreate the PR.
|
gharchive/pull-request
| 2020-03-11T07:39:17 |
2025-04-01T04:55:24.116406
|
{
"authors": [
"kobayashi-masaya"
],
"repo": "NIFCLOUD-mbaas/ncmb_unity",
"url": "https://github.com/NIFCLOUD-mbaas/ncmb_unity/pull/175",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
712941339
|
Creative FizzBuzz one line solution in python
Creative FizzBuzz one line solution in python
Neat! Thank you :)
|
gharchive/pull-request
| 2020-10-01T15:17:10 |
2025-04-01T04:55:24.124543
|
{
"authors": [
"NLDev",
"suparna13"
],
"repo": "NLDev/Hacktoberfest-2020-FizzBuzz",
"url": "https://github.com/NLDev/Hacktoberfest-2020-FizzBuzz/pull/123",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
2470075497
|
Why did the embedding model get worse after fine-tuning?
Both the bge and m3e embedding models were fine-tuned on 15,000 examples; the loss went down and the data looks fine, but statistics show the recall rate dropped. What is going on? Are there any caveats?
Usually this means the training data and test data distributions have become inconsistent.
Since I don't know the specific scenario and how the data was constructed, I don't have any concrete suggestions.
It is a QA retrieval scenario. Positive examples are generated by an LLM (answers to the given questions), and negative examples are picked from the retrieved data that does not match the answer. If the problem is a distribution mismatch, is there a better way to create negatives? Random sampling might pick up positives - are there other ways to improve this?
Negative examples are picked from the retrieved data that does not match the answer.
->
Is this embedding-based hard negative mining (retrieve the top 200 by vector search, then randomly pick some non-positives from, say, top 30 to top 200)?
It is a QA retrieval scenario. Positive examples are generated by an LLM (answers to the given questions), and negative examples are picked from the retrieved data that does not match the answer. If the problem is a distribution mismatch, is there a better way to create negatives? Random sampling might pick up positives - are there other ways to improve this?
->
The generated query-answer pairs sound fine. You could actually run an experiment: use only the q-a pairs with in-batch random negatives and see how it performs, to rule out whether the mined hard negatives are the problem.
Negative examples are picked from the retrieved data that does not match the answer. -> Is this embedding-based hard negative mining (retrieve the top 200 by vector search, then randomly pick some non-positives from, say, top 30 to top 200)?
Yes, the hard negatives are mined from the top 300-500.
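For reference, a minimal sketch of this kind of banded hard-negative sampling (illustrative Python, not the RAG-Retrieval code; retrieve_top_k is an assumed helper that returns passages ranked by the current embedding model):
import random

def mine_hard_negatives(query, positives, retrieve_top_k,
                        band=(300, 500), n_neg=8, seed=42):
    """Sample hard negatives from a middle band of the retrieval ranking,
    skipping anything that is actually a positive for this query."""
    ranked = retrieve_top_k(query, band[1])           # e.g. the top 500 passages
    candidates = [p for p in ranked[band[0]:band[1]]  # keep ranks 300..500
                  if p not in positives]
    rng = random.Random(seed)
    return rng.sample(candidates, min(n_neg, len(candidates)))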
|
gharchive/issue
| 2024-08-16T11:24:50 |
2025-04-01T04:55:24.128496
|
{
"authors": [
"NLPJCL",
"huangt1"
],
"repo": "NLPJCL/RAG-Retrieval",
"url": "https://github.com/NLPJCL/RAG-Retrieval/issues/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
171572566
|
Ambiguity-correction dictionary fails to load
The logic starting here seems wrong, doesn't it? @ansjsun
String[] split = temp.split("\t");
StringBuilder sb = new StringBuilder();
if (split.length % 2 != 0) {
LIBRARYLOG.error("init ambiguity error in line :" + temp + " format err !");
}
for (int i = 0; i < split.length; i += 2) {
sb.append(split[i]);
}
ambiguityForest.addBranch(sb.toString(), split);
Got it - the format of the ambiguity.dic file changed.
|
gharchive/issue
| 2016-08-17T04:35:19 |
2025-04-01T04:55:24.130225
|
{
"authors": [
"rockagen"
],
"repo": "NLPchina/ansj_seg",
"url": "https://github.com/NLPchina/ansj_seg/issues/332",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2397362510
|
Weekly self-test failed due to underlying Node version change
See:
https://github.com/NLnetLabs/ploutos-testing/actions/runs/9823032272/job/27201312704#step:3:21
https://github.com/actions/checkout/issues/1809
I've isolated and resolved the issue in testing branch 111-weekly-self-test-failed-due-to-underlying-node-version-change.
A ploutos-testing packaging run that succeeds when using this branch can be seen here: https://github.com/NLnetLabs/ploutos-testing/actions/runs/10091141352
|
gharchive/issue
| 2024-07-09T07:21:46 |
2025-04-01T04:55:24.135231
|
{
"authors": [
"ximon18"
],
"repo": "NLnetLabs/ploutos",
"url": "https://github.com/NLnetLabs/ploutos/issues/111",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
368214867
|
Log per session/user
NLog version: 4.5.6
Platform: .Net 4.5 / Mono 4
Current NLog config (xml or C#, if relevant)
Is it possible to log per session/user? I mean to log data to different files by session or userId.
Thank you
No idea how your application works but maybe you can do something like this:
var userLogger = LogManager.GetLogger(userId + "_" + sessionId);
userLogger.Info("Hello World");
Then you can configure your file-target like this:
<?xml version="1.0" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<targets>
<target name="userFile" xsi:type="File" fileName="${basedir}/${logger}.txt" />
</targets>
<rules>
<logger name="*" minlevel="Debug" writeTo="userFile" />
</rules>
</nlog>
Another option is to use the ${aspnet-sessionid},
e.g. file per session:
<?xml version="1.0" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<targets>
<target name="f" xsi:type="File" fileName="${basedir}//session-${aspnet-sessionid}.txt" />
</targets>
<rules>
<logger name="*" minlevel="Debug" writeTo="f" />
</rules>
</nlog>
Thank you.
Will try.
@snakefoot this seems to work:
var userLogger = LogManager.GetLogger(userId + "_" + sessionId);
userLogger.Info("Hello World");
But my application uses 3 different logger:
public static Logger Logger1 = LogManager.GetLogger("Log1");
public static Logger Logger2 = LogManager.GetLogger("Log2");
public static Logger Logger3 = LogManager.GetLogger("Log3");
configured by the .config file:
<rules>
<logger name="Log1" minlevel="Info" writeTo="log1" />
<logger name="Log2" minlevel="Info" writeTo="log2" />
<logger name="Log3" minlevel="Info" writeTo="log3" />
</rules>
How can I use the ${logger} placeholder?
I think you need to explain what you want. You say that you want a log file for each userid and sessionid.
Then suddenly you are restricted to only 3 loggers and 3 targets. You can also push the userid and sessionid using MDLC and add them to the FileName layout. See also https://github.com/NLog/NLog/wiki/MDLC-Layout-Renderer
|
gharchive/issue
| 2018-10-09T13:38:01 |
2025-04-01T04:55:24.143290
|
{
"authors": [
"304NotModified",
"JTrotta",
"snakefoot"
],
"repo": "NLog/NLog",
"url": "https://github.com/NLog/NLog/issues/2955",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
394801937
|
TargetWithContext: support for IRawValue
Making use of IRawValue-interface
Thanks!
|
gharchive/pull-request
| 2018-12-29T14:56:48 |
2025-04-01T04:55:24.144289
|
{
"authors": [
"304NotModified",
"snakefoot"
],
"repo": "NLog/NLog",
"url": "https://github.com/NLog/NLog/pull/3060",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
933004991
|
Google Colab bug affecting nwis_client and _restclient: throws RuntimeError: This event loop is already running
In Google Colab, instantiating a RestClient (this is implicitly done by nwis_client.IVDataService) will cause a RuntimeError: This event loop is already running. This issue is well documented in the jupyter notebook repo; in that thread, a workaround using nest_asyncio was mentioned. The problem and a solution are shown below.
Reproduce Problem
!pip install hydrotools.nwis_client
from hydrotools import nwis_client
client = nwis_client.IVDataService()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-10-d37c2bf4ee70> in <module>()
----> 1 service = nwis_client.IVDataService()
4 frames
/usr/lib/python3.7/asyncio/base_events.py in _check_runnung(self)
521 def _check_runnung(self):
522 if self.is_running():
--> 523 raise RuntimeError('This event loop is already running')
524 if events._get_running_loop() is not None:
525 raise RuntimeError(
RuntimeError: This event loop is already running
Solution
!pip install hydrotools.nwis_client
import nest_asyncio
nest_asyncio.apply()
from hydrotools import nwis_client
client = nwis_client.IVDataService()
IMO the best way to get around this is to try/except where the error propagates from, then try to import nest_asyncio and call nest_asyncio.apply(). If nest_asyncio is not installed, throw a ModuleNotFoundError referencing this issue and noting to install nest_asyncio. Given that this is such an edge case, and nest_asyncio is required by nbclient, which is required by nbconvert, which is required by jupyter notebook, it is unlikely that a user will ever not have nest_asyncio installed and run into this issue. Before I open a PR to resolve this, I'd like to hear your thoughts @jarq6c.
Can we make the try/catch specific enough that it only catches this edge case?
I believe so, here is a generic example:
def raise_exception():
raise Exception("test")
def test_catch():
try:
raise_exception()
except Exception as e:
if str(e) != "test":
raise e
print("caught specific error")
if __name__ == "__main__":
test_catch()
>>> caught specific error
Now down the line if the error message changes this will break, but it is the only alternative I can think of off the top of my head although admittedly it is a little frail.
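For what it's worth, a minimal sketch of how this could look around the RestClient's event-loop call (hypothetical helper name; it checks loop.is_running() up front instead of catching the RuntimeError, which avoids retrying an already-scheduled coroutine):
import asyncio

def run_with_nested_loop_support(loop: asyncio.AbstractEventLoop, coro):
    """Run a coroutine on `loop`; if the loop is already running (e.g. inside
    Jupyter or Google Colab), patch it with nest_asyncio first so that
    run_until_complete can be re-entered."""
    if loop.is_running():
        try:
            import nest_asyncio  # optional dependency for notebook environments
        except ModuleNotFoundError as err:
            raise ModuleNotFoundError(
                "An event loop is already running (common in notebooks); "
                "install nest_asyncio to work around it (see this issue)."
            ) from err
        nest_asyncio.apply(loop)
    return loop.run_until_complete(coro)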
I reproduced the exception using the below snippet:
from hydrotools import nwis_client
import asyncio
async def main():
await asyncio.sleep(0.01)
service = nwis_client.IVDataService(enable_cache=False)
return service.get(sites="01646500")
if __name__ == "__main__":
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Traceback (most recent call last):
File "nwis_client_loop_already_created_bug.py", line 14, in <module>
loop.run_until_complete(main())
File "~/miniconda3/envs/et/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "nwis_client_loop_already_created_bug.py", line 8, in main
service = nwis_client.IVDataService(enable_cache=False)
File "~/evaluation_tools_fork/python/nwis_client/hydrotools/nwis_client/iv.py", line 91, in __init__
self._restclient = RestClient(
File "~/evaluation_tools_fork/python/_restclient/hydrotools/_restclient/_restclient.py", line 109, in __init__
self._session = self._add_to_loop(
File "~/evaluation_tools_fork/python/_restclient/hydrotools/_restclient/async_helpers.py", line 22, in _add_to_loop
return self._loop.run_until_complete(coro)
File "~/miniconda3/envs/et/lib/python3.8/asyncio/base_events.py", line 592, in run_until_complete
self._check_running()
File "~/miniconda3/envs/et/lib/python3.8/asyncio/base_events.py", line 552, in _check_running
raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
Exception ignored in: <function RestClient.__del__ at 0x1118ffc10>
Traceback (most recent call last):
File "~/evaluation_tools_fork/python/_restclient/hydrotools/_restclient/_restclient.py", line 297, in __del__
File "~/evaluation_tools_fork/python/_restclient/hydrotools/_restclient/_restclient.py", line 291, in close
AttributeError: 'RestClient' object has no attribute '_session'
sys:1: RuntimeWarning: coroutine 'AsyncToSerialHelper._wrap_func_in_coro.<locals>.wr' was never awaited
|
gharchive/issue
| 2021-06-29T18:39:44 |
2025-04-01T04:55:24.216651
|
{
"authors": [
"aaraney",
"jarq6c"
],
"repo": "NOAA-OWP/hydrotools",
"url": "https://github.com/NOAA-OWP/hydrotools/issues/99",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
276464444
|
time shifted when multiple time plots overlaid on 2D plots
Reported by ansley b manke on 10 Jan 2017 19:53 UTC
This showed up on SOCAT Correlation viewer plots, parameter as a function of time, when the time range is short.
These plots are drawn as PLOT/VS overlays on a blank 2D time-parameter plot, as the time data may be for multiple platforms and so might not define a monotonic time axis. The plot style is a dashed gray line showing time, overlaid with symbols. An example screenshot is attached.
This simple script also shows it.
yes? define axis/t=15-jun-1970:16-jun-1970:60/t0=1-jan-1970/units=seconds tax
yes? def axis/y=-2:2:0.2 yaxis
yes? let tt = t[gt=tax]
yes? shade/pal=white/nokey y[gy=yaxis] + t[gt=tax]
yes? ! Second plot shifted in time
yes? plot/over/color=red cos(tt/3000)
yes? plot/over/color=blue cos(tt/3000)+0.5
Migrated-From: http://dunkel.pmel.noaa.gov/trac/ferret/ticket/2495
Comment by ansley.b.manke on 10 Jan 2017 20:46 UTC
This is fixed in ppl/plot/pltit.F, and the routines taxis4.F and tayis4.F
Comment by ansley.b.manke on 10 Jan 2017 22:59 UTC
The fix is merged to the PyFerret branch.
Attachment from ansley.b.manke on 10 Jan 2017 19:55 UTC
The example SOCAT plot
[Uploaded file: ./attachments/TRAC_2495_GIT_1767/socat_2495.jpg]
Attachment from ansley.b.manke on 10 Jan 2017 19:55 UTC
The plot from the example script
[Uploaded file: ./attachments/TRAC_2495_GIT_1767/bug2495.gif]
|
gharchive/issue
| 2017-11-23T20:05:01 |
2025-04-01T04:55:24.221520
|
{
"authors": [
"karlmsmith"
],
"repo": "NOAA-PMEL/Ferret",
"url": "https://github.com/NOAA-PMEL/Ferret/issues/1767",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
276466989
|
control over precision of STAT output
Reported by ansley b manke on 28 Feb 2017 18:54 UTC
We have SET LIST/PREC= and LIST/PREC=
It would be nice to have STAT/PREC=
Also perhaps apply the precision from SET LIST/PREC= to the STAT command, which would be overridden by a STAT/PREC= setting.
Migrated-From: http://dunkel.pmel.noaa.gov/trac/ferret/ticket/2512
Comment by ansley.b.manke on 27 Mar 2017 21:59 UTC
This is fixed. I did not apply the setting from SET LIST/PREC= as it seemed inconsistent with the defaults that the STAT command uses, so only the new STAT/PRECISION= is implemented.
|
gharchive/issue
| 2017-11-23T20:25:20 |
2025-04-01T04:55:24.223808
|
{
"authors": [
"karlmsmith"
],
"repo": "NOAA-PMEL/Ferret",
"url": "https://github.com/NOAA-PMEL/Ferret/issues/1784",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
282119429
|
Saved Scenario in 1.7.0b - can't open in 1.6.1
Hi guys,
I assume there is new functionality in 1.7.0b. I saved a scenario in this client by accident (thought I was in 1.6.1) and now can't see it in the 1.6.1 release client.
Is there a way to convert it back to being compatible with 1.6.1? If not will this 1.7.0b scenario be compatible with 2.0 when it is released?
https://github.com/NPBruce/valkyrie/wiki/QuestIniQuest
Try to set format=8 inside the quest.ini. Be aware of the changes in quest format 9 https://github.com/NPBruce/valkyrie/wiki/QuestFormat
If you are using the added traits or investigator attacks it won't work.
Otherwise you will have to remove all references to $end (you can do this in the editor or manually) and then manually edit quest.ini and change the format to 8.
I had saved it previously in 1.7.0b and it was still visible. I think I saved it after playing around with the sorting functions.
I am not using the added traits or investigator attacks.
I am nowhere near finishing this, so I'm not worried about whether it will work until the next release. Am I right in thinking a 1.7.0b scenario will work in the next release of Valkyrie (2.0 now, I believe)?
Yes it will work in future versions.
|
gharchive/issue
| 2017-12-14T14:30:02 |
2025-04-01T04:55:24.233800
|
{
"authors": [
"NPBruce",
"redwolf2",
"scrubbless"
],
"repo": "NPBruce/valkyrie",
"url": "https://github.com/NPBruce/valkyrie/issues/744",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1109659251
|
Solaris utilities: temporary onboarding
Data_to_tiles transition
In order to facilitate the transition from hdf5 samples to plain .tif (imagery) and .geojson (ground truth) samples (aka tiles), some utilities from the solaris library must be temporarily onboarded.
Once solaris v0.5 is out, some PRs will be made to the main solaris repo in order to merge GDL's needs with the state of solaris at that point.
Why temporarily onboard, then remove?
Note for those of you wondering why we don't just use solaris as it is now and adapt once it's updated:
Since solaris was previously considered dead, the onboarding and modification of solaris to meet GDL's needs had already advanced with many hours of bugfixing and adapting. Therefore, it will be easier to add the relevant solaris utilities to GDL directly, make PRs to ensure a seamless transition, and then remove those utilities from GDL's repo.
The GDL team has decided to wait for solaris to release v0.5.0 and skip this temporary onboarding.
Meanwhile, a branch with the solaris material and tiling to .tifs and .geojson is on my fork: 215-solaris-tiling2
|
gharchive/issue
| 2022-01-20T18:59:30 |
2025-04-01T04:55:24.237239
|
{
"authors": [
"remtav"
],
"repo": "NRCan/geo-deep-learning",
"url": "https://github.com/NRCan/geo-deep-learning/issues/233",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2008237345
|
Navigation: 'Connect with Us' Opens in a New Page.
To enhance user experience, consider redirecting the "Connect with us" link to open in a new page.
It is necessary to open this page in a new tab:
@pradeeptosarkar
go ahead @the-shivam-gupta
|
gharchive/issue
| 2023-11-23T13:28:36 |
2025-04-01T04:55:24.305917
|
{
"authors": [
"pradeeptosarkar",
"the-shivam-gupta"
],
"repo": "NSCC-BPIT/NSCC-BPIT-Website",
"url": "https://github.com/NSCC-BPIT/NSCC-BPIT-Website/issues/499",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
225238906
|
File is not created when using run myconfig.json
I have just saved the config file from NSwagStudio and tried to run it from the command line.
nswag run myconfig.json
NSwag NPM CLI
NSwag command line tool for .NET 4.6+, toolchain v10.6.6324.32485 (NJsonSchema v8.33.6323.36213) (x64)
Visit http://NSwag.org for more information.
NSwag bin directory: C:\Users\Me\AppData\Roaming\nvm\v7.9.0\node_modules\nswag\bin\binaries\full
Executing file 'myconfig.json'...
Done.
Duration: 00:00:00.465226
No file is being produced, even though I set "output": "test.ts".
here is my config:
{
"swaggerGenerator": {
"jsonSchemaToSwagger": {
"name": "test",
"schema": "{\r\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"listing\": {\r\n \"type\": \"array\",\r\n \"items\": {\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"id\": {\r\n \"type\": \"number\"\r\n },\r\n \"type\": {\r\n \"type\": \"string\"\r\n },\r\n \"guests\": {\r\n \"type\": \"string\"\r\n },\r\n \"location\": {\r\n \"type\": \"string\"\r\n },\r\n \"propertyType\": {\r\n \"type\": \"string\"\r\n },\r\n \"placeType\": {\r\n \"type\": \"string\"\r\n },\r\n \"priceType\": {\r\n \"type\": \"string\"\r\n },\r\n \"price\": {\r\n \"type\": \"string\"\r\n },\r\n \"minStay\": {\r\n \"type\": \"string\"\r\n },\r\n \"maxStay\": {\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}",
"output": "test.ts"
}
},
"codeGenerators": {
"swaggerToTypeScriptClient": {
"className": "{controller}Client",
"moduleName": "",
"namespace": "",
"typeScriptVersion": 2.0,
"template": "Angular",
"promiseType": "Promise",
"dateTimeType": "MomentJS",
"nullValue": "Null",
"generateClientClasses": true,
"generateClientInterfaces": false,
"generateOptionalParameters": false,
"wrapDtoExceptions": false,
"useTransformOptionsMethod": false,
"useTransformResultMethod": false,
"generateDtoTypes": true,
"operationGenerationMode": "MultipleClientsFromOperationId",
"markOptionalProperties": true,
"generateCloneMethod": false,
"typeStyle": "Class",
"generateDefaultValues": true,
"excludedTypeNames": [],
"handleReferences": false,
"generateConstructorInterface": true,
"importRequiredTypes": true,
"baseUrlTokenName": "API_BASE_URL",
"output": null
}
}
}
Running the same config in NSwagStudio just works.
Define the output of the code generator (last prop in your json)
ahh I c, messed up with jsonSchemaToSwagger output.
How would you specify jsonSchemaToSwagger schema as a file name?
Currently I've got that:
"jsonSchemaToSwagger": {
"name": "test",
"schema": "{\r\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"listing\": {\r\n \"type\": \"array\",\r\n \"items\": {\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"id\": {\r\n \"type\": \"number\"\r\n },\r\n \"type\": {\r\n \"type\": \"string\"\r\n },\r\n \"guests\": {\r\n \"type\": \"string\"\r\n },\r\n \"location\": {\r\n \"type\": \"string\"\r\n },\r\n \"propertyType\": {\r\n \"type\": \"string\"\r\n },\r\n \"placeType\": {\r\n \"type\": \"string\"\r\n },\r\n \"priceType\": {\r\n \"type\": \"string\"\r\n },\r\n \"price\": {\r\n \"type\": \"string\"\r\n },\r\n \"minStay\": {\r\n \"type\": \"string\"\r\n },\r\n \"maxStay\": {\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}",
"output": null
}
but want something like:
"jsonSchemaToSwagger": {
"name": "test",
"schema": "schema.json",
"output": null
}
UPDATE:
Just tried nswag run myconfig.json /input:schema.json after removing jsonSchemaToSwagger from my myconfig.json. It seems to be working, but the file is more than 1k lines, whereas NSwagStudio produced just a 400-line file.
|
gharchive/issue
| 2017-04-29T07:04:06 |
2025-04-01T04:55:24.319335
|
{
"authors": [
"kuncevic",
"rsuter"
],
"repo": "NSwag/NSwag",
"url": "https://github.com/NSwag/NSwag/issues/747",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2447203004
|
Missing argument 'numpy_state' in Set_inits()
Error message below when running trachoma_fitting.R. I think the AMIS integration package is based on an older version of the trachoma code; the Set_inits() function now requires the argument numpy_state.
source("trachoma_fitting.R")
Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
TypeError: Set_inits() missing 1 required positional argument: 'numpy_state'
Run reticulate::py_last_error() for details.
reticulate::py_last_error()
── Python Exception Message ───────────────────────────────────────────────────────────────────────────
Traceback (most recent call last):
File "/Users/u2176058/Documents/trachoma-endgame/trachoma-amis-integration/trachoma_amis/amis_integration.py", line 103, in build_transmission_model
) = setup(initial_infect_frac)
File "/Users/u2176058/Documents/trachoma-endgame/trachoma-amis-integration/trachoma_amis/amis_integration.py", line 50, in setup
vals = Set_inits(parameters, demog, sim_params)
TypeError: Set_inits() missing 1 required positional argument: 'numpy_state'
── R Traceback ────────────────────────────────────────────────────────────────────────────────────────
▆
├─base::source("trachoma_fitting.R")
│ ├─base::withVisible(eval(ei, envir))
│ └─base::eval(ei, envir)
│ └─base::eval(ei, envir)
└─amis_int_mod$build_transmission_model(...) at trachoma_fitting.R:30:1
└─reticulate:::py_call_impl(callable, call_args$unnamed, call_args$named)
That's right, this is based on version 1.0.0 (the latest tag) of the trachoma model
Thanks Thibault - I rolled back to v1.0.0 (noting this required downgrading NumPy from 2.0.1 to 1.26.4 and adding a version tag to the trachoma package in pyproject.toml). But now I'm getting the error below.
p.s. sorry for the many questions - I saw that you were going to look at the integration later this week so I can stop here until you've had a chance to do this
Exception type IndexError was raised during the execution of the model:
── Python Exception Message ─────────────────────────────────────────────────────────────────────────────────
Traceback (most recent call last):
File "/Users/u2176058/Documents/trachoma-endgame/trachoma-amis-integration/trachoma_amis/amis_integration.py", line 114, in run_trachoma
results = Parallel(n_jobs=num_cores)(
File "/Users/u2176058/Documents/trachoma-endgame/trachoma-venv/lib/python3.10/site-packages/joblib/parallel.py", line 1918, in __call__
return output if self.return_generator else list(output)
File "/Users/u2176058/Documents/trachoma-endgame/trachoma-venv/lib/python3.10/site-packages/joblib/parallel.py", line 1847, in _get_sequential_output
res = func(*args, **kwargs)
File "/Users/u2176058/Documents/trachoma-endgame/trachoma-venv/lib/python3.10/site-packages/trachoma/trachoma_functions.py", line 1184, in run_single_simulation
results = sim_Ind_MDA_Include_Survey(params=params, Tx_mat = Tx_mat,
File "/Users/u2176058/Documents/trachoma-endgame/trachoma-venv/lib/python3.10/site-packages/trachoma/trachoma_functions.py", line 919, in sim_Ind_MDA_Include_Survey
vals = stepF_fixed(vals=vals, params=params, demog=demog, bet=betas[i])
File "/Users/u2176058/Documents/trachoma-endgame/trachoma-venv/lib/python3.10/site-packages/trachoma/trachoma_functions.py", line 155, in stepF_fixed
lambda_step = 1 - np.exp(- getlambdaStep(params=params, Age=vals['Age'], bact_load=vals['bact_load'],
File "/Users/u2176058/Documents/trachoma-endgame/trachoma-venv/lib/python3.10/site-packages/trachoma/trachoma_functions.py", line 310, in getlambdaStep
returned[vaccinated] = (1 - prob_reduction[vaccinated]) * returned[vaccinated]
IndexError: arrays used as indices must be of integer (or boolean) type
── R Traceback ──────────────────────────────────────────────────────────────────────────────────────────────
▆
1. └─AMISforInfectiousDiseases::amis(...)
2. └─global transmission_model(seeds = 1:n_samples, param, n_tims) at evandrokonzen-AMISforInfectiousDiseases-dev-55f99ef/R/AMIS.R:270:5
3. ├─base::tryCatch(...)
4. │ └─base (local) tryCatchList(expr, classes, parentenv, handlers)
5. │ └─base (local) tryCatchOne(expr, names, parentenv, handlers[[1L]])
6. │ └─base (local) doTryCatch(return(expr), name, parentenv, handler)
7. └─reticulate (local) model_func(seeds, params, n_tims)
8. └─reticulate:::py_call_impl(callable, call_args$unnamed, call_args$named)
Error in value[[3L]](cond) :
list(AMISforInfectiousDiseases::amis(prevalence_map, function(seeds, params, n_tims) {
tryCatch(expr = {
model_func(seeds, params, n_tims)
}, error = error_function)
}, prior), transmission_model(seeds = 1:n_samples, param, n_tims), tryCatch(expr = {
model_func(seeds, params, n_tims)
}, error = error_function), tryCatchList(expr, classes, parentenv, handlers), tryCatchOne(expr, names, parentenv, handlers[[1]]), doTryCatch(return(expr), name, parentenv, handler), model_func(seeds, params, n_tims), py_call_impl(callable, call_args$unnamed, call_args$named))c(0, 1, 2, 3, 4, 5, 2, 7)c(TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE)c("AMISforInfectiousDiseases", NA, "base", "base", "base", "base", "reticulate", "reticulate")c("::", "global", "::", "local", "local", "local", "local", ":::")list(AMISforInfectiousDiseases::amis(prevalence_map, function(seeds, params, n_tims) {
tryCatch(expr = {
model_func(seeds, params, n_tims)
}, error = error_func
Yes, I'm getting this one as well and I don't have a fix for it at the moment. It looks like vaccination isn't set up correctly in amis_integration.py. I'm not going to be able to take a close look at this before Friday, I'm afraid.
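For reference, a minimal numpy sketch that reproduces this class of error and shows the usual cast that avoids it; this is illustrative only, not the actual fix in the trachoma code:
import numpy as np

# The failing line indexes arrays with `vaccinated`; if that array holds floats,
# numpy raises "arrays used as indices must be of integer (or boolean) type".
returned = np.array([0.4, 0.5, 0.6])
prob_reduction = np.array([0.1, 0.2, 0.3])
vaccinated = np.array([1.0, 0.0, 1.0])  # float mask -> invalid index

try:
    returned[vaccinated] = (1 - prob_reduction[vaccinated]) * returned[vaccinated]
except IndexError as err:
    print(err)

mask = vaccinated.astype(bool)  # casting to a boolean mask makes the indexing valid
returned[mask] = (1 - prob_reduction[mask]) * returned[mask]
print(returned)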
|
gharchive/issue
| 2024-08-04T16:18:34 |
2025-04-01T04:55:24.328262
|
{
"authors": [
"RaihaTuiTaura",
"tlestang"
],
"repo": "NTD-Modelling-Consortium/trachoma-amis-integration",
"url": "https://github.com/NTD-Modelling-Consortium/trachoma-amis-integration/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1350480757
|
Does the last release (DP) of isaac_ros_visual_slam still support Galactic or just Humble?
Hello!
I started using the isaac_ros_visual_slam package around 2 months ago, before the latest release that supports Humble. My application still uses ROS2 Galactic at its core, so the release I used at the time (v0.9.3-ea3) has been working well for me. Since then I haven't had time to check whether the latest stable release also supports Galactic, and the documentation doesn't mention this. So, does it support it or not?
Thank you!
Isaac ROS EA3 supported only ROS2 Foxy but could in theory have worked fine on Galactic with minor modifications. The Isaac ROS DP packages, however, take advantage of new features in ROS2 Humble and wouldn't be able to run on an earlier release.
|
gharchive/issue
| 2022-08-25T07:54:54 |
2025-04-01T04:55:24.336938
|
{
"authors": [
"geoporus-karelics",
"hemalshahNV"
],
"repo": "NVIDIA-ISAAC-ROS/isaac_ros_visual_slam",
"url": "https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam/issues/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
141288856
|
Support for inference on LMDB
A (small) step towards support for non-image data: this makes it possible to perform inference on any 3D blob. Blobs may not be "proper" images (e.g. because they don't have a standard number of channels).
close #619
close #630
A new field and a new button were added to allow inference on LMDB database:
Results are shown as follows (using DB key instead of image path):
Progress
[x] Add "Test DB" form to generic model show page
[x] Create new route
[x] Update inference job to take in a path to a database
[x] Update inference tool to parse an LMDB database
[x] Add tests
Tests added in latest commit.
I am interested in using DIGITS with my multispectral images (4 image channels). Since most multispectral images can be output as separate 8-bit grayscale images (one for each channel), I was wondering if we could get DIGITS to accept each multispectral image as separate channel images for both training and classification. For example, the file list could contain the image name, channel number and category.
Image1.jpg c1 1
Image1.jpg c2 1
Image1.jpg c3 1
Image1.jpg c4 1
Image2.jpg c1 1
Image2.jpg c2 1
Image2.jpg c3 1
Image2.jpg c4 1
Image3.jpg c1 2
Image3.jpg c2 2
Image3.jpg c3 2
Image3.jpg c4 2
Hi @ajcampisi DIGITS does not support the format you propose out of the box. Are you saying that each image channel - taken in isolation - is sufficient to classify the image? In that case why don't you convert the 4-channel image to grayscale with:
convert <input> -colorspace Gray <output>
and then feed those grayscale images to DIGITS?
Hi Greg,
Thanks for the reply. No, unfortunately an individual channel image is not sufficient for classification. All four channel images need to be fed into the network as four separate network inputs, and the combined image data from all four channels can then be used by the network for both classification and training.
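For illustration, a minimal sketch (not a DIGITS feature) of stacking the four per-channel grayscale exports of one capture into a single 4-channel array, which could then be written into the kind of non-image LMDB/HDF5 blob this PR targets; the file names are hypothetical placeholders:
import numpy as np
from PIL import Image

# Hypothetical per-channel grayscale exports of a single multispectral capture.
channel_files = ["Image1_c1.png", "Image1_c2.png", "Image1_c3.png", "Image1_c4.png"]

# Load each channel as an 8-bit grayscale array and stack along the last axis.
channels = [np.asarray(Image.open(path).convert("L"), dtype=np.uint8)
            for path in channel_files]
multispectral = np.stack(channels, axis=-1)  # shape: (H, W, 4)

print(multispectral.shape)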
|
gharchive/pull-request
| 2016-03-16T14:18:23 |
2025-04-01T04:55:24.354391
|
{
"authors": [
"ajcampisi",
"gheinrich"
],
"repo": "NVIDIA/DIGITS",
"url": "https://github.com/NVIDIA/DIGITS/pull/638",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2332887370
|
Fix pre-norm weight conversion for nmt
For the NMT model, if it uses the pre-norm architecture, it will have a final layer norm. (Ref: fairseq encoder init, forward and fairseq decoder init, forward).
We don't need to modify the rest of the script since fairseq will use the original final_layer_norm before the FFN even though they call it final_layer_norm. (Ref: https://github.com/facebookresearch/fairseq/blob/main/fairseq/modules/transformer_layer.py#L212)
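For illustration, a minimal sketch of how one might detect the module-level final layer norm in a pre-norm fairseq NMT checkpoint; the key names follow fairseq's usual convention (encoder.layer_norm.*, decoder.layer_norm.*), the checkpoint path is hypothetical, and the actual conversion logic lives in the script this PR modifies:
import torch

ckpt = torch.load("fairseq_nmt_checkpoint.pt", map_location="cpu")  # hypothetical path
state_dict = ckpt["model"]

def final_layer_norm(prefix):
    # Pre-norm models carry a module-level final layer norm; post-norm models do not.
    w_key, b_key = f"{prefix}.layer_norm.weight", f"{prefix}.layer_norm.bias"
    if w_key in state_dict:
        return state_dict[w_key], state_dict[b_key]
    return None

for side in ("encoder", "decoder"):
    ln = final_layer_norm(side)
    if ln is None:
        print(f"{side}: post-norm, no final layer norm to convert")
    else:
        print(f"{side}: pre-norm, final layer norm of shape {tuple(ln[0].shape)}")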
Hi @Pzzzzz5142, thanks for your contribution. This PR has been merged and will be upstreamed to the main branch next week.
|
gharchive/pull-request
| 2024-06-04T08:27:52 |
2025-04-01T04:55:24.400810
|
{
"authors": [
"Pzzzzz5142",
"nv-guomingz"
],
"repo": "NVIDIA/TensorRT-LLM",
"url": "https://github.com/NVIDIA/TensorRT-LLM/pull/1723",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1953138507
|
Graphcast
Earth-2 MIP Pull Request
Description
Add deepmind's graphcast model.
Checklist
[x] I am familiar with the Contributing Guidelines.
[x] New or existing tests cover these changes.
[x] The documentation is up to date with these changes.
[x] The CHANGELOG.md is up to date with these changes.
[ ] An issue is linked to this pull request.
Dependencies
Added:
graphcast (Apache 2)
gcsfs, https://github.com/fsspec/gcsfs, (BSD-3)
jax, https://github.com/google/jax (Apache 2)
Updates:
modulus. requires version of modulus after https://github.com/NVIDIA/modulus/pull/194 merges
/blossom-ci
/blossom-ci
/blossom-ci
/blossom-ci
/blossom-ci
|
gharchive/pull-request
| 2023-10-19T22:15:53 |
2025-04-01T04:55:24.431870
|
{
"authors": [
"NickGeneva",
"nbren12"
],
"repo": "NVIDIA/earth2mip",
"url": "https://github.com/NVIDIA/earth2mip/pull/76",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
834751923
|
Installation on GeForce 2080 Ti
Hi, just wondering, is it possible to install this lib on a 2080 Ti cluster? Thanks
Hi @dexhunter ,
This library requires a hardware feature, which is available on Tesla and Quadro class GPUs. So, it would not work on your GeForce.
ok thanks
|
gharchive/issue
| 2021-03-18T12:35:58 |
2025-04-01T04:55:24.433821
|
{
"authors": [
"dexhunter",
"pakmarkthub"
],
"repo": "NVIDIA/gdrcopy",
"url": "https://github.com/NVIDIA/gdrcopy/issues/183",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2490724546
|
[FEA] CAE metrics
Modulus Pull Request
Description
Adding useful metrics for the CAE domain. The integrals are new additions, while the others are moved into the package from individual examples.
Checklist
[x] I am familiar with the Contributing Guidelines.
[x] New or existing tests cover these changes.
[x] The documentation is up to date with these changes.
[x] The CHANGELOG.md is up to date with these changes.
[x] An issue is linked to this pull request.
Dependencies
/blossom-ci
/blossom-ci
/blossom-ci
/blossom-ci
/blossom-ci
/blossom-ci
/blossom-ci
/blossom-ci
/blossom-ci
/blossom-ci
|
gharchive/pull-request
| 2024-08-28T02:11:52 |
2025-04-01T04:55:24.442487
|
{
"authors": [
"ktangsali"
],
"repo": "NVIDIA/modulus",
"url": "https://github.com/NVIDIA/modulus/pull/658",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
455913276
|
How to configure for running outside of Docker container? (undefined symbol: _ZN2cv8fastFreeEPv)
I am developing an application based upon the code in this repository and in order to easily debug my code using PyCharm I am running outside of the provided Docker container. I am seeing the same error as is described in issue #15:
Traceback (most recent call last):
File "/home/james/miniconda3/envs/pytorch_retinanet/bin/retinanet", line 6, in <module>
from retinanet.main import main
File "/home/james/miniconda3/envs/pytorch_retinanet/lib/python3.7/site-packages/retinanet/main.py", line 10, in <module>
from retinanet import infer, train, utils
File "/home/james/miniconda3/envs/pytorch_retinanet/lib/python3.7/site-packages/retinanet/infer.py", line 13, in <module>
from .model import Model
File "/home/james/miniconda3/envs/pytorch_retinanet/lib/python3.7/site-packages/retinanet/model.py", line 8, in <module>
from ._C import Engine
ImportError: /home/james/miniconda3/envs/pytorch_retinanet/lib/python3.7/site-packages/retinanet/_C.so: undefined symbol: _ZN2cv8fastFreeEPv
Can anyone advise as to how I can work around this issue? My plan is to run my code within the provided container once it has been developed, but in the meantime using the container is problematic since it limits my ability to develop my code using PyCharm (although maybe that's easier to accomplish than fixing this issue). I have tried installing PyCharm into the Docker container with no success so far. :(
BTW, in case it helps explain why I'm trying this: I want to use the model to perform inference on video streams rather than on a collection of images as is shown in the inference example. I'm cobbling together an application that loads a fine-tuned version of this project's model and performs inference on video frame images. Target platform is a Jetson Nano.
Since nvidia-docker is not supported on Jetson Nano this appears to be a show stopper for inference using this project's code on that platform. Or am I missing something obvious? Maybe the workaround is to convert the model to a TensorRT engine and then inference using this approach?
Yes, containers are not currently available on the Jetson platforms. Also, not all of the project's dependencies are readily available for Jetson. This makes installing the full project on Jetson difficult, so an alternative workflow is needed. Thankfully, the Jetpack SDK contains the basic dependencies that are needed to create/run TRT engines on Jetson like CUDA/cuDNN/TensorRT.
Check out the instructions for creating a TensorRT engine from your model and deploying into a deepstream application here. We include instructions that have worked well on Jetson Xavier. Alternatively, you can use the generated TRT engine in your own custom application to perform inference. A basic example of this is provided here.
I will mention that deploying on Jetson Nano may not yield great performance, as its Maxwell generation GPU architecture does not support fast FP16/INT8 like the Volta GPU included with Jetson Xavier. Also, support for DeepStream on Jetson Nano is still not available yet, it is expected Q2 2019.
Thanks for the guidance and clarification, @pkashinkunti, very helpful!
#64 Compiled outside of the container
I installed it on Ubuntu 16.04, outside of Docker, without using Conda.
Install opencv-3.4.0
python3 -m pip install opencv-python
python3 -m pip install opencv-contrib-python
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran pylint
sudo apt-get install python3.5-dev
wget -c https://github.com/opencv/opencv/archive/3.4.0.zip
unzip 3.4.0.zip
cd opencv-3.4.0/
wget https://github.com/opencv/opencv_contrib/archive/3.4.0.zip -O opencv_contrib-3.4.0.zip
unzip opencv_contrib-3.4.0.zip
cd opencv-3.4.0
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-3.4.0/modules -D OPENCV_GENERATE_PKGCONFIG=ON ..
make -j 32
sudo make install
sudo ldconfig
Install pytorch
python3 -m pip install torch torchvision
Install cocoapi
git clone https://github.com/philferriere/cocoapi
cd cocoapi/PythonAPI/
python3 setup.py install
Install nvidia-dali
python3 -m pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/cuda/10.0 nvidia-dali
upgrade numpy to 1.11.0
python3 -m pip install --upgrade pip
sudo apt-get remove --auto-remove python-numpy
python3 -m pip install -U numpy==1.17.2
Install retinanet-examples
cd retinanet-examples
python3 setup.py clean --all install
CUDA_VISIBLE_DEVICES=0 retinanet infer retinanet_rn50fpn/retinanet_rn50fpn.pth --images ../../coco/val2017/ --annotations ../../coco/annotations/instances_val2017.json --size=2
Loading model from retinanet_rn50fpn.pth...
model: RetinaNet
backbone: ResNet50FPN
classes: 80, anchors: 9
Preparing dataset...
loader: pytorch
resize: 800, max: 1333
backend: pytorch
device: 1 gpu
batch: 2, precision: mixed
Running inference...
[ 914/5000] 0.131s/2-batch (fw: 0.130s), 15.2 im/s
[1814/5000] 0.133s/2-batch (fw: 0.133s), 15.0 im/s
[2726/5000] 0.132s/2-batch (fw: 0.131s), 15.2 im/s
[3632/5000] 0.133s/2-batch (fw: 0.132s), 15.1 im/s
[4526/5000] 0.134s/2-batch (fw: 0.134s), 14.9 im/s
[5000/5000] 0.132s/2-batch (fw: 0.132s), 15.1 im/s
Gathering results...
Writing detections.json...
Evaluating model...
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.359
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.551
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.387
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.204
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.403
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.473
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.309
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.494
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.523
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.333
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.578
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.675
|
gharchive/issue
| 2019-06-13T19:25:31 |
2025-04-01T04:55:24.491598
|
{
"authors": [
"azuryl",
"goktug97",
"monocongo",
"pkashinkunti"
],
"repo": "NVIDIA/retinanet-examples",
"url": "https://github.com/NVIDIA/retinanet-examples/issues/38",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1932991870
|
enable sanitizer in Arm64 build [skip ci]
Close #1388
According to my test with the latest code, there are no failures when enabling the sanitizer on the Arm64 build.
So this just enables it in the build script.
build
build
build
skip ci as no arm CI for pre-merge currently
build
|
gharchive/pull-request
| 2023-10-09T12:46:38 |
2025-04-01T04:55:24.494248
|
{
"authors": [
"GaryShen2008",
"jlowe",
"pxLi"
],
"repo": "NVIDIA/spark-rapids-jni",
"url": "https://github.com/NVIDIA/spark-rapids-jni/pull/1482",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1131945094
|
[submodule-sync] bot-submodule-sync-branch-22.04 to branch-22.04 [skip ci] [bot]
submodule-sync to create a PR keeping thirdparty/cudf up-to-date.
HEAD commit SHA: d3f44794419b2cd457a790c6475f1ba84ba64050, cudf commit SHA: https://github.com/rapidsai/cudf/commit/dcac052bd90c53865203ccda1ae4829da9d5db1f
This PR will be auto-merged if the test passes. If it fails, the PR will remain open until the test passes or it is fixed manually.
HEAD commit SHA: d3f44794419b2cd457a790c6475f1ba84ba64050, CUDF commit SHA: https://github.com/rapidsai/cudf/commit/dcac052bd90c53865203ccda1ae4829da9d5db1f
Test passed: True
SUCCESS - auto-merge
|
gharchive/pull-request
| 2022-02-11T07:17:43 |
2025-04-01T04:55:24.497448
|
{
"authors": [
"nvauto"
],
"repo": "NVIDIA/spark-rapids-jni",
"url": "https://github.com/NVIDIA/spark-rapids-jni/pull/54",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2175142939
|
[FEA] Generate updated supported CSV files from plugin repo
Fixes https://github.com/NVIDIA/spark-rapids-tools/issues/846
The spark-rapids repo added CSV file generation per Apache Spark version under spark-rapids/tools/generated_files.
This PR syncs with the plugin and updates the existing CSV files under spark-rapids-tools/core/src/main/resources. The new files are generated using the python script included in this PR. The high-level approach is to take the union of the CSV files across all spark versions, which is explained in more detail in the python file.
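As a rough sketch of that union step (paths and the key column are simplified assumptions; the actual script in this PR reconciles support levels such as S/PS/NS/CO in more detail):
import glob
import pandas as pd

# Read the per-Spark-version supported-Execs files and concatenate them.
frames = [pd.read_csv(path) for path in glob.glob("generated_files/*/supportedExecs.csv")]
combined = pd.concat(frames, ignore_index=True)

# Keep one row per operator; a real merge would reconcile conflicting support
# levels column by column instead of keeping the first occurrence.
union = combined.drop_duplicates(subset=["Exec"], keep="first").sort_values("Exec")
union.to_csv("supportedExecs_union.csv", index=False)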
Follow up:
This PR introduces new Execs. We need to add new parsers
Automate this process by setting up a regular Jenkins job
File an issue in the plugin repo to remove the stale files
Thanks @nartal1!
Do the dependencies for this script (pandas) have to be specified?
I added some comments in the script about its dependencies. My original intention was to include this script for review/future development reference. Do we need to explicitly specify them in a file like requirements.txt?
What would be the function of jenkins job i.e this python script will update the supported*.csv files and create a new PR?
The jenkins job would likely have the following process:
Use the python script in this PR to generate the union of the CSV files from the spark-rapids repo
Compare the current union files with the existing files in tools repo
Update tools repo and file an automated PR for review
It would be nice if the new python script accepted an input to override the data pulled from the plugin. For example, we could have a config file in JSON format to describe some sort of explicit overriding we need to do on the tools side.
Thanks @amahussein! I updated the python script to accept a configs file which overrides the final output. The default configs file contains the PromotePrecision information. Next step is to make this configs file optional.
What does CO mean in the supported CSV files?
CC: @amahussein @mattahrens
What does CO mean in the supported CSV files?
CO in this context means Configured Off, specifically for read formats. If the read format is off by default in the plugin, it's marked as CO. Here Avro and Json are off by default, so we end up adding CO for these read formats.
Code where CO is assigned: link
val readOps = types.map { t =>
if (!formatEnabled) {
// indicate configured off by default
"CO"
} else {
read.support(t).text
}
}
JSON and AVRO formats are disabled by default.
I think we can treat these the same as "NS", since we also have CollectLimitExec (disabled by default).
Thanks @amahussein!
Can we have the generated files sorted alphabetically similar to what the plugin sources have?
I looked into the generated files in the plugin repo. They are not sorted alphabetically either. If we need to follow the order strictly, I will spend more time on it. For now the order is generally the same, with a few exceptions.
It is hard to tell if the new change overrides any manual change we did in those files. For example, if I hadn't called out promote_precision, we would not see it. Is there another diff that can show us the change done to each operator? A possible workaround is to maintain the order of the columns so that the diff won't be bogus.
The manual changes done on the tools side are currently not tracked (we can use the configs file override_supported_confugs.json to do that). I believe promote_precision is so far the only one, based on previous Jenkins job messages.
The current bogus diff is due to two new columns DAYTIME and YEARMONTH. If we remove these columns and compare with the old CSV files in plugin again, we see that:
in supportedDataSource.csv:
no difference
in supportedExecs.csv:
@@ -17 +17 @@
-AQEShuffleReadExec,S,None,Input/Output,S,S,S,S,S,S,S,S,PS,S,S,S,S,NS,PS,PS,PS,NS
+CustomShuffleReaderExec,S,None,Input/Output,S,S,S,S,S,S,S,S,PS,S,S,S,S,NS,PS,PS,PS,NS
@@ -24,4 +23,0 @@
-WriteFilesExec,S,None,Input/Output,S,S,S,S,S,S,S,S,PS,S,S,S,S,S,PS,PS,PS,S
-AppendDataExecV1,S,None,Input/Output,S,S,S,S,S,S,S,S,PS,S,S,NS,S,NS,PS,PS,PS,NS
-AtomicCreateTableAsSelectExec,S,None,Input/Output,S,S,S,S,S,S,S,S,PS,S,S,NS,S,NS,PS,PS,PS,NS
-AtomicReplaceTableAsSelectExec,S,None,Input/Output,S,S,S,S,S,S,S,S,PS,S,S,NS,S,NS,PS,PS,PS,NS
@@ -29 +24,0 @@
-OverwriteByExpressionExecV1,S,None,Input/Output,S,S,S,S,S,S,S,S,PS,S,S,NS,S,NS,PS,PS,PS,NS
@@ -52 +46,0 @@
-PythonMapInArrowExec,S,None,Input/Output,S,S,S,S,S,S,S,S,PS,S,NS,NS,NS,NS,PS,NS,PS,NS
@@ -56 +49,0 @@
-WindowGroupLimitExec,S,None,Input/Output,S,S,S,S,S,S,S,S,PS,S,S,S,NS,NS,PS,PS,PS,NS
@@ -58 +50,0 @@
-CustomShuffleReaderExec,S,None,Input/Output,S,S,S,S,S,S,S,S,PS,S,S,S,S,NS,PS,PS,PS,NS
in supportedExprs.csv:
@@ -112,3 +111,0 @@
-BloomFilterMightContain,S, ,None,project,lhs,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,S,NA,NA,NA,NA,NA
-BloomFilterMightContain,S, ,None,project,rhs,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA
-BloomFilterMightContain,S, ,None,project,result,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
@@ -124,2 +121,2 @@
-CheckOverflowInTableInsert,S, ,None,project,input,S,S,S,S,S,S,S,S,PS,S,S,S,S,S,PS,PS,PS,S
-CheckOverflowInTableInsert,S, ,None,project,result,S,S,S,S,S,S,S,S,PS,S,S,S,S,S,PS,PS,PS,S
+CheckOverflow,S, ,None,project,input,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
+CheckOverflow,S, ,None,project,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
@@ -159,3 +156,3 @@
-DateAdd,S,`date_add`; `dateadd`,None,project,startDate,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-DateAdd,S,`date_add`; `dateadd`,None,project,days,NA,S,S,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-DateAdd,S,`date_add`; `dateadd`,None,project,result,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
+DateAdd,S,`date_add`,None,project,startDate,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
+DateAdd,S,`date_add`,None,project,days,NA,S,S,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
+DateAdd,S,`date_add`,None,project,result,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
@@ -165,3 +162,3 @@
-DateDiff,S,`date_diff`; `datediff`,None,project,lhs,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-DateDiff,S,`date_diff`; `datediff`,None,project,rhs,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-DateDiff,S,`date_diff`; `datediff`,None,project,result,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
+DateDiff,S,`datediff`,None,project,lhs,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
+DateDiff,S,`datediff`,None,project,rhs,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
+DateDiff,S,`datediff`,None,project,result,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
@@ -185,6 +181,0 @@
-DivideDTInterval,S, ,None,project,lhs,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-DivideDTInterval,S, ,None,project,rhs,NA,S,S,S,S,S,S,NA,NA,NA,NS,NA,NA,NA,NA,NA,NA,NA
-DivideDTInterval,S, ,None,project,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-DivideYMInterval,S, ,None,project,lhs,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-DivideYMInterval,S, ,None,project,rhs,NA,S,S,S,S,S,S,NA,NA,NA,NS,NA,NA,NA,NA,NA,NA,NA
-DivideYMInterval,S, ,None,project,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
@@ -196,2 +186,0 @@
-Empty2Null,S, ,None,project,input,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA
-Empty2Null,S, ,None,project,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA
@@ -300,2 +288,0 @@
-KnownNullable,S, ,None,project,input,S,S,S,S,S,S,S,S,PS,S,S,S,S,S,PS,PS,PS,S
-KnownNullable,S, ,None,project,result,S,S,S,S,S,S,S,S,PS,S,S,S,S,S,PS,PS,PS,S
@@ -317,2 +304,2 @@
-Length,S,`char_length`; `character_length`; `len`; `length`,None,project,input,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NS,NA,NA,NA,NA,NA
-Length,S,`char_length`; `character_length`; `len`; `length`,None,project,result,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
+Length,S,`char_length`; `character_length`; `length`,None,project,input,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NS,NA,NA,NA,NA,NA
+Length,S,`char_length`; `character_length`; `length`,None,project,result,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
@@ -379,6 +365,0 @@
-MultiplyDTInterval,S, ,None,project,lhs,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-MultiplyDTInterval,S, ,None,project,rhs,NA,S,S,S,S,S,S,NA,NA,NA,NS,NA,NA,NA,NA,NA,NA,NA
-MultiplyDTInterval,S, ,None,project,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-MultiplyYMInterval,S, ,None,project,lhs,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-MultiplyYMInterval,S, ,None,project,rhs,NA,S,S,S,S,S,S,NA,NA,NA,NS,NA,NA,NA,NA,NA,NA,NA
-MultiplyYMInterval,S, ,None,project,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
@@ -425,8 +406,2 @@
-PythonUDAF,S, ,None,aggregation,param,S,S,S,S,S,S,S,S,PS,S,NS,NS,NS,NS,PS,NS,PS,NS
-PythonUDAF,S, ,None,aggregation,result,S,S,S,S,S,S,S,S,PS,S,NS,NS,NS,NA,PS,NS,PS,NA
-PythonUDAF,S, ,None,reduction,param,S,S,S,S,S,S,S,S,PS,S,NS,NS,NS,NS,PS,NS,PS,NS
-PythonUDAF,S, ,None,reduction,result,S,S,S,S,S,S,S,S,PS,S,NS,NS,NS,NA,PS,NS,PS,NA
-PythonUDAF,S, ,None,window,param,S,S,S,S,S,S,S,S,PS,S,NS,NS,NS,NS,PS,NS,PS,NS
-PythonUDAF,S, ,None,window,result,S,S,S,S,S,S,S,S,PS,S,NS,NS,NS,NA,PS,NS,PS,NA
-PythonUDAF,S, ,None,project,param,S,S,S,S,S,S,S,S,PS,S,NS,NS,NS,NS,PS,NS,PS,NS
-PythonUDAF,S, ,None,project,result,S,S,S,S,S,S,S,S,PS,S,NS,NS,NS,NA,PS,NS,PS,NA
+PromotePrecision,S, ,None,project,input,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
+PromotePrecision,S, ,None,project,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
@@ -443,3 +418,3 @@
-RLike,S,`regexp_like`; `regexp`; `rlike`,None,project,str,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA
-RLike,S,`regexp_like`; `regexp`; `rlike`,None,project,regexp,NA,NA,NA,NA,NA,NA,NA,NA,NA,PS,NA,NA,NA,NA,NA,NA,NA,NA
-RLike,S,`regexp_like`; `regexp`; `rlike`,None,project,result,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
+RLike,S,`rlike`,None,project,str,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA
+RLike,S,`rlike`,None,project,regexp,NA,NA,NA,NA,NA,NA,NA,NA,NA,PS,NA,NA,NA,NA,NA,NA,NA,NA
+RLike,S,`rlike`,None,project,result,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
@@ -479,6 +453,0 @@
-RoundCeil,S, ,None,project,value,NA,S,S,S,S,PS,PS,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
-RoundCeil,S, ,None,project,scale,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-RoundCeil,S, ,None,project,result,NA,S,S,S,S,S,S,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
-RoundFloor,S, ,None,project,value,NA,S,S,S,S,PS,PS,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
-RoundFloor,S, ,None,project,scale,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-RoundFloor,S, ,None,project,result,NA,S,S,S,S,S,S,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
@@ -647 +616 @@
-XxHash64,S,`xxhash64`,None,project,input,S,S,S,S,S,S,S,S,PS,S,S,S,NS,NS,NS,NS,NS,NS
+XxHash64,S,`xxhash64`,None,project,input,S,S,S,S,S,NS,NS,S,PS,S,S,S,NS,NS,NS,NS,NS,NS
@@ -668 +637 @@
-Average,S,`avg`; `mean`,None,aggregation,input,NA,S,S,S,S,S,S,NA,NA,NA,S,S,NA,NS,NA,NA,NA,NA
+Average,S,`avg`; `mean`,None,aggregation,input,NA,S,S,S,S,S,S,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
@@ -670 +639 @@
-Average,S,`avg`; `mean`,None,reduction,input,NA,S,S,S,S,S,S,NA,NA,NA,S,S,NA,NS,NA,NA,NA,NA
+Average,S,`avg`; `mean`,None,reduction,input,NA,S,S,S,S,S,S,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
@@ -672 +641 @@
-Average,S,`avg`; `mean`,None,window,input,NA,S,S,S,S,S,S,NA,NA,NA,S,S,NA,NS,NA,NA,NA,NA
+Average,S,`avg`; `mean`,None,window,input,NA,S,S,S,S,S,S,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
@@ -674,10 +643,6 @@
-BloomFilterAggregate,S, ,None,reduction,child,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-BloomFilterAggregate,S, ,None,reduction,estimatedItems,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-BloomFilterAggregate,S, ,None,reduction,numBits,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
-BloomFilterAggregate,S, ,None,reduction,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA
-CollectList,S,`array_agg`; `collect_list`,None,aggregation,input,S,S,S,S,S,S,S,S,PS,S,S,S,S,NS,PS,PS,PS,NS
-CollectList,S,`array_agg`; `collect_list`,None,aggregation,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,PS,NA,NA,NA
-CollectList,S,`array_agg`; `collect_list`,None,reduction,input,S,S,S,S,S,S,S,S,PS,S,S,S,S,NS,PS,PS,PS,NS
-CollectList,S,`array_agg`; `collect_list`,None,reduction,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,PS,NA,NA,NA
-CollectList,S,`array_agg`; `collect_list`,None,window,input,S,S,S,S,S,S,S,S,PS,S,S,S,S,NS,PS,PS,PS,NS
-CollectList,S,`array_agg`; `collect_list`,None,window,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,PS,NA,NA,NA
+CollectList,S,`collect_list`,None,aggregation,input,S,S,S,S,S,S,S,S,PS,S,S,S,S,NS,PS,PS,PS,NS
+CollectList,S,`collect_list`,None,aggregation,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,PS,NA,NA,NA
+CollectList,S,`collect_list`,None,reduction,input,S,S,S,S,S,S,S,S,PS,S,S,S,S,NS,PS,PS,PS,NS
+CollectList,S,`collect_list`,None,reduction,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,PS,NA,NA,NA
+CollectList,S,`collect_list`,None,window,input,S,S,S,S,S,S,S,S,PS,S,S,S,S,NS,PS,PS,PS,NS
+CollectList,S,`collect_list`,None,window,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,PS,NA,NA,NA
@@ -766,2 +730,0 @@
-InSubqueryExec,S, ,None,project,input,S,S,S,S,S,S,S,S,PS,S,S,S,NS,NS,NS,NA,NS,NS
-InSubqueryExec,S, ,None,project,result,S,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA
@@ -773,4 +735,0 @@
-CheckOverflow,S, ,None,project,input,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
-CheckOverflow,S, ,None,project,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
-PromotePrecision,S,`promote_precision`,None,project,input,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
-PromotePrecision,S,`promote_precision`,None,project,result,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,S,NA,NA,NA,NA,NA,NA,NA
The diff may still look bogus, but if we look closely, we can see the difference is mainly due to:
new execs and expressions
longer SQL func values (date_add; dateadd vs date_add)
There are new rows introduced here, but we have not done any audit to assess the handling of the new operators. Especially for Execs and some of the UDF expressions added in exprs.csv
For the new Execs and Expressions, I should file follow up PRs to add the support.
|
gharchive/pull-request
| 2024-03-08T02:10:10 |
2025-04-01T04:55:24.511858
|
{
"authors": [
"cindyyuanjiang",
"nartal1"
],
"repo": "NVIDIA/spark-rapids-tools",
"url": "https://github.com/NVIDIA/spark-rapids-tools/pull/847",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1734464672
|
[BUG] GpuRegExpReplaceWithBackref with empty string input produces incorrect result on GPU in Spark 3.1.1
Describe the bug
There is a difference in behavior between Spark 3.1.1 and 3.3.2 when evaluating a regexp_replace against an empty string, but we do not currently respect this difference when running on GPU.
Spark 3.1.1 CPU
scala> spark.conf.set("spark.rapids.sql.enabled", "false")
scala> spark.sql("select REGEXP_REPLACE('', '.*$', 'PROD', 1)").show()
+------------------------------+
|regexp_replace(, .*$, PROD, 1)|
+------------------------------+
| |
+------------------------------+
Spark 3.3.2 CPU
scala> spark.conf.set("spark.rapids.sql.enabled", "false")
scala> spark.sql("select REGEXP_REPLACE('', '.*$', 'PROD', 1)").show()
+------------------------------+
|regexp_replace(, .*$, PROD, 1)|
+------------------------------+
| PROD|
+------------------------------+
The current GPU behavior is consistent with Spark 3.3.2.
Steps/Code to reproduce bug
See above.
Expected behavior
We should produce the same result as CPU on all Spark versions.
Environment details (please complete the following information)
Local workstation
Additional context
This was fixed in https://github.com/NVIDIA/spark-rapids/pull/8433
|
gharchive/issue
| 2023-05-31T15:15:23 |
2025-04-01T04:55:24.516776
|
{
"authors": [
"andygrove"
],
"repo": "NVIDIA/spark-rapids",
"url": "https://github.com/NVIDIA/spark-rapids/issues/8448",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1844143286
|
Actuation model like Mujoco
Hi,
Is it possible to apply forces through a gear-like actuation model in a rigid body Model simulation? I'd like to actuate gears at joints similarly to how you can do it via ctrl in Mujoco. I noticed the actuators are included in the nv_humanoid.xml example, but I'm not sure how they are used.
Thanks for your help!
@eric-heiden should be able to help.
Hi @areiner222,
We currently do not have support for the actuation models as they are in Mujoco. At the moment, all joint torques (all controllable degrees of freedom, irrespective of the definition in the Mujoco file) can be set in Model.joint_act, which applies these controls defined in generalized coordinates as body forces directly. You could implement your own actuation model based on this, or work directly in maximal coordinates by applying linear and angular forces to the state.body_f array at every simulation step.
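For illustration, a minimal numpy sketch of a Mujoco-style gear actuation model (tau = gear * clamped ctrl); the gear values are assumptions, and the final step of writing the torques into Model.joint_act (or converting them to body wrenches for state.body_f) is only indicated in comments since the exact Warp calls depend on the version:
import numpy as np

gear = np.array([100.0, 100.0, 50.0])  # assumed per-actuator gear ratios
ctrl_range = (-1.0, 1.0)               # clamp controls like Mujoco's ctrlrange

def actuate(ctrl):
    # Map normalized controls to generalized joint torques: tau = gear * clamp(ctrl).
    ctrl = np.clip(np.asarray(ctrl, dtype=np.float64), *ctrl_range)
    return gear * ctrl

tau = actuate([0.3, -0.8, 1.5])
print(tau)  # -> [ 30. -80.  50.]
# In a Warp simulation loop, these torques would be copied into model.joint_act
# before each simulate() call, or converted to wrenches and added to state.body_f.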
|
gharchive/issue
| 2023-08-09T22:44:57 |
2025-04-01T04:55:24.519017
|
{
"authors": [
"areiner222",
"eric-heiden",
"mmacklin"
],
"repo": "NVIDIA/warp",
"url": "https://github.com/NVIDIA/warp/issues/137",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2509803215
|
[Runtime Bug]: Max Payne 1 Fire Glitch
Describe the bug
Fire is drawn through character models.
Windows 11 27695
i9 13900kf @5.8ghz all cores
NVIDIA RTX 4090 | 560.94
How do you reproduce the bug?
Stand in front of the fireplace.
What is the expected behavior?
Fire not rendered through characters body.
Version
0.5.4
Logs
[00:51:56.589] info: No default config found for: C:\Program Files (x86)\Steam\steamapps\common\Max Payne\maxpayne.exe
[00:51:56.589] info: Trying to open config file: C:\Program Files (x86)\Steam\steamapps\common\Max Payne.trex\bridge.conf
[00:51:56.589] info: ==================
[00:51:56.589] info: NVIDIA RTX Remix Bridge Client
[00:51:56.589] info: ==================
[00:51:56.589] info: Version: remix-0.5.4+2d8aa921
[00:51:56.589] info: Loaded d3d9.dll from C:\Program Files (x86)\Steam\steamapps\common\Max Payne\maxpayne.exe
[00:51:56.591] warn: Injected DirectInput8Create proc detected!
[00:51:56.745] info: DirectInput8 hook attached.
[00:51:56.745] warn: Injected DirectInputCreate proc detected!
[00:51:56.751] info: DirectInput hook attached.
[00:51:56.751] info: Initializing new shared memory object.
[00:51:56.753] info: Initializing new shared memory object.
[00:51:56.753] info: Initializing new shared memory object.
[00:51:56.787] info: Initializing new shared memory object.
[00:51:56.816] info: Launching server with GUID 5d5b6344-17b9-4974-b8ec-0482a2a1d2a0
[00:51:56.822] info: Process set as DPI aware
[00:51:56.822] info: Sending SYN command, waiting for ACK from server...
[00:51:57.103] info: Ack received! Handshake completed! Telling server to continue waiting for commands...
[00:52:33.056] warn: Window extent != backbuffer extent in fullscreen mode. Forcing window extent to backbuffer size (3840x2160).
[00:52:33.056] info: Creating a NON thread-safe D3D9 device.
[00:52:35.239] info: Message channel UWM_REMIX_BRIDGE_REGISTER_THREADPROC_MSG handshake complete.
[00:52:36.135] info: DirectInput keyboard acquired
[00:52:36.135] info: DirectInput mouse acquired
[00:52:36.656] info: Remix UI deactivated.
[00:52:36.656] info: Remix UI deactivated.
[00:52:39.844] info: DirectInput keyboard unacquired
[00:52:39.844] info: DirectInput keyboard acquired
[00:52:41.954] warn: Non-exclusive DirectInput keyboard message skipped.
[01:08:07.581] info: DirectInput keyboard unacquired
[01:08:07.581] info: DirectInput mouse unacquired
[01:08:07.607] info: Client window became inactive, disabling timeouts for bridge client...
[01:08:07.623] info: About to unload bridge client.
[01:08:07.623] info: Sending Terminate command to server...
[01:08:09.436] info: Server notified that it has cleanly terminated. Cleaning up.
[01:08:09.583] info: Most recent Device Queue commands sent from Client
[01:08:09.583] info: Command sent: Terminate
[01:08:09.583] info: Command sent: IDirect3DDevice9Ex_Destroy
[01:08:09.583] info: Command sent: IDirect3DSurface9_Destroy
[01:08:09.583] info: Command sent: IDirect3DIndexBuffer9_Destroy
[01:08:09.583] info: Command sent: IDirect3DVertexBuffer9_Destroy
[01:08:09.583] info: Command sent: IDirect3DSurface9_Destroy
[01:08:09.583] info: Command sent: Bridge_UnlinkResource
[01:08:09.583] info: Command sent: IDirect3DSwapChain9_Destroy
[01:08:09.583] info: Command sent: IDirect3DDevice9Ex_SetTexture
[01:08:09.583] info: Command sent: IDirect3DDevice9Ex_SetTexture
[01:08:09.583] info: Most recent Device Queue commands received by Server
[01:08:09.583] info: Command received: Terminate
[01:08:09.583] info: Command received: IDirect3DDevice9Ex_Destroy
[01:08:09.583] info: Command received: IDirect3DSurface9_Destroy
[01:08:09.583] info: Command received: IDirect3DIndexBuffer9_Destroy
[01:08:09.583] info: Command received: IDirect3DVertexBuffer9_Destroy
[01:08:09.583] info: Command received: IDirect3DSurface9_Destroy
[01:08:09.583] info: Command received: Bridge_UnlinkResource
[01:08:09.583] info: Command received: IDirect3DSwapChain9_Destroy
[01:08:09.583] info: Command received: IDirect3DDevice9Ex_SetTexture
[01:08:09.583] info: Command received: IDirect3DDevice9Ex_SetTexture
[01:08:09.583] info: Most recent Module Queue commands sent from Client
[01:08:09.583] info: Command sent: IDirect3D9Ex_Destroy
[01:08:09.583] info: Command sent: IDirect3D9Ex_CreateDevice
[01:08:09.583] info: Command sent: IDirect3D9Ex_CheckDeviceMultiSampleType
[01:08:09.583] info: Command sent: IDirect3D9Ex_CheckDeviceMultiSampleType
[01:08:09.583] info: Command sent: IDirect3D9Ex_Destroy
[01:08:09.583] info: Most recent Module Queue commands received by Server
[01:08:09.583] info: Command received: IDirect3D9Ex_Destroy
[01:08:09.583] info: Command received: IDirect3D9Ex_CreateDevice
[01:08:09.583] info: Command received: IDirect3D9Ex_CheckDeviceMultiSampleType
[01:08:09.583] info: Command received: IDirect3D9Ex_CheckDeviceMultiSampleType
[01:08:09.583] info: Command received: IDirect3D9Ex_Destroy
[01:08:09.583] info: Shutdown cleanup successful, exiting now!
[01:08:09.583] info: DirectInput8 hook detached.
[01:08:09.583] info: DirectInput hook detached.
[01:08:09.586] info: [Uptime]: 972s
Crash dumps
No response
Media
https://youtu.be/8kMHJklvjzE?si=ued_KLd0JxuaZzSj
@shoober420 Hi shoober! This happens due to improperly tagged textures. You have probably something tagged as UI that shouldn't be.
My advice to you, which also goes for the white texture problem: Reset your Remix settings, and properly tag the UI textures again. Particle effects like fire, rain, snow, etc should be tagged as Particle. And blood stains, blood splatters, dirt decals, bullet holes, wall graffiti, etc should be tagged as Decal.
Also keep in mind that the aiming reticle doesn't currently work in this game when using Remix, so better just disable that, it's the plain white square texture.
|
gharchive/issue
| 2024-09-06T08:11:18 |
2025-04-01T04:55:24.538789
|
{
"authors": [
"PappaSlask",
"shoober420"
],
"repo": "NVIDIAGameWorks/rtx-remix",
"url": "https://github.com/NVIDIAGameWorks/rtx-remix/issues/603",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1867576206
|
Fvk pff tweaks
Closes #179, #180
For 179, there was a bug that I'd introduced - I hadn't realized that object columns in pandas could hold mixed types. So in the data dictionary, if a variable was present for "2010, 2020" it was interpreted correctly, but for the couple of cases where the year was only "2020" it was interpreted as a float and converted to NaN by .str.split(", ") rather than an array of years.
For 180, figured just keep it minimal - replace column names exactly rather than any sort of regex or trimming whitespace, just in case anything down the road requires anything different
Successful build - I've manually QA'd outputs. Made small fix to commit 2 this morning and tested changes locally
@fvankrieken just confirming that the metadata changes are intentional? Everything else looks good
They are, they've been re-generated. I might just remove them from git actually - before, they were an input both to us and to AE's work; now we're just generating them and uploading to DO, so it doesn't make a ton of sense to check them into the repo.
@alexrichey since the rest of the files in those folders are sort of archived from when we used them as inputs, I've reverted the checked-in metadata files for decennial to those versions (should we ever go back to pulling from an API) and the new generated ones are just pushed to S3 without checkin now
Successful job here, outputs have been qa'd
https://github.com/NYCPlanning/data-engineering/actions/runs/6002251634
|
gharchive/pull-request
| 2023-08-25T19:22:44 |
2025-04-01T04:55:24.639382
|
{
"authors": [
"fvankrieken"
],
"repo": "NYCPlanning/data-engineering",
"url": "https://github.com/NYCPlanning/data-engineering/pull/181",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
535084668
|
Broken metadata
There was a recent change in the format of the journal field in original -- it had been a string but is now a dictionary in some cases:
{'id': 'jour.1041807', 'title': 'Justice Quarterly'}
be mindful that changes in format have consequences in terms of troubleshooting and lost time elsewhere
if the original metadata doesn't stay consistent, we'll be forced to ignore it which is a big waste of labor and time
Stuff like this is happening because our focus with linkages has so far been primarily on title and dataset - we'll make sure to start checking for consistency in all fields now.
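As a sketch of that kind of consistency handling, here is a minimal normalizer for the journal field, assuming only the two formats shown above (plain string or dict); the actual RCPublications pipeline may handle this differently:
def normalize_journal(journal):
    # Return the journal title as a string regardless of the metadata format.
    if journal is None:
        return None
    if isinstance(journal, str):
        return journal
    if isinstance(journal, dict):
        return journal.get("title")
    raise TypeError(f"unexpected journal format: {type(journal).__name__}")

print(normalize_journal("Justice Quarterly"))
print(normalize_journal({"id": "jour.1041807", "title": "Justice Quarterly"}))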
Right, that attention to detail must all change rapidly now, given the schedule for the next several months and the fact that we're integrating a much larger volume of metadata from API calls, plus more entities, into the graph.
This particular error has been resolved if you want to close and @JasonZhangzy1757 is searching for and updating other instances.
Thank you
|
gharchive/issue
| 2019-12-09T17:45:38 |
2025-04-01T04:55:24.661643
|
{
"authors": [
"ceteri",
"srand525"
],
"repo": "NYU-CI/RCPublications",
"url": "https://github.com/NYU-CI/RCPublications/issues/29",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1000325031
|
Performance on ScanNet
Thanks for open-sourcing this nice project.
I ran attMPTI on the two datasets using the hyper-parameters from the paper. I can reproduce the reported performance on S3DIS but not on ScanNet. Concretely, in the 2-way 1-shot task on both split-0 and split-1 of ScanNet, I set n_prototypes=100, knn_neighbor=200, Gausian_sigma=5. The mean IoU I get is 39.5% on split-0 and 37.6% on split-1, which is about 3% lower than the paper. I have also tried Gausian_sigma=1 but the result changes only slightly.
Do you have these problems when conducting the experiments on ScanNet? Or do you know how I can fix them?
Hi,
The different performance might be due to two reasons: 1) insufficient pre-training; 2) sample bias when randomly sampling testing set.
Since ScanNet contains more examples than S3DIS, we did a longer pre-training on ScanNet (150 epochs).
You may want to try: 1) pre-train longer; 2) re-generate test set.
I can also send you my testing set on 2-way 1-shot setting if needed.
Thank you! I will have a try.
Hello! I ran attMPTI on the S3DIS-S1-N2-K5 task with the paper's hyper-parameters. The mIoU is about 61%, which is 6% lower than that in the paper. I have tried longer pre-training and regenerating the test set, but the improvement is negligible. Does the S3DIS-S1-N2-K5 task need other particular tricks?
Hi, it seems you used different parameters for different datasets and few-shot settings that you did not mention in the paper, and you only provide one set of parameters (for S3DIS-N2-K1) in the codebase. Can you provide your parameters for the other settings (i.e., S3DIS-N2-K5, Scannet-N2-K1, Scannet-N2-K5)? Thank you very much!
|
gharchive/issue
| 2021-09-19T14:15:04 |
2025-04-01T04:55:24.666232
|
{
"authors": [
"1170801121",
"Na-Z",
"lailvlong"
],
"repo": "Na-Z/attMPTI",
"url": "https://github.com/Na-Z/attMPTI/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
304539651
|
Updated README.md to include object compare
In reference to https://github.com/NagRock/ts-mockito/issues/93 on verifying calls with objects
What do you think about separate section in readme.md about matchers?
Yeah I could create a separate section. I'm not too familiar with it but I can start something.
Added a new section. Let me know if there is any other changes needed. Thanks
Codecov Report
Merging #94 into master will increase coverage by 0.31%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #94 +/- ##
==========================================
+ Coverage 94.58% 94.89% +0.31%
==========================================
Files 35 34 -1
Lines 609 607 -2
Branches 69 69
==========================================
Hits 576 576
+ Misses 24 22 -2
Partials 9 9
Impacted Files
Coverage Δ
src/stub/MethodStub.ts
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 93d6625...996b81f. Read the comment docs.
@NagRock I just started using this library yesterday so I'm a newbie, but I'm wondering about PRs like this which pass all tests and were introduced over a year ago. Is this library still being supported? Just a bit worried I may have gotten to the party too late 😆
Good to merge!
Merge when?
|
gharchive/pull-request
| 2018-03-12T21:05:07 |
2025-04-01T04:55:24.679029
|
{
"authors": [
"NagRock",
"alexanderfokau",
"codecov-io",
"designxtek",
"henrikra",
"ksnyde"
],
"repo": "NagRock/ts-mockito",
"url": "https://github.com/NagRock/ts-mockito/pull/94",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1909892853
|
False Negative on Conventional PR?
Overview
I'm using Conventional PR to prevent commits with bad messages from being merged into the main branch. I'm using the default conventional commit pattern
^(build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test){1}(\([\w\-\.]+\))?(!)?: ([\w ])+([\s\S]*)
However, it seems commit messages like test commit are still considered valid. It seems like Conventional PR doesn't filter commit messages correctly.
Expected Behavior
The pull request should be considered invalid, since test commit does not conform to the regex.
Actual Behavior
The pull request is considered as valid.
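For reference, a quick local check of the pattern using Python's re module (a hedged sketch; Conventional PR may use a different regex engine, so behavior could differ slightly):
import re

PATTERN = r"^(build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test){1}(\([\w\-\.]+\))?(!)?: ([\w ])+([\s\S]*)"

for message in ("test commit", "feat(parser): add wildcard support"):
    ok = re.match(PATTERN, message) is not None
    print(f"{message!r}: {'valid' if ok else 'invalid'}")
# 'test commit' is rejected here, which matches the expected behavior described above.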
Thanks for filing an issue!
The pattern you submitted seems fine, can you post the action workflow file here?
Sure, here you go
name: Check PR semantics
on:
pull_request_target:
jobs:
cpr:
runs-on: ubuntu-latest
steps:
- name: Check PR semantics
uses: Namchee/conventional-pr@latest
with:
access_token: ${{ secrets.ACCESS_TOKEN }}
title_pattern: ^(build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test){1}(\([\w\-\.]+\))?(!)?: ([\w ])+([\s\S]*)
/add-knowledge
Still writing results it seems. Moreover, there's an unknown @[object Object] there
[
{
"issue_number": 38,
"title": "False Negative on Conventional PR?",
"problem": "Overview\n\nI'm using Conventional PR to prevent commits with bad messages to be merged to the main branch. I'm using the default conventional commit pattern\n\n^(build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test){1}(\\\\(\\[\\w\\\\-\\\\.]+\\\\))?(!)?: (\\[\\w ])+(\\[\\s\\S]\\*)\n\nHowever, it seems commit messages like test commit is still considered to be valid. It seems like the conventional PR doesn't filter commit messages correctly.\n\nExpected Behavior\n\nThe pull request should be considered as invalid, since test commit does not conform the regex.\n\nActual Behavior\n\nThe pull request is considered as valid.",
"solution": "@[object Object]: It seems that the conventional PR is not filtering commit messages correctly. One possible solution is to modify the regular expression pattern to include a check for the minimum length of the commit message. This can be done by adding `{2,}` after `([\\w ])+` in the regular expression. This will ensure that the commit message contains at least two characters."
}
]
|
gharchive/issue
| 2023-09-23T14:35:22 |
2025-04-01T04:55:24.696651
|
{
"authors": [
"Namchee"
],
"repo": "Namchee/duplikat",
"url": "https://github.com/Namchee/duplikat/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2597263798
|
🛑 Namide is down
In f312781, Namide (https://namide.com/en) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Namide is back up in 9108810 after 5 minutes.
|
gharchive/issue
| 2024-10-18T11:25:38 |
2025-04-01T04:55:24.702167
|
{
"authors": [
"Namide"
],
"repo": "Namide/upptime",
"url": "https://github.com/Namide/upptime/issues/655",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
137626256
|
ViewNotFoundException even know the views are in the build output
Exception: Nancy.RequestExecutionException: Oh noes! ---> Nancy.ViewEngines.ViewNotFoundException: Unable to locate view 'Search/Index'
The problem is that the output folder actually contains the standard Views/Search/Index.cshtml.
This does not happen when running locally (through Visual Studio). It happens when running it from the build output.
Any ideas?
Is this a self-hosted application?
Correct, You can see how I am starting the application here:
https://github.com/tidusjar/RequestPlex/blob/master/RequestPlex.UI/Program.cs
using (WebApp.Start<Startup>(uri))
{
Console.WriteLine($"Request Plex is running on {uri}");
Console.WriteLine("Press any key to exit");
Console.ReadLine();
}
Yeah but what Nancy hosting adapter are you using? I'm guessing it's something with the root path that's not functioning correctly. Are you using any of the Nancy.Hosting.* packages?
@tidusjar Since your module is called SearchModule, I suspect you could just do return View["Index"]? Can you post the list of locations Nancy tried to search?
@thecodejunkie I am not using any Nancy.Hosting.* packages, but I am using Microsoft.Owin.Hosting.
@khellang I am not 100% sure (New to Nancy).
My Views folder structure is the following:
Views
---- Search
---- Index
---- Home etc
Here is the whole stacktrace:
Nancy.RequestExecutionException: Oh noes! ---> Nancy.ViewEngines.ViewNotFoundException: Unable to locate view 'Search/Index'
Currently available view engine extensions: sshtml,html,htm
Locations inspected: views/Search/Search/Index-en-US,views/Search/Search/Index,Search/Search/Index-en-US,Search/Search/Index,views/Search/Index-en-US,views/Search/Index,Search/Index-en-US,Search/Index
Root path: C:\Program Files\Request Plex\Release\
If you were expecting raw data back, make sure you set the 'Accept'-header of the request to correct format, for example 'application/json'
at Nancy.ViewEngines.DefaultViewFactory.GetRenderedView(String viewName, Object model, ViewLocationContext viewLocationContext)
at System.Dynamic.UpdateDelegates.UpdateAndExecute4[T0,T1,T2,T3,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3)
at Nancy.ViewEngines.DefaultViewFactory.RenderView(String viewName, Object model, ViewLocationContext viewLocationContext)
at System.Dynamic.UpdateDelegates.UpdateAndExecute4[T0,T1,T2,T3,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3)
at Nancy.Responses.Negotiation.ViewProcessor.Process(MediaRange requestedMediaRange, Object model, NancyContext context)
at System.Dynamic.UpdateDelegates.UpdateAndExecute4[T0,T1,T2,T3,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3)
at Nancy.Responses.Negotiation.DefaultResponseNegotiator.NegotiateResponse(IEnumerable`1 compatibleHeaders, NegotiationContext negotiationContext, NancyContext context)
at Nancy.Responses.Negotiation.DefaultResponseNegotiator.CreateResponse(IList`1 compatibleHeaders, NegotiationContext negotiationContext, NancyContext context)
at Nancy.Responses.Negotiation.DefaultResponseNegotiator.NegotiateResponse(Object routeResult, NancyContext context)
at System.Dynamic.UpdateDelegates.UpdateAndExecute3[T0,T1,T2,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2)
at Nancy.Routing.DefaultRouteInvoker.<>c__DisplayClassa.b__6(Task`1 completedTask)
--- End of inner exception stack trace ---
at Nancy.NancyEngine.InvokeOnErrorHook(NancyContext context, ErrorPipeline pipeline, Exception ex)
You have something whacky going on because it's looking in Search/Search - can you post the module?
You haven't installed the razor view engine package.
Currently available view engine extensions: sshtml,html,htm
^^ that's just the built in SSVE view engine, you need to install Nancy.ViewEngines.Razor in order to use Razor (cshtml) views.
He hast he package installed, might be a case of the .net linker removing the reference (deferred loading)
try putting
var foo = typeof(RazorViewEngine) anywhere in your code, before you actual start up Nancy. That will give us an indication if that's the problem
@grumpydev Strange, if I copy my build output manually over to the server it all works. But if I use the build output from the CI build it doesn't (Nancy.ViewEngines.Razor.dll is present there).
Could it be anything to do with requireReinstallation="true" being all over the packages file?
Is the CI build copying the MS Razor DLL too? Sounds like something is trying to be "clever" in what it deploys depending on what it thinks you need :)
Yeah... You can see what it copies over here: https://ci.appveyor.com/project/tidusjar/requestplex
Found the problem, I'm hoping you guys can explain. Inside my bin (output) folder I have the following
Debug (folder)
Release (folder)
Nancy.ViewEngines.Razor.BuildProviders.dll
Nancy.ViewEngines.Razor.dll
I copied the two dll's above onto the server and replaced what was existing and it now works.
Found it. The Nancy.ViewEngines.Razor.dll was 'blocked'; I had to right-click -> Properties -> Unblock. It now works... I wonder why Windows was blocking it.
Thank you for the assistance anyway :+1:
Ah, it will block it if you have a ZIP file you've downloaded from the internet, and if you extract without unblocking the ZIP it will block all the DLLs which then .net won't load :)
Interesting project btw :)
Ah that's why!
Thanks! It should be a replacement for the Original Plex Requests. I'm quite new to Nancy so I'm probably not doing things 100% correctly. I will be revisiting the documentation soon :)
@tidusjar one thing you can change in your modules is to call : base() on them. It will let you pass in a module path, and all routes declared in the same module will be relative to that. So no need to do
Get["/search"]
Get["/search/foo"]
Get["/search/bar"]
they will simply be
Get["/"]
Get["/foo"]
Get["/bar"]
https://github.com/NancyFx/Nancy/wiki/Exploring-the-nancy-module#using-modules-to-create-a-root-for-your-routes
What are your static files? All HTML files?
HTML files are considered Views, so you could do a directory browse to create an in-memory list of files in the folder. Then do a single route like:
Get["/statics/{file}"]...
This uses the _.file value to check whether it's a file from the directory (so that users can't navigate to arbitrary files by passing in ../web.config), and if it's OK then just do
return View[(string)_.file];
@thecodejunkie Thanks! I discovered that yesterday, handy!
@phillip-haydon My static files are .cshtml, .js and .css. Do you have some sort of example I could look at?
.cshtml or .html? If they are razor files I would assume you want to process them before returning them...
For .js and .css can't they just come from ./content folder which is already a folder that nancy ignores and serves as static content?
@phillip-haydon I'm using .cshtml so they are all razor files. And yes all of my .js and .css are in the /content folder.
So what are you suggesting? I'm a bit lost now.
Sorry @tidusjar I'm getting confused between issues being raised. Mixed this ticket with another one just raised an hour ago that I replied to.
|
gharchive/issue
| 2016-03-01T16:43:59 |
2025-04-01T04:55:24.717929
|
{
"authors": [
"grumpydev",
"khellang",
"phillip-haydon",
"thecodejunkie",
"tidusjar"
],
"repo": "NancyFx/Nancy",
"url": "https://github.com/NancyFx/Nancy/issues/2330",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
686980104
|
Download from tags list error
Prerequisites
[Y] Did you read FAQ section?
[Y] Did you test with the latest releases or commit ?
Description
Version 20200824
I choose "7. Download from tags list"; it reports an error after the config.
Version 20200726 does NOT have this problem, so I rolled back to 20200726.
[LOG]
===log from screen===
Input: 7
Tags list filename [tags.txt]: 0.txt
Use Wildcard[y/n]: y
Oldest first[y/n]: n
Bookmark Count:
Start Page (default=1):
End Page (default=0, 0 for no limit):
Start Date [YYYY-MM-DD]:
End Date [YYYY-MM-DD]:
Reading: 0.txt
Reading D:\pixiv\tag downloader\config.ini ...
Configuration loaded.
Error at process_tags() at page 1: (<class 'AttributeError'>, AttributeError("'list' object has no attribute 'startswith'"), <traceback object at 0x0B972E28>)
Error at process_tags_list(): (<class 'AttributeError'>, AttributeError("'list' object has no attribute 'startswith'"), <traceback object at 0x0B979208>)
Traceback (most recent call last):
File "PixivUtil2.py", line 1555, in main
File "PixivUtil2.py", line 1259, in main_loop
File "PixivUtil2.py", line 942, in menu_download_from_tags_list
File "PixivUtil2.py", line 184, in process_tags_list
File "PixivTagsHandler.pyc", line 48, in process_tags
File "PixivHelper.pyc", line 992, in decode_tags
AttributeError: 'list' object has no attribute 'startswith'
press enter to exit.
===end log from screen===
==log from file===
2020-08-27 14:21:30,241 - PixivUtil20200824 - INFO - Premium User: False.
2020-08-27 14:21:30,241 - PixivUtil20200824 - INFO - Locale =
2020-08-27 14:21:32,222 - PixivUtil20200824 - INFO - Taglist mode (7).
2020-08-27 14:21:40,597 - PixivUtil20200824 - ERROR - Error at process_tags() at page 1: (<class 'AttributeError'>, AttributeError("'list' object has no attribute 'startswith'"), <traceback object at 0x0B972E28>)
2020-08-27 14:21:40,597 - PixivUtil20200824 - ERROR - Traceback (most recent call last):
File "PixivTagsHandler.pyc", line 48, in process_tags
File "PixivHelper.pyc", line 992, in decode_tags
AttributeError: 'list' object has no attribute 'startswith'
2020-08-27 14:21:40,600 - PixivUtil20200824 - ERROR - Error at process_tags_list(): (<class 'AttributeError'>, AttributeError("'list' object has no attribute 'startswith'"), <traceback object at 0x0B979208>)
2020-08-27 14:21:40,600 - PixivUtil20200824 - ERROR - Traceback (most recent call last):
File "PixivUtil2.py", line 184, in process_tags_list
File "PixivTagsHandler.pyc", line 48, in process_tags
File "PixivHelper.pyc", line 992, in decode_tags
AttributeError: 'list' object has no attribute 'startswith'
2020-08-27 14:21:40,605 - PixivUtil20200824 - ERROR - Unknown Error: 'list' object has no attribute 'startswith'
Traceback (most recent call last):
File "PixivUtil2.py", line 1555, in main
File "PixivUtil2.py", line 1259, in main_loop
File "PixivUtil2.py", line 942, in menu_download_from_tags_list
File "PixivUtil2.py", line 184, in process_tags_list
File "PixivTagsHandler.pyc", line 48, in process_tags
File "PixivHelper.pyc", line 992, in decode_tags
AttributeError: 'list' object has no attribute 'startswith'
===end log from file===
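For context, the traceback boils down to a tag list being passed where a single tag string was expected, so .startswith() blows up. A minimal, purely illustrative Python sketch of that failure mode (hypothetical code, not the actual PixivHelper source):
# Hypothetical illustration of the failure mode; not the actual PixivHelper code.
def decode_tag(tag):
    # Expects a single tag string.
    return tag.startswith("%")

print(decode_tag("%E3%82%BF%E3%82%B0"))  # OK: True
try:
    decode_tag(["tag1", "tag2"])         # a list slips in where a string was expected
except AttributeError as err:
    print(err)                           # 'list' object has no attribute 'startswith'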
yep, got typo when migrating....
can try: https://github.com/Nandaka/PixivUtil2/releases/tag/v20200827-beta1
okay, will do, thank you so much.
v20200827-beta1 is okay, thank you so much.
|
gharchive/issue
| 2020-08-27T06:28:56 |
2025-04-01T04:55:24.732273
|
{
"authors": [
"Nandaka",
"forrenren"
],
"repo": "Nandaka/PixivUtil2",
"url": "https://github.com/Nandaka/PixivUtil2/issues/778",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2049755208
|
Can not push to Github
When I try to push my next js code to Github it fails.
fixed
Deleted files that are too large, such as node_modules files.
|
gharchive/issue
| 2023-12-20T03:09:16 |
2025-04-01T04:55:24.733389
|
{
"authors": [
"Nani1345"
],
"repo": "Nani1345/Business_Final_Project-Na",
"url": "https://github.com/Nani1345/Business_Final_Project-Na/issues/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1424605903
|
Please expand the README
A bit more information would be appreciated in the README:
is it necessary to have Rust and/or Silicon already installed?
does this offer a command (eg. :Silicon) - or how to use the plugin?
where is the default path the image is saved?
While I appreciate having a video, it should not replace documenting your work in the README. For me, the video controls cover the command in the command prompt, making it basically unreadable.
Yes, you need silicon to be installed; you can install the binary directly and skip installing the Rust toolchain.
Sadly, there's no :Silicon command currently, but it is planned.
I've posted the commands in the Reddit post.
Generate images of selected snippet.
require("silicon").visualise(false,false)
Generate images of whole buffer with the selected snippet being highlighted by lighter background.
require("silicon").visualise(true,false)
Copy the image of snippet to clipboard. (Only xclip is supported for Linux, Mac and windows are also supported)
require("silicon").visualise(false,true)
PS: First argument tells whether to show whole buffer content and second tells whether to copy to clipboard.
Fixed with https://github.com/NarutoXY/silicon.lua/commit/e1902924d3600a7b1131a75efc7a638c7d7e1d39
|
gharchive/issue
| 2022-10-26T19:43:11 |
2025-04-01T04:55:24.750466
|
{
"authors": [
"NarutoXY",
"zeitchef"
],
"repo": "NarutoXY/silicon.lua",
"url": "https://github.com/NarutoXY/silicon.lua/issues/3",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1244865556
|
[2pt] Create navigation
Create navigation menu that allows users to open all of the pages you created.
Pull request for this task
|
gharchive/issue
| 2022-05-23T09:28:03 |
2025-04-01T04:55:24.767791
|
{
"authors": [
"NataliaPoletaeva"
],
"repo": "NataliaPoletaeva/recipes",
"url": "https://github.com/NataliaPoletaeva/recipes/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
708174131
|
Discrete plugin not autofilling blank spaces to the left
I have the Discrete plugin working fine; the only thing is there seem to be some blank spaces to the left of some items (see pic attached). I know my line was Online during this time, but I'm not sure how to have Discrete fill that in with the Online color of green. I found a setting in options called "Expand 'from' query" and set it to 15 secs; that didn't work, so I set it to 86400 (24 hrs) and it still didn't change. So I'm scratching my head on how to work around this.
I know it's because Discrete just doesn't know :slight_smile: what happened before that time, but I know it was Online and it should be green. See additional pics attached showing what's in my database table and the query from Grafana for Discrete to use…
here is my query in Grafana
SELECT
$__timeEpoch(line_datetime),
case
when line_status = 'OFFLINE' then 0
when line_status = 'ONLINE' then 1
end as value,
line_name as metric
FROM
line_status_24hrs
WHERE
$__timeFilter(line_datetime)
ORDER BY
line_name ASC
We seem to have the same issue, but using Influx as a datasource.
The 'Expand from query' does not seem to be picked up anymore from grafana 7.1.x. It works fine in Grafana 7.0.x.
The query inspector shows that the from date is the unaltered one, as chosen in the datetime picker.
I uninstalled 7.2.0 and put on 7.0.6; the expand 'from' query did not work for me with the query I used above. I'm using MS SQL.
The option expand 'from' query option does not work for me either.
But you can workaround this problem with a slight modification to your query.
InfluxDB example:
SELECT "value" FROM "table WHERE time >= ${__from}ms - 24h AND time <= ${__to}ms
time >= ${__from}ms - 24h will look 24 hours in the past
Queries for other datasources may differ
Any news on this? Expand "from" query still seems to be broken.
|
gharchive/issue
| 2020-09-24T13:31:06 |
2025-04-01T04:55:24.776952
|
{
"authors": [
"herbit",
"modularTaco",
"thanosazlin",
"yves-bourgeois"
],
"repo": "NatelEnergy/grafana-discrete-panel",
"url": "https://github.com/NatelEnergy/grafana-discrete-panel/issues/126",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
311522257
|
show in legend text from value mappings
How do I show the text from value mappings in the legend?
I set up a value mapping, but the value rather than the text is displayed in the legend. How do I fix this?
The only value that displays its mapped text is null.
Thanks
on the "Mappings" tab
However, this method still has problems. For example, before choosing Conversion: none, the display looks like this:
But after Conversion: none is selected, it is displayed as follows:
It was necessary to select none under Numeric Conversion.
Thanks! This is very helpful and unintuitive at the same time.
|
gharchive/issue
| 2018-04-05T08:46:44 |
2025-04-01T04:55:24.781166
|
{
"authors": [
"do11",
"tvm2018",
"zhoumin8023"
],
"repo": "NatelEnergy/grafana-discrete-panel",
"url": "https://github.com/NatelEnergy/grafana-discrete-panel/issues/50",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
162282327
|
Question about "Best to use"
Hi!
First, sorry if it's not the place to post a question but I didn't find the correct one.
In the section "How best to use for your project" says:
Create a new framework for your application in src/client/app/frameworks to build your codebase out. Say your app is called AwesomeApp, then create awesomeapp.framework and start building out all your components and services in there. Create other frameworks as you see fit to organize.
I did all the steps until this one. I don't know how to start my project based on the framework I added.
Regards
Hi @yanqui92 fair question. The How best to use guideline is really just a suggested guideline so you are free to do whatever suits your project, but if using the suggestion for instance you could have:
src/client/app/frameworks/awesomeapp.framework/index.ts which would export all your components and services like so:
// components
export * from './components/home/home.component';
export * from './components/contact/contact.component';
// services
export * from './services/awesome.service';
// etc.
Then your other files can reference that. So for example, your src/client/main.web.ts file could bootstrap your home component (for instance) like so:
import {HomeComponent} from './app/frameworks/awesomeapp.framework/';
...
bootstrap(HomeComponent, BOOTSTRAP_PROVIDERS)
.catch((err:any) => console.error(err));
And using services would work much the same way throughout your project. Does that answer things for you?
|
gharchive/issue
| 2016-06-25T13:58:12 |
2025-04-01T04:55:24.801120
|
{
"authors": [
"NathanWalker",
"yanqui92"
],
"repo": "NathanWalker/angular2-seed-advanced",
"url": "https://github.com/NathanWalker/angular2-seed-advanced/issues/139",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
173367000
|
database version number
Hey, if I ship a new database within the app, how can I increment its version so that if it already exists, the current one gets deleted and the new one comes in?
We are also facing the same concern.
The easiest way to do this would be to open the existing database, get its version number, and check to see if it is older; if it is older, close the db and copy the new db over...
do you have any sample of an incremental-update implementation?
Thanks Nathan. Should have realized that without asking.
Anyone seeking this, do it like this:
After instantiating a new database connection like the documentation says:
new sqliteModule(databaseName, function (err, dbConnection) {....
    /* Check current version */
    database.version(function (error, version) {
        if (version < CONSTANTNEWVERSIONNUMBER) {
            /* Delete old database */
            sqliteModule.deleteDatabase(databaseName);
            if (!sqliteModule.exists(databaseName)) {
                sqliteModule.copyDatabase(databaseName);
                this.afterDBUpdate(); // to handle version incrementation and new connection.
            }
...
SqliteViewModel.prototype.afterDBUpdate = function () {
    new sqliteModule(databaseName, function (err, dbConnection) {
        database = dbConnection;
        database.version(CONSTANTNEWVERSIONNUMBER); // Sets new version number
    });
};
|
gharchive/issue
| 2016-08-26T04:17:42 |
2025-04-01T04:55:24.805752
|
{
"authors": [
"NathanaelA",
"sarvagayatri",
"sivamamidi-REISys",
"terhoraj"
],
"repo": "NathanaelA/nativescript-sqlite",
"url": "https://github.com/NathanaelA/nativescript-sqlite/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1004771256
|
Resurrection of 'Modify the Soak Tests for ADOT JS Downstream'
Description
I accidentally messed up #1, so I made this PR to keep a paper trail of the differences.
Closing because this is just to show the differences.
|
gharchive/pull-request
| 2021-09-22T21:01:28 |
2025-04-01T04:55:24.806876
|
{
"authors": [
"NathanielRN"
],
"repo": "NathanielRN/aws-otel-js",
"url": "https://github.com/NathanielRN/aws-otel-js/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2599808159
|
Autoupdate
Is your feature request related to a problem? Please describe.
I like to keep my ghidra up to date
Describe the solution you'd like
I would like it to (a) notify me of new versions, and (b) have an option to automatically download and upgrade Ghidra.
Describe alternatives you've considered
Additional context
If the team is open to such functionality, I would like to try to make a contribution.
(a) is already provided by Github. In the repo, click the down arrow on the "watch" icon and choose "custom".
a) @astrelsky's recommendation is best...GitHub should do a sufficient job in notifying you about new releases
b) We have no plans on supporting this.
|
gharchive/issue
| 2024-10-20T01:23:21 |
2025-04-01T04:55:24.815762
|
{
"authors": [
"astrelsky",
"ryanmkurtz",
"zglozman"
],
"repo": "NationalSecurityAgency/ghidra",
"url": "https://github.com/NationalSecurityAgency/ghidra/issues/7074",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
188140706
|
Remove useless printf.
Update Supported PHP version to 7.1
opencc_close應該交由使用者去負責處理,而且Global把od存起來,無法在同一個Script中處理繁轉簡和簡轉繁的動作,尤其fpm等php完結後module並不會shutdown,所有php pool都無法再open另一個config
|
gharchive/pull-request
| 2016-11-09T00:25:47 |
2025-04-01T04:55:24.894260
|
{
"authors": [
"shtse8"
],
"repo": "NauxLiu/opencc4php",
"url": "https://github.com/NauxLiu/opencc4php/pull/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1588496769
|
Add match page
The page will include a form that has the same fields from the Match object.
Partially resolved with #7.
|
gharchive/issue
| 2023-02-16T23:06:58 |
2025-04-01T04:55:24.902717
|
{
"authors": [
"guy-av"
],
"repo": "NeatTeam1943/2023-ScoutingApp",
"url": "https://github.com/NeatTeam1943/2023-ScoutingApp/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1082955374
|
RPC unavailable on Local Diag --> Unable to Reboot nor to Purge Data
Describe the bug
I have been monitoring a couple of miners for many days:
Behaviour: Switching status between 100% Sync and RPC unavailable on Local Diag continuously.
Rebooting: Unable to reboot.
Purge Data: Unable to Purge Data (see error Loop below)
Killing service 'upnp sha256:059223cdbf528c09361826a200e6baa8d4234ad6a9076c5a088d25f9255d53ff'
Failed to kill service 'upnp sha256:059223cdbf528c09361826a200e6baa8d4234ad6a9076c5a088d25f9255d53ff' due to '(HTTP code 409) unexpected - You cannot remove a running container 8af61954afd2aed9fe7e7215db68ff0c3465249ebff625210226dbd84f4c78ae. Stop the container before attempting removal or force remove '
Killing service 'upnp sha256:059223cdbf528c09361826a200e6baa8d4234ad6a9076c5a088d25f9255d53ff'
Failed to kill service 'upnp sha256:059223cdbf528c09361826a200e6baa8d4234ad6a9076c5a088d25f9255d53ff' due to '(HTTP code 409) unexpected - You cannot remove a running container 8af61954afd2aed9fe7e7215db68ff0c3465249ebff625210226dbd84f4c78ae. Stop the container before attempting removal or force remove '
Killing service 'upnp sha256:059223cdbf528c09361826a200e6baa8d4234ad6a9076c5a088d25f9255d53ff'
Failed to kill service 'upnp sha256:059223cdbf528c09361826a200e6baa8d4234ad6a9076c5a088d25f9255d53ff' due to '(HTTP code 409) unexpected - You cannot remove a running container 8af61954afd2aed9fe7e7215db68ff0c3465249ebff625210226dbd84f4c78ae. Stop the container before attempting removal or force remove '
e.g.:
https://dashboard.balena-cloud.com/devices/9a99fc079d4be877ab23278c86b3247f/summary
@fouad-semaan you can fix this using:
balena-engine container ls
Then get the container ID for the container with the issue... Then you can run...
balena-engine container rm -f 97db54d01814
Where 97db54d01814 is the container ID you copied from above.
This will force kill the container
IIRC @KevinWassermann94 already documented this 409 issue somewhere with the fix?
This is possibly a duplicate of #266
Yes this is partially duplicated regarding error 409 and Kevin has already executed the commands above on that miner.
The remaining issue is the RPC unavailable. It still showing up and each time in different aspect.
E.g. the storage of this one had surpassed 24GB, so I decided to purge it. After the purge, it had the RPC issue until I rebooted it.
https://dashboard.balena-cloud.com/devices/6d746d247172b7c7f47603213e390c29/summary
It will always show that RPC issue for a while until the miner container has loaded and has ingested the latest snapshot
|
gharchive/issue
| 2021-12-17T07:11:18 |
2025-04-01T04:55:24.908953
|
{
"authors": [
"fouad-semaan",
"shawaj"
],
"repo": "NebraLtd/helium-miner-software",
"url": "https://github.com/NebraLtd/helium-miner-software/issues/311",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1343897984
|
Omen weekly kit missing items (Atlas)
Omen weekly kit is missing XL exp candies, super repel, golden lenses, and golden hourglasses.
also mod note refunded
Fixed, thanks for the report.
|
gharchive/issue
| 2022-08-19T03:43:50 |
2025-04-01T04:55:24.911216
|
{
"authors": [
"fatyg5",
"lolanonymous"
],
"repo": "NebulaMC-GG/Bugs-and-Issues",
"url": "https://github.com/NebulaMC-GG/Bugs-and-Issues/issues/1750",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1926652233
|
[Feature] Bug explanation
Describe the bug
/ss and /socialspy on disables socialspy no matter what
To Reproduce
Steps to reproduce the behavior:
run /ss or /socialspy on
look at chat
Expected behavior
Should enable social spy not disable it
Username:
_Conj
Server name (all applicable):
All
This has been fixed, closing
|
gharchive/issue
| 2023-10-04T17:13:16 |
2025-04-01T04:55:24.914112
|
{
"authors": [
"chefburne",
"iamcjmxv"
],
"repo": "NebulaMC-GG/Bugs-and-Issues",
"url": "https://github.com/NebulaMC-GG/Bugs-and-Issues/issues/2303",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
162963281
|
Add "Pause Uploads" button
Right now if you're uploading a file, especially on a slow connection, it's going to take a while and you might need to stop uploading. I can't tell whether the current client supports pausing, and I don't know what will happen if I close the program, whether it will resume or I'll have to start over, etc.
Please add a "Pause Uploads" button that lets you shut down your computer or just use your bandwidth in another way without losing your upload or download progress.
Uploads will resume if you close and reopen the client, but I agree that that behavior is not obvious to the user. Currently there is no support for pausing and resuming an individual upload (other than closing the program). This is a feature that I need to implement on the backend.
|
gharchive/issue
| 2016-06-29T15:57:59 |
2025-04-01T04:55:24.915572
|
{
"authors": [
"AdamBLevine",
"lukechampine"
],
"repo": "NebulousLabs/Sia-UI",
"url": "https://github.com/NebulousLabs/Sia-UI/issues/335",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2347744589
|
🛑 Becpl Angular UI is down
In a0efe73, Becpl Angular UI ($BECPL_ANGULAR_URL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Becpl Angular UI is back up in 07f5d82 after 5 minutes.
|
gharchive/issue
| 2024-06-12T03:52:45 |
2025-04-01T04:55:24.932522
|
{
"authors": [
"NehalDamania"
],
"repo": "NehalDamania/becpl-uptime",
"url": "https://github.com/NehalDamania/becpl-uptime/issues/10361",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2371521265
|
🛑 Becpl Angular UI is down
In a99837e, Becpl Angular UI ($BECPL_ANGULAR_URL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Becpl Angular UI is back up in ac1439b after 5 minutes.
|
gharchive/issue
| 2024-06-25T02:55:55 |
2025-04-01T04:55:24.935032
|
{
"authors": [
"NehalDamania"
],
"repo": "NehalDamania/becpl-uptime",
"url": "https://github.com/NehalDamania/becpl-uptime/issues/10834",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2406094445
|
🛑 Becpl Angular UI is down
In 2c689af, Becpl Angular UI ($BECPL_ANGULAR_URL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Becpl Angular UI is back up in 20f8a7b after 6 minutes.
|
gharchive/issue
| 2024-07-12T17:57:32 |
2025-04-01T04:55:24.937299
|
{
"authors": [
"NehalDamania"
],
"repo": "NehalDamania/becpl-uptime",
"url": "https://github.com/NehalDamania/becpl-uptime/issues/11480",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1973621794
|
🛑 Becpl Angular UI is down
In 6e4ce77, Becpl Angular UI ($BECPL_ANGULAR_URL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Becpl Angular UI is back up in 75a42f7 after 5 minutes.
|
gharchive/issue
| 2023-11-02T06:53:01 |
2025-04-01T04:55:24.939637
|
{
"authors": [
"NehalDamania"
],
"repo": "NehalDamania/becpl-uptime",
"url": "https://github.com/NehalDamania/becpl-uptime/issues/1291",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2000597183
|
🛑 Becpl Angular UI is down
In ad0a625, Becpl Angular UI ($BECPL_ANGULAR_URL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Becpl Angular UI is back up in 475bf97 after 7 minutes.
|
gharchive/issue
| 2023-11-18T22:05:48 |
2025-04-01T04:55:24.941761
|
{
"authors": [
"NehalDamania"
],
"repo": "NehalDamania/becpl-uptime",
"url": "https://github.com/NehalDamania/becpl-uptime/issues/2113",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2013838517
|
🛑 Becpl Angular UI is down
In 24c8b61, Becpl Angular UI ($BECPL_ANGULAR_URL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Becpl Angular UI is back up in 3b06e62 after 5 minutes.
|
gharchive/issue
| 2023-11-28T07:41:03 |
2025-04-01T04:55:24.944120
|
{
"authors": [
"NehalDamania"
],
"repo": "NehalDamania/becpl-uptime",
"url": "https://github.com/NehalDamania/becpl-uptime/issues/2586",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2092082532
|
🛑 Becpl Angular UI is down
In 6089f7b, Becpl Angular UI ($BECPL_ANGULAR_URL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Becpl Angular UI is back up in c5a155c after 5 minutes.
|
gharchive/issue
| 2024-01-20T14:53:38 |
2025-04-01T04:55:24.946232
|
{
"authors": [
"NehalDamania"
],
"repo": "NehalDamania/becpl-uptime",
"url": "https://github.com/NehalDamania/becpl-uptime/issues/5132",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2139849279
|
🛑 Becpl Angular UI is down
In e1aa6c3, Becpl Angular UI ($BECPL_ANGULAR_URL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Becpl Angular UI is back up in 93b68bd after 42 minutes.
|
gharchive/issue
| 2024-02-17T05:57:41 |
2025-04-01T04:55:24.948532
|
{
"authors": [
"NehalDamania"
],
"repo": "NehalDamania/becpl-uptime",
"url": "https://github.com/NehalDamania/becpl-uptime/issues/6260",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2177165645
|
🛑 Becpl Angular UI is down
In 8346401, Becpl Angular UI ($BECPL_ANGULAR_URL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Becpl Angular UI is back up in 4ecfdfe after 8 minutes.
|
gharchive/issue
| 2024-03-09T10:09:07 |
2025-04-01T04:55:24.950846
|
{
"authors": [
"NehalDamania"
],
"repo": "NehalDamania/becpl-uptime",
"url": "https://github.com/NehalDamania/becpl-uptime/issues/7153",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
564292781
|
AlgebraicSimplification simplification
Change Multiply and Add to use simple node comparisons instead of the pattern matcher. With my test model BERT Large Squad the pass used to take 32,335ms and now takes 250ms or 130X faster.
I see the error: /home/dockuser/ngraph/src/ngraph/pass/algebraic_simplification.cpp:67:39: error: unused function 'get_broadcast_label' [-Werror,-Wunused-function]
|
gharchive/pull-request
| 2020-02-12T21:43:32 |
2025-04-01T04:55:25.027436
|
{
"authors": [
"rkimballn1",
"silee2"
],
"repo": "NervanaSystems/ngraph",
"url": "https://github.com/NervanaSystems/ngraph/pull/4326",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
143437579
|
class file for rx.Observable not found when creating hello-world
I am getting the "class file for rx.Observable not found" error when trying to compile after following the instructions in [1].
I needed to add the rxjava dependency to resolve the build issue. It would be better to add this information to the README:
<dependency>
  <groupId>io.reactivex</groupId>
  <artifactId>rxjava</artifactId>
  <version>1.1.2</version>
</dependency>
[1]. https://github.com/Netflix/Hystrix/blob/master/README.md
Can you more explicit about which instructions you were following?
When I set up a new Gradle project and added the single line:
compile group: 'com.netflix.hystrix', name: 'hystrix-core', version: '1.5.1'
, I was able to start an application with Hystrix working.
See the project at: https://github.com/mattrjacobs/minimal-hystrix
Sorry @mattrjacobs for not providing the project.
I just created a Maven project and tried to run it from the IDE, and it complains with the error below.
Error:(21, 8) java: cannot access rx.Observable
class file for rx.Observable not found
Here is the github project I've used to test. [1].
[1]. https://github.com/arunasujith/hystrix-test
Ah, I think I see what's happening. There was an issue with the build process for 1.4.0-1.4.19 that didn't mark dependencies properly. If you use the most recent versions of the 1.4.x or 1.5.x series (1.4.25/1.5.1), you shouldn't need to include rxjava (or any other Hystrix dependency) explicitly.
This was issue #730
|
gharchive/issue
| 2016-03-25T06:34:15 |
2025-04-01T04:55:25.080015
|
{
"authors": [
"arunasujith",
"mattrjacobs"
],
"repo": "Netflix/Hystrix",
"url": "https://github.com/Netflix/Hystrix/issues/1159",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
122932517
|
EtcdConfigurationSource doesn't support "update" action
If I add a callback to a property that already exists in etcd and change its value using the PUT command, the callback is not called. The callback is called only if I create a new key (the "set" command). To support property updates, the "update" command should also be implemented.
V1 is deprecated and won't be maintained anymore.
|
gharchive/issue
| 2015-12-18T11:34:50 |
2025-04-01T04:55:25.081131
|
{
"authors": [
"dborovikov",
"rgallardo-netflix"
],
"repo": "Netflix/archaius",
"url": "https://github.com/Netflix/archaius/issues/373",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
760810868
|
A couple of security issues found in "central-account" Terraform
I ran Cloudrail against the central-account, and it found a couple of interesting things:
Rule: Ensure no used security groups allow ingress from 0.0.0.0/0 or ::/0 to port 22 (SSH)
- 1 Resources Exposed:
-----------------------------------------------
- Exposed Resource: [module.server.aws_instance.this[0]] (.terraform/modules/server/main.tf:5)
Violating Resource: [aws_security_group.external] (security_group.tf:1)
Evidence:
Internet
| Subnet module.network.aws_subnet.public[0] has Internet Gateway
| Instance module.server.aws_instance.this[0] is on module.network.aws_subnet.public[0]
| Subnet routes traffic from instance to Internet Gateway
| Subnet uses NACL nacl-pseudo-7b63af1c-0465-416e-8fc5-67230d1c9c17 which allows port 22
| Instance uses Security Group ['aws_security_group.external']
| Security Group allows port 22
Instance
-----------------------------------------------
Rule: Ensure IMDSv2 is used and IMDSv1 is disabled
- 1 Resources Exposed:
-----------------------------------------------
- Exposed Resource: [module.server.aws_instance.this[0]] (.terraform/modules/server/main.tf:5)
Violating Resource: [module.server.aws_instance.this[0]] (.terraform/modules/server/main.tf:5)
Evidence:
| The EC2 module.server.aws_instance.this[0] is allowing IMDSv1
-----------------------------------------------
Rule: Ensure VPC Endpoint for S3 is enabled in all route tables in use in a VPC
- 1 Resources Exposed:
-----------------------------------------------
- Exposed Resource: [module.network.aws_vpc.this[0]] (.terraform/modules/network/main.tf:24)
Violating Resource: [module.network.aws_route_table.public[0]] (.terraform/modules/network/main.tf:101)
Evidence:
| The VPC module.network.aws_vpc.this[0] has a S3 Endpoint Gateway, but module.network.aws_subnet.public[0] uses module.network.aws_route_table.public[0], which does not have a route to the endpoint gateway
-----------------------------------------------
Rule: Ensure VPC Endpoint for DYNAMODB is enabled in all route tables in use in a VPC
- 1 Resources Exposed:
-----------------------------------------------
- Exposed Resource: [module.network.aws_vpc.this[0]] (.terraform/modules/network/main.tf:24)
Violating Resource: [module.network.aws_route_table.public[0]] (.terraform/modules/network/main.tf:101)
Evidence:
| The VPC module.network.aws_vpc.this[0] has a DYNAMODB Endpoint Gateway, but module.network.aws_subnet.public[0] uses module.network.aws_route_table.public[0], which does not have a route to the endpoint gateway
The first one about the 0.0.0.0: it's true that @kmcquade and @castrapel mentioned in the examples NOT to use 0.0.0.0, but maybe we should have the example set to the private subnet by default, to avoid mistakes?
The second one is a limitation of the module used to create the server; I opened a ticket for it. However, having a server that is publicly accessible without IMDSv2 can cause problems (especially with a web server).
The following have to do with how the VPC module is being used. You're asking for S3 and DynamoDB VPC endpoints, but not actually using them.
I understand this is an example for people to adopt and adapt, but I think it may be a good idea to update the template. Happy to do it if you give the thumbs up.
Hi @yi2020, thank you for pointing these out. Please feel free to fork this repository and update the template. We're not heavy users of Terraform at Netflix, since we have our own internal templating engines and CI/CD pipelines that predate Terraform. We really appreciate the report.
On it.
|
gharchive/issue
| 2020-12-10T01:51:25 |
2025-04-01T04:55:25.085316
|
{
"authors": [
"castrapel",
"yi2020"
],
"repo": "Netflix/consoleme",
"url": "https://github.com/Netflix/consoleme/issues/260",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
578570462
|
Fix producer metric reporting non-published version
On each cycle that is run, a toVersion is computed.
The currentVersion metric of the producer is updated on each cycle to the above version. I find that misleading because when there is NO state change, there will be no new version published but the metric will still be updated, pointing to a non-existent version.
If I understood correctly, the producer's readState will be null when there is no state change so I removed the else branch in which the version update happened.
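Roughly, the behaviour change amounts to the following (illustrative Python pseudo-logic with made-up names, not Hollow's actual API):
# Illustrative only: names and structure are invented, not Hollow's API.
class ProducerMetrics:
    def __init__(self):
        self.current_version = None  # last version actually announced

def run_cycle(publish, metrics, to_version):
    read_state = publish(to_version)          # assumed to return None when nothing changed
    if read_state is not None:
        metrics.current_version = to_version  # only report versions that were published
    # The removed else-branch used to update the metric here too, which made it
    # point at a version that was never announced.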
Also added a gradle setting to build the subprojects in parallel.
Use-case for this metric: we want to add an alert for consumers that are out-of-sync with the producer for a given amount of time. For that we look at the consumer's reported version and at the producer's version.
ping @toolbear @rpalcolea @akhaku
Closing this in favor of https://github.com/Netflix/hollow/pull/459
|
gharchive/pull-request
| 2020-03-10T13:15:08 |
2025-04-01T04:55:25.088953
|
{
"authors": [
"AlexandruGhergut"
],
"repo": "Netflix/hollow",
"url": "https://github.com/Netflix/hollow/pull/458",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
709706087
|
Getting DB login error when calling plugin-test
I am plagued by a problem:
When calling the plugin test modules I always encounter a DB login problem:
pytest -v lemur/plugins/lemur_digicert/tests/test_digicert.py
gives (among others):
platform linux -- Python 3.7.7, pytest-6.0.2, py-1.9.0, pluggy-0.13.1 -- /opt/lemur/lemur/bin/python
cachedir: .pytest_cache
rootdir: /opt/lemur, configfile: setup.cfg
plugins: celery-4.4.2, Faker-4.1.3, requests-mock-1.8.0
collected 9 items
lemur/plugins/lemur_digicert/tests/test_digicert.py::test_determine_validity_years PASSED [ 11%]
lemur/plugins/lemur_digicert/tests/test_digicert.py::test_determine_end_date PASSED [ 22%]
lemur/plugins/lemur_digicert/tests/test_digicert.py::test_map_fields_with_validity_years PASSED [ 33%]
lemur/plugins/lemur_digicert/tests/test_digicert.py::test_map_fields_with_validity_end_and_start PASSED [ 44%]
lemur/plugins/lemur_digicert/tests/test_digicert.py::test_map_cis_fields_with_validity_years ERROR [ 55%]
lemur/plugins/lemur_digicert/tests/test_digicert.py::test_map_cis_fields_with_validity_end_and_start ERROR [ 66%]
lemur/plugins/lemur_digicert/tests/test_digicert.py::test_signature_hash PASSED [ 77%]
lemur/plugins/lemur_digicert/tests/test_digicert.py::test_issuer_plugin_create_certificate PASSED [ 88%]
lemur/plugins/lemur_digicert/tests/test_digicert.py::test_cancel_ordered_certificate PASSED [100%]
============================================ ERRORS =============================================
___________________ ERROR at setup of test_map_cis_fields_with_validity_years ___________________
Traceback (most recent call last):
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2285, in _wrap_pool_connect
return fn()
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 363, in connect
return _ConnectionFairy._checkout(self)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 773, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
rec = pool._do_get()
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 139, in _do_get
self.dec_overflow()
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 69, in exit
exc_value, with_traceback=exc_tb,
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 178, in raise
raise exception
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 136, in _do_get
return self._create_connection()
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
return _ConnectionRecord(self)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 437, in init
self.__connect(first_connect_check=True)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 657, in _connect
pool.logger.debug("Error on connect(): %s", e)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 69, in exit
exc_value, with_traceback=exc_tb,
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 178, in raise
raise exception
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 652, in __connect
connection = pool._invoke_creator(self)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
return dialect.connect(*cargs, **cparams)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 490, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/opt/lemur/lemur/lib/python3.7/site-packages/psycopg2/init.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: password authentication failed for user "lemur"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/lemur/lemur/tests/conftest.py", line 78, in db
_db.drop_all()
File "/opt/lemur/lemur/lib/python3.7/site-packages/flask_sqlalchemy/init.py", line 1047, in drop_all
self._execute_for_all_tables(app, bind, 'drop_all')
File "/opt/lemur/lemur/lib/python3.7/site-packages/flask_sqlalchemy/init.py", line 1031, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/sql/schema.py", line 4347, in drop_all
ddl.SchemaDropper, self, checkfirst=checkfirst, tables=tables
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2057, in _run_visitor
with self._optional_conn_ctx_manager(connection) as conn:
File "/usr/local/lib/python3.7/contextlib.py", line 112, in enter
return next(self.gen)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2049, in _optional_conn_ctx_manager
with self._contextual_connect() as conn:
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2251, in _contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2289, in wrap_pool_connect
e, dialect, self
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1555, in handle_dbapi_exception_noconnection
sqlalchemy_exception, with_traceback=exc_info[2], from=e
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 178, in raise
raise exception
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2285, in _wrap_pool_connect
return fn()
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 363, in connect
return _ConnectionFairy._checkout(self)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 773, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
rec = pool._do_get()
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 139, in _do_get
self.dec_overflow()
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 69, in exit
exc_value, with_traceback=exc_tb,
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 178, in raise
raise exception
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 136, in _do_get
return self._create_connection()
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
return _ConnectionRecord(self)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 437, in init
self.__connect(first_connect_check=True)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 657, in _connect
pool.logger.debug("Error on connect(): %s", e)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 69, in exit
exc_value, with_traceback=exc_tb,
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 178, in raise
raise exception
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 652, in __connect
connection = pool._invoke_creator(self)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
return dialect.connect(*cargs, **cparams)
File "/opt/lemur/lemur/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 490, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/opt/lemur/lemur/lib/python3.7/site-packages/psycopg2/init.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: password authentication failed for user "lemur"
(Background on this error at: http://sqlalche.me/e/e3q8)
<...further similar messages from other modules ommitted)
your db might be busted
https://github.com/Netflix/lemur/blob/master/Makefile#L41-L49
To be able to run all tests via make test you need to install Postgres locally. make reset-db should ideally create the DB environment required for the testing, but it is currently assumed that Postgres is installed and has the superuser lemur; a quick connectivity check is sketched after the setup commands below.
# macOS
brew services start postgresql
psql lemur -c "CREATE USER lemur with PASSWORD 'lemur';"
psql lemur -c "ALTER USER lemur with superuser;"
psql lemur -c "create extension pg_trgm;"
# Ubuntu
service postgresql status # Verify postgresql is running
service postgresql start # To start if it not running
sudo -u postgres -i
createdb lemur
psql
CREATE USER lemur with PASSWORD 'lemur';
ALTER USER lemur with superuser;
create extension pg_trgm;
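As a quick sanity check that the test credentials above are in place before running pytest, a connection like this should succeed (assuming the default lemur/lemur setup on localhost):
import psycopg2

# Assumes the local test setup above: database "lemur", superuser "lemur"/"lemur".
conn = psycopg2.connect(dbname="lemur", user="lemur", password="lemur",
                        host="localhost", port=5432)
with conn.cursor() as cur:
    cur.execute("SELECT current_user, version();")
    print(cur.fetchone())
conn.close()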
@hosseinsh Thank you for the explanation. With the default password it works !
|
gharchive/issue
| 2020-09-27T09:39:33 |
2025-04-01T04:55:25.115773
|
{
"authors": [
"hosseinsh",
"sirferl"
],
"repo": "Netflix/lemur",
"url": "https://github.com/Netflix/lemur/issues/3158",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
722559427
|
Use cab_compliant option instead of authority name list
Removed PUBLIC_CA_AUTHORITY_NAMES and instead using option cab_compliant for authorities ( introduced in #3188 )
Coverage increased (+0.02%) to 58.677% when pulling 32c0c5fb003cbf2f59486cb9b3bc8e9c1b7f0a0b on charhate:cab_compliant into cd29b2b870ae7bc89be5bf92491cea0641b48b1b on Netflix:master.
|
gharchive/pull-request
| 2020-10-15T18:13:28 |
2025-04-01T04:55:25.117979
|
{
"authors": [
"charhate",
"coveralls"
],
"repo": "Netflix/lemur",
"url": "https://github.com/Netflix/lemur/pull/3190",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
725866563
|
Support for SNS Expiration Notifications
This change adds support for expiration notifications sent via AWS SNS. Topics are configured on a per-notification basis. Owner/security notifications will still be sent via email.
Coverage increased (+0.4%) to 59.595% when pulling 4f552cb636252c77054879cc736555321084ff8c on jtschladen:sns into ea33fe997941cd3dd208a7b82a75f58a62ad4188 on Netflix:master.
|
gharchive/pull-request
| 2020-10-20T19:04:16 |
2025-04-01T04:55:25.119679
|
{
"authors": [
"coveralls",
"jtschladen"
],
"repo": "Netflix/lemur",
"url": "https://github.com/Netflix/lemur/pull/3201",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
190863503
|
Fixing two policy diff cases.
If provided with a string, json.loads() is used, which often means
strings in the extracted structure are of type unicode. If the same
structure is also provided as a dict, there will often be problems where
a str and a unicode are seen as different objects. To fix, we simply dump
any datastructures and read them back in with the json lib.
The lib was not properly handling null values and was ignoring them entirely
in the output.
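A minimal sketch of the round-trip normalization described above (illustrative only; the helper name is made up and this is not the actual policy-diff code):
import json

def normalize(policy):
    # Round-trip through the json lib so str/unicode differences (and anything
    # else json canonicalizes) don't make two equal policies look different.
    if isinstance(policy, str):
        return json.loads(policy)
    return json.loads(json.dumps(policy))

assert normalize('{"Action": "s3:GetObject"}') == normalize({u"Action": u"s3:GetObject"})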
Coverage decreased (-0.03%) to 51.67% when pulling 49f7cabdd4af51368e1283c46eb14f9395194a4f on policydiff_fix into a088c7ecfc66ad9b82902ace5f91650c284cba58 on develop.
|
gharchive/pull-request
| 2016-11-21T23:14:17 |
2025-04-01T04:55:25.122469
|
{
"authors": [
"coveralls",
"monkeysecurity"
],
"repo": "Netflix/security_monkey",
"url": "https://github.com/Netflix/security_monkey/pull/453",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
151221029
|
Preconditions takes a formatted string
Preconditions accepts a format string, so that BasicTag will not create a new string unless there is an exception
LGTM. Thanks!
|
gharchive/pull-request
| 2016-04-26T19:38:05 |
2025-04-01T04:55:25.123276
|
{
"authors": [
"brharrington",
"robertroeser"
],
"repo": "Netflix/servo",
"url": "https://github.com/Netflix/servo/pull/382",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
415386943
|
Use different GRPC Channel per client connector
So we are not always limited to connecting to the same server type. For example, we may connect to TitusFederation to get jobs/tasks, but when fetching agent data we have to call TitusGateway directly, as the agent management API is not exposed via TitusFederation.
probably worth noting that this will only work for single-cell stacks
|
gharchive/pull-request
| 2019-02-28T00:14:51 |
2025-04-01T04:55:25.124185
|
{
"authors": [
"fabiokung",
"tbak"
],
"repo": "Netflix/titus-control-plane",
"url": "https://github.com/Netflix/titus-control-plane/pull/495",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
156148806
|
411 Content Length required issue
For some reason, my home ISP throws a 411 Content Length Required error when I try to post something to the cloud REST endpoint through a locally running Zuul proxy.
So I tried setting content-length header manually from a custom filter like:
ctx.addZuulRequestHeader("content-length", ctx.getRequest().getHeader("Content-Length"));
But it throws
com.netflix.zuul.exception.ZuulException: Forwarding error...
Caused by: org.apache.http.ProtocolException: Content-Length header already present
at org.apache.http.protocol.RequestContent.process(RequestContent.java:95)
at org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:131)
at org.apache.http.protocol.HttpRequestExecutor.preProcess(HttpRequestExecutor.java:165)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:485)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
I'm using Zuul 1.0.28. Am I doing anything wrong? I'm not even sure whether the original problem would disappear if I managed to set this header manually. Any suggestion would be very much appreciated.
P.S.: The ISP allows my POST request when it's done through Feign.
You should not need to set the Content-Length header in a filter because apache httpclient should be setting it itself. And as you found out, it will throw an error if you try to set it yourself.
So we need to find out if for some reason the header is not getting sent from Zuul. So can you try either:
Enable apache httpclient debug logging (log4j.logger.org.apache.http=DEBUG - https://hc.apache.org/httpcomponents-client-ga/logging.html), which will then log exactly what headers are being sent over the wire. And therefore see if Content-Length is being sent.
Configure zuul to proxy to http://httpbin.org/post and then see if it is receiving the Content-Length header.
I am getting this error too with the latest zuul com.netflix.zuul:zuul-core:1.2.0, with the scripts in master. But it only happens when proxying under certain conditions. @beku8 did you get to solve it?
@kerumai Thanks for the tip. I'm using spring cloud, which provides a "trace" endpoint.
Here's the request from browser to the server:
info: {
method: "PUT",
path: "/api/core/rest/location/amarbayasgalant",
headers: {
request: {
host: "localhost:9000",
connection: "keep-alive",
content-length: "663",
pragma: "no-cache",
cache-control: "no-cache",
authorization: "...authorization omitted",
accept: "application/json, text/plain, */*",
origin: "http://localhost:9000",
user-agent: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.63 Safari/537.36",
content-type: "application/json;charset=UTF-8",
referer: "http://localhost:9000/manage/mn/reference/location",
accept-encoding: "gzip, deflate, sdch",
accept-language: "en-US,en;q=0.8,bg;q=0.6,ru;q=0.4",
cookie: "....cookie values omitted..."
},
response: {...response omitted...}
}
}
Well, at this point the browser sends the Content-Length properly, but from the Zuul server to the microservice, the Content-Length is not being sent.
info: {
method: "PUT",
path: "/rest/location/amarbayasgalant",
query: "?_t=mn",
remote: true,
proxy: "api/core",
headers: {
request: {
accept: "application/json, text/plain, */*",
origin: "http://localhost:9000",
user-agent: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.63 Safari/537.36",
content-type: "application/json;charset=UTF-8",
referer: "http://localhost:9000/manage/mn/reference/location",
accept-language: "en-US,en;q=0.8,bg;q=0.6,ru;q=0.4",
authorization: "...Bearer token omitted",
x-forwarded-host: "localhost",
x-forwarded-proto: "http",
x-forwarded-prefix: "/api/core",
x-forwarded-port: "9000",
x-forwarded-for: "127.0.0.1",
Accept-Encoding: "gzip"
},
response: {
Server: "httppd",
Date: "Wed, 15 Jun 2016 11:30:35 GMT",
Content-Type: "text/html",
Content-Length: "3892",
Connection: "close",
status: "411"
}
},
body: "... json body omitted"
}
I'm using a Spring Cloud project which configures Zuul under the hood; is it perhaps a problem with Spring Cloud?
I just created a very simple demo app here, https://github.com/beku8/zuul-411, which just posts an empty body to httpbin.org using Zuul. It's a simple spring-boot/cloud app you can run with Maven or Eclipse. If you check out the /trace endpoint: same issue, no Content-Length being set.
I think because most of the ISPs don't "require" content-length, this issue was not looked at before.
Thanks for taking the time to make that https://github.com/beku8/zuul-411 repo @beku8 . That made it a lot easier for me to debug.
But when I run it, the Content-Type header does get sent according to httpbin. Here's the output from me doing a curl against your zuul-411 project:
$ curl -X PUT --data-ascii "blah" -H "Content-Type: text/plain" http://localhost:8080/api/httpbin/put
{
"args": {},
"data": "blah",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Content-Length": "4",
"Content-Type": "text/plain",
"Host": "httpbin.org",
"User-Agent": "curl/7.48.0",
"X-Forwarded-Host": "localhost",
"X-Forwarded-Prefix": "/api/httpbin"
},
"json": null,
"origin": "0:0:0:0:0:0:0:1, 50.152.174.195",
"url": "http://localhost/put"
}
I think the reason the Content-Length is not being shown in the Spring Cloud trace endpoint is that at the point trace records the request, the header is not yet added; it only gets added later on once httpclient starts processing it (and is therefore hidden from the Zuul and Spring Cloud code).
Also, when I ran your sample app, I got the wire-level logging in stdout, which shows exactly what httpclient has sent over the wire too, eg:
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> PUT /put HTTP/1.1
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> user-agent: curl/7.48.0
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> accept: */*
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> content-type: text/plain
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> x-forwarded-host: localhost
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> x-forwarded-proto: http
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> x-forwarded-prefix: /api/httpbin
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> x-forwarded-port: 8080
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> x-forwarded-for: 0:0:0:0:0:0:0:1
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> Accept-Encoding: gzip
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> Content-Length: 4
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> Host: httpbin.org
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.headers : http-outgoing-3 >> Connection: Keep-Alive
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "PUT /put HTTP/1.1[\r][\n]"
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "user-agent: curl/7.48.0[\r][\n]"
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "accept: */*[\r][\n]"
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "content-type: text/plain[\r][\n]"
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "x-forwarded-host: localhost[\r][\n]"
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "x-forwarded-proto: http[\r][\n]"
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "x-forwarded-prefix: /api/httpbin[\r][\n]"
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "x-forwarded-port: 8080[\r][\n]"
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "x-forwarded-for: 0:0:0:0:0:0:0:1[\r][\n]"
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "Accept-Encoding: gzip[\r][\n]"
2016-06-21 23:17:08.407 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "Content-Length: 4[\r][\n]"
2016-06-21 23:17:08.408 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "Host: httpbin.org[\r][\n]"
2016-06-21 23:17:08.408 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "Connection: Keep-Alive[\r][\n]"
2016-06-21 23:17:08.408 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "[\r][\n]"
2016-06-21 23:17:08.408 DEBUG 26197 --- [nio-8080-exec-9] org.apache.http.wire : http-outgoing-3 >> "blah"
So could you if the Content-Length (or if maybe a "Transfer-Encoding: chunked" header) is appearing in the org.apache.http.wire logging when you run your app through your ISP?
Hello @kerumai, thanks for your time. You are right, the trace endpoint was not accurate. There was no issue proxying to httpbin.org. However, there was an issue between the Zuul & Eureka clients.
This turned out to be an issue not related to Zuul per se, but a problem in the Jersey library: com.sun.jersey.core.impl.provider.entity.InputStreamProvider.
@Override
public long getSize(InputStream t, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
    if (t instanceof ByteArrayInputStream)
        return ((ByteArrayInputStream) t).available();
    else
        return -1;
}
Here, t is an instance of com.netflix.zuul.http.ServletInputStreamWrapper, not ByteArrayInputStream, so getSize() returns -1.
Thanks again for your advice on how to debug this; it helped pinpoint the issue. Spring Cloud had a way around it and committed a fix.
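(Illustration only — this is not the actual Spring Cloud fix, just a sketch of the mechanics described above, with hypothetical names: once the request body is buffered into a ByteArrayInputStream, available() reports the real length, getSize() stops returning -1, and the proxied request goes out with a Content-Length header instead of Transfer-Encoding: chunked.)

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedBody {
    // Copies the wrapped servlet input stream fully into memory and re-exposes it
    // as a ByteArrayInputStream, whose available() equals the body length.
    static InputStream buffer(InputStream body) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = body.read(chunk)) != -1) {
            out.write(chunk, 0, n);
        }
        return new ByteArrayInputStream(out.toByteArray());
    }
}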
@beku8 do you have a link to the fix/workaround, or how can we pull it into our project?
We still keep getting "Transfer-Encoding: chunked" with spring-cloud-netflix:1.3.0.M1
It turned out to be a Spring Cloud issue: https://github.com/spring-cloud/spring-cloud-netflix/issues/1042
| gharchive/issue | created 2016-05-22T11:59:49 | added 2025-04-01T04:55:25.141027 | authors: beku8, kerumai, mjsobremonte, txomon | repo: Netflix/zuul | url: https://github.com/Netflix/zuul/issues/223 | license: apache-2.0 (permissive, via bigquery) |

| id: 2376598143 |
Plexon overflow fix
Fix #1366
heroic!
Tests will fail until #1494 is merged.
@h-mayorquin: I propose to merge this because the failure is caused by plexon2, and here you fix plexon1.
It seems that we just needed to merge the latest changes from master. I don't see the button in the web interface, do you guys see it? I did not know that was a collaborator perk.
It's not a perk. I think it is something you have to configure. I never see it on Neo, but I don't know how to configure it.... I always just do it manually.
Well, so it was that; it can be merged now :)
| gharchive/pull-request | created 2024-06-27T02:30:30 | added 2025-04-01T04:55:25.185595 | authors: h-mayorquin, samuelgarcia, zm711 | repo: NeuralEnsemble/python-neo | url: https://github.com/NeuralEnsemble/python-neo/pull/1497 | license: bsd-3-clause (permissive, via bigquery) |

| id: 215107155 |
fix some python 3 incompatibilities in axonio
Some values need to be converted to ints in order to work as indices
Some strings need to have .decode() called on them rather than str(), or they end up with a spurious "b" at the start.
Coverage remained the same at 53.227% when pulling 9d09df4dfd7a66bd5502f69e10ffc5f57a26da7d on melizalab:master into 799fc69d8bf06242ca796372662d1beb1f674ddd on NeuralEnsemble:master.
@dmeliza Please could you add your name and affiliation to doc/source/authors.rst
Sure, will do.
| gharchive/pull-request | created 2017-03-17T19:57:11 | added 2025-04-01T04:55:25.188156 | authors: apdavison, coveralls, dmeliza | repo: NeuralEnsemble/python-neo | url: https://github.com/NeuralEnsemble/python-neo/pull/298 | license: bsd-3-clause (permissive, via bigquery) |

| id: 2448983493 |
Example issue team 11
This is an example issue for a class
hello world.
| gharchive/issue | created 2024-08-05T16:32:43 | added 2025-04-01T04:55:25.189109 | authors: mitharuka, phd-dtchen | repo: NeuroHackademy2024/geometry | url: https://github.com/NeuroHackademy2024/geometry/issues/13 | license: MIT (permissive, via github-api) |

| id: 2449032974 |
trying to solve problems
am i doing this right?
Yeah - this is it!
| gharchive/pull-request | created 2024-08-05T17:00:21 | added 2025-04-01T04:55:25.189805 | authors: arokem, hughesdy | repo: NeuroHackademy2024/geometry | url: https://github.com/NeuroHackademy2024/geometry/pull/73 | license: MIT (permissive, via github-api) |

| id: 232701621 |
Create new content
We've included what we think is a basic, yet thorough introduction to EEG in EEG 101. However, we're always open to new content and remixing of old stuff.
The app is composed of scenes, each of which has a graphic (an image, animation, or EEG visualization) and one to several blocks of writing. Each block of writing has a subtitle and some text. Swiping to the left advances to the next block (the graphic above is unchanged). High-level info can be linked to with pop-up windows triggered by selecting certain highlighted sections of text (a code-style sketch of this structure follows the template below).
Here's an example:
Title: Introduction
Subtitle: Your brain produces electricity
Text: Using the EEG...
Pop-up link: EEG
Selecting the pop-up link leads to something like this; this is where we can include 'undergraduate-level' information.
All current lesson text for the app can be found in this document
Here's a template for a new scene:
# Title
> Description of what image or graphic might be present. Current EEG options are raw EEG, filtered EEG, or PSD graph.
---
## Subtitle 1
Body text for first block. !This text will lead to pop-up 1!
> Pop-up 1 Title
> Pop 1 body text explaining something
---
## Subtitle 2
Body text for second block. !This text will lead to pop-up 2!. We can create as many swipeable blocks and pop-ups as are necessary
> Pop-up 2 Title
> We can also add images to pop-ups. They'll go above the title
----
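(For contributors who prefer thinking in code: a rough sketch of the scene/block/pop-up hierarchy described above, written in plain Java just to make the shape concrete. All class and field names are hypothetical — the app itself is React Native and its real content format may differ.)

import java.util.List;

// Hypothetical model of a lesson scene: one graphic, several swipeable blocks,
// each block optionally linking to a pop-up with its own title, text, and image.
class PopUp {
    String title;
    String bodyText;
    String imageUrl;   // optional image shown above the pop-up title
}

class ContentBlock {
    String subtitle;
    String bodyText;
    PopUp popUp;       // null if this block has no pop-up link
}

class Scene {
    String title;
    String graphic;    // e.g. raw EEG, filtered EEG, or a PSD graph
    List<ContentBlock> blocks;
}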
ERPs will be next, closing this for now
| gharchive/issue | created 2017-05-31T21:27:44 | added 2025-04-01T04:55:25.193902 | authors: jdpigeon | repo: NeuroTechX/eeg-101 | url: https://github.com/NeuroTechX/eeg-101/issues/16 | license: isc (permissive, via bigquery) |