id (stringlengths 4 to 10) | text (stringlengths 4 to 2.14M) | source (stringclasses, 2 values) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (stringdate, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
2571765788
|
Update Composer dependencies (2024-10-08-00-06)
Visual regression test failed!
|
gharchive/pull-request
| 2024-10-08T00:06:07 |
2025-04-01T06:41:07.690481
|
{
"authors": [
"jspellman814"
],
"repo": "jspellman814/js-build-tools-test",
"url": "https://github.com/jspellman814/js-build-tools-test/pull/382",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2249483174
|
Update Composer dependencies (2024-04-18-00-12)
Visual regression test failed!
|
gharchive/pull-request
| 2024-04-18T00:12:38 |
2025-04-01T06:41:07.692110
|
{
"authors": [
"jspellman814"
],
"repo": "jspellman814/js-test-tbt-site",
"url": "https://github.com/jspellman814/js-test-tbt-site/pull/229",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2097227655
|
fix: chinese.
If this fails, I'll revert it.
|
gharchive/pull-request
| 2024-01-24T01:00:51 |
2025-04-01T06:41:07.694778
|
{
"authors": [
"jun-koyama-h"
],
"repo": "jun-koyama-h/team_project",
"url": "https://github.com/jun-koyama-h/team_project/pull/261",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2641984473
|
Reuse TickerDetail layouts for bucket views
@coderabbitai, this PR is growing to be quite large, and I still have a ways to go. Can you give me a preliminary review that is very not nitpicky? I will likely do some refactoring later, but right now I just want to get this functionality moving forward.
|
gharchive/pull-request
| 2024-11-07T19:31:49 |
2025-04-01T06:41:07.702561
|
{
"authors": [
"jzombie"
],
"repo": "jzombie/etf-matcher",
"url": "https://github.com/jzombie/etf-matcher/pull/142",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2069317319
|
Add concept of NPCs
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Current dependencies on/for this PR:
- main
- PR #79
- PR #80 👈
- PR #81
This stack of pull requests is managed by Graphite.
Merge activity:
- Jan 7, 5:25 PM: @kadhirvelm started a stack merge that includes this pull request via Graphite.
|
gharchive/pull-request
| 2024-01-07T22:25:08 |
2025-04-01T06:41:07.712876
|
{
"authors": [
"kadhirvelm"
],
"repo": "kadhirvelm/the-tower-of-cultivation",
"url": "https://github.com/kadhirvelm/the-tower-of-cultivation/pull/80",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2659962978
|
Klestid Poloska HW07
You have conflicts.
2 ok
|
gharchive/pull-request
| 2024-11-14T20:27:54 |
2025-04-01T06:41:07.713807
|
{
"authors": [
"KlestidP",
"kajigor"
],
"repo": "kajigor/fp-2024-cub-fall",
"url": "https://github.com/kajigor/fp-2024-cub-fall/pull/142",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2321932291
|
AZ Translation for 7.2
closes #37
|
gharchive/pull-request
| 2024-05-28T21:13:59 |
2025-04-01T06:41:07.760925
|
{
"authors": [
"kerimovscreations",
"muradtries"
],
"repo": "kerimovscreations/ML-For-Beginners-AZ",
"url": "https://github.com/kerimovscreations/ML-For-Beginners-AZ/pull/49",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1401353081
|
Separate MutableX and ProxyX
@KernelDeimos the intent was to just use the MutableX in the boot sequence, it should be the only place it is needed.
|
gharchive/pull-request
| 2022-10-07T15:24:23 |
2025-04-01T06:41:07.763751
|
{
"authors": [
"KernelDeimos",
"jlhughes"
],
"repo": "kgrgreer/foam3",
"url": "https://github.com/kgrgreer/foam3/pull/2108",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1942382393
|
Create permutations.py
.
Add only Java projects.
|
gharchive/pull-request
| 2023-10-13T17:47:17 |
2025-04-01T06:41:07.766923
|
{
"authors": [
"dakshrajput100",
"kishanrajput23"
],
"repo": "kishanrajput23/Java-Projects-Collections",
"url": "https://github.com/kishanrajput23/Java-Projects-Collections/pull/284",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2072501594
|
fix: make sure options are optional in Knock.signUserToken/2
Awesome, thanks @cjbell 🚀
|
gharchive/pull-request
| 2024-01-09T14:36:39 |
2025-04-01T06:41:07.777674
|
{
"authors": [
"cjbell",
"heymartinadams"
],
"repo": "knocklabs/knock-node",
"url": "https://github.com/knocklabs/knock-node/pull/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1524365938
|
Add missing ids in Variable Get return data.
looks right to me
Do you have the power to approve it?
|
gharchive/pull-request
| 2023-01-08T07:01:04 |
2025-04-01T06:41:07.779780
|
{
"authors": [
"DeniseSkidmore"
],
"repo": "komsa-ag/Camunda.Api.Client",
"url": "https://github.com/komsa-ag/Camunda.Api.Client/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1058927563
|
Make plot display prettier
Thank you! :)
|
gharchive/pull-request
| 2021-11-19T21:23:12 |
2025-04-01T06:41:07.780458
|
{
"authors": [
"martin-steinegger",
"milot-mirdita"
],
"repo": "konstin/ColabFold",
"url": "https://github.com/konstin/ColabFold/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1484211871
|
deploy DN FE 13451 to DEV
My VSCode went crazy, sorry.
|
gharchive/pull-request
| 2022-12-08T09:55:20 |
2025-04-01T06:41:07.781054
|
{
"authors": [
"IlyaIzr"
],
"repo": "konturio/disaster-ninja-fe",
"url": "https://github.com/konturio/disaster-ninja-fe/pull/286",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2384282932
|
:bug: During source-only analysis, we should not also set the scope.
fixes for https://issues.redhat.com/browse/MTA-3169
@jortel This should be changed once success with errors is in so that we can display that the error is running depth retrieval.
|
gharchive/pull-request
| 2024-07-01T16:44:31 |
2025-04-01T06:41:07.782415
|
{
"authors": [
"shawn-hurley"
],
"repo": "konveyor/tackle2-addon-analyzer",
"url": "https://github.com/konveyor/tackle2-addon-analyzer/pull/101",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1312906869
|
Create LICENSE
Codecov Report
:exclamation: No coverage uploaded for pull request base (master@7f22afb).
The diff coverage is n/a.
@@           Coverage Diff            @@
##             master     #13   +/-  ##
=========================================
  Coverage          ?   97.23%
=========================================
  Files             ?       19
  Lines             ?      470
  Branches          ?        0
=========================================
  Hits              ?      457
  Misses            ?       13
  Partials          ?        0
Flag | Coverage Δ
unittests | 97.23% <ø> (?)
Flags with carried forward coverage won't be shown.
Continue to review the full report at Codecov.
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7f22afb...6ff8b75. Read the comment docs.
|
gharchive/pull-request
| 2022-07-21T08:50:49 |
2025-04-01T06:41:07.790916
|
{
"authors": [
"codecov-commenter",
"marcosschroh"
],
"repo": "kpn/kstreams",
"url": "https://github.com/kpn/kstreams/pull/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2203038781
|
chore: use semgrep action, not container
Pull Request Test Coverage Report for Build 8394564502
Details:
- 0 of 0 changed or added relevant lines in 0 files are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage remained the same at 34.066%
Totals (change from base Build 8394501228): 0.0%
Covered Lines: 31
Relevant Lines: 91
💛 - Coveralls
|
gharchive/pull-request
| 2024-03-22T18:04:43 |
2025-04-01T06:41:07.796718
|
{
"authors": [
"coveralls",
"kristof-mattei"
],
"repo": "kristof-mattei/rust-end-to-end-application",
"url": "https://github.com/kristof-mattei/rust-end-to-end-application/pull/1290",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2534201686
|
Unit test stream data transfer
/assign @PrasadG193
|
gharchive/issue
| 2024-09-18T16:36:02 |
2025-04-01T06:41:07.798817
|
{
"authors": [
"PrasadG193",
"carlbraganza"
],
"repo": "kubernetes-csi/external-snapshot-metadata",
"url": "https://github.com/kubernetes-csi/external-snapshot-metadata/issues/40",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1050604925
|
Create OWNERS
LGTM
|
gharchive/pull-request
| 2021-11-11T06:01:34 |
2025-04-01T06:41:07.800923
|
{
"authors": [
"JohnNiang",
"LinuxSuRen"
],
"repo": "kubesphere/devops-python-sample",
"url": "https://github.com/kubesphere/devops-python-sample/pull/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2298195618
|
feat: click to zoom pictures
@coderabbitai review
|
gharchive/pull-request
| 2024-05-15T15:10:32 |
2025-04-01T06:41:07.803043
|
{
"authors": [
"la3rence"
],
"repo": "la3rence/site",
"url": "https://github.com/la3rence/site/pull/402",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1588451093
|
[WIP] refactored fri tests to easily define more
Stale
|
gharchive/pull-request
| 2023-02-16T22:23:50 |
2025-04-01T06:41:07.803917
|
{
"authors": [
"ElFantasma",
"MauroToscano"
],
"repo": "lambdaclass/lambdaworks",
"url": "https://github.com/lambdaclass/lambdaworks/pull/90",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1633052966
|
Add a character limit to quoted text
Implemented, so I'm closing this.
|
gharchive/issue
| 2023-03-21T00:20:58 |
2025-04-01T06:41:07.834614
|
{
"authors": [
"lef237"
],
"repo": "lef237/quotelist",
"url": "https://github.com/lef237/quotelist/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1167313318
|
Fix some issues with zig 0.9.1
Thanks for making a pull request!
I've done some work that brings it up to 0.9 on the sqlite-v3.37.0 branch; I just haven't pushed that to master yet. It also changes the API in significant ways; I just need to get around to finishing the update.
|
gharchive/pull-request
| 2022-03-12T15:09:07 |
2025-04-01T06:41:07.856231
|
{
"authors": [
"leroycep",
"xoich"
],
"repo": "leroycep/sqlite-zig",
"url": "https://github.com/leroycep/sqlite-zig/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1096162687
|
02_transferNum_wangying
|
gharchive/pull-request
| 2022-01-07T09:58:04 |
2025-04-01T06:41:07.857146
|
{
"authors": [
"Pearyman",
"yingxiaobei"
],
"repo": "lets-makesomething/fe-common-coding",
"url": "https://github.com/lets-makesomething/fe-common-coding/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1085023404
|
Secrets - example 2 practice
@Skeltian does this break if the quotes are missing, or is it just to keep things clean?
|
gharchive/pull-request
| 2021-12-20T17:12:37 |
2025-04-01T06:41:07.867197
|
{
"authors": [
"Skeltian",
"lgmorand"
],
"repo": "lgmorand/github-action-hello",
"url": "https://github.com/lgmorand/github-action-hello/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1215553759
|
Beta
:tada: This PR is included in version 1.4.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-04-26T07:52:37 |
2025-04-01T06:41:08.385750
|
{
"authors": [
"lkadalski"
],
"repo": "lkadalski/minigun",
"url": "https://github.com/lkadalski/minigun/pull/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1370304917
|
Fix/db reuse connection mark pending
Codecov Report
Merging #964 (83d4e04) into main (1660b9d) will decrease coverage by 1.12%.
The diff coverage is 60.00%.
:exclamation: Current head 83d4e04 differs from pull request most recent head 9d0c6db. Consider uploading reports for the commit 9d0c6db to get more accurate results
@@            Coverage Diff             @@
##             main     #964      +/-   ##
==========================================
- Coverage   43.18%   42.06%   -1.13%
==========================================
  Files         244      244
  Lines       13545    13546       +1
==========================================
- Hits         5850     5698     -152
- Misses       7695     7848     +153
Impacted Files | Coverage Δ
lnbits/tasks.py | 36.84% <0.00%> (ø)
lnbits/core/models.py | 65.57% <75.00%> (+0.28%) :arrow_up:
lnbits/extensions/boltz/mempool.py | 60.34% <0.00%> (-32.76%) :arrow_down:
lnbits/wallets/cln.py | 19.00% <0.00%> (-29.76%) :arrow_down:
lnbits/wallets/lndrest.py | 17.09% <0.00%> (-29.06%) :arrow_down:
lnbits/extensions/boltz/boltz.py | 49.75% <0.00%> (-18.91%) :arrow_down:
lnbits/db.py | 80.80% <0.00%> (-9.61%) :arrow_down:
lnbits/wallets/macaroon/macaroon.py | 27.53% <0.00%> (-5.80%) :arrow_down:
lnbits/extensions/boltz/crud.py | 81.53% <0.00%> (-3.08%) :arrow_down:
lnbits/bolt11.py | 78.57% <0.00%> (-2.39%) :arrow_down:
... and 2 more
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
|
gharchive/pull-request
| 2022-09-12T17:54:34 |
2025-04-01T06:41:08.403744
|
{
"authors": [
"callebtc",
"codecov-commenter"
],
"repo": "lnbits/lnbits-legend",
"url": "https://github.com/lnbits/lnbits-legend/pull/964",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1253861968
|
Readme: c-lightning -> Core Lightning
Thanks for the pull request
|
gharchive/pull-request
| 2022-05-31T13:21:13 |
2025-04-01T06:41:08.405104
|
{
"authors": [
"AaronDewes",
"nolim1t"
],
"repo": "lncm/docker-clightning",
"url": "https://github.com/lncm/docker-clightning/pull/3",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
2654668814
|
Feat(DND): Enable Simple DND
Welcome to Codecov :tada:
Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.
Thanks for integrating Codecov - We've got you covered :open_umbrella:
|
gharchive/pull-request
| 2024-11-13T08:34:44 |
2025-04-01T06:41:08.409164
|
{
"authors": [
"codecov-commenter",
"shivamG640"
],
"repo": "lordrip/kaoto",
"url": "https://github.com/lordrip/kaoto/pull/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1763827589
|
Add index.html template to the project
done
|
gharchive/issue
| 2023-06-19T16:05:33 |
2025-04-01T06:41:08.419164
|
{
"authors": [
"luRinaldy"
],
"repo": "luRinaldy/homepage",
"url": "https://github.com/luRinaldy/homepage/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1560778593
|
chore: update OpenTelemetry baseline to 1.4.0
:tada: This PR is included in version 1.17.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-01-28T09:15:48 |
2025-04-01T06:41:08.430179
|
{
"authors": [
"doriaviram",
"mmanciop"
],
"repo": "lumigo-io/opentelemetry-js-distro",
"url": "https://github.com/lumigo-io/opentelemetry-js-distro/pull/155",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
362445021
|
Up/Down arrows to navigate file list
Works until the browser widget gets the focus. Can't seem to take the focus away, yet.
|
gharchive/issue
| 2018-09-21T03:42:20 |
2025-04-01T06:41:08.442739
|
{
"authors": [
"macMikey"
],
"repo": "macMikey/scanhammer",
"url": "https://github.com/macMikey/scanhammer/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1989161591
|
Add default images to Admin page
Done in https://github.com/macterra/artx-market/compare/5f3f14e2bfa4...54e36da0a8d2
|
gharchive/issue
| 2023-11-11T23:59:03 |
2025-04-01T06:41:08.443688
|
{
"authors": [
"macterra"
],
"repo": "macterra/artx-market",
"url": "https://github.com/macterra/artx-market/issues/172",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2128627203
|
Feat id tracking
Todo: put request id header on streamHandler too
Nah, won't do. Everything is moving in the serverlessHandler direction anyway.
Now gotta fix a few logging bugs.
|
gharchive/pull-request
| 2024-02-10T17:52:08 |
2025-04-01T06:41:08.444970
|
{
"authors": [
"maddsua"
],
"repo": "maddsua/lambda",
"url": "https://github.com/maddsua/lambda/pull/155",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2261974768
|
Login
Where is the code for the login page?
Did you change anything on the about page?
I need you to push only the login page code in this pull request, not all of these changes across these files.
|
gharchive/pull-request
| 2024-04-24T18:59:06 |
2025-04-01T06:41:08.447174
|
{
"authors": [
"Mostafa12248",
"mahmood-mohie"
],
"repo": "mahmood-mohie/the-career-journey",
"url": "https://github.com/mahmood-mohie/the-career-journey/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
100519104
|
AUI-1956 Use the "noconflict" version of ACE editor
New PR sent to master-deprecated: https://github.com/mairatma/alloy-ui/pull/243
|
gharchive/pull-request
| 2015-08-12T11:01:25 |
2025-04-01T06:41:08.451639
|
{
"authors": [
"ambrinchaudhary"
],
"repo": "mairatma/alloy-ui",
"url": "https://github.com/mairatma/alloy-ui/pull/242",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1393630832
|
Create CoinChange.cpp
Kindly provide a description or a link to the problem statement for your code.
Given an integer array coins[] of size N representing different types of currency and an integer sum, the task is to find the number of ways to make the sum using different combinations from coins[].
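For illustration, a minimal Python sketch of the standard counting DP for this problem (the PR itself adds a C++ file; this sketch just shows the recurrence):
import sys

def count_ways(coins, total):
    # ways[s] = number of coin combinations summing to s
    ways = [0] * (total + 1)
    ways[0] = 1  # one way to make 0: choose no coins
    for coin in coins:  # coins in the outer loop, so each combination is counted once
        for s in range(coin, total + 1):
            ways[s] += ways[s - coin]
    return ways[total]

print(count_ways([1, 2, 3], 4))  # 4 ways: 1+1+1+1, 1+1+2, 2+2, 1+3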
|
gharchive/pull-request
| 2022-10-02T03:46:59 |
2025-04-01T06:41:08.456387
|
{
"authors": [
"manan-shxrma",
"samyakjain26"
],
"repo": "manan-shxrma/dsalgo",
"url": "https://github.com/manan-shxrma/dsalgo/pull/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2762174231
|
Skimmer
Nyos skimmer
|
gharchive/issue
| 2024-12-29T09:28:40 |
2025-04-01T06:41:08.463017
|
{
"authors": [
"marcus-38"
],
"repo": "marcus-38/nixcfg",
"url": "https://github.com/marcus-38/nixcfg/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1296738468
|
Event info
Weren't these changes already merged into development? https://github.com/marcusblake/software-product-sprint-2022/pull/4
|
gharchive/pull-request
| 2022-07-07T03:12:25 |
2025-04-01T06:41:08.463966
|
{
"authors": [
"marcusblake",
"seokhakang9"
],
"repo": "marcusblake/software-product-sprint-2022",
"url": "https://github.com/marcusblake/software-product-sprint-2022/pull/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2116398296
|
ui.TextBox: Delete key not working
I implemented Delete along with other shortcuts: Home, End, Ctrl+Left, Ctrl+Right, Ctrl+C, Ctrl+V. Try them out.
Verified and closed.
|
gharchive/issue
| 2024-02-03T10:04:38 |
2025-04-01T06:41:08.465485
|
{
"authors": [
"skejeton",
"vtereshkov"
],
"repo": "marekmaskarinec/tophat",
"url": "https://github.com/marekmaskarinec/tophat/issues/143",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2364703165
|
Feature/336 - depth fix on web scraper
https://github.com/masa-finance/masa-oracle/issues/336
|
gharchive/pull-request
| 2024-06-20T15:15:43 |
2025-04-01T06:41:08.594882
|
{
"authors": [
"mudler",
"nolanjacobson"
],
"repo": "masa-finance/masa-oracle",
"url": "https://github.com/masa-finance/masa-oracle/pull/355",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1230642949
|
Add a Codacy badge to README.md
Note: Codacy analysis is meant to be a tool that can help you make your code better and more robust (and also learn to use code analysis tools). It is not mandatory to implement suggested changes.
|
gharchive/pull-request
| 2022-05-10T05:39:14 |
2025-04-01T06:41:08.652826
|
{
"authors": [
"codacy-badger",
"ivan-ristovic"
],
"repo": "matf-pp/2022_MATDAQ",
"url": "https://github.com/matf-pp/2022_MATDAQ/pull/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1216022749
|
Proofread last Catalog docs
@Adam-Dupaski these are the final docs I have on the list for my proofreading task. I made a few changes and additions, please check it out when you have some time.
|
gharchive/pull-request
| 2022-04-26T14:12:31 |
2025-04-01T06:41:08.691503
|
{
"authors": [
"johannahemminger"
],
"repo": "mendix/docs",
"url": "https://github.com/mendix/docs/pull/4502",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1159387734
|
feat: Example of preview deploy
As this was just for demonstration sake, I will close it.
|
gharchive/pull-request
| 2022-03-04T08:38:28 |
2025-04-01T06:41:08.693050
|
{
"authors": [
"Jelledb"
],
"repo": "meshcloud/meshcloud-docs",
"url": "https://github.com/meshcloud/meshcloud-docs/pull/477",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2232815184
|
fix: first pass
:tada: This PR is included in version 1.0.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-04-09T07:42:33 |
2025-04-01T06:41:08.710662
|
{
"authors": [
"michaelpeterswa"
],
"repo": "michaelpeterswa/talvi",
"url": "https://github.com/michaelpeterswa/talvi/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1175125527
|
Fix service client test timeout
@mergify backport main foxy
|
gharchive/pull-request
| 2022-03-21T09:42:24 |
2025-04-01T06:41:08.711271
|
{
"authors": [
"Acuadros95"
],
"repo": "micro-ROS/micro_ros_renesas_testbench",
"url": "https://github.com/micro-ROS/micro_ros_renesas_testbench/pull/114",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1229074676
|
Ts modeling
Model testing looks good!
|
gharchive/pull-request
| 2022-05-09T02:06:06 |
2025-04-01T06:41:08.711825
|
{
"authors": [
"TyronSamaroo",
"sateshr"
],
"repo": "microsoft-malware-detection/microsoft-malware-detection",
"url": "https://github.com/microsoft-malware-detection/microsoft-malware-detection/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1536226525
|
FOUNDATIONS: Artifact Download
@geetika-jain This can be promoted from Draft to PR
|
gharchive/pull-request
| 2023-01-17T11:32:14 |
2025-04-01T06:41:08.713524
|
{
"authors": [
"geetika-jain",
"mimendel"
],
"repo": "microsoft/mixedreality.dmx.agent",
"url": "https://github.com/microsoft/mixedreality.dmx.agent/pull/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1324755148
|
DATA: Lab Commands
@mimendel @terrypalo Data is for data migration - models alone are not data. They just become a part of whatever task you are trying to accomplish.
|
gharchive/pull-request
| 2022-08-01T17:39:48 |
2025-04-01T06:41:08.714238
|
{
"authors": [
"hassanhabib",
"mimendel"
],
"repo": "microsoft/mixedreality.dmx.gatekeeper",
"url": "https://github.com/microsoft/mixedreality.dmx.gatekeeper/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
967072525
|
moc-sdk : null ptr check for SourceType for backward compatibility
Can we make sure we test everything end to end and then raise PRs? That way we don't have to keep making SDK changes.
Testing with wssdtest was done with my changes, which tested the feature, so this was not caught earlier. Vanilla wssdtest with the latest tags from all repos should be tested. Will keep this scenario in mind next time.
|
gharchive/pull-request
| 2021-08-11T16:53:24 |
2025-04-01T06:41:08.715342
|
{
"authors": [
"cleonard-git"
],
"repo": "microsoft/moc-sdk-for-go",
"url": "https://github.com/microsoft/moc-sdk-for-go/pull/70",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1821237384
|
Create action to display test results on Github | test
Nice errors:
@jcwchen could you take a look? Thanks!
After approval I will revert the test changes
|
gharchive/pull-request
| 2023-07-25T22:06:12 |
2025-04-01T06:41:08.716966
|
{
"authors": [
"justinchuby"
],
"repo": "microsoft/onnxscript",
"url": "https://github.com/microsoft/onnxscript/pull/921",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1536258033
|
[open62541] Update version to 1.3.4
Please get failure logs here.
Pinging @autoantwort for response. Is work still being done for this PR?
I think I can close this temporarily
|
gharchive/pull-request
| 2023-01-17T11:56:27 |
2025-04-01T06:41:08.718487
|
{
"authors": [
"JonLiu1993",
"autoantwort"
],
"repo": "microsoft/vcpkg",
"url": "https://github.com/microsoft/vcpkg/pull/29004",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1457743426
|
Test
test
|
gharchive/pull-request
| 2022-11-21T11:04:19 |
2025-04-01T06:41:08.719022
|
{
"authors": [
"madhavichetan2121"
],
"repo": "microsoft/winwithappplatpoc",
"url": "https://github.com/microsoft/winwithappplatpoc/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1422507278
|
Update CI
Coverage remained the same at 99.745% when pulling 067478d147d9c9b0351a6762ba3cc043f9156e07 on update-ci into 0fdde73c1118df94ff53ce5769c1d8995bb3a8e9 on main.
|
gharchive/pull-request
| 2022-10-25T13:53:30 |
2025-04-01T06:41:08.726227
|
{
"authors": [
"coveralls",
"miguelnietoa"
],
"repo": "miguelnietoa/stellar_sdk",
"url": "https://github.com/miguelnietoa/stellar_sdk/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2184143434
|
chore: latest backport changes for version.lua
:tada: This PR is included in version 4.2.3 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-03-13T14:22:42 |
2025-04-01T06:41:08.729258
|
{
"authors": [
"mikesmithgh"
],
"repo": "mikesmithgh/kitty-scrollback.nvim",
"url": "https://github.com/mikesmithgh/kitty-scrollback.nvim/pull/209",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1322274752
|
Implement LowerTriangularMatrix for spectral transform
@white-alistair This PR is a bit hectic, but I've been restructuring the PrognosticVariables and DiagnosticVariables such that one can now do something like
julia> using SpeedyWeather
julia> progn,diagn,model = initialize_speedy();
julia> progn.
layers lmax mmax n_leapfrog nlev pres
so PrognosticVariables contains another struct with layers and pres (as it's just surface separated). Hence, access to the variables is straightforward, like
julia> progn.layers[1].leapfrog[1].vor
32×32 LowerTriangularMatrix{ComplexF32}:
...
and it should be super easy to loop over all layers like (which can be a single loop very early on at every time step)
for layer in progn.layers
    func!(layer,...)
end
With LowerTriangularMatrix as the default struct for spectral coefficients, looping in the spectral domain is either as before
for m in 1:mmax+1
    for l in m:mmax+1
        vor[l,m]
or more convenient (and faster)
for lm in eachharmonic(vor)
    vor[lm]
which is equivalent to
for lm in 1:length(vor)
    vor[lm]
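In other words, the flat index walks the packed lower-triangular storage column by column. A small Python illustration of that equivalence (not part of the PR; 0-based here, sizes from the 32×32 example above):
lmax = mmax = 31
# column m stores rows l = m..lmax, so a flat index lm visits the same (l, m) pairs
pairs = [(l, m) for m in range(mmax + 1) for l in range(m, lmax + 1)]
n_harmonics = (lmax + 1) * (lmax + 2) // 2  # 528 stored coefficients for 32x32
assert len(pairs) == n_harmonics
for lm in range(n_harmonics):  # the eachharmonic-style loop
    l, m = pairs[lm]           # touches exactly the nested-loop entries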
@milankl looks good, nice one!
I see that you moved the parametrization vars into a file column_variables.jl :+1: On a related note, say there are intermediate quantities used in the calculation of some parametrization, would you put those arrays in the IntermediateVariables struct or the ParametrizationVariables struct?
Let me know if you want an actual review of this at some point.
@white-alistair I answered that question in #118; I suggest creating a ColumnVariables struct.
|
gharchive/pull-request
| 2022-07-29T13:46:31 |
2025-04-01T06:41:08.735151
|
{
"authors": [
"milankl",
"white-alistair"
],
"repo": "milankl/SpeedyWeather.jl",
"url": "https://github.com/milankl/SpeedyWeather.jl/pull/117",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1884024947
|
Fix videocomposer bugs
Remove model_weights_ddd/clip/bpe_simple_vocab_16e6.txt.gz pls. I was supposed to git add model_weights/bpe_simple_vocab_16e6.txt.gz, but accidentally added this one.
|
gharchive/pull-request
| 2023-09-06T13:29:58 |
2025-04-01T06:41:08.738211
|
{
"authors": [
"SamitHuang",
"wtomin"
],
"repo": "mindspore-lab/mindone",
"url": "https://github.com/mindspore-lab/mindone/pull/128",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2559449693
|
Template update for nf-core/tools version 2.14.2.dev0
Version 2.14.2.dev0 of the nf-core/tools pipeline template has just been released. This pull-request is now outdated and has been closed in favour of https://github.com/mirpedrol/customnonnfcoresync/pull/2
Please use https://github.com/mirpedrol/customnonnfcoresync/pull/2 to merge in the new changes from the nf-core template as soon as possible.
|
gharchive/pull-request
| 2024-10-01T14:39:38 |
2025-04-01T06:41:08.748115
|
{
"authors": [
"mirpedrol"
],
"repo": "mirpedrol/customnonnfcoresync",
"url": "https://github.com/mirpedrol/customnonnfcoresync/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
148722507
|
Add regression test for base href on separate origin
Looks like this needs to run a restricted set of browsers.
@taion so it seems you did add my proposed change, but the tests still don't pass :(
Tell me if I can help, but I'm not sure how to interpret these test results. Is the problem "Some of your tests did a full page reload!"?
That's correct – that will fail a test in Karma.
Looks like progress here has stalled. Closing unless someone wants to pick this up and run with it.
As far as I know, our dev env, which uses a base href, does not work on Safari because of this issue, and we still haven't found a workaround.
|
gharchive/pull-request
| 2016-04-15T17:41:01 |
2025-04-01T06:41:08.760655
|
{
"authors": [
"mjackson",
"slorber",
"taion"
],
"repo": "mjackson/history",
"url": "https://github.com/mjackson/history/pull/274",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1051399248
|
Initial Setup - The root.html.erb will only have one html tag: <div id="root">React Broken</div>
finished
|
gharchive/issue
| 2021-11-11T22:06:34 |
2025-04-01T06:41:08.761438
|
{
"authors": [
"mjlomeli"
],
"repo": "mjlomeli/jcp",
"url": "https://github.com/mjlomeli/jcp/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1093517810
|
Use introsort.
This appears to improve performance very slightly for medium-sized n and one-hot variables, but reduces performance for continuous variables.
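For context, introsort is quicksort with a depth budget: past roughly 2*log2(n) levels it falls back to heapsort, and small ranges are finished with insertion sort. A minimal Python sketch of the idea (the PR itself is Rust; names and cutoffs here are illustrative):
import math

def introsort(a):
    if len(a) > 1:
        _intro(a, 0, len(a) - 1, 2 * int(math.log2(len(a))))

def _intro(a, lo, hi, depth):
    while hi - lo > 16:
        if depth == 0:               # depth budget exhausted: heapsort this range
            _heapsort(a, lo, hi)
            return
        depth -= 1
        p = _partition(a, lo, hi)
        _intro(a, p + 1, hi, depth)  # recurse right, iterate on the left
        hi = p - 1
    _insertion_sort(a, lo, hi)       # small range: insertion sort

def _partition(a, lo, hi):           # Lomuto partition with the middle element as pivot
    mid = (lo + hi) // 2
    a[mid], a[hi] = a[hi], a[mid]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def _insertion_sort(a, lo, hi):
    for k in range(lo + 1, hi + 1):
        x, j = a[k], k - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x

def _heapsort(a, lo, hi):            # heapsort a copy of the slice, write it back
    sub = a[lo:hi + 1]
    def sift(i, end):
        while 2 * i + 1 < end:
            c = 2 * i + 1
            if c + 1 < end and sub[c + 1] > sub[c]:
                c += 1
            if sub[i] >= sub[c]:
                break
            sub[i], sub[c] = sub[c], sub[i]
            i = c
    for i in range(len(sub) // 2 - 1, -1, -1):
        sift(i, len(sub))
    for end in range(len(sub) - 1, 0, -1):
        sub[0], sub[end] = sub[end], sub[0]
        sift(0, end)
    a[lo:hi + 1] = sub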
Gnuplot not found, using plotters backend
Benchmarking forest
Benchmarking forest: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 106.0s.
Benchmarking forest: Collecting 10 samples in estimated 106.02 s (10 iterations)
Benchmarking forest: Analyzing
forest time: [9.9991 s 10.128 s 10.281 s]
change: [-2.8555% -1.0205% +0.8443%] (p = 0.33 > 0.05)
No change in performance detected.
Running unittests (target/release/deps/bench_tree-e4e43edb68de800e)
WARNING: HTML report generation will become a non-default optional feature in Criterion.rs 0.4.0.
This feature is being moved to cargo-criterion (https://github.com/bheisler/cargo-criterion) and will be optional in a future version of Criterion.rs. To silence this warning, either switch to cargo-criterion or enable the 'html_reports' feature in your Cargo.toml.
Gnuplot not found, using plotters backend
Benchmarking tree_split/tree_n=100, d=10, max_depth=4, mtry=10
Benchmarking tree_split/tree_n=100, d=10, max_depth=4, mtry=10: Warming up for 3.0000 s
Benchmarking tree_split/tree_n=100, d=10, max_depth=4, mtry=10: Collecting 100 samples in estimated 5.0350 s (56k iterations)
Benchmarking tree_split/tree_n=100, d=10, max_depth=4, mtry=10: Analyzing
tree_split/tree_n=100, d=10, max_depth=4, mtry=10
time: [91.210 us 91.496 us 91.794 us]
change: [+19.423% +19.829% +20.227%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
2 (2.00%) low severe
9 (9.00%) low mild
Benchmarking tree_split/tree_n=1000, d=10, max_depth=4, mtry=10
Benchmarking tree_split/tree_n=1000, d=10, max_depth=4, mtry=10: Warming up for 3.0000 s
Benchmarking tree_split/tree_n=1000, d=10, max_depth=4, mtry=10: Collecting 100 samples in estimated 9.0133 s (10k iterations)
Benchmarking tree_split/tree_n=1000, d=10, max_depth=4, mtry=10: Analyzing
tree_split/tree_n=1000, d=10, max_depth=4, mtry=10
time: [872.86 us 876.92 us 881.02 us]
change: [+1.0759% +1.5909% +2.1139%] (p = 0.00 < 0.05)
Performance has regressed.
Benchmarking tree_split/tree_n=10000, d=10, max_depth=4, mtry=10
Benchmarking tree_split/tree_n=10000, d=10, max_depth=4, mtry=10: Warming up for 3.0000 s
Benchmarking tree_split/tree_n=10000, d=10, max_depth=4, mtry=10: Collecting 100 samples in estimated 5.4537 s (600 iterations)
Benchmarking tree_split/tree_n=10000, d=10, max_depth=4, mtry=10: Analyzing
tree_split/tree_n=10000, d=10, max_depth=4, mtry=10
time: [9.1067 ms 9.1431 ms 9.1797 ms]
change: [+0.1906% +0.7948% +1.4053%] (p = 0.01 < 0.05)
Change within noise threshold.
Benchmarking tree_split/tree_n=10000, d=10, max_depth=16, mtry=10
Benchmarking tree_split/tree_n=10000, d=10, max_depth=16, mtry=10: Warming up for 3.0000 s
Benchmarking tree_split/tree_n=10000, d=10, max_depth=16, mtry=10: Collecting 100 samples in estimated 6.6631 s (200 iterations)
Benchmarking tree_split/tree_n=10000, d=10, max_depth=16, mtry=10: Analyzing
tree_split/tree_n=10000, d=10, max_depth=16, mtry=10
time: [33.355 ms 33.500 ms 33.643 ms]
change: [+0.4069% +0.9934% +1.5758%] (p = 0.00 < 0.05)
Change within noise threshold.
Benchmarking tree_split/tree_n=10000, d=100, max_depth=4, mtry=10
Benchmarking tree_split/tree_n=10000, d=100, max_depth=4, mtry=10: Warming up for 3.0000 s
Benchmarking tree_split/tree_n=10000, d=100, max_depth=4, mtry=10: Collecting 100 samples in estimated 6.7835 s (200 iterations)
Benchmarking tree_split/tree_n=10000, d=100, max_depth=4, mtry=10: Analyzing
tree_split/tree_n=10000, d=100, max_depth=4, mtry=10
time: [33.324 ms 33.964 ms 34.607 ms]
change: [+11.553% +14.812% +18.152%] (p = 0.00 < 0.05)
Performance has regressed.
Benchmarking tree_split/tree_n=10000, d=100, max_depth=4, mtry=100
Benchmarking tree_split/tree_n=10000, d=100, max_depth=4, mtry=100: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 10.6s, or reduce sample count to 40.
Benchmarking tree_split/tree_n=10000, d=100, max_depth=4, mtry=100: Collecting 100 samples in estimated 10.622 s (100 iterations)
Benchmarking tree_split/tree_n=10000, d=100, max_depth=4, mtry=100: Analyzing
tree_split/tree_n=10000, d=100, max_depth=4, mtry=100
time: [101.90 ms 102.33 ms 102.78 ms]
change: [+0.5477% +1.3369% +2.1588%] (p = 0.00 < 0.05)
Change within noise threshold.
Benchmarking tree_split/tree_n=100000, d=10, max_depth=4, mtry=10
Benchmarking tree_split/tree_n=100000, d=10, max_depth=4, mtry=10: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 11.3s, or reduce sample count to 40.
Benchmarking tree_split/tree_n=100000, d=10, max_depth=4, mtry=10: Collecting 100 samples in estimated 11.270 s (100 iterations)
Benchmarking tree_split/tree_n=100000, d=10, max_depth=4, mtry=10: Analyzing
tree_split/tree_n=100000, d=10, max_depth=4, mtry=10
time: [111.56 ms 112.08 ms 112.60 ms]
change: [+3.9732% +4.5678% +5.1672%] (p = 0.00 < 0.05)
Performance has regressed.
Running unittests (target/release/deps/bench_utils-5e89e0c474c48fdf)
WARNING: HTML report generation will become a non-default optional feature in Criterion.rs 0.4.0.
This feature is being moved to cargo-criterion (https://github.com/bheisler/cargo-criterion) and will be optional in a future version of Criterion.rs. To silence this warning, either switch to cargo-criterion or enable the 'html_reports' feature in your Cargo.toml.
Gnuplot not found, using plotters backend
Benchmarking argsort/argsort_continuous/1000
Benchmarking argsort/argsort_continuous/1000: Warming up for 3.0000 s
Benchmarking argsort/argsort_continuous/1000: Collecting 100 samples in estimated 5.0855 s (76k iterations)
Benchmarking argsort/argsort_continuous/1000: Analyzing
argsort/argsort_continuous/1000
time: [67.907 us 68.091 us 68.277 us]
change: [+52.530% +53.327% +54.157%] (p = 0.00 < 0.05)
Performance has regressed.
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) low mild
Benchmarking argsort/argsort_one_hot/1000
Benchmarking argsort/argsort_one_hot/1000: Warming up for 3.0000 s
Benchmarking argsort/argsort_one_hot/1000: Collecting 100 samples in estimated 5.0135 s (965k iterations)
Benchmarking argsort/argsort_one_hot/1000: Analyzing
argsort/argsort_one_hot/1000
time: [5.1262 us 5.1462 us 5.1686 us]
change: [-28.386% -28.054% -27.745%] (p = 0.00 < 0.05)
Performance has improved.
Benchmarking argsort/sample_weight, size=1000
Benchmarking argsort/sample_weight, size=1000: Warming up for 3.0000 s
Benchmarking argsort/sample_weight, size=1000: Collecting 100 samples in estimated 5.0142 s (672k iterations)
Benchmarking argsort/sample_weight, size=1000: Analyzing
argsort/sample_weight, size=1000
time: [7.4495 us 7.4902 us 7.5280 us]
change: [-5.6246% -5.1484% -4.6678%] (p = 0.00 < 0.05)
Performance has improved.
Benchmarking argsort/sample_indices_from_weights, size=1000
Benchmarking argsort/sample_indices_from_weights, size=1000: Warming up for 3.0000 s
Benchmarking argsort/sample_indices_from_weights, size=1000: Collecting 100 samples in estimated 5.0007 s (2.0M iterations)
Benchmarking argsort/sample_indices_from_weights, size=1000: Analyzing
argsort/sample_indices_from_weights, size=1000
time: [2.5101 us 2.5212 us 2.5324 us]
change: [+0.9455% +1.5571% +2.1822%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) low mild
Benchmarking argsort/oob_samples_from_weights, size=1000
Benchmarking argsort/oob_samples_from_weights, size=1000: Warming up for 3.0000 s
Benchmarking argsort/oob_samples_from_weights, size=1000: Collecting 100 samples in estimated 5.0036 s (5.6M iterations)
Benchmarking argsort/oob_samples_from_weights, size=1000: Analyzing
argsort/oob_samples_from_weights, size=1000
time: [915.81 ns 920.57 ns 925.28 ns]
change: [+4.3305% +5.1706% +5.9416%] (p = 0.00 < 0.05)
Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) low mild
1 (1.00%) high mild
Benchmarking argsort/argsort_continuous/10000
Benchmarking argsort/argsort_continuous/10000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 6.2s, enable flat sampling, or reduce sample count to 60.
Benchmarking argsort/argsort_continuous/10000: Collecting 100 samples in estimated 6.1799 s (5050 iterations)
Benchmarking argsort/argsort_continuous/10000: Analyzing
argsort/argsort_continuous/10000
time: [1.2114 ms 1.2169 ms 1.2233 ms]
change: [+45.201% +46.079% +47.020%] (p = 0.00 < 0.05)
Performance has regressed.
Benchmarking argsort/argsort_one_hot/10000
Benchmarking argsort/argsort_one_hot/10000: Warming up for 3.0000 s
Benchmarking argsort/argsort_one_hot/10000: Collecting 100 samples in estimated 5.3663 s (61k iterations)
Benchmarking argsort/argsort_one_hot/10000: Analyzing
argsort/argsort_one_hot/10000
time: [87.918 us 88.335 us 88.737 us]
change: [-3.7658% -3.2375% -2.7066%] (p = 0.00 < 0.05)
Performance has improved.
Benchmarking argsort/sample_weight, size=10000
Benchmarking argsort/sample_weight, size=10000: Warming up for 3.0000 s
Benchmarking argsort/sample_weight, size=10000: Collecting 100 samples in estimated 5.4432 s (30k iterations)
Benchmarking argsort/sample_weight, size=10000: Analyzing
argsort/sample_weight, size=10000
time: [180.80 us 181.41 us 181.99 us]
change: [-5.6198% -5.1232% -4.6308%] (p = 0.00 < 0.05)
Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) low mild
Benchmarking argsort/sample_indices_from_weights, size=10000
Benchmarking argsort/sample_indices_from_weights, size=10000: Warming up for 3.0000 s
Benchmarking argsort/sample_indices_from_weights, size=10000: Collecting 100 samples in estimated 5.3596 s (50k iterations)
Benchmarking argsort/sample_indices_from_weights, size=10000: Analyzing
argsort/sample_indices_from_weights, size=10000
time: [105.11 us 105.36 us 105.60 us]
change: [-3.1431% -2.7386% -2.3162%] (p = 0.00 < 0.05)
Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
2 (2.00%) low mild
3 (3.00%) high mild
Benchmarking argsort/oob_samples_from_weights, size=10000
Benchmarking argsort/oob_samples_from_weights, size=10000: Warming up for 3.0000 s
Benchmarking argsort/oob_samples_from_weights, size=10000: Collecting 100 samples in estimated 5.1426 s (116k iterations)
Benchmarking argsort/oob_samples_from_weights, size=10000: Analyzing
argsort/oob_samples_from_weights, size=10000
time: [43.120 us 43.273 us 43.428 us]
change: [-4.3595% -3.9207% -3.5132%] (p = 0.00 < 0.05)
Performance has improved.
Benchmarking argsort/argsort_continuous/100000
Benchmarking argsort/argsort_continuous/100000: Warming up for 3.0000 s
Benchmarking argsort/argsort_continuous/100000: Collecting 100 samples in estimated 6.5469 s (400 iterations)
Benchmarking argsort/argsort_continuous/100000: Analyzing
argsort/argsort_continuous/100000
time: [16.423 ms 16.485 ms 16.546 ms]
change: [+51.802% +52.474% +53.105%] (p = 0.00 < 0.05)
Performance has regressed.
Benchmarking argsort/argsort_one_hot/100000
Benchmarking argsort/argsort_one_hot/100000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 5.2s, enable flat sampling, or reduce sample count to 60.
Benchmarking argsort/argsort_one_hot/100000: Collecting 100 samples in estimated 5.2193 s (5050 iterations)
Benchmarking argsort/argsort_one_hot/100000: Analyzing
argsort/argsort_one_hot/100000
time: [1.0226 ms 1.0258 ms 1.0288 ms]
change: [-23.345% -23.032% -22.716%] (p = 0.00 < 0.05)
Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high mild
Benchmarking argsort/sample_weight, size=100000
Benchmarking argsort/sample_weight, size=100000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 6.5s, enable flat sampling, or reduce sample count to 60.
Benchmarking argsort/sample_weight, size=100000: Collecting 100 samples in estimated 6.4582 s (5050 iterations)
Benchmarking argsort/sample_weight, size=100000: Analyzing
argsort/sample_weight, size=100000
time: [1.2706 ms 1.2736 ms 1.2766 ms]
change: [-0.2038% +0.2651% +0.7174%] (p = 0.26 > 0.05)
No change in performance detected.
Benchmarking argsort/sample_indices_from_weights, size=100000
Benchmarking argsort/sample_indices_from_weights, size=100000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 7.1s, enable flat sampling, or reduce sample count to 50.
Benchmarking argsort/sample_indices_from_weights, size=100000: Collecting 100 samples in estimated 7.0850 s (5050 iterations)
Benchmarking argsort/sample_indices_from_weights, size=100000: Analyzing
argsort/sample_indices_from_weights, size=100000
time: [1.3802 ms 1.3859 ms 1.3921 ms]
change: [-2.1769% -1.5761% -0.9605%] (p = 0.00 < 0.05)
Change within noise threshold.
Benchmarking argsort/oob_samples_from_weights, size=100000
Benchmarking argsort/oob_samples_from_weights, size=100000: Warming up for 3.0000 s
Benchmarking argsort/oob_samples_from_weights, size=100000: Collecting 100 samples in estimated 7.4265 s (15k iterations)
Benchmarking argsort/oob_samples_from_weights, size=100000: Analyzing
argsort/oob_samples_from_weights, size=100000
time: [489.68 us 491.38 us 493.10 us]
change: [-1.6254% -1.1573% -0.6621%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) low mild
Benchmarking argsort/argsort_continuous/1000000
Benchmarking argsort/argsort_continuous/1000000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 21.1s, or reduce sample count to 20.
Benchmarking argsort/argsort_continuous/1000000: Collecting 100 samples in estimated 21.079 s (100 iterations)
Benchmarking argsort/argsort_continuous/1000000: Analyzing
argsort/argsort_continuous/1000000
time: [223.35 ms 224.81 ms 226.20 ms]
change: [+70.409% +71.754% +73.015%] (p = 0.00 < 0.05)
Performance has regressed.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) low mild
Benchmarking argsort/argsort_one_hot/1000000
Benchmarking argsort/argsort_one_hot/1000000: Warming up for 3.0000 s
Benchmarking argsort/argsort_one_hot/1000000: Collecting 100 samples in estimated 5.2668 s (500 iterations)
Benchmarking argsort/argsort_one_hot/1000000: Analyzing
argsort/argsort_one_hot/1000000
time: [10.465 ms 10.509 ms 10.553 ms]
change: [+5.3742% +5.9750% +6.5720%] (p = 0.00 < 0.05)
Performance has regressed.
Benchmarking argsort/sample_weight, size=1000000
Benchmarking argsort/sample_weight, size=1000000: Warming up for 3.0000 s
Benchmarking argsort/sample_weight, size=1000000: Collecting 100 samples in estimated 5.2993 s (600 iterations)
Benchmarking argsort/sample_weight, size=1000000: Analyzing
argsort/sample_weight, size=1000000
time: [8.7805 ms 8.8160 ms 8.8525 ms]
change: [-3.3162% -2.7423% -2.1684%] (p = 0.00 < 0.05)
Performance has improved.
Benchmarking argsort/sample_indices_from_weights, size=1000000
Benchmarking argsort/sample_indices_from_weights, size=1000000: Warming up for 3.0000 s
Benchmarking argsort/sample_indices_from_weights, size=1000000: Collecting 100 samples in estimated 5.6539 s (400 iterations)
Benchmarking argsort/sample_indices_from_weights, size=1000000: Analyzing
argsort/sample_indices_from_weights, size=1000000
time: [14.087 ms 14.159 ms 14.231 ms]
change: [-4.9967% -4.4051% -3.9018%] (p = 0.00 < 0.05)
Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
Benchmarking argsort/oob_samples_from_weights, size=1000000
Benchmarking argsort/oob_samples_from_weights, size=1000000: Warming up for 3.0000 s
Benchmarking argsort/oob_samples_from_weights, size=1000000: Collecting 100 samples in estimated 5.4853 s (1100 iterations)
Benchmarking argsort/oob_samples_from_weights, size=1000000: Analyzing
argsort/oob_samples_from_weights, size=1000000
time: [5.0249 ms 5.0364 ms 5.0470 ms]
change: [+3.1074% +3.5555% +3.9981%] (p = 0.00 < 0.05)
Performance has regressed.
Found 5 outliers among 100 measurements (5.00%)
1 (1.00%) low severe
4 (4.00%) low mild
|
gharchive/pull-request
| 2022-01-04T16:26:18 |
2025-04-01T06:41:08.771955
|
{
"authors": [
"mlondschien"
],
"repo": "mlondschien/biosphere",
"url": "https://github.com/mlondschien/biosphere/pull/31",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2403582025
|
Add test cases with actual TfL results
resolved with #49
|
gharchive/issue
| 2024-07-11T16:20:14 |
2025-04-01T06:41:08.773818
|
{
"authors": [
"mnbf9rca"
],
"repo": "mnbf9rca/pydantic_tfl_api",
"url": "https://github.com/mnbf9rca/pydantic_tfl_api/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2642309108
|
add static analysis section to README.md
Commented back.
I updated the description to make it more accurate and concise. I don't think we need to provide a concrete Gradle command, since end users should know the basics of Gradle usage. As long as we give the relevant Gradle tasks, it is good enough. It also keeps the section short and to the point.
|
gharchive/pull-request
| 2024-11-07T22:06:36 |
2025-04-01T06:41:08.781490
|
{
"authors": [
"NathanQingyangXu"
],
"repo": "mongodb/mongo-hibernate",
"url": "https://github.com/mongodb/mongo-hibernate/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2054760380
|
How can the system-log scenario automatically capture the invoked method name, request parameters, URL, IP, elapsed time, and so on?
I've already implemented this part; you can refer to
https://toscode.mulanos.cn/miaoyinjun/jjche-boot
You implemented another annotation yourself, right? What I meant was: based on this framework, is there a way to extend it to automatically capture the invoked method name, request parameters, URL, IP, elapsed time, and so on?
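For illustration only, a minimal Python sketch of the idea being asked about, wrapping a call to capture the method name, arguments, and elapsed time (the framework under discussion is Java; every name below is hypothetical):
import functools
import time

def log_call(func):
    # record the invoked method name, arguments, and elapsed time per call
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{func.__qualname__} args={args!r} kwargs={kwargs!r} took {elapsed_ms:.1f} ms")
    return wrapper

@log_call
def handle_request(url, ip):  # hypothetical handler
    return "ok"

handle_request("/orders", ip="10.0.0.1")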
|
gharchive/issue
| 2023-12-23T11:32:02 |
2025-04-01T06:41:08.788058
|
{
"authors": [
"dengliming",
"miaoyinjun"
],
"repo": "mouzt/mzt-biz-log",
"url": "https://github.com/mouzt/mzt-biz-log/issues/146",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2420575073
|
pulling latest changes
pulling latest
|
gharchive/pull-request
| 2024-07-20T05:10:53 |
2025-04-01T06:41:08.792010
|
{
"authors": [
"haydarmajeed"
],
"repo": "mrwadams/stride-gpt",
"url": "https://github.com/mrwadams/stride-gpt/pull/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1143776056
|
fix: Fixes on slide ingestion
Codecov Report
Merging #283 (6f75193) into dev (f43337c) will increase coverage by 2.28%.
The diff coverage is 100.00%.
@@            Coverage Diff             @@
##              dev     #283      +/-   ##
==========================================
+ Coverage   70.75%   73.03%   +2.28%
==========================================
  Files         100       51      -49
  Lines        5057     3542    -1515
==========================================
- Hits         3578     2587     -991
+ Misses       1479      955     -524
Impacted Files | Coverage Δ
pyluna-pathology/luna/pathology/cli/slide_etl.py | 90.47% <100.00%> (+0.18%) :arrow_up:
...y/proxy_table/regional_annotation/test_generate.py
...pathology/tests/luna/pathology/cli/test_dsa_viz.py
...a-pathology/tests/luna/pathology/common/test_ml.py
...tests/luna/radiology/cli/test_extract_radiomics.py
...hology/tests/luna/pathology/cli/test_save_tiles.py
...ogy/tests/luna/pathology/common/test_preprocess.py
pyluna-core/tests/luna/project/test_generate.py
...thology/tests/luna/pathology/spatial/test_stats.py
...logy_annotations/test_get_pathology_annotations.py
... and 40 more
Continue to review the full report at Codecov.
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f43337c...6f75193. Read the comment docs.
|
gharchive/pull-request
| 2022-02-18T21:04:56 |
2025-04-01T06:41:08.807892
|
{
"authors": [
"aauker",
"codecov-commenter"
],
"repo": "msk-mind/luna",
"url": "https://github.com/msk-mind/luna/pull/283",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2753811525
|
Could you please provide the Canny Control dataset?
Hi
You can just run the canny algorithm in the opencv package on the satellite images to create the dataset.
Thanks
With default parameters?
All default parameters except the 1st threshold set to 100 and the 2nd threshold set to 200.
Probably worth throwing a script in to do that so people can replicate results.
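As a minimal sketch of that replication step with plain OpenCV, using the thresholds quoted above (file names are hypothetical):
import cv2

img = cv2.imread("satellite_tile.jpg")  # a satellite image, read as BGR
edges = cv2.Canny(img, 100, 200)  # 1st threshold 100, 2nd threshold 200
cv2.imwrite("satellite_tile_canny.png", edges)  # single-channel edge map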
Could you please provide the Canny dataset? The effect I generated myself is not good. Thanks!
Below I'm sharing a script to load the Canny dataset. After reading the satellite image, I run the canny detector algorithm defined in the ControlNet repo (which uses opencv):
import json
import cv2
import numpy as np
from ..ControlNet.annotator.canny import CannyDetector
from ..ControlNet.annotator.util import HWC3
from torch.utils.data import Dataset

class CannyDataset(Dataset):  # distinct name avoids shadowing torch.utils.data.Dataset
    def __init__(self, prompt_path):
        self.data = json.load(open(prompt_path, "rt"))
        self.apply_canny = CannyDetector()

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]
        target_filename = item["target"]
        prompt = item["prompt"]
        target = cv2.imread("../" + target_filename)
        # Do not forget that OpenCV reads images in BGR order.
        target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB)
        # Apply Canny Edge Algorithm
        source = self.apply_canny(target, 100, 200)
        source = HWC3(source)
        # Normalize source images to [0, 1].
        source = source.astype(np.float32) / 255.0
        # Normalize target images to [-1, 1].
        target = (target.astype(np.float32) / 127.5) - 1.0
        return dict(jpg=target, txt=prompt, hint=source)
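A hedged usage sketch for the class above (the prompt.json path and batch size are assumptions, and the relative ControlNet imports must resolve in your project):
from torch.utils.data import DataLoader

dataset = CannyDataset("prompt.json")  # hypothetical prompt file
loader = DataLoader(dataset, batch_size=4, shuffle=True)
batch = next(iter(loader))
print(batch["jpg"].shape, batch["hint"].shape)  # target images and Canny hints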
Thanks for the code, I will try it out and thanks again for this great work.
|
gharchive/issue
| 2024-12-21T06:55:46 |
2025-04-01T06:41:08.839537
|
{
"authors": [
"Li-Jihong",
"Vishu26",
"jacobsn",
"yinliaoabc"
],
"repo": "mvrl/GeoSynth",
"url": "https://github.com/mvrl/GeoSynth/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1990231862
|
Cookie trouble when running Nais Teams locally, probably SameSite-related
Fixed in https://github.com/nais/teams-backend/commit/ec00797163e96365913f52967222c2dff4bcea15
|
gharchive/issue
| 2023-11-13T09:27:16 |
2025-04-01T06:41:08.940029
|
{
"authors": [
"sechmann",
"tronghn"
],
"repo": "nais/teams-backend",
"url": "https://github.com/nais/teams-backend/issues/157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1996220638
|
Implement $+
This should resolve a few compile errors in the nightly specs (library/defined, library/predefined and English)
|
gharchive/pull-request
| 2023-11-16T07:20:31 |
2025-04-01T06:41:08.946218
|
{
"authors": [
"herwinw"
],
"repo": "natalie-lang/natalie",
"url": "https://github.com/natalie-lang/natalie/pull/1491",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2080509928
|
reduce amount of CI builds
thanks
|
gharchive/pull-request
| 2024-01-13T23:25:33 |
2025-04-01T06:41:08.953188
|
{
"authors": [
"diceroll123",
"nathanielfernandes"
],
"repo": "nathanielfernandes/imagetext-py",
"url": "https://github.com/nathanielfernandes/imagetext-py/pull/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1696790567
|
Update JwtUtils to support complete "nats" Data Structure.
As we discussed over the Slack channel, please consider authenticating NATS using a user JWT; let me add a snippet.
private static Connection connectNatsUsingJwt(String userJwt) throws IOException, InterruptedException, GeneralSecurityException {
    Options options = new Options.Builder()
            .server("nats://localhost:4222")
            .authHandler(new AuthHandler() {
                @Override
                public byte[] sign(byte[] nonce) {
                    return new byte[0];
                }

                @Override
                public char[] getID() {
                    return new char[0];
                }

                @Override
                public char[] getJWT() {
                    return userJwt.toCharArray();
                }
            }).build();

    Connection nc = Nats.connect(options);
    System.out.println("Connected to NATS server");
    return nc;
}
|
gharchive/issue
| 2023-05-04T22:39:51 |
2025-04-01T06:41:08.955535
|
{
"authors": [
"netesh3",
"scottf"
],
"repo": "nats-io/nats.java",
"url": "https://github.com/nats-io/nats.java/issues/903",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
857822235
|
Deploy of 20210414134818-b598f13
/promote dev-gcp
/promote dev-sbs
|
gharchive/issue
| 2021-04-14T11:54:32 |
2025-04-01T06:41:08.957888
|
{
"authors": [
"jan-olaveide"
],
"repo": "navikt/foreldrepengesoknad",
"url": "https://github.com/navikt/foreldrepengesoknad/issues/2693",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
350194589
|
Figure out blog posts for marketing Civic Camp
https://docs.google.com/document/d/1H4V5DmP4OEIGkGE_GiK9DblUudXb4TkT_F_3TuaBPQ0/edit?usp=sharing
Can you give me a deadline for my post?
|
gharchive/issue
| 2018-08-13T21:07:15 |
2025-04-01T06:41:08.963786
|
{
"authors": [
"dtraleigh",
"jcwlib"
],
"repo": "ncopenpass/NCOpenPass",
"url": "https://github.com/ncopenpass/NCOpenPass/issues/1132",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2038188703
|
Better menu
@gagdiez I don't see any changes here? 🤔
|
gharchive/pull-request
| 2023-12-12T16:54:51 |
2025-04-01T06:41:08.965422
|
{
"authors": [
"bucanero",
"gagdiez"
],
"repo": "near/docs",
"url": "https://github.com/near/docs/pull/1623",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2207701093
|
Add go mod vendor check in CI
/approve
|
gharchive/pull-request
| 2024-03-26T09:26:24 |
2025-04-01T06:41:08.972930
|
{
"authors": [
"jotak"
],
"repo": "netobserv/network-observability-operator",
"url": "https://github.com/netobserv/network-observability-operator/pull/603",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1388641440
|
Does NSM have plans to use the k8s CNI to communicate? Some pods' functionality is bound to the k8s CNI; without it, those pods cannot perform their function.
Could you say more?
I think it'd be more concrete if you could provide the following points:
1. scenario
2. actual behavior
3. expected behavior
|
gharchive/issue
| 2022-09-28T02:53:27 |
2025-04-01T06:41:08.974730
|
{
"authors": [
"denis-tingaikin",
"mayulin123456"
],
"repo": "networkservicemesh/deployments-k8s",
"url": "https://github.com/networkservicemesh/deployments-k8s/issues/7469",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2594750658
|
Need a vignette for covid19 data
Duplicate #44
|
gharchive/issue
| 2024-10-17T13:28:55 |
2025-04-01T06:41:08.977402
|
{
"authors": [
"Lextuga007",
"tomjemmett"
],
"repo": "nhs-r-community/NHSRdatasets",
"url": "https://github.com/nhs-r-community/NHSRdatasets/issues/91",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1698886952
|
New feature: random password generation tool.
Associated Commits
4905cba154f4ec0323d38c42f2471091e0636eb3
|
gharchive/issue
| 2023-05-07T04:32:42 |
2025-04-01T06:41:08.978167
|
{
"authors": [
"netowls-studio"
],
"repo": "niacomsoft/disco",
"url": "https://github.com/niacomsoft/disco/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1292067786
|
fix yvals_core.h #error 2
thanks for the pr but i've fixed that bug
|
gharchive/pull-request
| 2022-07-02T14:27:37 |
2025-04-01T06:41:08.999069
|
{
"authors": [
"TRDP1404",
"nifanfa"
],
"repo": "nifanfa/MOOS",
"url": "https://github.com/nifanfa/MOOS/pull/36",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1198712483
|
add config
Codecov Report
Merging #27 (bb899da) into master (f376656) will increase coverage by 1.85%.
The diff coverage is 93.54%.
@@ Coverage Diff @@
## master #27 +/- ##
==========================================
+ Coverage 81.96% 83.81% +1.85%
==========================================
Files 62 72 +10
Lines 1558 1718 +160
Branches 242 189 -53
==========================================
+ Hits 1277 1440 +163
+ Misses 266 257 -9
- Partials 15 21 +6
Impacted Files                                     Coverage Δ
vedro/plugins/skipper/_skipper.py                  20.61% <0.00%> (ø)
vedro/__init__.py                                  70.37% <50.00%> (+11.39%) :arrow_up:
vedro/core/_lifecycle.py                           89.47% <71.42%> (-10.53%) :arrow_down:
vedro/plugins/director/_director_init_event.py     85.71% <85.71%> (ø)
vedro/core/_module_loader/_module_file_loader.py   92.59% <92.59%> (ø)
vedro/_config.py                                   100.00% <100.00%> (ø)
vedro/_context.py                                  75.00% <100.00%> (+8.33%) :arrow_up:
vedro/_interface.py                                100.00% <100.00%> (ø)
vedro/core/__init__.py                             100.00% <100.00%> (ø)
vedro/core/_config_loader/__init__.py              100.00% <100.00%> (ø)
... and 21 more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
|
gharchive/pull-request
| 2022-04-09T16:52:50 |
2025-04-01T06:41:09.017586
|
{
"authors": [
"codecov-commenter",
"nikitanovosibirsk"
],
"repo": "nikitanovosibirsk/vedro",
"url": "https://github.com/nikitanovosibirsk/vedro/pull/27",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1065624316
|
Decouple dockerfile backend from gRPC server.
Codecov Report
Merging #7 (0ccf0cf) into develop (a88acaf) will increase coverage by 8.44%.
The diff coverage is 96.77%.
@@ Coverage Diff @@
## develop #7 +/- ##
===========================================
+ Coverage 43.96% 52.41% +8.44%
===========================================
Files 9 7 -2
Lines 257 145 -112
===========================================
- Hits 113 76 -37
+ Misses 132 68 -64
+ Partials 12 1 -11
Flag        Coverage Δ
unittests   52.41% <96.77%> (+8.44%) :arrow_up:
Flags with carried forward coverage won't be shown.
Impacted Files                         Coverage Δ
pkg/server/docker/from.go              80.00% <85.71%> (+2.85%) :arrow_up:
pkg/server/docker/add.go               100.00% <100.00%> (ø)
pkg/server/docker/config.go            100.00% <100.00%> (+64.28%) :arrow_up:
pkg/server/docker/copy.go              100.00% <100.00%> (ø)
pkg/server/docker/run.go               100.00% <100.00%> (ø)
pkg/server/docker/server.go            100.00% <100.00%> (ø)
pkg/backend/dockerfile/container.go
pkg/backend/dockerfile/store.go
|
gharchive/pull-request
| 2021-11-29T05:21:58 |
2025-04-01T06:41:09.032561
|
{
"authors": [
"codecov-commenter",
"tjholm"
],
"repo": "nitrictech/boxygen",
"url": "https://github.com/nitrictech/boxygen/pull/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1846129733
|
87 redefine containment
Codecov Report
Merging #88 (74125d6) into main (5deec81) will increase coverage by 0.17%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## main #88 +/- ##
==========================================
+ Coverage 96.72% 96.90% +0.17%
==========================================
Files 23 23
Lines 3909 3937 +28
==========================================
+ Hits 3781 3815 +34
+ Misses 128 122 -6
Files Changed                      Coverage Δ
src/traits/interval/overlap.rs     100.00% <100.00%> (ø)
src/traits/interval/subtract.rs    99.31% <100.00%> (+2.36%) :arrow_up:
|
gharchive/pull-request
| 2023-08-11T03:02:48 |
2025-04-01T06:41:09.041920
|
{
"authors": [
"codecov-commenter",
"noamteyssier"
],
"repo": "noamteyssier/bedrs",
"url": "https://github.com/noamteyssier/bedrs/pull/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1565000007
|
add screenshot to repo readme
added in #126
|
gharchive/issue
| 2023-01-31T21:02:25 |
2025-04-01T06:41:09.052523
|
{
"authors": [
"ntno"
],
"repo": "ntno/mkdocs-terminal",
"url": "https://github.com/ntno/mkdocs-terminal/issues/122",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2207092921
|
Update URL in code sample in README.md
oops, good catch. thanks!
|
gharchive/pull-request
| 2024-03-26T02:12:32 |
2025-04-01T06:41:09.061881
|
{
"authors": [
"abusch",
"fdncred"
],
"repo": "nushell/nu_plugin_template",
"url": "https://github.com/nushell/nu_plugin_template/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2220137628
|
Get variables from oTree server
It is possible to decode the base64 variable encoding. In oTree the base64 string is generated as follows (in database.py):
binascii.b2a_base64(pickle.dumps(dict(value))).decode('utf-8')
and loaded back with:
pickle.loads(binascii.a2b_base64(value.encode('utf-8')))
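A quick round-trip sketch of that encode/decode pair (the sample dict is only an illustration, not an actual oTree payload):

import binascii
import pickle

value = {"player_id": 7, "ready": True}  # hypothetical participant variables

# Encode the way oTree's database.py generates the base64 string.
encoded = binascii.b2a_base64(pickle.dumps(dict(value))).decode("utf-8")

# Decode it back on the consuming side.
decoded = pickle.loads(binascii.a2b_base64(encoded.encode("utf-8")))

assert decoded == value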
|
gharchive/issue
| 2024-04-02T10:27:28 |
2025-04-01T06:41:09.067055
|
{
"authors": [
"recap"
],
"repo": "obeliss-nlesc/otree-waiting-room",
"url": "https://github.com/obeliss-nlesc/otree-waiting-room/issues/47",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1306444969
|
Average medal always returns at least 1 star
Wait how are we getting 0 here?
If avgStars mod 5 is less than 0.5
|
gharchive/pull-request
| 2022-07-15T20:11:04 |
2025-04-01T06:41:09.082113
|
{
"authors": [
"builder-247",
"howardchung"
],
"repo": "odota/core",
"url": "https://github.com/odota/core/pull/2583",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1168001308
|
refactor: propagate context to downstream calls
Pull Request Test Coverage Report for Build 1979249308
7 of 8 (87.5%) changed or added relevant lines in 3 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 72.72%
Changes Missing Coverage    Covered Lines    Changed/Added Lines    %
run/service.go              2                3                      66.67%

Totals
Change from base Build 1963843028: 0.0%
Covered Lines: 5158
Relevant Lines: 7093
💛 - Coveralls
@sbchaos let's resolve the conflicts
|
gharchive/pull-request
| 2022-03-14T07:41:52 |
2025-04-01T06:41:09.088204
|
{
"authors": [
"coveralls",
"sbchaos",
"sravankorumilli"
],
"repo": "odpf/optimus",
"url": "https://github.com/odpf/optimus/pull/216",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1159824582
|
feat: GMP-449 minor fixes related to runner environment
Update the commit message to reflect the "fix" (not the feature!)
|
gharchive/pull-request
| 2022-03-04T16:27:17 |
2025-04-01T06:41:09.090621
|
{
"authors": [
"lucky-edu",
"rbrisuda"
],
"repo": "ohpensource/platform-cicd",
"url": "https://github.com/ohpensource/platform-cicd/pull/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1492570781
|
Readme, docs
Readme: https://github.com/ojo-network/price-feeder#readme
|
gharchive/issue
| 2022-12-12T19:13:34 |
2025-04-01T06:41:09.091509
|
{
"authors": [
"adamewozniak"
],
"repo": "ojo-network/price-feeder",
"url": "https://github.com/ojo-network/price-feeder/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2330205213
|
issue35
done
|
gharchive/issue
| 2024-06-03T05:28:13 |
2025-04-01T06:41:09.103498
|
{
"authors": [
"FamousTiger"
],
"repo": "onlinehub0808/onlinement",
"url": "https://github.com/onlinehub0808/onlinement/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1386251741
|
Store events metadata
what is the goal of the origin field? Can't it be included in the metadata json?
The idea behind the origin field is to discriminate a bit between the different metadata.
Events from starknet will have different metadata than events from the API or from another chain.
But we cannot know which type of metadata it will be just based on the aggregate type itself. So this extra field allows us to programmatically discriminate between these different metadata schemas.
It can be included in the json, but I think it's better to have it outside in order to know the target before deserialization.
It's also more high-level/domain-related than block_number or transaction_hash. Like timestamp it is common to all events, regardless of their nature/origin. For those reasons I think it's better to have it flattened.
I think the concept of event metadata should only exist at the infrastructure layer.
StorableEvent is defined in the domain. It contains everything to be stored with an event.
If we want to take the metadata out of StorableEvent, we change the Store trait
fn append(&self, aggregate_id: &A::Id, events: Vec<StorableEvent<A>>) -> Result<(), Error>;
can become
fn append(&self, aggregate_id: &A::Id, events: Vec<(StorableEvent<A>, json::Value)>) -> Result<(), Error>;
But this trait is still defined in the domain, so it changes nothing regarding your concern.
|
gharchive/pull-request
| 2022-09-26T15:08:18 |
2025-04-01T06:41:09.107045
|
{
"authors": [
"tdelabro"
],
"repo": "onlydustxyz/marketplace-backend",
"url": "https://github.com/onlydustxyz/marketplace-backend/pull/252",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1724239974
|
bump to 0.12.0
/lgtm
|
gharchive/pull-request
| 2023-05-24T15:16:29 |
2025-04-01T06:41:09.108330
|
{
"authors": [
"xuezhaojun",
"zhiweiyin318"
],
"repo": "open-cluster-management-io/registration-operator",
"url": "https://github.com/open-cluster-management-io/registration-operator/pull/356",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1866351163
|
use path join
I'm not sure whether it is best practice in Rust. You can ask for a review from @dae.
The rest of the changes here seem reasonable, and it actually wouldn't be a big deal if this were merged without waiting - once Burn gets updated, it would just be a case of removing the .to_str().unwrap() to get non-UTF8 support.
|
gharchive/pull-request
| 2023-08-25T05:53:33 |
2025-04-01T06:41:09.110684
|
{
"authors": [
"L-M-Sherlock",
"asukaminato0721",
"dae"
],
"repo": "open-spaced-repetition/fsrs-optimizer-burn",
"url": "https://github.com/open-spaced-repetition/fsrs-optimizer-burn/pull/30",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2191255365
|
Finalize 2.1.1_spec
Reviewer is @PrathibaJee
|
gharchive/pull-request
| 2024-03-18T05:18:19 |
2025-04-01T06:41:09.114098
|
{
"authors": [
"ManasaBM1"
],
"repo": "openBackhaul/AdministratorAdministration",
"url": "https://github.com/openBackhaul/AdministratorAdministration/pull/319",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1554737867
|
Update oas to latest pattern
update to latest pattern before adding individual mwdi changes
correct linting
|
gharchive/issue
| 2023-01-24T10:47:08 |
2025-04-01T06:41:09.114819
|
{
"authors": [
"kmohr-soprasteria"
],
"repo": "openBackhaul/MicroWaveDeviceInventory",
"url": "https://github.com/openBackhaul/MicroWaveDeviceInventory/issues/54",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1528014047
|
ADR: Analytic Reporting and Display Tech Selection
https://github.com/openedx/openedx-oars/pull/37 is in review
https://github.com/openedx/openedx-oars/blob/main/docs/decisions/0003_superset.rst
|
gharchive/issue
| 2023-01-10T21:11:22 |
2025-04-01T06:41:09.120985
|
{
"authors": [
"bmtcril"
],
"repo": "openedx/openedx-oars",
"url": "https://github.com/openedx/openedx-oars/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2358537122
|
update probability of failure for binfhe paramsets
Benchmark Time CPU Iterations
FHEW_BINGATE2/TOY_2_GINX_OR 1.98 ms 1.98 ms 355
FHEW_BINGATE2/MEDIUM_2_GINX_OR 24.4 ms 24.4 ms 29
FHEW_BINGATE2/STD128_2_AP_OR 46.9 ms 46.9 ms 15
FHEW_BINGATE2/STD128_2_GINX_OR 28.9 ms 28.9 ms 24
FHEW_BINGATE3/STD128_3_GINX_OR 46.1 ms 46.1 ms 15
FHEW_BINGATE4/STD128_4_GINX_OR 46.0 ms 46.0 ms 15
FHEW_BINGATE2/STD128Q_2_GINX_OR 41.2 ms 41.2 ms 17
FHEW_BINGATE2/STD256Q_2_GINX_OR 192 ms 192 ms 4
FHEW_BINGATE3/STD256Q_3_GINX_OR 274 ms 274 ms 3
FHEW_BINGATE4/STD256Q_4_GINX_OR 317 ms 317 ms 2
FHEW_BINGATE2/STD128_2_LMKCDEY_OR 31.5 ms 31.5 ms 22
FHEW_BINGATE2/STD128Q_2_LMKCDEY_OR 33.8 ms 33.8 ms 21
FHEW_BINGATE2/STD256Q_2_LMKCDEY_OR 144 ms 144 ms 5
FHEW_BINGATE2/LPF_STD128_2_GINX_OR 42.9 ms 42.9 ms 16
FHEW_BINGATE2/LPF_STD128Q_2_GINX_OR 49.5 ms 49.5 ms 14
FHEW_BINGATE2/LPF_STD128_2_LMKCDEY_OR 38.4 ms 38.4 ms 18
FHEW_BINGATE2/LPF_STD128Q_2_LMKCDEY_OR 54.4 ms 54.4 ms 13
|
gharchive/pull-request
| 2024-06-18T00:12:49 |
2025-04-01T06:41:09.125554
|
{
"authors": [
"pascoec"
],
"repo": "openfheorg/openfhe-development",
"url": "https://github.com/openfheorg/openfhe-development/pull/804",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2570515416
|
chore: bumping version to 1.10.8
@oscerd can you merge this PR?
|
gharchive/pull-request
| 2024-10-07T13:53:17 |
2025-04-01T06:41:09.131186
|
{
"authors": [
"claudio4j"
],
"repo": "openshift-integration/kamelet-catalog",
"url": "https://github.com/openshift-integration/kamelet-catalog/pull/347",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1202609300
|
:robot: Triggering CI on branch 'release-next' after synching to upstream/main
OCF Webhook is merging this PR
|
gharchive/pull-request
| 2022-04-13T00:35:17 |
2025-04-01T06:41:09.131852
|
{
"authors": [
"serverless-qe"
],
"repo": "openshift-knative/eventing-kafka",
"url": "https://github.com/openshift-knative/eventing-kafka/pull/644",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1619797505
|
:robot: Triggering CI on branch 'release-next' after synching to upstream/main
OCF Webhook is merging this PR
|
gharchive/pull-request
| 2023-03-11T00:34:45 |
2025-04-01T06:41:09.132534
|
{
"authors": [
"serverless-qe"
],
"repo": "openshift-knative/serving",
"url": "https://github.com/openshift-knative/serving/pull/213",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1613349800
|
[release-4.13] OCPBUGS-8475: Disable TestBoundTokenSignerController
/jira refresh
/retest
/jira refresh
/lgtm
/approve
/retest-required
/label cherry-pick-approved
|
gharchive/pull-request
| 2023-03-07T12:33:18 |
2025-04-01T06:41:09.135314
|
{
"authors": [
"dgrisonnet",
"gangwgr",
"mfojtik"
],
"repo": "openshift/cluster-kube-apiserver-operator",
"url": "https://github.com/openshift/cluster-kube-apiserver-operator/pull/1457",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|