id (string, lengths 4–10) | text (string, lengths 4–2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
475371886
|
clean up unit tests
Improvement Description
The unit tests for this plugin are overgrown. It is time to get organized.
Current Behavior
At the time of writing, the unit tests are 1546 lines long. Some cleanup of the code may also improve test runtime.
Proposed Behavior
split up test_classifier.py into multiple thematic test modules. E.g., types/formats should be tested separately. Visualizations and utilities could also be tested separately.
split up or combine some test classes, based on shared test data
clean up the test data to simplify tests where possible. E.g., the full-sized datasets could be replaced by toy datasets for more tests
a. TestHeatmap should use toy data, the tests take close to 2 minutes to run for a fairly trivial set of tests.
Note: @Oddant1 mostly addressed this in #188
The remaining task is to transition other tests to a toy dataset (e.g., the tests that loop through all estimators could use a toy dataset)... but for now we can wait and see if we still need test runtime shortened. If not, we can close this issue.
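As a rough illustration of what "transition other tests to a toy dataset" could look like, here is a minimal pytest sketch; all names (toy_table, toy_labels, the estimator list) are hypothetical, not q2-sample-classifier's actual fixtures or API:

```python
# Hypothetical sketch of the toy-dataset idea: a tiny shared fixture for the
# estimator-loop tests instead of the full-sized datasets. Names are
# illustrative only, not the plugin's real test code.
import numpy as np
import pandas as pd
import pytest
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression


@pytest.fixture(scope="module")
def toy_table():
    # 20 samples x 8 features is enough to exercise fit/predict code paths.
    rng = np.random.default_rng(seed=0)
    counts = rng.poisson(lam=5, size=(20, 8))
    return pd.DataFrame(counts, columns=[f"feat{i}" for i in range(8)])


@pytest.fixture(scope="module")
def toy_labels():
    return pd.Series(["a"] * 10 + ["b"] * 10, name="group")


@pytest.mark.parametrize("estimator", [RandomForestClassifier, LogisticRegression])
def test_estimator_runs(estimator, toy_table, toy_labels):
    model = estimator().fit(toy_table, toy_labels)
    assert model.predict(toy_table).shape == (20,)
```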
Just wanted to add here in regards to "if test runtime is shortened we can close this issue" test runtime was cut from ~145 seconds to ~70-75 seconds on my machine.
Are we interested in shortening the test runtime here any further with a toy dataset? I don't really know how to build a toy dataset for this, but test runtime is down from ~145 seconds to ~70-75 seconds.
hey @Oddant1 — I have heard no more complaints from @thermokarst about these tests holding up busywork.... I am happy to close this issue if @thermokarst is happy with the current test runtime, since I agree at this point we can't expect much more than shaving off maybe 20 more sec...
No complaints here, ATM. Just wait, I'm sure I'll come up with somethin...
|
gharchive/issue
| 2019-07-31T21:45:59 |
2025-04-01T06:40:09.326006
|
{
"authors": [
"Oddant1",
"nbokulich",
"thermokarst"
],
"repo": "qiime2/q2-sample-classifier",
"url": "https://github.com/qiime2/q2-sample-classifier/issues/167",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
227739912
|
API: check whether path is valid artifact or visualization
Improvement Description
This has come up on the forum and in q2cli -- it'd be handy to have an API to determine whether a file path is a valid QIIME 2 artifact or visualization. .peek() works but a more explicit API would make discovering/accessing this functionality easier.
I think this is probably staying in qiime tools peek
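For illustration, a minimal sketch of how the check can be approximated with the existing peek API today, assuming (as the thread implies) that peek raises an exception on paths that are not valid artifacts or visualizations:

```python
# Sketch only: wraps qiime2's peek in a boolean check. Assumption: peek
# raises on files that are not valid QIIME 2 artifacts/visualizations
# (the exact exception type may vary between releases).
from qiime2.sdk import Result


def is_qiime2_result(path: str) -> bool:
    """Return True if `path` is a loadable .qza/.qzv, False otherwise."""
    try:
        Result.peek(path)  # reads only metadata, without loading the data
    except Exception:
        return False
    return True
```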
|
gharchive/issue
| 2017-05-10T16:33:25 |
2025-04-01T06:40:09.327536
|
{
"authors": [
"Oddant1",
"jairideout"
],
"repo": "qiime2/qiime2",
"url": "https://github.com/qiime2/qiime2/issues/259",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1598105644
|
Can protostuff serialization be supported?
Can protostuff serialization be supported?
protostuff is a serialization method built on protobuf. Its most obvious advantage over protobuf is that, at almost no performance cost, it removes the need to write .proto files to implement serialization.
Not supported yet; as a temporary workaround you can use a custom script to implement the formatting, at the very bottom of the view.
We'll see how much demand there is later; if it's high, it can be added to the default views.
|
gharchive/issue
| 2023-02-24T07:58:07 |
2025-04-01T06:40:09.332476
|
{
"authors": [
"1076965238",
"qishibo"
],
"repo": "qishibo/AnotherRedisDesktopManager",
"url": "https://github.com/qishibo/AnotherRedisDesktopManager/issues/1042",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1860483026
|
Is this a bug? No password is configured in Redis, yet when configuring a connection with the Another Redis Desktop Manager tool, entering an arbitrary password still connects!
Version used:
latest, 1.6.1
With the cli you can also connect in the no-password case with any password you specify.
|
gharchive/issue
| 2023-08-22T03:31:20 |
2025-04-01T06:40:09.333463
|
{
"authors": [
"qishibo",
"startjava"
],
"repo": "qishibo/AnotherRedisDesktopManager",
"url": "https://github.com/qishibo/AnotherRedisDesktopManager/issues/1118",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1445302811
|
Improved content for Tutorial 8 (QAMP fall'22)
Summary
Enhancing the documentation of Tutorial "08_quantum_kernel_trainer" as a mentee for QAMP 2022.
Mentor: @ElePT
Mentee: @SanyaNanda
Details and comments
Restructured the notebook (Overview, Introduction, Objective, Tutorial, accuracy table and what was learned)
Imports placed where required
Added more explanations and inline comments for better understanding of the code
Hyperlinks to API references wherever required
Pull Request Test Coverage Report for Build 3444482012
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 88.934%
Totals
Change from base Build 3437545713: 0.0%
Covered Lines: 3544
Relevant Lines: 3985
💛 - Coveralls
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
Closing this PR whilst working on updating tutorials. For more info, see [here](https://github.com/qiskit-community/qiskit-machine-learning/pull/491#issuecomment-1979205893).
|
gharchive/pull-request
| 2022-11-11T10:56:38 |
2025-04-01T06:40:09.342144
|
{
"authors": [
"CLAassistant",
"SanyaNanda",
"coveralls",
"oscar-wallis"
],
"repo": "qiskit-community/qiskit-machine-learning",
"url": "https://github.com/qiskit-community/qiskit-machine-learning/pull/519",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1358303437
|
Test Issue - qsahm
Test Body - nyqcerfgkj
Test Comment - cratg
|
gharchive/issue
| 2022-09-01T06:08:50 |
2025-04-01T06:40:09.354814
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/11625",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1365093326
|
Test Issue - xoazp
Test Body - hveimhuvsf
Test Comment - snnay
|
gharchive/issue
| 2022-09-07T19:26:31 |
2025-04-01T06:40:09.355567
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/12119",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1394418489
|
Test Issue - qivhx
Test Body - tfotancrgf
Test Comment - rlyxb
|
gharchive/issue
| 2022-10-03T09:26:39 |
2025-04-01T06:40:09.356326
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/14038",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1414403866
|
Test Issue - fsnlc
Test Body - jaanftmzef
Test Comment - ludsh
|
gharchive/issue
| 2022-10-19T06:50:15 |
2025-04-01T06:40:09.357035
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/15219",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1496258696
|
Test Issue - sjyyf
Test Body - eauihqjkxj
Test Comment - wjibe
|
gharchive/issue
| 2022-12-14T10:14:54 |
2025-04-01T06:40:09.357732
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/18652",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1526882460
|
Test Issue - puesn
Test Body - dwhwxcsbtd
Test Comment - onjex
|
gharchive/issue
| 2023-01-10T07:14:28 |
2025-04-01T06:40:09.358438
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/20248",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1564697880
|
Test Issue - oqmgk
Test Body - almltrlijf
Test Comment - rsiwa
|
gharchive/issue
| 2023-01-31T17:06:13 |
2025-04-01T06:40:09.359165
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/21344",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1611286428
|
Test Issue - ermjw
Test Body - lghrnfijgo
Test Comment - mtrbd
|
gharchive/issue
| 2023-03-06T12:16:20 |
2025-04-01T06:40:09.359923
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/22793",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1623137932
|
Test Issue - bxvnc
Test Body - gdfuaptmry
Test Comment - vdole
|
gharchive/issue
| 2023-03-14T10:06:38 |
2025-04-01T06:40:09.360866
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/23164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1648808575
|
Test Issue - tljio
Test Body - qvwgyylled
Test Comment - hfbfc
|
gharchive/issue
| 2023-03-31T07:14:10 |
2025-04-01T06:40:09.361588
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/23929",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1660056712
|
Test Issue - bsibq
Test Body - ybttgtvgpk
Test Comment - zcmtf
|
gharchive/issue
| 2023-04-10T00:25:17 |
2025-04-01T06:40:09.362343
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/24280",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1667068710
|
Test Issue - xobaw
Test Body - vxdohnemmk
Test Comment - yjrpy
|
gharchive/issue
| 2023-04-13T20:14:28 |
2025-04-01T06:40:09.363097
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/24510",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1778360800
|
Test Issue - qhglp
Test Body - ihhbvputyj
Test Comment - dqvuw
|
gharchive/issue
| 2023-06-28T07:19:49 |
2025-04-01T06:40:09.363821
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/27778",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1850461050
|
Test Issue - cmroi
Test Body - oafwdddrnb
Test Comment - tjzwp
|
gharchive/issue
| 2023-08-14T20:06:35 |
2025-04-01T06:40:09.364555
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/29888",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1869995510
|
Test Issue - kmogi
Test Body - jbgkkmhysc
Test Comment - infzx
|
gharchive/issue
| 2023-08-28T15:33:50 |
2025-04-01T06:40:09.365303
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/30708",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1951057734
|
Test Issue - zmqos
Test Body - tfntcqixjj
Test Comment - xmqjx
|
gharchive/issue
| 2023-10-19T02:16:24 |
2025-04-01T06:40:09.366135
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/33355",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1977037017
|
Test Issue - orwzr
Test Body - yipvamfatu
Test Comment - yeyvb
|
gharchive/issue
| 2023-11-03T23:14:27 |
2025-04-01T06:40:09.366880
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/34232",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2064677135
|
Test Issue - ywfbv
Test Body - ojsueynbpi
Test Comment - bvvxg
|
gharchive/issue
| 2024-01-03T21:34:36 |
2025-04-01T06:40:09.367638
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/37325",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2127206360
|
Test Issue - xfeuy
Test Body - lpnvzssmwl
Test Comment - zhani
|
gharchive/issue
| 2024-02-09T13:57:26 |
2025-04-01T06:40:09.368393
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/39170",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2249590387
|
Test Issue - bdsmi
Test Body - fnvoswmysc
Test Comment - rywik
|
gharchive/issue
| 2024-04-18T02:12:34 |
2025-04-01T06:40:09.369107
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/43276",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1257096160
|
Test Issue - qotwf
Test Body - ljcephrjub
Test Comment - pwpbw
|
gharchive/issue
| 2022-06-01T21:36:07 |
2025-04-01T06:40:09.369994
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/4412",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2287675401
|
Test Issue - dzynb
Test Body - orysojeroh
Test Comment - zhvxs
|
gharchive/issue
| 2024-05-09T13:15:03 |
2025-04-01T06:40:09.370694
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/44551",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2313687574
|
Test Issue - rduip
Test Body - vddmcelbhl
Test Comment - btzpv
|
gharchive/issue
| 2024-05-23T19:36:00 |
2025-04-01T06:40:09.371402
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/45422",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2386663751
|
Test Issue - iqxic
Test Body - jfnlhlgutb
Test Comment - vqxtg
|
gharchive/issue
| 2024-07-02T16:15:38 |
2025-04-01T06:40:09.372107
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/47766",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2396021549
|
Test Issue - fzqiz
Test Body - beixdvpyky
Test Comment - xmgez
|
gharchive/issue
| 2024-07-08T16:03:48 |
2025-04-01T06:40:09.372813
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/48100",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2464761828
|
Test Issue - hnrfh
Test Body - uanfnfgqux
Test Comment - wfuzg
|
gharchive/issue
| 2024-08-14T02:25:17 |
2025-04-01T06:40:09.373747
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/50314",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2571849482
|
Test Issue - sxqiv
Test Body - nzrabyuvwu
Test Comment - ydepq
|
gharchive/issue
| 2024-10-08T01:43:29 |
2025-04-01T06:40:09.374473
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/53594",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2578247555
|
Test Issue - ltxok
Test Body - ohkicmpigz
Test Comment - qymni
|
gharchive/issue
| 2024-10-10T09:40:54 |
2025-04-01T06:40:09.375214
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/53791",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2580984202
|
Test Issue - tcdcv
Test Body - zgkfxbgxlj
Test Comment - smcna
|
gharchive/issue
| 2024-10-11T10:07:24 |
2025-04-01T06:40:09.375950
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/53877",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2605055798
|
Test Issue - vcgzq
Test Body - dxabtpyila
Test Comment - mgdvv
|
gharchive/issue
| 2024-10-22T10:43:12 |
2025-04-01T06:40:09.376686
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/54464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2614076060
|
Test Issue - weyex
Test Body - bxgyvtozpj
Test Comment - usmde
|
gharchive/issue
| 2024-10-25T13:12:22 |
2025-04-01T06:40:09.377393
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/54722",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2660906027
|
Test Issue - pmemq
Test Body - uahefvomvh
Test Comment - pncjn
|
gharchive/issue
| 2024-11-15T06:16:20 |
2025-04-01T06:40:09.378098
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/55730",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1292472194
|
Test Issue - ngugv
Test Body - lasxkgvzrv
Test Comment - fkjqz
|
gharchive/issue
| 2022-07-04T00:50:00 |
2025-04-01T06:40:09.379025
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/6857",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1293888907
|
Test Issue - kldtg
Test Body - rqdjackicq
Test Comment - gtret
|
gharchive/issue
| 2022-07-05T07:14:58 |
2025-04-01T06:40:09.379739
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/6994",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1302754780
|
Test Issue - atdbe
Test Body - maskagwmgb
Test Comment - joqcw
|
gharchive/issue
| 2022-07-13T00:27:42 |
2025-04-01T06:40:09.380479
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/7627",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1333171109
|
Test Issue - wguqr
Test Body - vshqgznoil
Test Comment - dwabl
|
gharchive/issue
| 2022-08-09T12:16:30 |
2025-04-01T06:40:09.381202
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/9809",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1209102041
|
Test Issue - padnx
Test Body - qivubafkdb
Test Comment - uccyk
|
gharchive/issue
| 2022-04-20T03:04:02 |
2025-04-01T06:40:09.382081
|
{
"authors": [
"qlikqaa"
],
"repo": "qlikqaa/dummyrepo",
"url": "https://github.com/qlikqaa/dummyrepo/issues/986",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
305964041
|
File Manager: download broken in Firefox with no error in console
See: https://www.virtualmin.com/node/56365
I can't find the issue; I suspect changes in the JavaScript are to blame, but it seems only the minified versions of the JavaScript are present here.
Is the original source for the file manager javascript for recent versions available somewhere so I can examine the files to try and find the issue?
It's been already fixed.
Cool! Thanks for the update!
|
gharchive/issue
| 2018-03-16T15:07:34 |
2025-04-01T06:40:09.451634
|
{
"authors": [
"qooob",
"reidbiztech"
],
"repo": "qooob/authentic-theme",
"url": "https://github.com/qooob/authentic-theme/issues/1057",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
82219030
|
Can't navigate within Webmin Servers
Hi,
After the last update (13.00 and 13.02) I can no longer navigate to other Webmin servers. I use the "Login via Webmin" method.
So... I connect from my main server to another server and I get doubled sidebars.
If I press anything on the side panel, it sends me back to the main server's information page.
Thanks
I got it! Will fix it in 13.03 very soon. After update please report here if it's working or not! Thanks for reporting!
|
gharchive/issue
| 2015-05-29T02:24:19 |
2025-04-01T06:40:09.454352
|
{
"authors": [
"ignaworn",
"qooob"
],
"repo": "qooob/authentic-theme",
"url": "https://github.com/qooob/authentic-theme/issues/178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2704432435
|
Setting referenced values
When using quam, we found that we often need to change values that are references, and doing so generally initiates a "treasure hunt", following references through our machine until the "bare" value is found. In practice, doing this can become a bit tedious. Would it be possible to add a method to QuamComponent, e.g. .set_at_reference(attr: str, new_value), which follows the references for you and sets the value at the correct place?
And for added user-clarity, maybe this function could even search the machine from the "bare" value, to see if other components reference this value, and correspondingly print out exactly which components were affected by the change?
I like it! I created a PR introducing this functionality.
The second part about searching the machine for other attributes that are a reference to the updated value is a tricky one as it's not always apparent when something is referencing a given attribute. I'll leave this as a follow-up feature.
Amazing, thanks!
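To make the proposal concrete, a toy sketch of the reference-following setter on a dict-based stand-in; quam's real QuamComponent and reference mechanics differ, so everything here is illustrative:

```python
# Toy model of the proposed set_at_reference: follow "#/a/b"-style references
# through a nested dict until a bare value is found, then set it there.
# Assumes references point within `machine` and contain no cycles.
def set_at_reference(machine: dict, path: str, new_value):
    keys = path.lstrip("#/").split("/")
    node, last = machine, keys[-1]
    for key in keys[:-1]:
        node = node[key]
    while isinstance(node[last], str) and node[last].startswith("#/"):
        # The stored value is itself a reference: hop to where it points.
        keys = node[last].lstrip("#/").split("/")
        node, last = machine, keys[-1]
        for key in keys[:-1]:
            node = node[key]
    node[last] = new_value  # set the "bare" value at the end of the chain


machine = {
    "qubit": {"freq": "#/lo/freq"},  # qubit.freq references lo.freq
    "lo": {"freq": 5.0e9},
}
set_at_reference(machine, "#/qubit/freq", 5.1e9)
assert machine["lo"]["freq"] == 5.1e9           # bare value updated in place
assert machine["qubit"]["freq"] == "#/lo/freq"  # reference left intact
```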
|
gharchive/issue
| 2024-11-29T09:00:04 |
2025-04-01T06:40:09.496635
|
{
"authors": [
"JacobHast",
"nulinspiratie"
],
"repo": "qua-platform/quam",
"url": "https://github.com/qua-platform/quam/issues/85",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1174194454
|
Language support
OMG this is amazing. Thanks for sharing this. I would like to ask if it is possible to support other languages? E.g., when I use Vietnamese, the font is messed up. Is there a solution for this? Thanks!
Yup, I’m looking into what made Unicode break
awesome thanks!
|
gharchive/issue
| 2022-03-19T06:57:25 |
2025-04-01T06:40:09.498086
|
{
"authors": [
"justanhduc",
"quackduck"
],
"repo": "quackduck/devzat",
"url": "https://github.com/quackduck/devzat/issues/92",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2190829167
|
🛑 yami-sora is down
In 6b01aab, yami-sora (https://www.yami-sora.org/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: yami-sora is back up in 2925c43 after 1 day, 3 minutes.
|
gharchive/issue
| 2024-03-17T19:32:58 |
2025-04-01T06:40:09.511711
|
{
"authors": [
"quanhieu"
],
"repo": "quanhieu/alive_up",
"url": "https://github.com/quanhieu/alive_up/issues/10543",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2212463226
|
🛑 otakusan is down
In ea30e1f, otakusan (https://otakusan.net/LightNovel) was down:
HTTP code: 0
Response time: 0 ms
Resolved: otakusan is back up in b36a069 after 1 hour, 50 minutes.
|
gharchive/issue
| 2024-03-28T06:34:17 |
2025-04-01T06:40:09.514031
|
{
"authors": [
"quanhieu"
],
"repo": "quanhieu/alive_up",
"url": "https://github.com/quanhieu/alive_up/issues/10903",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1708502417
|
🛑 olshiro is down
In a12a810, olshiro (http://olshiro.net/load/vietsub/10) was down:
HTTP code: 0
Response time: 0 ms
Resolved: olshiro is back up in 04b2010.
|
gharchive/issue
| 2023-05-13T07:33:08 |
2025-04-01T06:40:09.516315
|
{
"authors": [
"quanhieu"
],
"repo": "quanhieu/alive_up",
"url": "https://github.com/quanhieu/alive_up/issues/2812",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1711210995
|
🛑 otakusan is down
In 122da6f, otakusan (https://otakusan.net/LightNovel) was down:
HTTP code: 0
Response time: 0 ms
Resolved: otakusan is back up in c2463e0.
|
gharchive/issue
| 2023-05-16T04:22:13 |
2025-04-01T06:40:09.518618
|
{
"authors": [
"quanhieu"
],
"repo": "quanhieu/alive_up",
"url": "https://github.com/quanhieu/alive_up/issues/2935",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1903020665
|
🛑 kokocon is down
In 70c7308, kokocon (http://kokocon.net/) was down:
HTTP code: 500
Response time: 158 ms
Resolved: kokocon is back up in db973db after 7 minutes.
|
gharchive/issue
| 2023-09-19T13:40:29 |
2025-04-01T06:40:09.520941
|
{
"authors": [
"quanhieu"
],
"repo": "quanhieu/alive_up",
"url": "https://github.com/quanhieu/alive_up/issues/7174",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1942782700
|
🛑 hako.re is down
In 7045633, hako.re (https://mangochan.site/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: hako.re is back up in 3e9a7c3 after 6 minutes.
|
gharchive/issue
| 2023-10-13T23:46:54 |
2025-04-01T06:40:09.524145
|
{
"authors": [
"quanhieu"
],
"repo": "quanhieu/alive_up",
"url": "https://github.com/quanhieu/alive_up/issues/7870",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
283068553
|
Bayesian Tear Sheet fails in Research
Calling a Bayesian Tear Sheet fails with AssertionError: axis must be between 0 and 1, input was columns. I'm pretty sure it's due to https://github.com/quantopian/pyfolio/blob/master/pyfolio/bayesian.py#L68.
I got the same error.
Closed by https://github.com/quantopian/pyfolio/pull/508.
|
gharchive/issue
| 2017-12-19T00:42:35 |
2025-04-01T06:40:09.525872
|
{
"authors": [
"jaycode",
"mmargenot",
"twiecki"
],
"repo": "quantopian/pyfolio",
"url": "https://github.com/quantopian/pyfolio/issues/498",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
726243866
|
Added functions to init file
Finally fixed my commit history!
I've gone through and either exposed or explicitly hidden all (I think) functions in the repo. Some of the names were a bit vague (e.g. 'two_body'); LMK if you think something should be exposed that isn't, or vice versa.
@obriente Even after resolving the merge conflict, it still seems like there is an issue with importing. Maybe cut this down to something less than 21 files, or just bump this back to when it was passing tests. Then we can add the extra files in a following PR.
@googlebot I fixed it.
@googlebot I signed it.
@googlebot I consent.
@obriente bump. I went in and cleaned up some of the imports that were broken from the move of a method. Let me know if everything else looks in place.
Thanks for sorting that out, looks like it must have been a pain to fix.
Everything looks good to me, I guess we just need one extra review now.
No need for extra review. Thanks for kicking this off and sorting out the extra imports that users were not finding. This is an impactful PR.
Ah ok - I can't merge right now without an extra review, maybe you have more privileges than I do here. But everything LGTM
|
gharchive/pull-request
| 2020-10-21T08:07:17 |
2025-04-01T06:40:09.529172
|
{
"authors": [
"ncrubin",
"obriente"
],
"repo": "quantumlib/OpenFermion",
"url": "https://github.com/quantumlib/OpenFermion/pull/673",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1513900664
|
java.lang.IllegalArgumentException: There are multiple EventSources registered for type when upgrading from 5.0.0.Beta2 to 5.0.0
In the same project that worked fine in 5.0.0.Beta2, I now get the error:
java.lang.IllegalArgumentException: There are multiple EventSources registered for type io.strimzi.api.kafka.model.KafkaTopic
I have multiple dependent KafkaTopics and I use a ResourceDiscriminator. I am not using useEventSourceWithName because I want to avoid implementing the prepareEventSources method in the reconciler, as the java-operator-sdk example does. This approach worked fine in 5.0.0.Beta2 but doesn't work now.
Any ideas? Thanks!
Hi @juangon,
Can you provide a more complete stack trace, please? Thanks in advance.
Here you have @metacosm . Thanks!
2022-12-29 23:13:54,248 ERROR [io.jav.ope.pro.eve.ReconciliationDispatcher] (ReconcilerExecutor-teis-app-controller-240) Error during event processing ExecutionScope{ resource id: ResourceID{name='teis-app-proactivanet', namespace='teis-proactivanet-soriana-demo'}, version: 795330345} failed.: io.javaoperatorsdk.operator.AggregatedOperatorException: Exception(s) during workflow execution. Details:
com.teis.operator.teisbackend.TeisBackendDependent -> java.lang.NullPointerException
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowResult.throwAggregateExceptionIfErrorsPresent(WorkflowResult.java:40)
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowReconcileResult.throwAggregateExceptionIfErrorsPresent(WorkflowReconcileResult.java:9)
at io.javaoperatorsdk.operator.processing.dependent.workflow.DefaultWorkflow.reconcile(DefaultWorkflow.java:92)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:140)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:103)
at io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:206)
at io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:102)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:141)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:121)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:91)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:64)
at io.javaoperatorsdk.operator.processing.event.EventProcessor$ReconcilerExecutor.run(EventProcessor.java:415)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2022-12-29 23:13:54,333 WARN [io.fab.kub.cli.dsl.int.VersionUsageUtils] (InformerWrapper [kafkas.kafka.strimzi.io/v1beta2] 255) The client is using resource type 'kafkas' with unstable version 'v1beta2'
2022-12-29 23:13:54,592 INFO [io.jav.ope.pro.Controller] (ForkJoinPool.commonPool-worker-3) 'teis-backend-controller' controller started, pending event sources initialization
2022-12-29 23:13:54,598 INFO [io.jav.ope.pro.dep.AbstractDependentResource] (pool-8-thread-3) Updating 'teis-backend-secrets' Secret for primary ResourceID{name='teis-backend-teis-proactivanet-soriana-demo', namespace='teis-proactivanet-soriana-demo'}
2022-12-29 23:13:54,618 INFO [io.jav.ope.pro.Controller] (ForkJoinPool.commonPool-worker-13) 'teis-kafka-controller' controller started, pending event sources initialization
2022-12-29 23:13:54,617 ERROR [io.jav.ope.pro.eve.ReconciliationDispatcher] (ReconcilerExecutor-teis-backend-controller-270) Error during event processing ExecutionScope{ resource id: ResourceID{name='teis-backend-teis-proactivanet-soriana-demo', namespace='teis-proactivanet-soriana-demo'}, version: 795330341} failed.: io.javaoperatorsdk.operator.AggregatedOperatorException: Exception(s) during workflow execution. Details:
com.teis.operator.teisbackend.kafka.KafkaBackendTicketPredictedDependent -> java.lang.IllegalArgumentException: There are multiple EventSources registered for type io.strimzi.api.kafka.model.KafkaTopic, you need to provide a name to specify which EventSource you want to query. Known names: kafka-backend-topic-ticket-predicted,kafka-backend-topic-ticket-import-errors,kafka-backend-topic-ticket-added
- com.teis.operator.teisbackend.kafka.KafkaBackendTicketImportErrorsDependent -> java.lang.IllegalArgumentException: There are multiple EventSources registered for type io.strimzi.api.kafka.model.KafkaTopic, you need to provide a name to specify which EventSource you want to query. Known names: kafka-backend-topic-ticket-predicted,kafka-backend-topic-ticket-import-errors,kafka-backend-topic-ticket-added
- com.teis.operator.teisbackend.kafka.KafkaBackendTicketAddedDependent -> java.lang.IllegalArgumentException: There are multiple EventSources registered for type io.strimzi.api.kafka.model.KafkaTopic, you need to provide a name to specify which EventSource you want to query. Known names: kafka-backend-topic-ticket-predicted,kafka-backend-topic-ticket-import-errors,kafka-backend-topic-ticket-added
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowResult.throwAggregateExceptionIfErrorsPresent(WorkflowResult.java:40)
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowReconcileResult.throwAggregateExceptionIfErrorsPresent(WorkflowReconcileResult.java:9)
at io.javaoperatorsdk.operator.processing.dependent.workflow.DefaultWorkflow.reconcile(DefaultWorkflow.java:92)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:140)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:103)
at io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:206)
at io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:102)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:141)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:121)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:91)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:64)
at io.javaoperatorsdk.operator.processing.event.EventProcessor$ReconcilerExecutor.run(EventProcessor.java:415)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Hi @juangon,
Can you provide a more complete stack trace, please? Thanks in advance. Could you also explain the reasoning behind not using useEventSourceWithName? I tend to think that ResourceDiscriminator should be removed but it seems your use case is proving me wrong so would like to hear more about it… smile
Using useEventSourceWithName is fine with me, as long as I don't have to reimplement prepareEventSources: my reconcilers have more than one type of DependentResource and I wouldn't like to prepare all of those event sources by hand, since I have all the annotations in place.
You shouldn't have to re-implement prepareEventSources, I think simply providing an explicit name for your dependents should be enough to fix the issue. You probably don't need to use useEventSourceWithName either. Would you mind sharing your controller's configuration along with the dependents definition (the class signature ought to be enough)?
Yes, I have a name for those dependents (@DependentResource name); in fact those names are showing in the logs I've sent. Any other idea? Thanks very much!
Here you have the reconciler config @metacosm :
@ControllerConfiguration(name = TeisBackendReconciler.TEIS_BACKEND_CONTROLLER,
dependents = {
@Dependent(type = KafkaBackendTicketImportErrorsDependent.class, name = "kafka-backend-topic-ticket-import-errors", readyPostcondition = KafkaBackendTicketImportErrorsDependent.class),
@Dependent(type = KafkaBackendTicketAddedDependent.class, name = "kafka-backend-topic-ticket-added", readyPostcondition = KafkaBackendTicketAddedDependent.class),
@Dependent(type = KafkaBackendTicketPredictedDependent.class, name = "kafka-backend-topic-ticket-predicted", readyPostcondition = KafkaBackendTicketPredictedDependent.class),
@Dependent(type = TeisBackendMariaDBDependent.class, name = "teis-backend-mariadb", readyPostcondition = TeisBackendMariaDBDependent.class),
@Dependent(type = TeisBackendConfigMapDependent.class, name="teis-backend-config"),
@Dependent(type = TeisBackendSecretDependent.class, name="teis-backend-secret"),
@Dependent(type = TeisBackendDeploymentDependent.class,
dependsOn = {
"kafka-backend-topic-ticket-import-errors",
"kafka-backend-topic-ticket-added",
"kafka-backend-topic-ticket-predicted",
"teis-backend-mariadb",
"teis-backend-config",
"teis-backend-secret"}),
@Dependent(type = TeisBackendServiceDependent.class)
})
I'm investigating the root cause; there seems to be something funny going on. In the meantime, could you try the following configuration and let me know how it goes, please:
@ControllerConfiguration(name = TeisBackendReconciler.TEIS_BACKEND_CONTROLLER,
dependents = {
@Dependent(type = KafkaBackendTicketImportErrorsDependent.class, name = "kafka-backend-topic-ticket-import-errors", readyPostcondition = KafkaBackendTicketImportErrorsDependent.class, useEventSourceWithName="kafka-backend-topic-ticket-predicted"),
@Dependent(type = KafkaBackendTicketAddedDependent.class, name = "kafka-backend-topic-ticket-added", readyPostcondition = KafkaBackendTicketAddedDependent.class, useEventSourceWithName="kafka-backend-topic-ticket-predicted"),
@Dependent(type = KafkaBackendTicketPredictedDependent.class, name = "kafka-backend-topic-ticket-predicted", readyPostcondition = KafkaBackendTicketPredictedDependent.class),
@Dependent(type = TeisBackendMariaDBDependent.class, name = "teis-backend-mariadb", readyPostcondition = TeisBackendMariaDBDependent.class),
@Dependent(type = TeisBackendConfigMapDependent.class, name="teis-backend-config"),
@Dependent(type = TeisBackendSecretDependent.class, name="teis-backend-secret"),
@Dependent(type = TeisBackendDeploymentDependent.class,
dependsOn = {
"kafka-backend-topic-ticket-import-errors",
"kafka-backend-topic-ticket-added",
"kafka-backend-topic-ticket-predicted",
"teis-backend-mariadb",
"teis-backend-config",
"teis-backend-secret"}),
@Dependent(type = TeisBackendServiceDependent.class)
})
Ok, so with your snippet @metacosm it seems that one of the two exceptions went away. The NullPointerException is still there though:
com.teis.operator.teisbackend.TeisBackendDependent -> java.lang.NullPointerException
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowResult.throwAggregateExceptionIfErrorsPresent(WorkflowResult.java:40)
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowReconcileResult.throwAggregateExceptionIfErrorsPresent(WorkflowReconcileResult.java:9)
at io.javaoperatorsdk.operator.processing.dependent.workflow.DefaultWorkflow.reconcile(DefaultWorkflow.java:92)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:140)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:103)
at io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:206)
at io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:102)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:141)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:121)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:91)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:64)
at io.javaoperatorsdk.operator.processing.event.EventProcessor$ReconcilerExecutor.run(EventProcessor.java:415)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Any idea about that? Thanks very much!
Just sent to your email @metacosm . Thanks very much for your help!
|
gharchive/issue
| 2022-12-29T16:56:59 |
2025-04-01T06:40:09.541633
|
{
"authors": [
"juangon",
"metacosm"
],
"repo": "quarkiverse/quarkus-operator-sdk",
"url": "https://github.com/quarkiverse/quarkus-operator-sdk/issues/465",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
511145780
|
use varbinary(max) instead of image data-type - Fixes #157
As discussed before this is the minimal approach.
Hello there,
is there anything I can do better? What do you think about this minimal approach?
Keep up the good work,
~fw
Hello! Thank you very much for your contribution and interest in helping improve the Quartz community.
After a period of dormancy, the Quartz project is back under steady maintenance by multiple volunteers, who are working to once again handle contributions such as yours.
We notice that your contribution was made without use of the DCO feature (the sign-off feature on commits via the -s option). Can you please update your PR with commits that use the -s option, agreeing to assign copyright ownership and other terms as described at the contributor agreement referenced here: https://github.com/quartz-scheduler/contributing/blob/main/CONTRIBUTING.md
You can easily add signoff to your previous commits by running (on your PR branch):
git commit --amend --signoff --no-edit
git push -f
Or (for multiple commits):
# change 5 to the number of commits
git rebase HEAD~5 --signoff
git push -f
Hello @akomakom ,
I signed-off my commit.
Good to see the project back from hibernation :)
~fw
Thanks!
|
gharchive/pull-request
| 2019-10-23T07:57:40 |
2025-04-01T06:40:09.660985
|
{
"authors": [
"akomakom",
"efwe",
"jhouserizer"
],
"repo": "quartz-scheduler/quartz",
"url": "https://github.com/quartz-scheduler/quartz/pull/515",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1920662635
|
Fix PyPi package deployment
With the initial release the automatic package deployment to PyPi failed with
ERROR Source /home/runner/work/bluetooth_2_usb/bluetooth_2_usb does not appear to be a Python project: no pyproject.toml or setup.py
Investigate and fix.
https://packaging.python.org/en/latest/tutorials/packaging-projects/
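For context, the linked tutorial's fix boils down to adding a pyproject.toml at the repo root; a placeholder sketch (the metadata values here are assumptions, not the project's actual ones):

```toml
# Placeholder pyproject.toml per the linked packaging tutorial; the real
# name, version, and metadata for bluetooth_2_usb would differ.
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "bluetooth-2-usb"
version = "0.1.0"
description = "Bluetooth to USB HID relay"
requires-python = ">=3.8"
```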
|
gharchive/issue
| 2023-10-01T09:23:03 |
2025-04-01T06:40:09.694445
|
{
"authors": [
"quaxalber"
],
"repo": "quaxalber/bluetooth_2_usb",
"url": "https://github.com/quaxalber/bluetooth_2_usb/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
730699347
|
Maven artifacts missing
Maven artifacts are missing for qds-sdk-java release 1.3.0 ( https://mvnrepository.com/artifact/com.qubole.qds-sdk-java/qds-sdk-java)
@VenkataKarthikP Its now available in maven central repo https://mvnrepository.com/artifact/com.qubole.qds-sdk-java/qds-sdk-java/1.3.0
|
gharchive/issue
| 2020-10-27T18:17:14 |
2025-04-01T06:40:09.713286
|
{
"authors": [
"VenkataKarthikP",
"ranganathhr"
],
"repo": "qubole/qds-sdk-java",
"url": "https://github.com/qubole/qds-sdk-java/issues/121",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
450727439
|
Resnet models
It looks like your own ResNet implementation is based on ResNet version 2 (pre-act / BN->ReLU->Conv). Could you please also consider adding ResNet version 1 (code + pre-trained weights)? I'm unfortunately limited on GPU resources and therefore can't do a full train on ImageNet, etc. I know that keras-applications has both V1/V2, but I find that your toolbox is better organized.
Hi @tinalegre
I did not train any of the models, just converted or transferred weights from other frameworks/repos.
If you find such pretrained models, I will consider that option. The models from keras_applications are inside this repo.
Hi @qubvel, thank you. I was wondering why, on your ResNet model, just after the last residual block and before the top classification layer, we need to have the additional BN+ReLU layers (named bn1 and relu1 respectively)?
Once again:
The ResNet models were not created by me. I just rewrote the architecture from the MXNet model zoo as-is and converted the weights.
So the answer to your question is: "Because they were created and trained with that architecture by someone (I actually don't know the authors of this implementation)".
|
gharchive/issue
| 2019-05-31T10:23:21 |
2025-04-01T06:40:09.716612
|
{
"authors": [
"qubvel",
"tinalegre"
],
"repo": "qubvel/classification_models",
"url": "https://github.com/qubvel/classification_models/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
574496314
|
Problems with imagesize of 1024x1024
I am using the U-Net model. When I tried to set the input size to 1024x1024, the first 20 epochs had a validation iou-score of 1e-11. I had no problems with other sizes like 512x512, 256x256 and 768x768. I already tried another backbone. So why does this input size not work?
If it worked on other sizes it should work with this size too, provided the data is the same. Do you train multiclass segmentation? Have you checked that the data is correct and the labels match the input images? It took me ~300 epochs to get the first validation scores for my task, so 20 epochs looks a bit few for results. On the other hand, if you use pretrained weights it should converge faster, so I advise you to look for problems in the input data.
|
gharchive/issue
| 2020-03-03T08:25:29 |
2025-04-01T06:40:09.718101
|
{
"authors": [
"VladislavAD",
"johanneskpp"
],
"repo": "qubvel/segmentation_models",
"url": "https://github.com/qubvel/segmentation_models/issues/307",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2446541152
|
Root owned files/directories are owned by "nobody" when running under boxxy
> ls -la /etc/environment
-rw-r--r-- 1 root root 97 Apr 11 04:47 /etc/environment
> boxxy ls -la /etc/environment
INFO boxxy::config > loading rules from /home/aflutter/.config/boxxy/boxxy.yaml
INFO boxxy::config > loaded 0 total rule(s)
INFO boxxy::enclosure > boxed "ls" ♥
-rw-r--r-- 1 nobody nobody 97 Apr 11 04:47 /etc/environment
I'm running into this problem because ssh checks to make sure a file is owned by root, and errors out for security reasons if not (Bad owner or permissions on /etc/ssh/ssh_config.d/20-systemd-ssh-proxy.conf).
Somehow, this is only a recent issue. I had used boxxy with ssh regularly until around June, when it broke. I checked a few older versions of boxxy, though, the oldest being 5.1, and they all had the issue.
This is related to #6.
#6 is the tracking issue for this problem.
|
gharchive/issue
| 2024-08-03T18:30:19 |
2025-04-01T06:40:09.720332
|
{
"authors": [
"Afluttera",
"queer"
],
"repo": "queer/boxxy",
"url": "https://github.com/queer/boxxy/issues/234",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
14515665
|
Limit gets ignored in subquery
I have a Dog and Race table, dogs reference race. I wanted to do an update: "Update all dogs without race (is null) with most popular race."
My code looks like this (Pes=Dog, Rasa=Race):
QPes p = new QPes("pes"); QRasa r = p.rasa;
em.getTransaction().begin();
SimpleSubQuery<Rasa> mostPopularRaceSubquery = new JPASubQuery().from(p)
.join(r).groupBy(r).orderBy(r.count().desc()).limit(1).unique(r);
long execute = new JPAUpdateClause(em, QPes.pes)
.where(QPes.pes.rasa.id.isNull())
.set(QPes.pes.rasa, mostPopularRaceSubquery).execute();
System.out.println("execute = " + execute);
em.getTransaction().commit();
Generated query misses any limit though:
Hibernate: update Pes set rasa_id=(select pes1_.rasa_id from Pes pes1_ inner join Rasa rasa2_ on pes1_.rasa_id=rasa2_.id group by pes1_.rasa_id order by count(pes1_.rasa_id) desc) where rasa_id is null
And it generates exception because of this error:
ERROR: Scalar subquery contains more than one row; SQL statement:
update Pes set rasa_id=(select pes1_.rasa_id from Pes pes1_ inner join Rasa rasa2_ on pes1_.rasa_id=rasa2_.id group by pes1_.rasa_id order by count(pes1_.rasa_id) desc) where rasa_id is null [90053-170]
Exception in thread "main" javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not execute statement
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1387)
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1310)
at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:1397)
at org.hibernate.ejb.AbstractQueryImpl.executeUpdate(AbstractQueryImpl.java:108)
at com.mysema.query.jpa.impl.JPAUpdateClause.execute(JPAUpdateClause.java:67)
at sqldemo.Main.main(Main.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
The same exception occurs when I remove limit(1) from the code; the generated query is the same.
Limit works fine with simple query:
QPes p = new QPes("pes"); QRasa r = p.rasa;
Rasa mostPopularRace = new JPAQuery(em).from(p)
.join(r).groupBy(r).orderBy(r.count().desc()).limit(1).uniqueResult(r);
System.out.println("mostPopularRace = " + mostPopularRace);
Works, one result is written out.
Used version: 3.1.1
hey, i know this is closed, but i think it's useful for deep pagination, eg select * from table where id in (select id from table limit 99999, 10)
|
gharchive/issue
| 2013-05-20T11:11:16 |
2025-04-01T06:40:09.744265
|
{
"authors": [
"nistal97",
"virgo47"
],
"repo": "querydsl/querydsl",
"url": "https://github.com/querydsl/querydsl/issues/421",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2152403535
|
🛑 Search Service is down
In 357093c, Search Service (https://quessttechnologies.com/search/healthcheck) was down:
HTTP code: 429
Response time: 33 ms
Resolved: Search Service is back up in 3ee284e after 22 minutes.
|
gharchive/issue
| 2024-02-24T16:59:09 |
2025-04-01T06:40:09.746941
|
{
"authors": [
"QuesstTechnologies"
],
"repo": "quesst-technologies/qst-admin-status-all",
"url": "https://github.com/quesst-technologies/qst-admin-status-all/issues/401",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2267476606
|
connected and unconnected sockets
Currently the proxy uses connected sockets. This is a good idea performance-wise, but comes with the obvious limitation on the number of proxied connections.
We should have an API that allows an application more fine-grained control. One option would be a callback on the Server:
type Server struct {
// ... existing stuff
// PacketConnForRemote is called when establishing a new proxied connection.
// It is possible to return the same net.PacketConn (an unconnected UDP socket) for multiple distinct remote addresses.
// However, the same net.PacketConn cannot be used for the same remote address.
PacketConnForRemote(*net.UDPAddr) net.PacketConn
}
The problem here is that the same net.PacketConn can't be used for the same remote address: We need to know which QUIC connection to put a packet on. It's also not clear how timeouts should work: If one proxied connection is closed, it should be possible to reuse the same net.PacketConn at some point, but probably not immediately, since UDP packets might still be in flight between the remote and the proxy.
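As a toy illustration of that constraint (a Python stand-in, not the masque-go API): on an unconnected UDP socket, recvfrom only yields the sender's address, so the remote address is the only demultiplexing key available, and at most one proxied connection per remote can share the socket:

```python
# Toy demultiplexer for one unconnected UDP socket. Illustrative only: it
# shows why two proxied connections to the SAME remote address cannot share
# a socket, since inbound packets carry nothing but the sender's address.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # unconnected
sock.bind(("0.0.0.0", 0))

# remote (ip, port) -> the single proxied connection bound to that remote
conns: dict[tuple[str, int], object] = {}


def register(remote: tuple[str, int], conn: object) -> None:
    if remote in conns:
        # Analogue of the proposed masque.ErrAlreadyProxyingToThisTarget:
        # a second connection to this remote would be indistinguishable.
        raise ValueError(f"already proxying to {remote}")
    conns[remote] = conn


def demux_once() -> None:
    payload, remote = sock.recvfrom(65535)
    conn = conns.get(remote)  # the sender address is all we can route on
    if conn is not None:
        ...  # forward payload onto this connection's proxied datagram flow
```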
There are multiple ways to slice and dice it. One option that comes to mind is using one (unconnected) socket per client. This might make sense in a setting where the client is using the proxy to proxy all of its traffic.
However, it also breaks as soon as the client requests to connect to the same IP (not domain!) multiple times. This will be pretty common, given the current centralization of the edge infrastructure in the hands of a few giant CDN providers.
#43 opens up a path to using unconnected UDP sockets: When the application wishes to proceed with a masque.Request, it can either:
call Proxy.ProxyConnectedSocket(w http.ResponseWriter, _ *Request, conn *net.UDPConn), passing us a fresh connected UDP socket
call Proxy.ProxyUnconnectedSocket(w http.ResponseWriter, _ *Request, conn *net.UDPConn, target *net.UDPAddr), reusing a UDP socket
For ProxyUnconnectedSocket, we can then perform the necessary checks (are we already proxying another connection to the same net.UDPAddr as target?), and reject the proxying attempt with a masque.ErrAlreadyProxyingToThisTarget (better naming tbd). The application can then decide to either switch to a connected socket, attempt again using another unconnected socket, or even create a new unconnected socket.
|
gharchive/issue
| 2024-04-28T10:26:38 |
2025-04-01T06:40:09.760258
|
{
"authors": [
"marten-seemann"
],
"repo": "quic-go/masque-go",
"url": "https://github.com/quic-go/masque-go/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2304624277
|
http3: response body not implemented http3.HTTPStreamer
Seems that this is broken by https://github.com/quic-go/quic-go/pull/4469
It makes my http3 dialer stop working.
streamer, ok := resp.Body.(http3.HTTPStreamer)
if !ok {
return nil, errors.New("proxy: read from " + d.Host + " error: resp body not implemented http3.HTTPStreamer")
}
Any suggestion?
This was called out as a breaking change in the release notes of v0.43. It is now the http.ResponseWriter that implements the interface.
Thanks. In my understanding, http.ResponseWriter can only be used on the server side. (Please correct me if I'm wrong)
How do I unwrap the stream on the client side (*http.Response)? My existing code is here
You'll need to take the path via the SingleDestinationRoundTripper then.
Last question: shall/do we have a RoundTripOpt to let this underlying stream stay open?
https://github.com/quic-go/quic-go/blob/v0.44.0/http3/client.go#L299
In the end I gave up on this approach and turned to another way, with better compatibility but lower performance, in https://github.com/phuslu/liner/commit/1054f3b89798ab13dc677914b388a92b0be8147c
If the stream unwrap of *http.Response comes back to quic-go/http3 in the future, please let me know. Thanks again.
Please take a look at the WebTransport dialer: https://github.com/quic-go/webtransport-go/blob/master/client.go
I believe it does exactly what you need.
Understood, but currently I'd like to keep the same logic on the server side, like https://github.com/phuslu/liner/blob/master/handler_http_forward.go#L278-L298
There’s no change to the server side. This is purely a client side API change.
thanks for your patience, finally I have a modified webtransport-go dialer of https://github.com/phuslu/liner/blob/master/dialer_http3.go thanks again!
|
gharchive/issue
| 2024-05-19T13:55:20 |
2025-04-01T06:40:09.767539
|
{
"authors": [
"marten-seemann",
"phuslu"
],
"repo": "quic-go/quic-go",
"url": "https://github.com/quic-go/quic-go/issues/4526",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1334907888
|
Remove access token cache accessible via qf.Course type
This PR removes the access token cache associated with the qf.Course type.
Pull Request Test Coverage Report for Build 2834318725
3 of 6 (50.0%) changed or added relevant lines in 3 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 25.213%
Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
---|---|---|---
database/gormdb.go | 2 | 3 | 66.67%
qf/course.go | 0 | 2 | 0.0%
Totals
Change from base Build 2834268491: 0.0%
Covered Lines: 2663
Relevant Lines: 10562
💛 - Coveralls
Codecov Report
Merging #714 (27340c7) into master (6938ffe) will not change coverage.
The diff coverage is 50.00%.
@@ Coverage Diff @@
## master #714 +/- ##
=======================================
Coverage 22.69% 22.69%
=======================================
Files 79 79
Lines 10175 10175
=======================================
Hits 2309 2309
Misses 7552 7552
Partials 314 314
Flag | Coverage Δ
---|---
unittests | 22.69% <50.00%> (ø)

Flags with carried forward coverage won't be shown.

Impacted Files | Coverage Δ
---|---
qf/course.go | 0.00% <0.00%> (ø)
database/gormdb.go | 64.40% <66.66%> (ø)
database/gormdb_course.go | 50.34% <100.00%> (ø)
|
gharchive/pull-request
| 2022-08-10T16:53:19 |
2025-04-01T06:40:09.797153
|
{
"authors": [
"codecov-commenter",
"coveralls",
"meling"
],
"repo": "quickfeed/quickfeed",
"url": "https://github.com/quickfeed/quickfeed/pull/714",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1237210253
|
Add some key metrics on the ingestion
Now that we have an indexing server, we can expose some nice prometheus metrics.
Possibly interesting indexing metrics:
ingested_bytes
num_docs_in / num_docs_out
|
gharchive/issue
| 2022-05-16T14:02:50 |
2025-04-01T06:40:09.799148
|
{
"authors": [
"fmassot"
],
"repo": "quickwit-oss/quickwit",
"url": "https://github.com/quickwit-oss/quickwit/issues/1468",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1887307268
|
Use index UID instead of index ID for the queue_id
We missed changing that when we introduced the index UID.
See https://github.com/quickwit-oss/quickwit/blob/ce856974177a26fc96bfabdb7768b0bcae2cbdea/quickwit/quickwit-indexing/src/source/ingest_api_source.rs#L82
duplicate of #3559
|
gharchive/issue
| 2023-09-08T09:34:09 |
2025-04-01T06:40:09.800609
|
{
"authors": [
"fmassot"
],
"repo": "quickwit-oss/quickwit",
"url": "https://github.com/quickwit-oss/quickwit/issues/3816",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1275915543
|
add support for matching distance in phrase query
The first commit still allows ~ in words, which might be a bit flimsy.
The second one exposes distance for all leaves. Do we need to explicitly fail for leaves that do not handle slop/distance, and implement it for those that do? (Term maybe, I haven't checked.)
Requires a doc update also :)
Not really sure about the name: slop is hard to understand, and distance is maybe a little too vague.
closes #1390
Codecov Report
Merging #1393 (1ec8212) into main (83d0c13) will increase coverage by 0.00%.
The diff coverage is 100.00%.
:exclamation: Current head 1ec8212 differs from pull request most recent head c83bbb7. Consider uploading reports for the commit c83bbb7 to get more accurate results
```diff
@@           Coverage Diff           @@
##             main    #1393      +/-   ##
==========================================
  Coverage   94.29%   94.30%
==========================================
  Files         236      236
  Lines       43418    43472      +54
==========================================
+ Hits        40942    40996      +54
  Misses       2476     2476
```
| Impacted Files | Coverage Δ |
|---|---|
| query-grammar/src/query_grammar.rs | 99.67% <100.00%> (+0.01%) :arrow_up: |
| query-grammar/src/user_input_ast.rs | 97.87% <100.00%> (+0.09%) :arrow_up: |
| src/query/phrase_query/phrase_query.rs | 91.54% <100.00%> (+0.37%) :arrow_up: |
| src/query/query_parser/logical_ast.rs | 88.37% <100.00%> (+1.19%) :arrow_up: |
| src/query/query_parser/query_parser.rs | 94.95% <100.00%> (+0.07%) :arrow_up: |
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
@fulmicoton @PSeitz can we define the slop as a u8? It is the type that was chosen for the fuzzy query distance, and 255 seems to be a good enough max distance for any Query impl for the time being.
Right now let's not handle fuzzy query in the grammar, it would break quickwit.
For the naming, I'd stick to slop in the grammar for the moment.
I've got this PR https://github.com/saroh/tantivy/pull/1/files which I can push in here or open later. Adds support for "foo"~1
|
gharchive/pull-request
| 2022-06-19T00:16:44 |
2025-04-01T06:40:09.814264
|
{
"authors": [
"codecov-commenter",
"fulmicoton",
"saroh"
],
"repo": "quickwit-oss/tantivy",
"url": "https://github.com/quickwit-oss/tantivy/pull/1393",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
327078088
|
Bug in correct_reed_solomon_decode()
I have a small problem with this function. If there was no error during transmission, the last parameter is still empty. To fix this problem, I have to do something like this:
err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
if (rs->recvmsg[0] == 0)
    memcpy(rs->recvmsg, rs->encoded, rs->block_length);
I think this behavior is quite confusing, especially since the parameter is only modified if there was an error!
That's why I'm really confused. I couldn't locate why this is happening.
I use this as a helper:
#ifndef _correct_cpp_h
#define _correct_cpp_h
extern "C" {
#include <correct.h>
}
struct rs_struct{
size_t const_block_length;
size_t const_message_length;
size_t block_length;
size_t message_length;
size_t min_distance;
char *msg;
uint8_t *encoded;
correct_reed_solomon *encoder;
int *indices;
uint8_t *corrupted_encoded;
uint8_t *erasure_locations;
unsigned char *recvmsg;
rs_struct(size_t block_length, size_t min_distance) {
this->const_block_length = this->block_length = block_length;
this->const_message_length = this->min_distance = min_distance;
this->message_length = block_length - min_distance;
this->msg = (char*) calloc(message_length, sizeof(char));
this->encoded = (uint8_t*) malloc(block_length * sizeof(uint8_t));
this->encoder = correct_reed_solomon_create(correct_rs_primitive_polynomial_ccsds, 1, 1, min_distance);
this->indices = (int*) malloc(block_length * sizeof(int));
this->corrupted_encoded = (uint8_t*) malloc(block_length * sizeof(uint8_t));
this->erasure_locations = (uint8_t*) malloc(min_distance * sizeof(uint8_t));
this->recvmsg = (unsigned char*) malloc(sizeof(unsigned char) * block_length);
}
~rs_struct() {}
void reset() {
this->block_length = this->const_block_length;
this->min_distance = this->const_message_length;
this->message_length = block_length - min_distance;
memset(this->msg, 0, this->message_length);
memset(this->encoded, 0, this->block_length);
memset(this->indices, 0, this->block_length);
memset(this->corrupted_encoded, 0, this->block_length);
memset(this->erasure_locations, 0, this->min_distance);
memset(this->recvmsg, 0, this->block_length);
}
};
#endif
And then use it like that:
...
int main(int argc, char *argv[]) {
rs_struct *rs = new rs_struct(255, 32);
...
while(1) {
if (mySwitch.available()) {
rs->encoded = (uint8_t*)mySwitch.getReceivedValue();
cout<<"Empfangen: "<<rs->encoded<<"\n";
int err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
if (rs->recvmsg[0] == 0)
memcpy(rs->recvmsg, rs->encoded, rs->block_length);
cout<<"Decodiert: "<<rs->recvmsg<<"(Message Length: "<<err<<" | String Length: "<<strlen((char*)rs->recvmsg)<<"\n";
...
rs->reset();
mySwitch.resetAvailable();
}
}
exit(0);
}
Overall I have to say that it works really nicely for me. Thanks for that 👍
Thanks, I hope it works well for you.
The one thing I can think of here is that it won't copy the received message when there are too many errors. Is err -1 when you're seeing an empty recvMsg? I didn't think it'd be useful to copy back a corrupted message, so it doesn't do that, but maybe it should.
Code:
memcpy(rs->encoded, (uint8_t*)mySwitch.getReceivedValue(), rs->block_length);
cout<<"Received: "<<rs->encoded<<"\n";
int err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
cout<<"Decoded: "<<rs->recvmsg<<"(Message Length: "<<err<<" | String Length: "<<strlen((char*)rs->recvmsg)<<")\n";
Output:
Received: test
Decoded: (Message Length: 223 | String Length: 0)
Now comes the crazy part...
Code:
if ( recvfrom_inet_dgram_socket(sfd,rs->encoded,rs->block_length, src_host,sizeof(src_host),src_service,sizeof(src_service),0,LIBSOCKET_NUMERIC) < 0 ){
perror(0);
exit(1);
}
cout<<"Connection from "<<src_host<<" port "<<src_service<<": "<<rs->encoded<<"\n";
int err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
cout<<"Decoded: "<<rs->recvmsg<<"(Message Length: "<<err<<" | String Length: "<<strlen((char*)rs->recvmsg)<<")\n";
Output:
Connection from 127.0.0.1 port 55723: 1234
Decoded: 1234(Message Length: 223 | String Length: 4)
The only difference is that getReceivedValue() returns a char*, while recvfrom_inet_dgram_socket() takes a void* parameter.
Unfortunately I still don't have quite enough code here to see what's going on. Can you come up with a short example that demonstrates this behavior and then provide me with the whole thing? I've reviewed the code in the decoder and can't find a likely explanation for how it could return a positive value but not write to the received message pointer.
Okay, it took me some hours, but I think I found the bug.
Code:
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <unistd.h>
#include <string.h>
#include "correct_cpp.h"
using namespace std;
int main() {
rs_struct *rs = new rs_struct(255, 32);
char buf[500] = { 72, 97, 108, 108, 111, 32, 87, 101, 108, 116, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 217, 1, 214,
5, 237, 210, 42, 9, 139, 10, 177, 149, 18, 112, 11, 151, 27, 202, 75, 66, 116, 2, 121,
145, 199, 123, 108, 53, 222, 90, 92, 121};
char buf2[500] = { 72, 97, 108, 108, 111, 32, 87, 101, 108, 116, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
memcpy(rs->encoded, buf2, rs->block_length);
int err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
cout<<"Decoded: "<<rs->recvmsg<<"(Message Length: "<<err<<" | String Length: "<<strlen((char*)rs->recvmsg)<<"\n";
memcpy(rs->encoded, buf, rs->block_length);
err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
cout<<"Decoded: "<<rs->recvmsg<<"(Message Length: "<<err<<" | String Length: "<<strlen((char*)rs->recvmsg)<<"\n";
delete rs;
return 0;
}
Output:
Decoded: (Message Length: 223 | String Length: 0
Decoded: Hallo Welt
(Message Length: 223 | String Length: 11
If the message is cut off at the end, the bytes for the redundancy are missing (i.e., zeros). I would assume that the decoder returns -1 because it cannot decode anything. But unfortunately it returns the message length.
This is a pretty fascinating bug report, but ultimately I've decided this is actually the correct behavior.
The first oddity to notice here is that a message that's all 0s has a parity section that's also all 0s (this may not be true for other primitive polynomials/FCR/root gaps, but is true for CCSDS/1/1). That means that a block with all 0s is actually a valid block, and it decodes to a message containing all 0s.
The second issue is that your message is less than 16 characters long. For a block with 32 roots, up to 16 of the 255 bytes can be corrupted entirely and the message can still be recovered. By removing the last 32 bytes of the message and replacing them with 0, it would seem that decode should indeed error out. Instead, it succeeds but gives you back a payload of all 0s. That's because the block you gave to decode is actually less than 16 bytes away from the valid, all-0s block, and with the bytes repaired, it seems to be a successful decode from Reed-Solomon's point of view.
Unfortunately, with enough bytes replaced, this can happen. I have tried to make a note of this in the comments in correct.h:
In most cases, if the block is too corrupted, this function
will return -1 and not perform decoding. It is possible but
unlikely that the payload written to msg will contain
errors when this function returns a positive value.
It seems you hit one of these unfortunate but possible cases. If you want to reduce the likelihood that this can happen, you may want to add a CRC32 checksum to your message. Reed-Solomon can reject some message failures, but not as well as a good checksum can.
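A sketch of that suggestion (not from the thread; it uses zlib's crc32(), and the 4-byte checksum must fit inside message_length):

```c
#include <stdint.h>
#include <zlib.h>

/* Append a little-endian CRC32 to the payload; returns the new length.
 * The caller must reserve 4 spare bytes at the end of the message buffer. */
static size_t append_crc32(uint8_t *msg, size_t payload_len) {
    uint32_t c = (uint32_t)crc32(0L, Z_NULL, 0);
    c = (uint32_t)crc32(c, msg, (uInt)payload_len);
    for (int i = 0; i < 4; i++)
        msg[payload_len + i] = (uint8_t)(c >> (8 * i));
    return payload_len + 4;
}

/* After correct_reed_solomon_decode() returns a positive value, verify the
 * checksum before trusting the payload; a mismatch flags a miscorrection
 * like the all-zeros case above. */
static int crc32_ok(const uint8_t *msg, size_t payload_len) {
    uint32_t c = (uint32_t)crc32(0L, Z_NULL, 0);
    c = (uint32_t)crc32(c, msg, (uInt)payload_len);
    uint32_t stored = 0;
    for (int i = 0; i < 4; i++)
        stored |= (uint32_t)msg[payload_len + i] << (8 * i);
    return c == stored;
}
```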
|
gharchive/issue
| 2018-05-28T16:48:58 |
2025-04-01T06:40:09.839495
|
{
"authors": [
"Hiffi",
"brian-armstrong"
],
"repo": "quiet/libcorrect",
"url": "https://github.com/quiet/libcorrect/issues/20",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
309652911
|
Profitability chart for dashboard item pop-up looks very ugly
Are you able to reproduce this consistently? Although I don't see how this could affect anything, does it persist after providing a translation for the hydrogen turbine? Does it happen with a default scenario?
If it's still broken could you provide details of the branches, scenario, and browser being used?
It looks normal to me locally and on beta:
On beta this still doesn't look ideal:
|
gharchive/issue
| 2018-03-29T07:49:27 |
2025-04-01T06:40:09.868250
|
{
"authors": [
"ChaelKruip",
"antw"
],
"repo": "quintel/etmodel",
"url": "https://github.com/quintel/etmodel/issues/2750",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
983880195
|
CCUS for waste incineration should be added to the investment table
The investment table now only includes the waste incinerator and waste CHP without CCS:
We are maybe removing this investment table: https://github.com/quintel/etmodel/issues/3781
Let's remove it in the September 2021 deploy. Don't you think, @redekok (Roos)?
Do you know when the investment table will be removed @MartLubben? It's just a minor effort to update it so I wouldn't mind fixing that before the deploy.
What are your thoughts on this, @mabijkerk?
As I understood from Mart it is not yet fully clear what will happen with the investment table. It might be decided that it will stay in the model. If @MartLubben agrees and you have time to pick this up @redekok then that would be great!
Unfortunately I don't have any time to pick this up anytime soon. Perhaps @Charlottevm or @MartLubben could help out here?
|
gharchive/issue
| 2021-08-31T13:43:34 |
2025-04-01T06:40:09.871392
|
{
"authors": [
"MartLubben",
"mabijkerk",
"redekok"
],
"repo": "quintel/etmodel",
"url": "https://github.com/quintel/etmodel/issues/3817",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1593255383
|
Revert SavedScenario to older version through the API
Add the ability to revert to an older version of a SavedScenario.
Possible solution: when making a PUT request with a scenario_id present, and this id is present in the history, we revert to this scenario. This will erase all of the 'future' after said scenario without any possibility of getting it back. To ensure users don't revert by accident, we could add an extra parameter revert_to instead, or as a complement.
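A sketch of what such a request could look like (the endpoint shape and parameter names below are part of the proposal, not an existing API):

```
PUT /api/v1/saved_scenarios/123
Content-Type: application/json

{
  "scenario_id": 456,
  "revert_to": true
}
```

Requiring revert_to alongside the historical scenario_id makes an accidental rollback much harder than overloading the plain update.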
References quintel/etengine#1320
I think this one is still on the wishlist!
|
gharchive/issue
| 2023-02-21T10:51:50 |
2025-04-01T06:40:09.873082
|
{
"authors": [
"noracato"
],
"repo": "quintel/etmodel",
"url": "https://github.com/quintel/etmodel/issues/4085",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2010326445
|
[Future-Proofing] - Better replication of API calls
When writing the library I started by replicating all of the headers sent in the requests. When experimenting I found that all I needed to send was the authorization header, so for simplicity that's all I included in the library. With this, the developers could easily distinguish packets sent from the actual app versus Lapsepy just by checking something as simple as the user agent. The device ID is also sent, so I'm not sure how spoofing that will work, though users could probably use their own unique ID with a few modifications to my LapseRefreshTokenSniffer project.
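A rough sketch of the fuller replication described above (the user agent string, device ID, and endpoint are placeholders, not the app's real values):

```python
import requests

def send_request(query: str, refresh_token: str, device_id: str) -> requests.Response:
    # Placeholder values throughout: a real client would replay the app's
    # exact user agent and a captured device ID instead of these stand-ins.
    headers = {
        "authorization": f"Bearer {refresh_token}",
        "user-agent": "Lapse/1.0 (iPhone; iOS 17.0)",  # mimic the app, not python-requests
        "x-device-id": device_id,
    }
    return requests.post(
        "https://example-lapse-endpoint/graphql",  # placeholder endpoint
        json={"query": query},
        headers=headers,
        timeout=10,
    )
```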
Still needs better replication on other endpoints, but since most of it is done in sync-service, this is enough for now.
|
gharchive/issue
| 2023-11-24T23:43:28 |
2025-04-01T06:40:09.874879
|
{
"authors": [
"quintindunn"
],
"repo": "quintindunn/lapsepy",
"url": "https://github.com/quintindunn/lapsepy/issues/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1145160317
|
puppets bunker superior sharp turn incorrect callout
Description
During The Puppets' Bunker's split boss with the three superior flight units, in the 2nd half of the fight the callout for the "Formation: Sharp Turn" ability was incorrect. It said to move outside when the safe spot was actually on the inside. At that time one of the three flight units was already destroyed.
It's possible that the skill IDs we used are simply wrong for inside/outside, or perhaps something changes once one of the bosses dies. I've attached the log file which has the incorrect prediction. The 2nd use of Formation: Sharp Turn should have said "inside" (I didn't capture screen footage of this, unfortunately).
The first outside callout was correct, and I know I've had correct callouts for inside vs outside in the past. I suspect that the root cause is the death of one of the three units.
Additional information
puppets-bunker-inside-outside.tar.gz
We might want to revert this to just a warning about an upcoming sharp turn for now.
Perhaps heading is enough. I'd have to check. I think that it was caused by one of the three units being dead, but I'm not precisely sure how that impacted it yet.
|
gharchive/issue
| 2022-02-20T23:02:51 |
2025-04-01T06:40:09.884755
|
{
"authors": [
"jacob-keller"
],
"repo": "quisquous/cactbot",
"url": "https://github.com/quisquous/cactbot/issues/4152",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1136926499
|
loading forever
The NED model is running in a forever loop. It's not giving any output; it's just stuck between the 'loading embeddings' and 'done' prompts.
Without a more detailed error message, it is not possible to solve your problem.
It is likely that some sub-process has terminated due to some problem and the error log is not propagated from the sub-process.
In order to repeat the computation in single process mode, set all the _PROCESSES configuration variables to 0.
Example:
https://github.com/qurator-spk/sbb_ned/blob/master/qurator/sbb_ned/webapp/de-config-debug.json
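For illustration, the relevant entries would look something like this (the key names here are placeholders; set whichever *_PROCESSES entries appear in your own config to 0):

```json
{
    "EVALUATION_PROCESSES": 0,
    "LOOKUP_PROCESSES": 0,
    "PAIRING_PROCESSES": 0
}
```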
Then, you should observe some error message that can help to finally solve the underlying problem.
I made said changes in the config file and now I'm getting an error.
Could you provide your config file?
I'm using this file
https://github.com/qurator-spk/sbb_ned/blob/master/qurator/sbb_ned/webapp/en-config.json
There were some entries missing in that file. I added the missing entries.
Could you update the file and retry?
|
gharchive/issue
| 2022-02-14T08:09:12 |
2025-04-01T06:40:09.899174
|
{
"authors": [
"Ashbajawed",
"labusch"
],
"repo": "qurator-spk/sbb_ned",
"url": "https://github.com/qurator-spk/sbb_ned/issues/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
85837889
|
set locale of moment.js
It would be great to expose a prop to set the locale
I never thought about that, this is a good point
@chollier , what is the status of supporting locale? More specifically, i'm interested in being able to translate like 'june 12' and the week days. This is a great project by the way. Thanks for that.
@andrerpena , I made a fork of the project and translated it to pt-br, but it is hardcoded for now.
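Until such a prop exists, a global workaround is to set the locale on moment itself (note this changes formatting app-wide, not per component):

```js
var moment = require('moment');
require('moment/locale/pt-br'); // load the locale data

moment.locale('pt-br'); // 'june 12' and weekday names now render in pt-br
```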
|
gharchive/issue
| 2015-06-07T02:04:48 |
2025-04-01T06:40:09.900802
|
{
"authors": [
"andrerpena",
"chollier",
"ipy",
"jneto"
],
"repo": "quri/react-bootstrap-datetimepicker",
"url": "https://github.com/quri/react-bootstrap-datetimepicker/issues/58",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2328684634
|
functionality for permuting tensor product order of Qobj
This PR addresses issue #95. It creates a function permute which permutes the subsystem order of composite Ket, Bra, and Operator objects.
A new file src/qobj/tensor_functions.jl was created to store tensor-related methods.
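A usage sketch (the argument order is assumed here; see the docstrings for the final form):

```julia
using QuantumToolbox

ψ = tensor(basis(2, 0), basis(3, 1))  # subsystem order A ⊗ B

ψ_swapped = permute(ψ, [2, 1])        # subsystem order B ⊗ A
```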
I believe I addressed all of the comments with this PR; the implementation is totally different. Thanks to @albertomercurio for pointing out a much cleaner way to address this.
Great! You have to include the missing docstrings in the documentation. Could you also format the documents you changed?
I will clean up this PR today; I also remembered I need to add tests for error handling.
Great, everything seems fine, except for documentation and format checking. Can you add the function to the documentation, and format all the changed files?
yes! I totally omitted those two issues.
BTW, can you remove the JuliaFormatter dependency? We don't need that. You can format the code by just using JuliaFormatter from another environment (or just calling Format Document from vscode)
@aarontrowbridge
I have added some comments above.
@ytdHuang thanks! I saw, but I came down with something yesterday and have been rather under the weather. I will try to resolve everything today
@aarontrowbridge
Thank you for addressing all the comments.
I saw you moved back the tensor and kron functions. But it seems that you didn't delete the qobj/tensor_functions.jl file.
|
gharchive/pull-request
| 2024-05-31T22:54:51 |
2025-04-01T06:40:09.909330
|
{
"authors": [
"aarontrowbridge",
"albertomercurio",
"ytdHuang"
],
"repo": "qutip/QuantumToolbox.jl",
"url": "https://github.com/qutip/QuantumToolbox.jl/pull/152",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
972359179
|
Add info to readme on how to use
This adds basic information on how to initialize a Qobj from a CuPyDense array.
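The addition is along these lines (a sketch assuming the qutip data-layer API targeted by this package; see the README diff for the exact wording):

```python
import numpy as np
import qutip
from qutip_cupy import CuPyDense

# Wrap the data in CuPyDense so the Qobj's underlying array lives on the GPU.
data = CuPyDense(np.array([[0.0, 1.0], [1.0, 0.0]]))
qobj = qutip.Qobj(data)
```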
Pull Request Test Coverage Report for Build 1138269199
2 of 3 (66.67%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-0.2%) to 87.821%
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| src/qutip_cupy/__init__.py | 2 | 3 | 66.67% |

Totals (change from base Build 1127753560): -0.2%
Covered Lines: 274
Relevant Lines: 312
|
gharchive/pull-request
| 2021-08-17T06:51:00 |
2025-04-01T06:40:09.916056
|
{
"authors": [
"MrRobot2211",
"coveralls"
],
"repo": "qutip/qutip-cupy",
"url": "https://github.com/qutip/qutip-cupy/pull/42",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1721051824
|
🛑 Adguard Home DoT is down
In a1bd1d6, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in a1b4202.
|
gharchive/issue
| 2023-05-23T02:52:03 |
2025-04-01T06:40:09.929718
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/1414",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1777680311
|
🛑 Adguard Home DoT is down
In 50f1b3f, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in 3b012f9.
|
gharchive/issue
| 2023-06-27T20:08:46 |
2025-04-01T06:40:09.931892
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/2539",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1868104743
|
🛑 Adguard Home DoT is down
In 32d28b8, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in efbf4f8 after 135 days, 9 hours, 19 minutes.
|
gharchive/issue
| 2023-08-26T13:49:22 |
2025-04-01T06:40:09.934087
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/4489",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1895350918
|
🛑 Adguard Home DoT is down
In 3ee3ac6, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in 10192a9 after 23 minutes.
|
gharchive/issue
| 2023-09-13T22:31:58 |
2025-04-01T06:40:09.936270
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/5116",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2087962689
|
🛑 Adguard Home DoT is down
In 169adb7, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in b3a6004 after 23 minutes.
|
gharchive/issue
| 2024-01-18T10:17:20 |
2025-04-01T06:40:09.938593
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/8854",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2088454377
|
🛑 Adguard Home DoT is down
In 17c889c, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in 1537047 after 7 minutes.
|
gharchive/issue
| 2024-01-18T14:52:10 |
2025-04-01T06:40:09.940769
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/8859",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2103757530
|
🛑 Adguard Home DoT is down
In 97795b1, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in daa0c16 after 7 minutes.
|
gharchive/issue
| 2024-01-27T19:42:39 |
2025-04-01T06:40:09.942885
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/9168",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2106488457
|
🛑 Adguard Home DoT is down
In c815d45, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in 23ebd70 after 7 minutes.
|
gharchive/issue
| 2024-01-29T21:43:12 |
2025-04-01T06:40:09.945189
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/9239",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2461189481
|
[📖] In the instructions for installing the styled kit, highlight that the "Make it yours" button is on the current page
Suggestion
On this page: https://qwikui.com/docs/styled/install/
It states Click on "make it yours" in order to customise the theme.
It wasn't immediately obvious to me that this is a button in the top header.
For people who are a bit slow like me, is it worth explicitly stating that "make it yours" is a button in the header of this page, or similar?
Agreed, up for a PR? 😁
|
gharchive/issue
| 2024-08-12T14:39:27 |
2025-04-01T06:40:09.964290
|
{
"authors": [
"drk-mtr",
"maiieul"
],
"repo": "qwikifiers/qwik-ui",
"url": "https://github.com/qwikifiers/qwik-ui/issues/924",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2381495544
|
Fix header space
This fix works now but it's not perfect. Depending on the content and wrapping of multicols, the gap can still slightly vary. How can I fix it more reliably?
It's fine. In the rewritten rule book, we just used vspaces (sometimes conditionally) to fix those.
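For the record, the kind of conditional vspace meant here (a sketch; the length and condition are illustrative and tuned per page in practice):

```latex
% Compensate the multicols-dependent gap before a header.
\ifdim\pagetotal>0pt
  \vspace*{-0.5\baselineskip}%
\fi
```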
|
gharchive/pull-request
| 2024-06-29T05:09:36 |
2025-04-01T06:40:09.965963
|
{
"authors": [
"qwrtln",
"tomaas-zeman"
],
"repo": "qwrtln/Homm3BG-mission-book",
"url": "https://github.com/qwrtln/Homm3BG-mission-book/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
634824825
|
Unit test that fails: expect_error(filter(chondro, spc > 250))
https://github.com/r-hyperspec/hySpc.dplyr/blob/6ae23ea0883f0984e304341647ea8cfe8a798c1a/R/filter.R#L78
I suggest accepting #18 before fixing this issue.
fixed by 131c7dbe
|
gharchive/issue
| 2020-06-08T18:07:25 |
2025-04-01T06:40:09.986110
|
{
"authors": [
"GegznaV",
"cbeleites"
],
"repo": "r-hyperspec/hySpc.dplyr",
"url": "https://github.com/r-hyperspec/hySpc.dplyr/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1722333121
|
Parameter groups that are more than just tensors?
Should we make sure that our logic is able to handle parameter groups that are more than a single tensor? For example, if I defined a "parameter group" to be both the weight and the bias in a feed-forward layer (because I want to get a bit of a boost from using a larger file, or because I don't want them to be changed independently), should our code be able to hash, serialize, save, etc. this collection of tensors as a single group?
We already have something similar in how updates with multiple values are serialized. This shouldn't be too difficult to implement with something like jax's tree_map abstraction.
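A sketch of that idea (hash_tensor stands in for whatever per-tensor hashing already exists):

```python
import jax

def hash_group(group, hash_tensor):
    """Apply a per-tensor operation uniformly over a parameter group.

    `group` may be a single array or any pytree of arrays, e.g.
    {"weight": w, "bias": b}; tree_map makes both cases look the same
    to the hashing/serialization logic.
    """
    return jax.tree_util.tree_map(hash_tensor, group)
```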
I really like the abstraction of "parameter group is a single tensor". Apart from convenience in terms of grouping things in other semantically-meaningful ways, is there any other benefit?
I didn't have a clear use-case; I was just thinking about how one review asked how the groups were identified. That is part of the checkpoint plugin, so technically it is currently user-overrideable.
Also some of the update classes processes dicts instead of single tensors (i.e. the two matrices in LoRA or the index and value in sparse updates) which have special code to deal with that. If all code could transparently do that it could simplify things.
I was mostly posting to see if someone else had a good use-case for it lol.
|
gharchive/issue
| 2023-05-23T15:40:04 |
2025-04-01T06:40:10.118639
|
{
"authors": [
"blester125",
"craffel"
],
"repo": "r-three/git-theta",
"url": "https://github.com/r-three/git-theta/issues/218",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
339113480
|
Script don't download with and after lectures with /
Hey @r0oth3x49
I just noticed something... There are a few courses where a lecture name contains a /. The script can't download the lecture with the /, nor any lecture after it, even if I use the '--lecture-start' flag to get the lecture after it.
It returns this error
Traceback (most recent call last):
  File "udemy-dl.py", line 1441, in <module>
    main()
  File "udemy-dl.py", line 1437, in main
    udemy.course_download(path=options.output, quality=options.quality, unsafe=options.unsafe)
  File "udemy-dl.py", line 510, in course_download
    lecture.dump(filepath=filepath, unsafe=unsafe)
  File "F:\Udemy\udemy-dl\udemy_shared.py", line 253, in dump
    with open(filename, 'wb') as f:
FileNotFoundError: [Errno 2] No such file or directory:
Cheers =)
@tofanelli please do follow the issue reporting guideline; I cannot fix the issue unless I understand what the root cause is, what the URL is, etc.
@r0oth3x49 do you have an email address where I can send you more data?
@tofanelli yeah. you can email me
|
gharchive/issue
| 2018-07-07T03:31:47 |
2025-04-01T06:40:10.126672
|
{
"authors": [
"r0oth3x49",
"tofanelli"
],
"repo": "r0oth3x49/udemy-dl",
"url": "https://github.com/r0oth3x49/udemy-dl/issues/250",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1725186844
|
Add ActiveDirectoryServicePrincipal authentication mechanism
Feature Request
Add ActiveDirectoryServicePrincipal authentication mechanism
Duplicate of #234
|
gharchive/issue
| 2023-05-25T06:42:27 |
2025-04-01T06:40:10.132573
|
{
"authors": [
"jbkervyn",
"mp911de"
],
"repo": "r2dbc/r2dbc-mssql",
"url": "https://github.com/r2dbc/r2dbc-mssql/issues/269",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
923795718
|
🛑 Pet Project is down
In 6d491fe, Pet Project ($RINFO_SITE) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Pet Project is back up in 9dfa24b.
|
gharchive/issue
| 2021-06-17T11:08:33 |
2025-04-01T06:40:10.176786
|
{
"authors": [
"rachitkataria13"
],
"repo": "rachitkataria13/upptime",
"url": "https://github.com/rachitkataria13/upptime/issues/245",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
137019378
|
BFF code cleanup.
In order for me to make changes to talk to the new API, I felt the existing code/setup needed some cleanup. In that process I made the following changes.
Created a vagrant configuration so that we can run grafana with the blueflood finder code pointing to a local blueflood run.
Added logging capability. I can't deal with print statements. Also figured out a way to enable these logs in staging/prod servers.
Fixed the current enum bug that exists in prod while populating drop down for enum metrics.
Organized dependencies(setup.py, test_requirements.txt).
Wrote brand new tests to capture the behavior of finder for various scenarios.
I learned a little bit about the grafana server setup that happens with the heat template. I have documented all that info. I would appreciate it if you could review the wiki along with this.
[Grafana server setup with blueflood finder](https://one.rackspace.com/display/cloudmetrics/Blueflood+Finder+grafana+server+internal+details)
Other than clarifying my vagrant question, lgtm.
me too... looks good other than the logging clarifications.
|
gharchive/pull-request
| 2016-02-28T07:12:10 |
2025-04-01T06:40:10.180232
|
{
"authors": [
"ChandraAddala",
"shintasmith",
"usnavi"
],
"repo": "rackerlabs/blueflood",
"url": "https://github.com/rackerlabs/blueflood/pull/634",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
99831354
|
1.0.0-beta3 Nested route with params(and without) issue
I use a route setup like this:
<Route component={CoreLayout}>
<Route name='home' path='/' component={HomeView} />
<Route name='user-info' path='user/:id' component={Profile} />
<Route name='about' path='about' component={AboutView} />
<Route name='dashboard' path='dashboard' component={DashboardView} />
<Redirect from='/admin' to='dashboard' />
</Route>
Everything works fine except the 'user/:id' path, and in fact any route like 'path/path' (when there is a slash in the path string). The server always returns 404.
Tried without params, just something like path='/user/profile', still no result.
Tried nesting them like
<Route name='user-info' path='user' component={UsersView}>
<Route path='profile' component={Profile} />
</Route>
Where UsersView was just a container that rendered {this.props.children}.
Does anyone have some tips about what might be the problem? Or maybe I'm missing the point?
This issue stemmed from use with my react-redux-starter-kit, and I believe it was just caused by the webpack dev server not handling the routes correctly, and has been fixed here: https://github.com/davezuko/react-redux-starter-kit/commit/e29692c33871e9339834308fcdc9af6ded88a203.
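For anyone who lands here with the same symptom, the usual dev-server fix is to fall back to index.html for unknown paths so nested URLs like /user/123 reach the client-side router (a sketch; the linked commit may differ in detail):

```js
// webpack dev server config
module.exports = {
  devServer: {
    historyApiFallback: true // serve index.html for deep links
  }
};
```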
Hopefully that clarifies things for the react-router authors, as it's probably just a non-issue.
Sounds like it's not us; let me know if otherwise.
|
gharchive/issue
| 2015-08-08T20:11:15 |
2025-04-01T06:40:10.189133
|
{
"authors": [
"davezuko",
"ryanflorence",
"zoilorys"
],
"repo": "rackt/react-router",
"url": "https://github.com/rackt/react-router/issues/1682",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
723611731
|
Dump to zarr fails partway through
I've tried dumping to zarr several times with repeatable failures, e.g.:
In [9]: cube = SpectralCube.read('G327.29_B6_spw3_12M_h2co303.image/', format='casa_image')
WARNING: StokesWarning: Cube is a Stokes cube, returning spectral cube for I component [spectral_cube.io.core]
In [10]: cube
Out[10]:
DaskVaryingResolutionSpectralCube with shape=(147, 784, 1080) and unit=Jy / beam and chunk size (7, 56, 72):
n_x: 1080 type_x: RA---SIN unit_x: deg range: 238.242362 deg: 238.325221 deg
n_y: 784 type_y: DEC--SIN unit_y: deg range: -54.636348 deg: -54.601548 deg
n_s: 147 type_s: FREQ unit_s: Hz range: 216331601824.006 Hz:216367735320.595 Hz
In [11]: cube = cube.rechunk(save_to_tmp_dir=True)
Illegal instruction (core dumped)
and
Beginning field W43-MM2 band 6 config 12M line sio spw 1 suffix .image
WARNING: StokesWarning: Cube is a Stokes cube, returning spectral cube for I component [spectral_cube.io.core]
Saving to tmpdir
[####### ] | 19% Completed | 1.3sIllegal instruction (core dumped)
This issue may be solved by changes to default chunk sizing. I'm closing this for now but it might be a real issue.
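If anyone hits this before a fix lands, one workaround sketch is to pass explicit chunks instead of relying on the defaults (the chunk shape below is illustrative):

```python
# Keep the spectral axis whole and let dask size the spatial axes.
cube = SpectralCube.read('G327.29_B6_spw3_12M_h2co303.image/', format='casa_image')
cube = cube.rechunk(chunks=(-1, 'auto', 'auto'), save_to_tmp_dir=True)
```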
|
gharchive/issue
| 2020-10-17T01:27:50 |
2025-04-01T06:40:10.238726
|
{
"authors": [
"keflavich"
],
"repo": "radio-astro-tools/spectral-cube",
"url": "https://github.com/radio-astro-tools/spectral-cube/issues/674",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2372725119
|
Avoid samp obs
Use of observation class to pass sampling options to fits writer
avoid astropy versions >6.1.0
|
gharchive/pull-request
| 2024-06-25T13:36:01 |
2025-04-01T06:40:10.248295
|
{
"authors": [
"Kevin2"
],
"repo": "radionets-project/pyvisgen",
"url": "https://github.com/radionets-project/pyvisgen/pull/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2606751709
|
REQUEST: New membership for sk593
GitHub Username
@sk593
Requirements
[x] I have reviewed the community membership guidelines
[x] I have enabled 2FA on my GitHub account, see https://github.com/settings/security
[x] I have subscribed to the Radius community Discord server
[x] I am contributing (any of the following apply: issues, discussions, PRs, reviews) to 1 or more Radius repositories
List of contributions to the Radius project
/radius: APPROVER
Authored ~120 PRs
Opened ~50 issues
Key areas of work: Recipes, Bicep extensibility, Portable Resources, CLI, Terraform
/bicep-types-aws: MAINTAINER
Refactored a lot of the repository code and workflows: https://github.com/radius-project/bicep-types-aws/pull/43
I have also opened issues for errors in our workflows and PRs that I would like to fix, but would need maintainer-level status to do: https://github.com/radius-project/bicep-types-aws/issues/61
AB#13500
+1
+1
+1 to both
|
gharchive/issue
| 2024-10-22T23:34:18 |
2025-04-01T06:40:10.254011
|
{
"authors": [
"rynowak",
"sk593",
"sylvainsf",
"willtsai"
],
"repo": "radius-project/community",
"url": "https://github.com/radius-project/community/issues/62",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1732435697
|
🛑 GPS Receiver API is down
In dc5acf2, GPS Receiver API (http://tracker.ace-energy.co.th/receiver/api/v1/health) was down:
HTTP code: 504
Response time: 15033 ms
Resolved: GPS Receiver API is back up in 71071d7.
|
gharchive/issue
| 2023-05-30T14:50:47 |
2025-04-01T06:40:10.256921
|
{
"authors": [
"chindanai"
],
"repo": "radiuszon/upptime",
"url": "https://github.com/radiuszon/upptime/issues/1019",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|