id (string, 4–10 chars) | text (string, 4 – 2.14M chars) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added (timestamp, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
2670175230
|
Allow adjustment of adc ranges without providing delay ranges
Currently, you have to provide delay_ranges to delay_calibration if you provide adc_ranges. This should be changed so that delay ranges are picked up from a file when possible.
The underlying issue is that it is not well defined what to do if calibration parameters are already present: should just those be used, or should new ones be read from the file of, e.g., a different scan? To solve this, a new option has been added in PR #525
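A minimal sketch of the requested fallback behavior. The function name echoes sed's delay_calibration, but the signature and the calibration dict are illustrative assumptions, not sed's actual API:

```python
def delay_calibration(adc_ranges, delay_ranges=None, calibration=None):
    """If adc_ranges are given without delay_ranges, fall back to delay
    ranges previously saved in a calibration file instead of failing."""
    if delay_ranges is None:
        if calibration is None or "delay_ranges" not in calibration:
            raise ValueError(
                "delay_ranges not provided and none found in calibration file"
            )
        # Reuse previously saved values, e.g. from a different scan.
        delay_ranges = calibration["delay_ranges"]
    return {"adc_ranges": adc_ranges, "delay_ranges": delay_ranges}
```

With the new option, whether the stored or the freshly supplied parameters win becomes an explicit caller choice rather than an implicit one.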
|
gharchive/issue
| 2024-11-18T22:42:44 |
2025-04-01T04:55:28.297494
|
{
"authors": [
"rettigl"
],
"repo": "OpenCOMPES/sed",
"url": "https://github.com/OpenCOMPES/sed/issues/521",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
743769152
|
OpenCSPM not picking up custom output.json
Describe the bug
Having followed the instructions at https://github.com/OpenCSPM/opencspm/blob/main/site/data_collection.md#aws-cloud-resources, OpenCSPM does not pick up my custom data.
# ls -l assets/custom
total 316
-rw-r--r-- 1 christophetd christophetd 12 nov. 16 12:35 manifest.txt
-rwxr-xr-x 1 christophetd christophetd 316195 nov. 16 12:35 output.json
$ cat assets/custom/manifest.txt
output.json
$ cat config/config.yaml
---
db:
  host: redis
  port: 6379
buckets:
  # - gs://darkbit-collection-us-cspm
  # - s3://my-other-bucket-here
local_dirs:
  - /app/data/custom
  # - /app/data/test
I did run docker-compose down && docker-compose up after making these modifications.
There is no visible error in the logs.
Expected behavior
OpenCSPM should pick up the new resources file
Screenshots
UI:
Docker information (please complete the following information):
$ docker --version
Docker version 19.03.13, build 4484c46d9d
$ docker-compose --version
docker-compose version 1.27.4, build 40524192
Cloud provider (if applicable):
AWS
Solved: I had to manually delete Docker volumes (docker volume rm opencspm_core opencspm_postgres opencspm_redis) and restart OpenCSPM. Might be worth mentioning in the docs
This is a UI bug. The new files will be loaded since the config.yaml is re-read every time the background job runs, but it is not reflected correctly in the UI.
|
gharchive/issue
| 2020-11-16T11:43:56 |
2025-04-01T04:55:28.301834
|
{
"authors": [
"christophetd",
"joshlarsen"
],
"repo": "OpenCSPM/opencspm",
"url": "https://github.com/OpenCSPM/opencspm/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2703535524
|
Verify authncontext in assertion from IdP
This issue is imported from Pivotal - Originally created on May 7, 2020 by Thijs Kinkhorst
For a given combination of IdP, SP, and AuthnContextClassRef (A, B, R), the authnrequest will be enhanced with a RequestedAuthnContext, which the IdP will use to e.g. trigger an MFA scenario or other policy.
The authnrequest is just a "request" and is, for various reasons, not secure. We therefore need to verify, in the assertion we receive back (which is secure), that the IdP actually performed the authentication within the requested context (e.g., verified MFA).
We can do this the spec-compliant way, but ADFS does not conform to this spec / does not use the spec in the way we want it to: it will always list the authncontextclassref of the first factor, so we can see no difference there.
If the authncontextclassref has been configured for this IdP and SP combination, the authentication can succeed (that is, be forwarded to the SP) when:
The requested authncontextclassref R is present as the "AuthnContextClassRef" element of the assertion (generic, for SAML 2.0 IdPs); OR
The requested authncontextclassref R is present as a value of the (multi-valued) attribute http://schemas.microsoft.com/claims/authnmethodsreferences (just another attribute, like other attributes in the assertion such as urn:mace:dir:attribute-def:uid).
Checking both raises the chance that it also works with non-ADFS implementations.
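The two acceptance rules above can be sketched as a single predicate. This is a simplified model over already-extracted values; the real check lives in EngineBlock's PHP input filters and operates on the parsed assertion XML:

```python
ADFS_METHODS_ATTR = "http://schemas.microsoft.com/claims/authnmethodsreferences"

def authn_context_satisfied(requested, assertion_classrefs, authnmethods_values):
    """True when the requested AuthnContextClassRef R appears either as an
    AuthnContextClassRef element of the assertion (generic SAML 2.0 IdPs)
    or among the values of the multi-valued ADFS attribute
    http://schemas.microsoft.com/claims/authnmethodsreferences."""
    return requested in assertion_classrefs or requested in authnmethods_values
```

Applied to the ADFS example below, R = "http://schemas.microsoft.com/claims/multipleauthn" fails the classref check but passes the attribute check, so the authentication is forwarded.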
Example assertion message from generic IdP:
<AuthnContext>
<AuthnContextClassRef>R</AuthnContextClassRef>
</AuthnContext>
MS (succeeds for R = "http://schemas.microsoft.com/claims/multipleauthn"):
<AttributeStatement>
<Attribute Name="urn:mace:dir:attribute-def:mail">
<AttributeValue>medewerker04@hartingcollege.nl</AttributeValue>
</Attribute>
<Attribute Name="http://schemas.microsoft.com/claims/authnmethodsreferences">
<AttributeValue>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</AttributeValue>
<AttributeValue>http://schemas.microsoft.com/ws/2012/12/authmethod/phoneappnotification</AttributeValue>
<AttributeValue>http://schemas.microsoft.com/claims/multipleauthn</AttributeValue>
</Attribute>
</AttributeStatement>
<AuthnStatement AuthnInstant="2020-05-05T12:09:21.031Z"
SessionIndex="_d884044e-5f13-4b8e-9fcf-5779cf1dee2b">
<AuthnContext>
<AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</AuthnContextClassRef>
</AuthnContext>
</AuthnStatement>
When neither is present, fail with a specific exception (the message logs which IdP and SP are involved and which authncontextclassref we're searching for). See #172718794 for details.
This validation can be added to the existing input filter: \EngineBlock_Corto_Filter_Command_ValidateAuthnContextClassRef. I suggest adding a validation helper (residing in the new EngineBlock source folder) that is tasked with verifying the requirements stated in this story.
This will entail moving the blacklist validation to a similar validation helper service. Alternatively, another input filter could be created that is specifically tasked with validating the SP/IdP authncontext settings configured in Manage. (Michiel Kodde - Jun 30, 2020)
@thijskh you requested:
When neither is present, fail with a specific exception (message logs which idp, sp involved and which authncontextclassref we're searching for).
Have you thought about the specifics of this user-facing error message? I'd like to know the proposed error path name and the English/Dutch error messages. Or am I wrong to assume this will be a custom user-facing error message?
Nevermind, I later read #172718794 (Michiel Kodde - Jun 30, 2020)
|
gharchive/issue
| 2024-11-29T00:09:14 |
2025-04-01T04:55:28.349140
|
{
"authors": [
"phavekes"
],
"repo": "OpenConext/OpenConext-engineblock",
"url": "https://github.com/OpenConext/OpenConext-engineblock/issues/1641",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
487157575
|
Difficulty understanding the README
I'd like to contribute by adding my summaries, but the README is not intuitive. Should I run the script in every case, or only when the folder for the discipline in question doesn't exist yet? If it doesn't exist, should I use the full name of the discipline or just the abbreviation? What is that README.md with {discipline}?
When I open Contributing to try to get more information on how to contribute, I find the same information I've already seen and nothing new. Should I always create an issue when I want to contribute something that doesn't have one yet (for example, a new OS summary)? Do I create a new branch to open a PR?
Hi, @marcellasiqueira, thank you very much for your interest in contributing! :smile:
If the discipline doesn't exist, you can run the script as described and make whatever adjustments you find necessary in the generated folder (the {discipline} shown in README.md after generation is a bug in the script, and really is our fault). We normally use the abbreviated names of the disciplines (like tc, so, eda, etc.).
The step-by-step for making a contribution is described in the "How to contribute?" section of Contributing.md. In short, you need to fork the repository, which creates a copy of it under your account, make your commits there (creating a new branch in the forked repo is entirely up to you), and submit the Pull Request. When opening the PR, there will be a mini-template to help with the process too.
In general, we try to encourage contributions as much as possible; don't be afraid to send your changes or to make some "mistake". We take care during review and, when necessary, any changes we suggest will always come in a totally friendly and transparent way. We look forward to your summaries! :heart:
Oh, and if there is no issue for the material you want to add, feel free to open a new one. If it helps, you can follow one of the templates below:
Template 1 - new summary
Template 2 - new folder
Template 3 - adding leite
Template 4 - fixing some content
Template 5 - updating the content of some discipline
@marcellasiqueira that {discipline} is probably a bug; it should insert the name of the discipline you want to create.
As for the other questions, @lucasmedeiros has answered them; if you have more, please use this issue to ask! We want to know so we can improve our readme and contributing guide.
I'll open some issues to add the information you asked about here.
|
gharchive/issue
| 2019-08-29T20:20:29 |
2025-04-01T04:55:28.378427
|
{
"authors": [
"lucasmedeiros",
"marcellasiqueira",
"thayannevls"
],
"repo": "OpenDevUFCG/Tamburetei",
"url": "https://github.com/OpenDevUFCG/Tamburetei/issues/227",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2328935495
|
Persistent docker doesn't work with AgentSkills
@SmartManoj could you please take a look at this?
The persistent Docker sandbox doesn't seem to work well with AgentSkills. Repro command:
ONLY_TEST_NAME=test_edits TEST_ONLY=true ONLY_TEST_AGENT=CodeActAgent ./tests/integration/regenerate.sh
(with persist_sandbox = true)
I think https://github.com/OpenDevin/OpenDevin/pull/1998 causes a regression.
The test passed because you set persist_sandbox = false in CI.
If we make persistence ON by default, we should probably also turn it on in CI?
Originally posted by @li-boxuan in https://github.com/OpenDevin/OpenDevin/issues/2139#issuecomment-2138723188
Creating an issue to track the problem
works if workdir='/'
Action link
ls -la /workspace returns just total 0\n
The typical two lines (. and ..) are missing. Open to suggestions!
works if workdir='/'.
A new issue, pip not found, was raised after this.
Are there any tests that clear sandbox storage?
Has anyone experienced this behavior after running the test?
Has anyone experienced this behavior after running the test?
regenerate.sh removes the _test_workspace folder (rm -rf) before starting pretty much every single test.
It also does that when a test fails. And even if your terminal is in "OpenDevin", after the test it thinks it is in the now-removed test folder. If you manually go one folder back and forward, it resets.
If not done that way, you get that "FileNotFoundError" when running regenerate again.
That's at least my experience here over the last few days, and it looks pretty close to what happens in your screenshot?
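The deleted-working-directory effect described above can be reproduced in isolation; a small Python sketch (Linux semantics assumed):

```python
import os
import shutil
import tempfile

def cwd_error_after_rm():
    """Reproduce the FileNotFoundError you can hit when a script
    rm -rf's the directory the process is currently in."""
    original = os.getcwd()
    doomed = tempfile.mkdtemp()
    os.chdir(doomed)
    shutil.rmtree(doomed)        # the process's cwd no longer exists
    try:
        os.getcwd()              # on Linux this raises FileNotFoundError
        caught = None
    except FileNotFoundError as exc:
        caught = exc
    finally:
        os.chdir(original)       # "go one folder back and forward" fixes the shell the same way
    return caught
```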
@li-boxuan can this issue be closed now that PERSIST_SANDBOX is set to false by default and the feature likely needs an overhaul to be implemented correctly?
@mamoodi I think we should keep this issue open unless we fix it, remove PERSIST_SANDBOX completely, OR have another issue/PR that tracks an overhaul?
Makes sense. Let's keep it open until one of those happen. I think what needs to be done is a little bit in the air.
|
gharchive/issue
| 2024-06-01T05:48:43 |
2025-04-01T04:55:28.389004
|
{
"authors": [
"SmartManoj",
"li-boxuan",
"mamoodi",
"tobitege"
],
"repo": "OpenDevin/OpenDevin",
"url": "https://github.com/OpenDevin/OpenDevin/issues/2176",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2220429486
|
chore: update and fix broken links in table of contents in README.md
The table of contents in README.md appears to be outdated and the in-page navigation is also broken.
In this PR I updated the table of contents to match the latest updates of the README.md contents and fixed the in-page navigation links.
Thank you!
|
gharchive/pull-request
| 2024-04-02T12:43:17 |
2025-04-01T04:55:28.391495
|
{
"authors": [
"rbren",
"tgasla"
],
"repo": "OpenDevin/OpenDevin",
"url": "https://github.com/OpenDevin/OpenDevin/pull/574",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1975378631
|
Workaround for errors when reading the API list
Manually replace these two files
How do I replace them?
Only a few of the SMS endpoints in the repo are still usable
Most of them no longer work
Solved, it works now, just not many SMS entries left. Thanks, already starred.
After replacing them it works, but apart from the first few showing success, the rest only print white and red API result codes. Are those all failures?
|
gharchive/issue
| 2023-11-03T02:51:50 |
2025-04-01T04:55:28.535840
|
{
"authors": [
"DovisW",
"Typing1022",
"chenjiamin-dotcom",
"dzz10",
"fakeuser2000",
"huangjin999"
],
"repo": "OpenEthan/SMSBoom",
"url": "https://github.com/OpenEthan/SMSBoom/issues/300",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2491909164
|
Map++: log_initialization_information function is always calling write_summary_file()
Bug description
Look here: if a summary file is specified, write_summary_file() is called twice; otherwise just once.
I am pretty sure line 2684 should be dropped
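A toy Python model of the reported control flow (the real code is MAP++'s source; the structure here is an illustrative reconstruction of the bug, not the actual implementation):

```python
def log_initialization_information(summary_file, calls):
    """One call inside the branch plus one unconditional call, so a
    specified summary file gets written twice; otherwise just once."""
    def write_summary_file():
        calls.append("write_summary_file")

    if summary_file is not None:
        write_summary_file()     # intended single call for the given file
    write_summary_file()         # the extra unconditional call proposed for removal
```

Dropping the unconditional call leaves exactly one write per specified summary file, which is the fix applied in #2405.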
OpenFAST Version
OpenFAST dev branch, commit e204587c4aa9e81f0d47a92c13ab9c7d7cc7aaf5
System Information (please complete the following information):
OS: Arch Linux
Compiler: gcc version 14.2.1 20240805 (GCC)
Compiler settings: --target mappplib
I agree. It looks like the additional line was added in commit c6bc9d18 (in March 2015), which was a merge of a couple of branches in svn.
Corrected with #2405. Thanks @sanguinariojoe for reporting this!
|
gharchive/issue
| 2024-08-28T12:12:05 |
2025-04-01T04:55:28.539043
|
{
"authors": [
"andrew-platt",
"bjonkman",
"sanguinariojoe"
],
"repo": "OpenFAST/openfast",
"url": "https://github.com/OpenFAST/openfast/issues/2396",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
394984362
|
Inconsistent rendering on X-series devices
When using this library, I found that iPhone X / XS / XS Max / XR render differently from devices with other screens, and this includes font issues.
Concretely, widgets are taller and spacing is larger than on other devices, while fonts barely change, so everything looks as if it were squeezed horizontally.
The design was made in Sketch with an iPhone 8 artboard, 750*1334. Apart from the devices above, everything else looks fairly normal. Looking for a solution.
I just saw a closed issue suggesting to use width uniformly. Would that cause any problems, e.g. also using width for the vertical spacing between two widgets?
The drawback of using setWidth everywhere is that what fits on one screen in the design may not be fully visible on another.
For scrollable pages I think it has no impact.
For static pages that completely fill the screen, there may be display anomalies.
If you pay a bit of attention while testing, I think it's fine.
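The mismatch can be seen with a little arithmetic. A Python sketch using the 750x1334 artboard; the iPhone pixel sizes are assumed from Apple's published specs, and the scale factors model flutter_screenutil-style sizing:

```python
# Design artboard (iPhone 8, Sketch): 750 x 1334 px.
DESIGN_W, DESIGN_H = 750, 1334

def scale_factors(screen_w, screen_h):
    """Width and height scale factors derived for a given physical screen."""
    return screen_w / DESIGN_W, screen_h / DESIGN_H

# iPhone 8 (750x1334): both factors are 1.0, everything matches the design.
# iPhone X (1125x2436): width scale 1.5, height scale ~1.83 -- heights and
# vertical gaps sized with the height factor grow faster than widths and
# fonts sized with the width factor, which produces the "squeezed" look.
```

Sizing everything (including vertical spacing) with the width factor keeps aspect ratios, at the cost that a screen-filling static layout may overflow vertically on taller devices.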
|
gharchive/issue
| 2018-12-31T13:59:20 |
2025-04-01T04:55:28.543116
|
{
"authors": [
"changeyan",
"lizhuoyuan"
],
"repo": "OpenFlutter/flutter_screenutil",
"url": "https://github.com/OpenFlutter/flutter_screenutil/issues/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1843393963
|
add the whole_rings filter to defaults
Also fixed up the whole_rings_only filter.
Hello @richardjgowers! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
In the file src/kartograf/atom_mapper.py:
Line 290:80: E501 line too long (87 > 79 characters)
Line 341:80: E501 line too long (86 > 79 characters)
Line 615:80: E501 line too long (91 > 79 characters)
Line 616:80: E501 line too long (83 > 79 characters)
Line 649:37: E225 missing whitespace around operator
Line 649:41: E701 multiple statements on one line (colon)
Line 649:80: E501 line too long (89 > 79 characters)
Line 695:37: E225 missing whitespace around operator
Line 695:41: E701 multiple statements on one line (colon)
Line 695:80: E501 line too long (110 > 79 characters)
In the file src/kartograf/filters/ring_changes.py:
Line 84:80: E501 line too long (84 > 79 characters)
|
gharchive/pull-request
| 2023-08-09T14:32:23 |
2025-04-01T04:55:28.579569
|
{
"authors": [
"pep8speaks",
"richardjgowers"
],
"repo": "OpenFreeEnergy/kartograf",
"url": "https://github.com/OpenFreeEnergy/kartograf/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1615087274
|
Java output binding throws a null exception
When I deploy the Java demo following this link https://github.com/OpenFunction/samples/tree/main/functions/knative/java/with-output-binding, and access the function after deployment, I get this exception:
[main] INFO org.eclipse.jetty.server.Server - jetty-11.0.9; built: 2022-03-30T17:44:47.085Z; git: 243a48a658a183130a8c8de353178d154ca04f04; jvm 18.0.1.1+2
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@6f5ffd5c{/,null,AVAILABLE}
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started ServerConnector@5e792d1b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
[main] INFO org.eclipse.jetty.server.Server - Started Server@ee6f42b2{STARTING}[11.0.9,sto=0] @397ms
plugin plugin-example:v1.0.0 exec pre hook for http function at 2023-03-08 11:01:55.Z
receive event: {"message":"Awesome OpenFunction!"}
Mar 08, 2023 11:01:55 AM dev.openfunction.invoker.runtime.SynchronizeRuntime$OpenFunctionServlet service
SEVERE: Failed to execute function
java.lang.NullPointerException: Cannot invoke "io.dapr.client.DaprClient.invokeBinding(java.lang.String, java.lang.String, byte[], java.util.Map)" because "this.daprClient" is null
at dev.openfunction.invoker.context.UserContext.send(UserContext.java:116)
at dev.openfunction.samples.OpenFunctionImpl.accept(OpenFunctionImpl.java:15)
at dev.openfunction.invoker.runtime.SynchronizeRuntime$OpenFunctionServlet.service(SynchronizeRuntime.java:166)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:587)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:764)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:508)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:221)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1375)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:176)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:463)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:174)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1297)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:129)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122)
at org.eclipse.jetty.server.Server.handle(Server.java:562)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$0(HttpChannel.java:505)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:762)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:497)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:282)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:319)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:100)
at org.eclipse.jetty.io.SelectableChannelEndPoint$1.run(SelectableChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:894)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1038)
at java.base/java.lang.Thread.run(Unknown Source)
From the above, it looks like the Dapr client was not initialized by the Java SDK.
@wrongerror does the user need to add the annotations manually? https://openfunction.dev/docs/concepts/baas_integration/
> [quoted original report and stack trace]
Add these annotations to the Function.
openfunction.io/enable-dapr: true
openfunction.io/dapr-service-mode: sidecar
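For context, a sketch of where these annotations go in a Function manifest. Only the two annotations come from this thread; the surrounding field layout (apiVersion, metadata name) is an assumed example, not taken from the samples repo:

```yaml
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-front            # illustrative name
  annotations:
    # Force Dapr on, in sidecar mode, for this function:
    openfunction.io/enable-dapr: "true"
    openfunction.io/dapr-service-mode: "sidecar"
```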
Add these annotations to the Function.
openfunction.io/enable-dapr: true
openfunction.io/dapr-service-mode: sidecar
All samples should work under both proxy and sidecar modes, so why doesn't proxy mode work for Java? @wanjunlei @wrongerror
> [quoted original report and stack trace]
Add these annotations to the Function.
openfunction.io/enable-dapr: true
openfunction.io/dapr-service-mode: sidecar
I have tried this and it works well. But another question: I tried the Go with-output-binding demo and it works well without this annotation; the demo link is https://github.com/OpenFunction/samples/tree/main/functions/knative/with-output-binding.
Can you explain why?
> All samples should work under both proxy and sidecar modes, so why doesn't proxy mode work for Java? @wanjunlei @wrongerror
For Java, only sidecar mode works, while for Go both modes work. Is this because OpenFunction hasn't implemented proxy mode for Java yet?
For Java, only sidecar mode works, while for Go both modes work. Is this because OpenFunction hasn't implemented proxy mode for Java yet?
It's a bug; proxy mode should work for Java as well. @wanjunlei has fixed that.
@wanjunlei would you update all the READMEs, docs, and samples to reflect the new Java functions framework version?
For Java, only sidecar mode works, while for Go both modes work. Is this because OpenFunction hasn't implemented proxy mode for Java yet?
@wanjunlei has fixed this; you can delete the old build https://github.com/OpenFunction/samples/blob/main/functions/knative/java/with-output-binding/function-front.yaml#L18 and download the new one.
|
gharchive/issue
| 2023-03-08T11:07:43 |
2025-04-01T04:55:28.591940
|
{
"authors": [
"benjaminhuo",
"gongxh13",
"wanjunlei"
],
"repo": "OpenFunction/samples",
"url": "https://github.com/OpenFunction/samples/issues/118",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1857435258
|
Standardize wiki page titles
The project’s official and legal name is “OpenHistoricalMap”, but the wiki keeps calling it “Open Historical Map”, at least in article titles, based on an old logo that’s no longer used on English-language articles. There seems to be a quiet consensus to rename the wiki articles to start with “OpenHistoricalMap” instead.
I plan to move the pages en masse using my administrator privileges. I'll also need to fix up the "documentation wiki pages" (P31) statements in each data item to match the new spelling.
One wrinkle is that iD assumes that tagging article titles are prefixed with “Open Historical Map” as of OpenHistoricalMap/iD#178. The code is rather painful to rewrite to handle multiple spellings, so I’ve pushed OpenHistoricalMap/iD#180 with a small change to expect the new spelling. We’ll need to coordinate the deployment of this change with the actual page move.
This is done.
|
gharchive/issue
| 2023-08-19T00:41:05 |
2025-04-01T04:55:28.619749
|
{
"authors": [
"1ec5"
],
"repo": "OpenHistoricalMap/issues",
"url": "https://github.com/OpenHistoricalMap/issues/issues/587",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1125257621
|
Segfault or invalid instruction on SIGHUP
When reloading the config from the WebUI, the process crashes with segfaults, occasionally with an illegal instruction
18:44:02 [ levent] signal_hup_cb@50 Signal HUP received, reloading config
18:44:02 [ gpio] set_gpio@25 set_gpio(65, 0)
18:44:02 [ gpio] set_gpio@25 set_gpio(64, 0)
18:44:02 [ sdk] stop_chn@2179 Stopped 1 channel
18:44:02 [ vpss] VPSS_DisableChn@5434 [ vpss] !! Disable VpssChn:0 timeout 120ms!!!
18:44:02 [ sdk] stop_chn@2179 Stopped 0 channel
18:44:02 [ sdk] ISP_thread_proc@160 Shutdown isp_run thread
18:44:02 [ viu] VIU_DRV_DisableChn@1309 [ viu] !! Disable ViChn:0 timeout 120ms!!!
18:44:02 [ sdk] stop_sdk@3405 Stop sdk Ok!
18:44:02 [app_conf] load_config@345 Using /etc/majestic.yaml as main configuration
18:44:02 [ sdk] sdk_specific_config@3530 SENSOR=sc1145_i2c
18:44:02 [ sdk] find_sensor_config@3487 matched sensor config: sc1145_i2c_720p.ini
18:44:02 [ sdk] find_sensor_config@3504 Using /etc/sensors/sc1145_i2c_720p.ini as sensor configuration
18:44:02 [app_conf] parse_app_config@684 app_config.osd_template %Y-%m-%d %H:%M:%S %Z
18:44:02 [ sdk] start_sdk@251 App was built with MPP version: Hi3518EV200_MPP_V1.0.5.0.B060 Release
18:44:02 [ sdk] start_sdk@254 Current MPP version: HI_VERSION=Hi3518EV200_MPP_V1.0.5.0 B060 Release
18:44:03 [ sdk] start_sdk@277 sensor sc1145
18:44:03 [ sdk] start_sdk@282 input_mode CMOS_18V, WDR NONE
18:44:03 [ sdk] start_sdk@289 dev [1280x720]@0x0 30fps, BGGR
18:44:03 [ sensor] try_to_load@19 trying to load /usr/lib/sensors/libsns_sc1145.so
18:44:03 [ sdk] dump_vb_configuration@2456 VB configuration:
18:44:03 [ sdk] dump_vb_configuration@2466 [0]: 1399680 x 9
18:44:03 [ sdk] dump_vb_configuration@2466 [1]: 1843200 x 1
18:44:03 [ sdk] dump_vb_configuration@2466 [2]: 346368 x 1
18:44:03 [ sdk] init_sensor@2659 Sensor driver loaded
18:44:03 [ puts] linear mode
18:44:03 [ puts] =========================================================
18:44:03 [ puts] ==SC1145 sensor 720P30fps(Parallel port) init success! ==
18:44:03 [ puts] =========================================================
18:44:03 [ sdk] log_venc_chn@1573 H.264 1280x720 30fps 3072Kbit 30 GOP
18:44:03 [ sdk] create_vpss_chn@1326 new venc: 0 vpss_grp: 0, vpss_chn: 0
18:44:03 [ sdk] init_chn@1608 JPEG snapshot snapshot venc_chn 1 1280x720
18:44:03 [ osd] init_osd@92 OSD initialized
18:44:03 [image_tu] start_image_params_tuning@63 Image tuning task started
18:44:03 [ sdk] start_sdk@830 HiSilicon SDK started
18:44:03 [ httpd] new_http_server@349 HTTP server started on :::80
18:44:03 [ rtsp] rtsp_init@32 RTSP server started on port 554
18:44:03 [ netip] netip_start@2013 NETIP server started on port 34567
18:44:03 [ gpio] set_gpio@25 set_gpio(65, 0)
18:44:03 [ gpio] set_gpio@25 set_gpio(64, 0)
18:44:03 [ night] dn_start@220 Starting monitor for light sensor
18:44:03 [ gpio] set_gpio@25 set_gpio(65, 1)
18:44:03 [ gpio] set_gpio@25 set_gpio(64, 0)
18:44:03 [ gpio] set_gpio@25 set_gpio(65, 0)
18:44:03 [ gpio] set_gpio@25 set_gpio(64, 1)
18:44:03 [ gpio] set_gpio@25 set_gpio(65, 0)
18:44:03 [ gpio] set_gpio@25 set_gpio(64, 0)
Segmentation fault
Meanwhile, dmesg shows:
hi_i2c_wait_rxfifo_notempty->297:
abort! int_raw_status: 0x550!
hi_i2c_abortprocess->103:
tx_abrt_src is 1.
hi_i2c_wait_rxfifo_notempty->297:
abort! int_raw_status: 0x550!
hi_i2c_abortprocess->103:
tx_abrt_src is 1.
hi_i2c_wait_rxfifo_notempty->297:
abort! int_raw_status: 0x550!
hi_i2c_abortprocess->103:
tx_abrt_src is 1.
hi_i2c_wait_rxfifo_notempty->297:
abort! int_raw_status: 0x550!
hi_i2c_abortprocess->103:
tx_abrt_src is 1.
MMB LEAK(pid=4658): 0x83490000, 225280 bytes, ''
mmz_userdev_release: mmb<0x83490000> mapped to userspace 0xb6625000 will be force unmaped!
MMB LEAK(pid=4658): 0x834C7000, 225280 bytes, ''
mmz_userdev_release: mmb<0x834C7000> mapped to userspace 0xb65ee000 will be force unmaped!
MMB LEAK(pid=4658): 0x834FE000, 4096 bytes, ''
mmz_userdev_release: mmb<0x834FE000> mapped to userspace 0xb65ed000 will be force unmaped!
MMB LEAK(pid=4658): 0x83500000, 466944 bytes, 'MD_ASSIST'
mmz_userdev_release: mmb<0x83500000> mapped to userspace 0xb657a000 will be force unmaped!
Reproduces 100% of the time when changing the isp: section.
One time it slipped through like this:
18:57:15 [ levent] signal_hup_cb@50 Signal HUP received, reloading config
18:57:15 [ gpio] set_gpio@25 set_gpio(65, 0)
18:57:15 [ gpio] set_gpio@25 set_gpio(64, 0)
Illegal instruction
In dmesg there were only i2c errors. In the last case I had only changed slowShutter.
Config:
system:
logLevel: TRACE
updateChannel: stable
buffer: 1024
webPort: 80
staticDir: /var/www/html
isp:
blkCnt: 10
exposure: auto
antiFlicker: 50Hz
memMode: normal
rawMode: slow
slowShutter: medium
image:
mirror: false
flip: false
rotate: none
contrast: auto
hue: 50
saturation: 50
luminance: auto
osd:
enabled: true
font: /usr/share/fonts/truetype/UbuntuMono-Regular.ttf
template: "%Y-%m-%d %H:%M:%S %Z"
posX: 2
posY: 2
nightMode:
enabled: true
irSensorPin: 62
irCutPin1: 65
irCutPin2: 64
pinSwitchDelayUs: 150
nightAPI: false
irSensorPinInvert: true
records:
enabled: false
path: /mnt/mmc/%Y/%m/%d/%H.mp4
maxUsage: 95
video0:
enabled: true
codec: h264
gopMode: smart
rcMode: avbr
bitrate: 3072
gopSize: 2
video1:
enabled: false
codec: h264
jpeg:
enabled: true
toProgressive: false
mjpeg:
size: 640x360
fps: 5
bitrate: 1024
audio:
enabled: false
volume: auto
srate: 8000
codec: mp3
outputEnabled: false
rtsp:
enabled: true
port: 554
hls:
enabled: true
youtube:
enabled: false
motionDetect:
enabled: true
profile: outdoor
visualize: true
debug: false
ipeye:
enabled: false
netip:
enabled: true
user: admin
password: 6V0Y4HLF
port: 34567
snapshots: true
ignore_set_time: false
ignore: false
onvif:
enabled: true
raw:
mode: slow
enabled: false
watchdog:
enabled: true
timeout: 10
cloud:
enabled: false
This reproduces almost every time after a reboot on the 3518EV200 when playing HLS and changing settings.
Please check whether the problem remains in the latest version.
This does not reproduce on my cameras.
|
gharchive/issue
| 2022-02-06T17:03:23 |
2025-04-01T04:55:28.626425
|
{
"authors": [
"nitr0man",
"widgetii"
],
"repo": "OpenIPC/majestic",
"url": "https://github.com/OpenIPC/majestic/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
120924289
|
[8wd4] Guangzhou live session: sign-up / notes
Location: Meeting Room 3110, 31F, R&F Yingtong Building, 30 Huaxia Road, Zhujiang New Town, Tianhe District, Guangzhou
Time: Thursday 19:42~22:22
Sign-up: reply with the following key information
GitHub account
Approximate arrival time
Technical problems you hope to solve on site
Initial project / iteration / team-forming ideas
See the [8w~迭代作品] (iteration works) project announcement
Deadline: valid until Thursday 10:10
Reminder: you must bring your laptop so the details of your technical problems can be worked out on site ;-)
~ Everyone unanimously agreed that if no girls come to the event, there will be no group photo...
~ So, girls of Guangzhou, you know what to do.
Also, anyone who wants to join the Guangzhou live session remotely by phone can reply with their idea under this issue before Thursday 14:42.
Format:
WeChat nickname in the Yongmeng Jingjin group
Project name
Three questions you want to discuss on site
That way the staff will contact you in advance, and Dama will connect with you live at the event.
This is a rare opportunity, so anyone who wants to talk with Dama in real time should reply quickly to sign up.
Project repository: yondjune/imatch
Approximate arrival time: before the start. Dama, please treat us to dinner!!
Technical problems we hope to solve on site:
Dama, did you understand our product this time?
And please recommend which web framework our team should use..
The star student is responsible for the mobile web version
The slower student is responsible for the PC web version
WeChat: 啊Jie
Project: Octodog (章鱼狗)
Want to ask @ZoomQuiet:
Suggestions on the feature development of 🐙🐶Octodog
Also,
Without affecting Runmap development, invite the Runmap project founder to give a training session on product design; this would help the other teams improve their project design and follow-up, and produce reusable development logs.
Two questions:
What needs to be considered in product design
Recommended tools and software (we can look these up ourselves)
One more thing:
Collect each team's tutorials/development logs along the way; please reply with tutorial links at https://github.com/bambooom/Octodog/issues/4
If you have no logs, please fill in your 8w team progress
WeChat nickname in the Yongmeng Jingjin group: 上-win-晨曲
Project name: imoodmap
Three questions to discuss on site:
i. How should a team without any experts set project goals when the difficulty is unknown? Any advice for the team. [Repository](https://github.com/imoodmap/imoodmapGroup), [issues](https://github.com/imoodmap/imoodmapGroup/issues)
ii. A demonstration of team collaboration with git flow: we hope the attendees and Dama can demonstrate the git flow workflow together.
iii. How can a kanban board effectively forecast project progress and goals? Are there examples of mapping team tasks to time, i.e. "release-feature", "iteration-story" and "workday-task"? It all looks rather mysterious~
github: junjielizero
Estimated arrival time: 19:15
github: penguinjing
ETA: 19:20
Questions from junjielizero, who already signed up above:
Approach
Background: use Google search to find books potentially endorsed by an expert (site:amazon.com "Editorial Reviews" "XXX", where XXX is the expert's name), click the links on each result page, and check whether the linked page contains the expert's endorsement. I now want to use Python to simulate this manual lookup, breaking the task down as follows:
Get the search-result links on each page → fetch the Editorial Reviews of each linked page → match the fetched Editorial Reviews against the expert's name → extract the page information (book title, author) for successful matches
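The matching step of the pipeline above can be sketched in Python. This is only a rough sketch, assuming the Editorial Reviews text has already been fetched; the function names are hypothetical:

```python
import re

def mentions_expert(reviews_text, expert_name):
    # case-insensitive match of the expert's name in the Editorial Reviews text
    pattern = re.compile(re.escape(expert_name), re.IGNORECASE)
    return bool(pattern.search(reviews_text))

def filter_matched_pages(pages, expert_name):
    """pages: iterable of (url, book_title, author, reviews_text) tuples.

    Returns (url, book_title, author) for pages whose Editorial Reviews
    mention the expert.
    """
    return [(url, title, author)
            for url, title, author, text in pages
            if mentions_expert(text, expert_name)]

pages = [
    ("http://example.com/a", "Book A", "Author A", "Praised by Paul Graham."),
    ("http://example.com/b", "Book B", "Author B", "No endorsements here."),
]
matches = filter_matched_pages(pages, "Paul Graham")
```

The fetching steps themselves would sit in front of this, feeding the reviews text into the filter.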
Question 1:
Question: I tried the Google API to get the search-result links on each page, but free users are limited in the number of requests and the number of pages retrieved. Is there a better way to get the links from each page of Google search results?
Question 2:
Background: due to copyright issues with Amazon US, the Amazon API cannot be used to scrape Editorial Reviews (confirmed by email with Amazon's technical staff), so I switched to scraping with BeautifulSoup.
Question: when scraping with BeautifulSoup, some pages hide their content, and the hidden Editorial Reviews cannot be scraped successfully.
WeChat nickname in the Yongmeng Jingjin group: James周慧明
Project name: community participant database
Three questions to discuss on site:
Data security is a fairly important topic; I hope to get some pointers on how to ensure the data in the database cannot be easily cracked.
Please submit this week's questions in #113. After receiving Dama's feedback, update there with your takeaways and your next plans~
|
gharchive/issue
| 2015-12-08T03:52:06 |
2025-04-01T04:55:28.878301
|
{
"authors": [
"WhaleChen",
"aJiea",
"cnfeat",
"ishanshan",
"jameszhou89",
"janice-lu-zeng",
"junjielizero",
"penguinjing",
"yondjune"
],
"repo": "OpenMindClub/OMOOC2py",
"url": "https://github.com/OpenMindClub/OMOOC2py/issues/109",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
213958003
|
CUDA runtime error
Hi, nice repo. I am running the training example with the given dataset and I am getting a CUDA runtime error. I am attaching the log file.
log.txt
Hmm I suspect there's something wrong with your cutorch. Can you try th -lcutorch -e "cutorch.test()" and see the results?
"Completed 76020 asserts in 180 tests with 0 failures and 0 errors"
I have tried it on two machines both had the error.
I was able to test the model but not train it.
@arunpatala Unfortunately, I have again encountered the same problem on both Titan X Pascal and Maxwell. I have checked cutorch but didn't find any problems. Have you solved this problem?
This problem may be attributable to the recent update of cutorch https://github.com/torch/cutorch/issues/708. However, after adding CUDA_LAUNCH_BLOCKING=1, it fails in the same way as before.
Can you try that again? I figured out a bug that may lead to that problem. @SuperWu090
@da03 Thanks very much! I have tested the program and the problem has been solved. However, due to the recent update of OpenNMT in Batch.lua, there seems to be a new problem: "~/torch/install/bin/luajit: ~/torch/install/share/lua/5.1/onmt/data/Batch.lua:78: attempt to index a nil value". This problem may be avoided with an earlier version of OpenNMT.
|
gharchive/issue
| 2017-03-14T03:21:09 |
2025-04-01T04:55:29.187583
|
{
"authors": [
"SuperWu090",
"arunpatala",
"da03"
],
"repo": "OpenNMT/Im2Text",
"url": "https://github.com/OpenNMT/Im2Text/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
543270640
|
Consider synthesizing an XML Schema for a project's XSP files
If, instead of providing a completion participant for XSP files, the LSP4XML plugin synthesized an XML schema based on known components and custom controls, that could allow the full breadth of IDE help to come in without having to worry about the fiddly edge cases of code completion.
The big thing to investigate to see if this is viable would be how frequently the method to ask for a schema is called. If it's just once per workspace, that'd be useless, as it wouldn't allow for different XPages projects. If it's once per project or per file, that would be better, but best would be if it's called repeatedly, allowing the code to check for changes to Custom Controls.
It's probably possible to do this in pieces: provide schemas for the http://www.ibm.com/xsp/core and http://www.ibm.com/xsp/coreex namespaces that will be unchanging in a workspace, and then figure out if the CCs can be handled similarly.
To be really clever, the schema provider could provide distinct schemas based on any "minimum XPage release" setting provided.
It seems like the "description" annotation node is treated differently by different IDEs. Eclipse acts like it's HTML, but VS Code treats it as plain text.
Actually, that was partially PEBKAC: I was double-encoding the HTML. However, though both Eclipse and VS Code will translate the HTML to line breaks and basic formatting, Eclipse oddly no longer displays dl elements nicely when not double-encoded. That's a shame, but it's a good argument to switch to basic lists.
|
gharchive/issue
| 2019-12-28T19:18:15 |
2025-04-01T04:55:29.226123
|
{
"authors": [
"jesse-gallagher"
],
"repo": "OpenNTF/org.openntf.nsfodp",
"url": "https://github.com/OpenNTF/org.openntf.nsfodp/issues/139",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2504412995
|
RAG0016: Populate Knowledge Graph on Graph Database
Description
This project involves the population of a knowledge graph within a graph database. The aim is to store triples and structured data, representing entities and their relationships, in the graph database.
Expected Output
The ability to store a knowledge graph in a JSON file.
The creation of a schema within the graph database that accurately represents the knowledge graph structure.
The capability to efficiently retrieve information from the knowledge graph based on user queries.
Implementation Plan
[x] Choose a graph database.
[ ] Create a schema within the graph database.
[x] Insert data into the graph database.
[ ] Retrieve data based on user requests.
[ ] Update the schema as needed when new data is added.
TerminusDB was our first choice as it is a graph database that supports versioning with its data. However, their team has shifted focus to other projects, and due to the small community, we decided to move away from TerminusDB.
There are many graph database options available, but very few offer a free community edition. One that does, and has the largest community in the world, is Neo4j.
Another interesting option is Memgraph, which has the following features compared to Neo4j:
compatible with Neo4j
uses less memory compared to Neo4j
can deliver speeds up to 120 times faster than Neo4j
Cypher Language
Necessary commands for Memgraph Lab
Show all data: MATCH (n) RETURN n;
Delete all data: MATCH (n) DETACH DELETE n;
Insert knowledge graph triplets
from neo4j import GraphDatabase
URI = "bolt://localhost:7687"
AUTH = ("", "")
def insert_triplets(triplets):
with GraphDatabase.driver(URI, auth=AUTH) as driver:
with driver.session() as session:
for head, relation, tail in triplets:
session.run(
f"MERGE (h:Entity {{name: $head}}) "
f"MERGE (t:Entity {{name: $tail}}) "
f"MERGE (h)-[:{relation}]->(t)",
head=head, tail=tail
)
triplets = [
("DalaiLama", "WasBornIn", "Taktser"),
("Taktser", "isLocatedIn", "Dokham"),
("Dokham", "isPartOf", "Tibet"),
("Khampa", "LivesIn", "Dokham"),
("Dokham","DescendsTo","China"),
("DalaiLama","WasBornIn","WoodHogYear"),
("AmiChiri","IsSouthOf","Taktser"),
]
insert_triplets(triplets)
Fetch knowledge graph triplets
from neo4j import GraphDatabase
URI = "bolt://localhost:7687"
AUTH = ("", "")
def fetch_data():
with GraphDatabase.driver(URI, auth=AUTH) as driver:
with driver.session() as session:
result = session.run("MATCH (h)-[r]->(t) RETURN h.name, type(r), t.name")
for record in result:
print(record["h.name"], record["type(r)"], record["t.name"])
fetch_data()
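The unchecked "Retrieve data based on user requests" step could look like the sketch below. The query-building helper is pure, and the hypothetical fetch_neighbors assumes a connected driver created as in the snippets above:

```python
def build_neighbor_query(relation=None):
    """Build a parameterized Cypher query returning every entity
    directly connected to the entity named by the $name parameter."""
    rel = f":{relation}" if relation else ""
    return (
        f"MATCH (h:Entity {{name: $name}})-[r{rel}]->(t) "
        "RETURN type(r) AS relation, t.name AS target"
    )

def fetch_neighbors(driver, name, relation=None):
    # driver: a neo4j.GraphDatabase.driver(...) instance as above
    with driver.session() as session:
        result = session.run(build_neighbor_query(relation), name=name)
        return [(record["relation"], record["target"]) for record in result]
```

For example, fetch_neighbors(driver, "DalaiLama") would return the relations leaving the DalaiLama node from the triplets inserted above.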
Graph Visualization from the Memgraph Lab
Graph Data schema
Insert Knowledge Graph triplets with Properties
Data is from here
from neo4j import GraphDatabase
URI = "bolt://localhost:7687"
AUTH = ("", "")
def insert_triplets(data):
with GraphDatabase.driver(URI, auth=AUTH) as driver:
with driver.session() as session:
# Insert nodes
for node in data['nodes']:
entity_type = node["type"]
properties = node.get('attributes', {})
properties['name'] = node['label']
session.run(f"CREATE (n:{entity_type} $props)", {'props': properties})
# Insert edges
for edge in data['edges']:
source = edge['source']
target = edge['target']
relation = edge['relation']
session.run(
f"MATCH (a {{name: $source}}), (b {{name: $target}}) "
f"CREATE (a)-[:{relation}]->(b)",
{'source': source, 'target': target}
)
import json
with open('kg_data.json', 'r') as file:
data = json.load(file)
insert_triplets(data)
|
gharchive/issue
| 2024-09-04T06:22:32 |
2025-04-01T04:55:29.239614
|
{
"authors": [
"tenzin3"
],
"repo": "OpenPecha/rag_prep_tool",
"url": "https://github.com/OpenPecha/rag_prep_tool/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2486036562
|
STT0051: Code Cleanup for Improved Maintainability and Performance
Description
The task involves performing a thorough cleanup of the codebase to enhance readability, maintainability, and overall performance. This process will include organizing the code structure, removing unused and redundant code, simplifying complex functions, and ensuring that the code follows best practices for readability and performance.
The repos that need cleaning are:
stt split audio
stt combine dataset
stt-wav2vec2
Completion Criteria
Organized Directory Structure: Files and directories should be logically organized and follow a consistent naming convention.
Removal of Unused Code: All unused imports, variables, functions, and classes should be identified and removed.
Refactored Code: Complex functions should be broken down into simpler, more manageable pieces, with redundant code eliminated.
Improved Readability: Variable, function, and class names should be meaningful and self-explanatory. Comments and documentation should be clear and helpful, focusing on the "why" behind the code.
Code Review: The cleaned-up code should be peer-reviewed to ensure it meets all criteria and follows best practices.
Implementation
Backup and Preparation:
Ensure the current codebase is backed up using version control (e.g., Git).
Define the scope of the cleanup and identify key areas to focus on.
Organize Code:
Review the directory structure and reorganize files into logical groups.
Rename files and directories for consistency.
Remove Unused Code:
Use static analysis tools like vulture to identify and remove unused code.
Remove unnecessary imports with tools like isort.
Refactor and Simplify:
Break down large functions into smaller, single-responsibility functions.
Eliminate redundant code by creating helper functions or utility classes.
Improve Readability:
Ensure consistent formatting using tools like black.
Rename variables and functions to be more descriptive.
Add or update comments and documentation, focusing on explaining complex logic.
Optimize Performance:
Profile the code using tools like cProfile.
Optimize loops and data structures for better performance.
Test and Validate:
Run all existing tests to ensure functionality is maintained.
Write new tests to cover any refactored code.
Integrate the code with a continuous integration tool to automate testing.
Code Review and Finalization:
Submit the cleaned-up code for peer review.
Address any feedback and make final adjustments.
Commit and push the changes to the version control system.
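As a sketch of step 6 (profiling with cProfile), a small helper like the following could wrap any function from these repos; slow_concat is just a hypothetical stand-in for a real hotspot:

```python
import cProfile
import io
import pstats

def profile(func, *args, **kwargs):
    """Run func under cProfile; return its result and a stats report."""
    profiler = cProfile.Profile()
    result = profiler.runcall(func, *args, **kwargs)
    stream = io.StringIO()
    stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
    stats.print_stats(10)  # show the ten most expensive calls
    return result, stream.getvalue()

def slow_concat(n):
    # hypothetical stand-in for a dataset-merging step worth profiling
    out = ""
    for i in range(n):
        out += str(i)
    return out

result, report = profile(slow_concat, 1000)
```

The returned report lists the most expensive calls, which is enough to pick out the loops and data structures worth optimizing.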
Subtask
[ ] Clean the Split Audio Repo
Backup and Preparation:
[x] Ensure the current code is committed to the main branch.
[x] Create a new branch (e.g., cleanup/split-audio) for cleanup work.
Organize Code:
[x] Review the directory structure for logic and consistency.
[x] Move utility functions to a utils directory, if not already done.
[x] Rename files and directories for consistency.
Remove Unused Code:
[x] Use vulture or similar tools to identify unused functions, classes, and imports.
[x] Remove identified unused code after verifying its redundancy.
[x] Clean up unused imports using isort and format them for consistency.
Refactor and Simplify:
[x] Break down any complex functions into smaller, modular functions.
[ ] Identify and remove any redundant or duplicate code by consolidating similar functionality into shared functions.
Improve Readability:
[ ] Use black to ensure code formatting consistency.
[ ] Rename variables and functions to be more descriptive and self-explanatory.
[ ] Add comments to explain the purpose and functionality of complex code blocks.
[ ] Update the README and any existing documentation to reflect the cleaned-up code.
Optimize Performance:
[ ] Profile the code using cProfile or similar tools to identify performance bottlenecks.
[ ] Optimize any identified inefficient loops or data structures.
Test and Validate:
[ ] Run existing tests to verify that the refactored code still works as expected.
[ ] Write new unit tests for any newly refactored or added code.
[ ] Ensure all tests pass before proceeding.
Code Review and Finalization:
[ ] Submit the cleanup branch for peer review.
[ ] Address any feedback received during the review process.
[ ] Once approved, merge the cleanup branch into the main branch.
[ ] Clean the STT Combine Dataset Repo
Backup and Preparation:
[ ] Ensure all changes are committed and create a backup branch.
[ ] Start a new branch (e.g., cleanup/combine-datasets).
Organize Code:
[ ] Reorganize directories to logically group related files.
[ ] Move dataset combination logic into a dedicated datasets or combiner module, if appropriate.
[ ] Standardize naming conventions across files and directories.
Remove Unused Code:
[ ] Use vulture to identify and remove dead code.
[ ] Clean up any remaining unnecessary imports using isort.
Refactor and Simplify:
[ ] Decompose large functions, particularly those handling dataset merging, into smaller, more manageable pieces.
[ ] Consolidate repeated code into helper functions or utility classes.
Improve Readability:
[ ] Ensure consistent formatting using black.
[ ] Rename ambiguous variables and functions.
[ ] Add detailed comments, especially around the logic for combining datasets.
[ ] Update README and other documentation to reflect changes.
Optimize Performance:
[ ] Use cProfile to identify performance issues.
[ ] Optimize dataset processing and merging steps.
Test and Validate:
[ ] Run and validate existing tests.
[ ] Write tests for refactored functions, particularly for different dataset merging scenarios.
[ ] Validate dataset output integrity after cleanup.
Code Review and Finalization:
[ ] Submit the cleanup branch for review.
[ ] Address review feedback and finalize the cleanup.
[ ] Merge changes into the main branch.
[ ] Clean the Wav2Vec2 Repo
Backup and Preparation:
[ ] Commit all existing changes and create a backup branch.
[ ] Start a new cleanup branch (e.g., cleanup/wav2vec2).
Organize Code:
[ ] Review and reorganize the directory structure, separating training, evaluation, and utility scripts.
[ ] Standardize file and directory names for clarity.
Remove Unused Code:
[ ] Run vulture to detect and remove unused code.
[ ] Clean up imports using isort.
Refactor and Simplify:
[ ] Break down complex training and evaluation scripts into more modular components.
[ ] Consolidate common code, like data loading and preprocessing, into reusable functions.
Improve Readability:
[ ] Format code consistently using black.
[ ] Rename variables, functions, and classes for clarity.
[ ] Add detailed comments, especially around training and evaluation logic.
[ ] Update README, training instructions, and documentation to reflect refactored code.
Optimize Performance:
[ ] Use cProfile to identify bottlenecks in the training pipeline.
[ ] Optimize data loading, model training loops, and evaluation processes.
Test and Validate:
[ ] Run existing tests and training scripts to ensure they work correctly.
[ ] Add tests for any newly refactored functions.
[ ] Verify that model training and evaluation outputs are consistent with previous results.
Code Review and Finalization:
[ ] Submit the cleanup branch for review.
[ ] Implement any suggested changes from the review.
[ ] Merge the cleaned-up branch into the main branch.
Conversion of notebooks to Python files
Final Directory Structure:
|
gharchive/issue
| 2024-08-26T06:29:32 |
2025-04-01T04:55:29.272443
|
{
"authors": [
"gangagyatso4364"
],
"repo": "OpenPecha/stt-documentation",
"url": "https://github.com/OpenPecha/stt-documentation/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
634727360
|
Option to allow customizing share print queue name
An option in ipp-usb.conf allowing user to specify a more meaningful print queue name rather than the default.
Hi,
ipp-usb actually doesn't create a print queue. It takes the printer's DNS-SD name (usually configurable via the printer's web console), adds a " (USB)" suffix to it, and uses the resulting string for announcing the printer.
How CUPS converts the resulting DNS-SD name into the print queue name is out of control of the ipp-usb
Actually, as you advised earlier, I have stopped and removed CUPS from my server. I'm the kind of guy who doesn't like a lot of daemons running in the background. My print queue is now advertised by ipp-usb, I believe, since I had to set interface = all to make it work. The name is exactly the value of the ty TXT record plus "USB". It's too long; client-side software, whether Ubuntu or Android, can't show it all on one line and starts using "…". As an admin this is not a big deal, but end users kind of think this is weird.
End users may change ty using the printer's web console. Usually this parameter is called something like "Bonjour name". I believe that for an end user it is much simpler to use the printer's web console rather than tweaking ipp-usb.conf.
Android and iOS users can't change the print queue name on their phone. Besides, I tried to change the print queue name in the web console's system tab. It was accepted and the new name seemed to be saved, but nothing changed on the client side.
Android and iOS users can't change the ipp-usb.conf file either :-)
Changing the printer's DNS-SD (Bonjour) name should take effect only after an ipp-usb restart or a printer disconnect/connect. ipp-usb gets no notification that this parameter has been changed on the printer.
My main concern regarding this option is how to implement and explain it so it will be convenient and understandable to the "normal" user.
You are right. Considering this should also work when there is more than one printer, setting names for each in a conf file is itself a headache. But there must be a way, or we will have users scratching their heads over what M28a_0ABF83_USB means.
Changing it in the web console would be nice.
Closed - I have no idea how to implement this.
|
gharchive/issue
| 2020-06-08T15:39:13 |
2025-04-01T04:55:29.278219
|
{
"authors": [
"alexpevzner",
"scubajeff"
],
"repo": "OpenPrinting/ipp-usb",
"url": "https://github.com/OpenPrinting/ipp-usb/issues/7",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2175046467
|
Sample extension's About page errors
The About page for the sample extension fails to render.
To Reproduce
Steps to reproduce the behavior:
Go to http://127.0.0.1:3333/extension/sample/
Current Results
(From the new Butterfly error page, for improved readability)
HTTP ERROR 500 - Butterfly Error
URI: /extension/sample/
Butterfly caught the following error while processing the request:
refine
org.mozilla.javascript.EcmaError: TypeError: Cannot call property stringArrayLength in object [JavaPackage com.google.refine.sampleExtension.SampleUtil]. It is not a function, it is "object". (file:/Users/tfmorris/git/OpenRefine2/main/webapp/../../extensions/sample/module/MOD-INF/controller.js#75)
Caused by:org.mozilla.javascript.EcmaError: TypeError: Cannot call property stringArrayLength in object [JavaPackage com.google.refine.sampleExtension.SampleUtil]. It is not a function, it is "object". (file:/Users/tfmorris/git/OpenRefine2/main/webapp/../../extensions/sample/module/MOD-INF/controller.js#75)
at org.mozilla.javascript.ScriptRuntime.constructError(ScriptRuntime.java:4563)
at org.mozilla.javascript.ScriptRuntime.constructError(ScriptRuntime.java:4544)
at org.mozilla.javascript.ScriptRuntime.typeError(ScriptRuntime.java:4576)
at org.mozilla.javascript.ScriptRuntime.typeErrorById(ScriptRuntime.java:4581)
at org.mozilla.javascript.ScriptRuntime.notFunctionError(ScriptRuntime.java:4662)
at org.mozilla.javascript.ScriptRuntime.getPropFunctionAndThisHelper(ScriptRuntime.java:2585)
at org.mozilla.javascript.ScriptRuntime.getPropFunctionAndThis(ScriptRuntime.java:2568)
at org.mozilla.javascript.gen.file__Users_tfmorris_git_OpenRefine2_main_webapp_______extensions_sample_module_MOD_INF_controller_js_18._c_process_2(file:/Users/tfmorris/git/OpenRefine2/main/webapp/../../extensions/sample/module/MOD-INF/controller.js:75)
at org.mozilla.javascript.gen.file__Users_tfmorris_git_OpenRefine2_main_webapp_______extensions_sample_module_MOD_INF_controller_js_18.call(file:/Users/tfmorris/git/OpenRefine2/main/webapp/../../extensions/sample/module/MOD-INF/controller.js)
at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:380)
at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3868)
at org.mozilla.javascript.gen.file__Users_tfmorris_git_OpenRefine2_main_webapp_______extensions_sample_module_MOD_INF_controller_js_18.call(file:/Users/tfmorris/git/OpenRefine2/main/webapp/../../extensions/sample/module/MOD-INF/controller.js)
at edu.mit.simile.butterfly.ButterflyModuleImpl$Controller.process(ButterflyModuleImpl.java:399)
at edu.mit.simile.butterfly.ButterflyModuleImpl$Controller.run(ButterflyModuleImpl.java:377)
at org.mozilla.javascript.Context.call(Context.java:535)
at org.mozilla.javascript.ContextFactory.call(ContextFactory.java:472)
at edu.mit.simile.butterfly.ButterflyModuleImpl.processScript(ButterflyModuleImpl.java:654)
at edu.mit.simile.butterfly.ButterflyModuleImpl.process(ButterflyModuleImpl.java:427)
at edu.mit.simile.butterfly.Butterfly.service(Butterfly.java:524)
at com.google.refine.RefineServlet.service(RefineServlet.java:217)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
Expected Behavior
Velocity template renders correctly.
Versions
OpenRefine: 3.7 and current HEAD of master (not sure how far back it goes)
Additional context
At first glance it looks like perhaps the SampleUtil class isn't getting compiled or packaged in the right place.
@tfmorris You're right; I guess the com.google.refine.sampleExtension.SampleUtil Java class isn't correctly installed or accessible, so stringArrayLength can't be referenced as a function.
I think a proposed solution is making sure SampleUtil is compiled correctly and accessible.
What do you think?
I'd be in favor of dropping the sample extension altogether. As mentioned in #2300, the fact that it is hosted in our repository makes it of little use for third parties to develop their extensions outside of OpenRefine's repository. Although we could migrate it to a separate repository indeed, I don't think it will be actively maintained if it doesn't implement things that people actually need.
We could take an existing extension (developed outside of the repository) and turn it into a model extension, following all best practices. I have proposed a GSoC internship on this subject, which also incorporates the feedback from @antoine2711 about the lack of tooling for Java debugging. @tfmorris I have re-centered the proposal on development, away from documentation, to make it more suitable for GSoC.
|
gharchive/issue
| 2024-03-08T00:13:38 |
2025-04-01T04:55:29.448159
|
{
"authors": [
"Redeem-Grimm-Satoshi",
"tfmorris",
"wetneb"
],
"repo": "OpenRefine/OpenRefine",
"url": "https://github.com/OpenRefine/OpenRefine/issues/6426",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2284879527
|
feat: openrefine icon compliant with new MacOS guidelines
Fixes #6399
What
feat: openrefine icon compliant with new MacOS guidelines
Thanks!
Pinging @OpenRefine/designers to see if they have any feedback.
When pinging designers it's generally helpful to provide a screenshot for them to be able to judge the changes easily - especially since GitHub doesn't provide previews for .icns files. Here is one:
(the logo on the left-hand side is the new one, the old one is on the right-hand side)
still think we can go just a bit larger... to the maximum that's allowed for the icon grid? as I mentioned in this comment https://github.com/OpenRefine/OpenRefine/issues/6399#issuecomment-2103960061
I have yet to see the designers reply to a ping.
Yes we should delete this GitHub team, especially because the recommended way is to add issues/PRs to a GitHub project instead:
https://forum.openrefine.org/t/design-workflows-in-github/1396
I have to say that I have doubts about how effective this workflow is (it doesn't seem to bring a lot of designer attention to issues either), but if that's the one that designers are asking for, well, it's worth trying…
@wetneb have you been successful in getting replies by including screenshots?
It looks like in this case it helped @thadguidry chime in, no? (Especially knowing that Thad generally uses Windows)
I mean, it does save people the trouble of pulling a branch via git and compiling (and in this case even packaging), which is time-consuming even for people who are totally comfortable with the process, so it's hard for me to imagine how that would not help…
No tweaks from me, but I'm not a designer by training 🫠
|
gharchive/pull-request
| 2024-05-08T07:23:41 |
2025-04-01T04:55:29.453850
|
{
"authors": [
"teolemon",
"tfmorris",
"thadguidry",
"wetneb"
],
"repo": "OpenRefine/OpenRefine",
"url": "https://github.com/OpenRefine/OpenRefine/pull/6592",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
303176283
|
Paragraph based amendments / Diff [WIP]
Current decisions:
We reuse the motion-table
The amended paragraphs are stored as an array of strings (and nulls for unchanged paragraphs) in a new JSON field in the motionversion table.
The Apply text for new amendments-setting is changed into a drop-down, enabling the two previous behaviors, plus the new one (not implemented yet).
Amendments are referencing motions - not motion versions. The latter would be more precise, but will be added at a later iteration.
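A minimal sketch of how such a paragraph array might be applied (hypothetical helper name; None marks unchanged paragraphs, as described above):

```python
def apply_amendment(base_paragraphs, amendment_paragraphs):
    """Merge a paragraph-based amendment into the motion's paragraphs.

    amendment_paragraphs holds one entry per paragraph: a replacement
    string, or None for paragraphs the amendment does not change.
    """
    if len(base_paragraphs) != len(amendment_paragraphs):
        raise ValueError("amendment does not match the motion version")
    return [amended if amended is not None else original
            for original, amended in zip(base_paragraphs, amendment_paragraphs)]

merged = apply_amendment(
    ["Paragraph 1", "Paragraph 2", "Paragraph 3"],
    [None, "Amended paragraph 2", None],
)
```

The length check hints at why referencing motion versions would be more precise: an amendment array only lines up with the paragraph count of one specific version.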
@CatoTH Nice work!
I've one request: could you add a function to a motion returning an array of paragraph-based amendments? This function would return [motion, motion, ...] or just the ids (this is not important) of all amendments. And additionally a function that returns the one selected paragraph, so that I can get the original paragraph if I have a paragraph-based amendment with the changes.
This would be very helpful implementing the overview table for all changes of all motions.
If I have overlooked a feature that you have already implemented, please tell me what you named this function.
Thanks!
@FinnStutzenstein
The first function would be getParagraphBasedAmendments().
For the second one, calling .getAmendmentParagraphsByMode("original")[0] should return the selected paragraph of the base motion.
Perfect. This should be helpful!
Rebased to one commit and fixed Travis.
Rebased on top of master.
@normanjaeckel Please review the remarks again and approve so that we can merge it. Thx.
|
gharchive/pull-request
| 2018-03-07T16:51:22 |
2025-04-01T04:55:29.538541
|
{
"authors": [
"CatoTH",
"FinnStutzenstein",
"emanuelschuetze"
],
"repo": "OpenSlides/OpenSlides",
"url": "https://github.com/OpenSlides/OpenSlides/pull/3637",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
482326310
|
Overwork motion detail UI
set Default action to "create amendment" (if permitted)
put edit in the menu
put next and prev motion in the menu (mobile)
resort motion menu
move "show entire motion text" under the motion-preview
change "show entire motion text" from button to checkbox
create a custom tooltip for the main action in os-head-bar
create a custom "cancel edit" function in os-head-bar
@emanuelschuetze
The layout improvements should not be a problem.
Please carefully test the back button; I reworked it a little. Also try reloading and see if the back button behaves as you would expect.
|
gharchive/pull-request
| 2019-08-19T13:53:40 |
2025-04-01T04:55:29.540835
|
{
"authors": [
"tsiegleauq"
],
"repo": "OpenSlides/OpenSlides",
"url": "https://github.com/OpenSlides/OpenSlides/pull/4927",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
451162059
|
Need Custom Filter Box
Hi, Colin.
Your extension is great. I'm using it in my app built with AI2. Yet, it lacks an important feature: it needs a custom filter box to be more useful.
Hope you can figure out how to do it.
Thanks for the suggestion; I'm not sure what a filter box is about... A search box?
I consider that things like a search box should not be contained in a list view component. As this is a list view component, we should keep it a container and a view of data, not a functional system. (At least that is not how I define this component.)
What's more, a filtering strategy can be quite complex as well.
So the most doable way to implement a filter box is:
listen for the textbox's change event or a submit button's click event
when the event is triggered, go through all items in the list
put the filtered list into the listview.
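The three steps above can be sketched as follows. This is a language-agnostic illustration in Python, not App Inventor extension code; the function names and the `set_items` method are hypothetical:

```python
def filter_items(items, query):
    """Return only the items whose text contains the query (case-insensitive)."""
    query = query.lower()
    return [item for item in items if query in item.lower()]

def on_textbox_changed(listview, all_items, text):
    """Event handler sketch: re-populate the list view with the filtered items."""
    listview.set_items(filter_items(all_items, text))
```

The original item list is kept separately so that clearing the query can restore the full list.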
Yes it is like a search box. Just see this extension for the idea:
https://community.thunkable.com/t/listview-custom-filter-extension/10351
Hope you got it.
|
gharchive/issue
| 2019-06-02T07:18:29 |
2025-04-01T04:55:29.547421
|
{
"authors": [
"ColinTree",
"zmd94"
],
"repo": "OpenSourceAIX/ColinTreeListView",
"url": "https://github.com/OpenSourceAIX/ColinTreeListView/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
664343047
|
Update readme
Include logo in readme.
Thanks @Yuleii !
|
gharchive/pull-request
| 2020-07-23T09:55:04 |
2025-04-01T04:55:29.548507
|
{
"authors": [
"Yuleii",
"peisenha"
],
"repo": "OpenSourceEconomics/econsa",
"url": "https://github.com/OpenSourceEconomics/econsa/pull/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
182088797
|
20161010 102700 placeable calibrator speed
In this PR:
performance tuning
refactored calibration, vector and placeable for performance
updated Calibrator to be more usable by client code
added performance test
tested; works fast!
|
gharchive/pull-request
| 2016-10-10T18:57:35 |
2025-04-01T04:55:29.587639
|
{
"authors": [
"andySigler",
"astaff"
],
"repo": "OpenTrons/opentrons_sdk",
"url": "https://github.com/OpenTrons/opentrons_sdk/pull/31",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
170490038
|
Added SConstruct to common category
This now sets file permissions correctly for items in the common
category. For unknown reasons, if files in the manual category
had permissions 755, the sconsPostAction would go into an endless loop.
This was observed on Ubuntu 14.04.2 LTS
This fixes issue #26
|
gharchive/pull-request
| 2016-08-10T18:48:03 |
2025-04-01T04:55:29.670177
|
{
"authors": [
"DanIverson"
],
"repo": "OpenVnmrJ/OpenVnmrJ",
"url": "https://github.com/OpenVnmrJ/OpenVnmrJ/pull/28",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2578554682
|
🛑 Translator - NLLB - Smart'Gic is down
In 9fbab87, Translator - NLLB - Smart'Gic (https://translator.smartgic.io/nllb/status) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Translator - NLLB - Smart'Gic is back up in ad205c2 after 8 minutes.
|
gharchive/issue
| 2024-10-10T11:25:42 |
2025-04-01T04:55:29.672871
|
{
"authors": [
"goldyfruit"
],
"repo": "OpenVoiceOS/status",
"url": "https://github.com/OpenVoiceOS/status/issues/1552",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2710254581
|
🛑 Text-to-Speech - Mimic - Smart'Gic is down
In 0b82144, Text-to-Speech - Mimic - Smart'Gic (https://tts.smartgic.io/mimic/status) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Text-to-Speech - Mimic - Smart'Gic is back up in ed75a09 after 44 minutes.
|
gharchive/issue
| 2024-12-02T02:41:31 |
2025-04-01T04:55:29.675328
|
{
"authors": [
"goldyfruit"
],
"repo": "OpenVoiceOS/status",
"url": "https://github.com/OpenVoiceOS/status/issues/1841",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
953102760
|
EPANET-MSX Low-Level API and Refactoring
Hello, my name is Kyle Arrowood. This summer I was given the opportunity to intern at Xylem, where I was tasked with redesigning the EPANET-MSX toolkit. The goal of this redesign was to remove dependencies on EPANET as well as dependencies on the specific input files (.inp and .msx) while maintaining full backwards compatibility.
This is the current progress of the new MSX API redesign. I have put a lot of the specific information about the new structure in the Updates section of the new Readme.
I look forward to discussion about the changes made!
The goals of this refactoring were pretty straightforward:
maintain full CLI backward compatibility
maintain full API backward compatibility
decouple the core functionality from dependency on EPANET
allow safe programmatic creation/mutation of network data
allow safe programmatic injection of hydraulic results data
This is a "very large" PR, as it must be to achieve the goals - however the heaviest lift was simply moving functions around to reorganize the code around the above principles.
I will be more formally reviewing the code over the next week, and I invite others so inclined to join as well. Using this PR thread as a way to anchor conversations will help us track whether we are meeting these goals.
Nice job @karrowood . The re-organization of the code was well thought out and it appears that your objectives have been met. I do have a couple of suggestions that I think can improve things with little extra effort:
Make the reference to the MSXproject struct passed into each API function an opaque pointer instead of a direct one. That will better encapsulate the project data and avoid the need to distribute msxtypes.h with the library. (Consult the EPANET 2.2 code to see how it was done there.)
To reduce the amount of memory allocation in the MSXadd functions consider allocating memory for the Node, Link, and concentration arrays in batches whose size doubles each time their current capacity is reached. Also consider adding an MSXsetSize function that allows one to set the initial size of these vectors.
thanks for the quick thoughts @LRossman - much appreciated. I had to laugh about the second item, because Kyle had originally implemented MSXadd in exactly that way (double size when capacity is reached) and I thought it looked like premature optimization ;) And I can conjure up cases where you'll be consuming almost 2x the memory needed -- however, I really like the idea of adding a MSXsetSize function to allow a client to hint at an allocation size, in much the way std::vector allocation works.
Thanks for the feedback @LRossman. I checked out the EPANET 2.2 code and see that, since msxtypes.h will not be included with the CLI, I should probably also create something like an msxenums.h. I will also switch the implementation back to the more dynamic allocation that doubles every time it reaches capacity.
Here's a blueprint for implementing my first suggestion about using opaque pointers (not tested so user beware):
1. As mentioned, select all of the relevant enums in msxtypes.h and cut and paste them to a new file msxenums.h. Add an include statement for this file in msxtypes.h and coretoolkit.h.
2. Re-define the MSXproject struct in msxtypes.h as:
   typedef struct Project {
   ...
   } *MSXproject;
3. Add typedef struct Project *MSXproject; to the top of coretoolkit.h.
4. Do a global Find and Replace operation on all files, replacing (MSXproject *MSX with (MSXproject MSX.
5. Restore the MSXproject *MSX signature to MSX_open() and place the following code at the top of it:
   struct Project *p = (struct Project *) calloc(1, sizeof(struct Project));
   *MSX = p;
6. Add free(MSX) to the bottom of MSX_close().
If I messed up somewhere I'm sure that @samhatchett will correct it.
I have updated the code, it now uses an opaque pointer for the MSXproject structure rather than a direct pointer. I have also created a new toolkit function called MSXsetSize that sets the size of the arrays within the MSXproject data structure. This function is also used within the adder functions whenever there is not enough space to add an object.
I had a short discussion with @samhatchett, and have not implemented the doubling of array sizes. Sam believes there are cases where extreme amounts of memory would be wasted, for example when adding the 250,000th node and the size doubles up to 500,000. I do see the advantage of doubling array sizes, as it saves a lot of time during the allocation process. I am open to discussion about whether or not we should change this from the way it is currently implemented. The change is very trivial.
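The two allocation strategies under discussion can be contrasted in a short sketch. Python is used here only for brevity (the real MSX code is C), and the class and method names are hypothetical, not part of the actual toolkit:

```python
class GrowableArray:
    """Vector that can grow either by doubling or to an explicit size hint."""

    def __init__(self):
        self.capacity = 0
        self.size = 0
        self.data = []

    def set_size(self, n):
        # MSXsetSize-style hint: one allocation up front, no wasted memory
        # when the client already knows how big the network will be.
        if n > self.capacity:
            self.data.extend([None] * (n - self.capacity))
            self.capacity = n

    def append(self, value):
        if self.size == self.capacity:
            # Doubling strategy: amortized O(1) appends, but can overshoot
            # by nearly 2x (e.g. the 250,001st item forces capacity 500,000).
            self.set_size(max(1, 2 * self.capacity))
        self.data[self.size] = value
        self.size += 1
```

With a size hint, `set_size(n)` makes a single allocation; without one, `append` falls back to doubling.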
thanks @karrowood - yes, I think that size-doubling can be problematic in some cases. I figure that the client code will probably know how big the network is going to be, and anything else is probably a rare use case...
I'm OK with keeping the incremental dynamic vector resizing as currently implemented. But I have a different question regarding the code that adds a new Node or Link to a project. I don't see where the arrays for c and c0 get allocated for the new Node or Link. They do get allocated if a new species is added, but it seems not so when species already exist. Am I missing something?
BTW, allocation of the c arrays for nodes can be postponed until a simulation begins since they hold computed values, not input values.
Previously all of the c and c0 values were allocated when adding species, but I just changed it so that the c values are allocated after everything has been added and MSX_init() is called. Now the c0 values are allocated within addNode/addLink or addSpecies, whichever gets called second. The toolkit functions are no longer as strict in terms of order, since a user can either add species first or add the network first.
Just a note here that Kyle has completed his internship with Xylem, and we thank him for the valuable contribution to the codebase in the form of this pull request. I have been slower than usual to personally review the work here (which is saying something!), but will be devoting some time in the coming week. Just wanted to re-open the floor to other developers and engineers for comment. Thanks!!
|
gharchive/pull-request
| 2021-07-26T17:00:11 |
2025-04-01T04:55:29.688642
|
{
"authors": [
"LRossman",
"karrowood",
"samhatchett"
],
"repo": "OpenWaterAnalytics/epanet-msx",
"url": "https://github.com/OpenWaterAnalytics/epanet-msx/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
378675262
|
Alerts API
Add a user Alerts API. For more information, see the documentation.
Pull Request Test Coverage Report for Build 192
46 of 47 (97.87%) changed or added relevant lines in 8 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.2%) to 91.011%
Changes Missing Coverage:
app/policies/alert_policy.rb: 8 of 9 changed/added lines covered (88.89%)
Totals (change from base Build 185): +0.2%
Covered Lines: 1458
Relevant Lines: 1602
💛 - Coveralls
|
gharchive/pull-request
| 2018-11-08T10:45:56 |
2025-04-01T04:55:29.703758
|
{
"authors": [
"coveralls",
"floriandejonckheere"
],
"repo": "OpenWebslides/openwebslides-backend",
"url": "https://github.com/OpenWebslides/openwebslides-backend/pull/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
195514243
|
Energy Shield Pool Wrong Calculation
Path of Building v1.2.36 shows a 14843 ES pool whereas in game I have 10311 ES (both with Discipline Aura level 19).
The character's tree and items have been imported through the "Import/Export Build" function.
Screenshot available below.
Thank you for your work.
Is all your armor in game q20?
All skills (including auras) are initially enabled when imported. In your case, that includes Vaal Discipline. You can see which skills are enabled in the View Skill Details section at the top of the Calcs tab.
@zocke1r yes, all my items are 20Q in game.
Good catch @Openarl, the difference was coming from Vaal Discipline.
Maybe Vaal skills should be disabled by default? I'm sure many energy shield users might be thrown off because of Vaal Discipline. I've been trying to figure this out for a week until I finally came to post here.
Thanks, guys.
|
gharchive/issue
| 2016-12-14T12:06:38 |
2025-04-01T04:55:29.787863
|
{
"authors": [
"Openarl",
"QbleD3",
"zocke1r"
],
"repo": "Openarl/PathOfBuilding",
"url": "https://github.com/Openarl/PathOfBuilding/issues/102",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
319178960
|
Usability: drop tip before detach pipette
As a user I would like to avoid crashing the tip when the pipette lowers to detach.
Background
Ian first brought this up after testing, and I experienced it twice during use as well.
Acceptance Criteria
~Remind user to remove tip in modal~ Fixed with #3959
Robot drops tip automatically after homing before moving forward
Design
- TBD
Partially fixed: warning added to change tip instructions (#3959).
Holding off on automatic tip drop until HTTP functionality is added for drop tip, which is planned as part of the calibration project(s).
|
gharchive/issue
| 2018-05-01T11:47:19 |
2025-04-01T04:55:29.792796
|
{
"authors": [
"pantslakz",
"umbhau"
],
"repo": "Opentrons/opentrons",
"url": "https://github.com/Opentrons/opentrons/issues/1310",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2181572716
|
bug: cancel a protocole during its run lead an infinite "cancelling" message
Overview
When I try to cancel a protocol which is running, the "cancelling" button spins indefinitely, so that I have to switch off the robot to regain control.
Steps to reproduce
No response
Current behavior
No response
Expected behavior
When I want to cancel a protocol, I should be able to regain control without switching off the robot!
Operating system
Linux
System and robot setup or anything else?
App v7.2.0
@mumugelin
Could you try to use the latest app and system image?
https://github.com/Opentrons/opentrons/releases/tag/v7.2.2
Hi,
I have just upgraded to v7.2.2 and there is still the same problem when cancelling a protocol: never-ending "cancelling run" button!
Muriel
--
Dr Muriel Gelin
Research Engineer
Centre de biochimie Structurale de Montpellier
Team "ABCIS: Avanced Biology, Chemistry, Informatics Studio"
CNRS UMR 5048 - UM - INSERM U 1054
29 rue de Navacelles
34090 MONTPELLIER Cedex - FRANCE
http://www.cbs.cnrs.fr
Tel: (33) 04 67 41 77 12
@mumugelin
Is it possible to share your protocol?
I haven't heard any update for a few days.
I'll close this. If needed, please re-open this.
I still have the same problem with v7.2.2, and it occurs for all protocols! So I don't think that will help.
But just in case, here is one of my protocols.
crystallo-kit-to-1-SwissCi-plate_viscous.txt
We're unable to reproduce this. If it's happening to anyone on the latest software version (v7.2.2 at the time of writing), please:
Provide your complete protocol files, if you haven't already.
Let us know exactly where in the protocol the problem happens. Videos are very helpful.
Let us know if it happens every time you cancel the protocol, or only sometimes.
Provide your robot's troubleshooting logs.
Here is a video of when I try to cancel ... but it never ends!
https://github.com/Opentrons/opentrons/assets/137809780/2e8b0561-2f23-4a5f-ba47-ba00a757f0c6
|
gharchive/issue
| 2024-03-12T13:04:04 |
2025-04-01T04:55:29.813755
|
{
"authors": [
"MurielGelin",
"SyntaxColoring",
"koji",
"mumugelin"
],
"repo": "Opentrons/opentrons",
"url": "https://github.com/Opentrons/opentrons/issues/14630",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
283711486
|
Inconsistent return values for wells
overview
When accessing wells, the return type is unpredictable depending on the number of arguments passed. It is possible to get a single Well, a WellSeries of Wells, or a WellSeries of WellSeries of Wells.
behavior
Current behavior: See #319, #320, #331, and #409
Desired behavior: Any function that should return some number of wells should have a return type of List[Well], and functions that take a List[Well] should behave predictably when given an empty list, a list with one element, or a list with more than one element.
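The desired behavior can be sketched with minimal, hypothetical stand-ins for the real classes (this is not the actual Opentrons API — `Well` and `wells` here are illustrative only):

```python
from typing import Dict, List

class Well:
    """Minimal stand-in for a labware well."""
    def __init__(self, name: str):
        self.name = name

def wells(plate: Dict[str, "Well"], *names: str) -> List[Well]:
    """Always return a flat List[Well], regardless of how many names are given:
    zero args -> empty list, one arg -> list of one, many args -> list of many."""
    return [plate[n] for n in names]
```

Because the return type is always `List[Well]`, downstream code can iterate without first checking whether it received a single `Well`, a `WellSeries`, or a nested series.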
This is true (and a wontfix) in v1, and nice and consistent in v2.
|
gharchive/issue
| 2017-12-20T22:14:15 |
2025-04-01T04:55:29.816118
|
{
"authors": [
"btmorr",
"sfoster1"
],
"repo": "Opentrons/opentrons",
"url": "https://github.com/Opentrons/opentrons/issues/537",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
305003202
|
API: Catch JSON decode errors on robot config
overview
Title says it all. This reportedly is causing issues on machines during assembly: given some sequence of events, the config.json file became empty, causing robot_configs.load() to raise a json.decoder.JSONDecodeError and preventing any script that uses robot from running.
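A minimal sketch of the guard described — the function name and default values here are hypothetical, not the actual robot_configs code:

```python
import json

DEFAULT_CONFIG = {"name": "robot", "version": 1}

def load_config(path):
    """Load the robot config, falling back to defaults if the file is
    missing, empty, or contains invalid JSON."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return dict(DEFAULT_CONFIG)
```

An empty config.json makes `json.load` raise `json.JSONDecodeError`, which this wrapper converts into a clean fallback instead of a crash at import time.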
Approved pending CI
|
gharchive/pull-request
| 2018-03-14T02:40:05 |
2025-04-01T04:55:29.817878
|
{
"authors": [
"andySigler",
"btmorr"
],
"repo": "Opentrons/opentrons",
"url": "https://github.com/Opentrons/opentrons/pull/1022",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2164167414
|
fix(app-shell, app-shell-odd): Prevent excessive notify logs
Closes RQA-2431
Overview
Because subscription logic is handled by components currently, logs are excessive in some spots, especially on the OT-2. Let's comment out the logs until we refactor subscription logic to be handled directly by the robot-server (next release). Note that there aren't any excessive network requests - it's just logs.
Test Plan
Start a dev build and look at the console.
Navigate to an OT-2 and start a pipette flow, run a protocol, etc.
Verify that there are no [notify] logs from the app-shell.
Changelog
Removed spammy notification logs.
Risk assessment
low
LGTM
|
gharchive/pull-request
| 2024-03-01T20:59:13 |
2025-04-01T04:55:29.821842
|
{
"authors": [
"mjhuff",
"ncdiehl11"
],
"repo": "Opentrons/opentrons",
"url": "https://github.com/Opentrons/opentrons/pull/14582",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2624919283
|
fix(protocol-designer): fix the display when the dropdown menu has only one option
Overview
Fix the display when the dropdown menu has only one option.
close RQA-3434
Test Plan and Hands on Testing
Changelog
Review requests
low
Risk assessment
low
[ ] update text
|
gharchive/pull-request
| 2024-10-30T18:08:58 |
2025-04-01T04:55:29.824842
|
{
"authors": [
"koji"
],
"repo": "Opentrons/opentrons",
"url": "https://github.com/Opentrons/opentrons/pull/16640",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
264012663
|
Add more explicit typing on all instances of Object and Array proptypes
#474
Adding more explicit proptypes like shape and arrayOf instead of PropTypes.object and PropTypes.array.
Totally agree and changed it
updated the commit
To resolve the conflicts, you can just get rid of the work you did on /squads-related work.
Just a heads up that once #hacktoberfest finishes, we will be combing through PRs and finishing them. Let us know if you have a problem with us taking the work you've done and completing it ourselves. Your work is still graciously appreciated, and we know that this is all a volunteer effort. Please understand that it is simply a compliment - we want your work so badly that we're going to put it in ASAP!
Closed by @Cooperbuilt as per https://github.com/OperationCode/operationcode_frontend/pull/666#issuecomment-340303310
|
gharchive/pull-request
| 2017-10-09T20:13:37 |
2025-04-01T04:55:29.830262
|
{
"authors": [
"devronhansen",
"kylemh"
],
"repo": "OperationCode/operationcode_frontend",
"url": "https://github.com/OperationCode/operationcode_frontend/pull/666",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2341741480
|
(%Pure Hackmons) gen9tradeabilities: Tiering
%Pure Hackmons: "2nd times the charm"
Created via Iolanthe on Sat, 08 Jun 2024 17:03:35 GMT.
Superseded by https://github.com/OperationTourCode/OTC/pull/495/files
|
gharchive/pull-request
| 2024-06-08T17:03:47 |
2025-04-01T04:55:29.831713
|
{
"authors": [
"IolantheOTC",
"WeWuzNidokangz"
],
"repo": "OperationTourCode/OTC",
"url": "https://github.com/OperationTourCode/OTC/pull/495",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1552471356
|
Plugins not getting installed
I tried to find something related to the latest release, but I couldn't. On a brand new cluster deployment, repository-s3 and repository-azure are not being installed, even though they are listed in pluginsList.
Has anyone else faced this issue?
I am also facing the same issue. How did you fix this issue, @danielbichuetti? Could you please guide me?
|
gharchive/issue
| 2023-01-23T03:50:39 |
2025-04-01T04:55:29.834088
|
{
"authors": [
"ashutoshkumaranshu",
"danielbichuetti"
],
"repo": "Opster/opensearch-k8s-operator",
"url": "https://github.com/Opster/opensearch-k8s-operator/issues/414",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1563339842
|
operator doesn't update image version while upgrading a cluster
I deployed OpenSearch v2.4.1 with opensearch-operator-2.1.1.
Then I tried to upgrade the cluster to v2.5.0 by changing spec.general.version (and the plugin version as well).
Changed from
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
name: my-system
namespace: my-system
spec:
general:
version: "2.4.1"
httpPort: 9200
vendor: opensearch
serviceName: my-system
pluginsList: ["repository-s3", "https://github.com/aiven/prometheus-exporter-plugin-for-opensearch/releases/download/2.4.1.0/prometheus-exporter-2.4.1.0.zip"]
<SKIP THE REST>
To
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
name: my-system
namespace: my-system
spec:
general:
version: "2.5.0"
httpPort: 9200
vendor: opensearch
serviceName: my-system
pluginsList: ["repository-s3", "https://github.com/aiven/prometheus-exporter-plugin-for-opensearch/releases/download/2.5.0.0/prometheus-exporter-2.5.0.0.zip"]
<SKIP THE REST>
Data nodes got upgraded to v2.5.0 after applying the yaml file. But new master nodes and client nodes failed to boot up with the errors below.
Exception in thread "main" java.lang.IllegalArgumentException: Plugin [prometheus-exporter] was built for OpenSearch version 2.5.0 but version 2.4.1 is running
at org.opensearch.plugins.PluginsService.verifyCompatibility(PluginsService.java:394)
at org.opensearch.plugins.InstallPluginCommand.loadPluginInfo(InstallPluginCommand.java:820)
at org.opensearch.plugins.InstallPluginCommand.installPlugin(InstallPluginCommand.java:875)
at org.opensearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:275)
at org.opensearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:249)
at org.opensearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:104)
at org.opensearch.cli.Command.mainWithoutErrorHandling(Command.java:138)
at org.opensearch.cli.MultiCommand.execute(MultiCommand.java:104)
at org.opensearch.cli.Command.mainWithoutErrorHandling(Command.java:138)
at org.opensearch.cli.Command.main(Command.java:101)
at org.opensearch.plugins.PluginCli.main(PluginCli.java:60)
Pod events show it's still pulling the v2.4.1 image
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m57s default-scheduler Successfully assigned my-system/my-system-masters-2 to host3
Normal Pulled 3m56s kubelet Container image "public.ecr.aws/opsterio/busybox:1.27.2-buildx" already present on machine
Normal Created 3m56s kubelet Created container init
Normal Started 3m56s kubelet Started container init
Normal Pulled 3m2s (x4 over 3m56s) kubelet Container image "docker.io/opensearchproject/opensearch:2.4.1" already present on machine
Normal Created 3m2s (x4 over 3m56s) kubelet Created container opensearch
Normal Started 3m2s (x4 over 3m55s) kubelet Started container opensearch
Warning BackOff 2m27s (x10 over 3m48s) kubelet Back-off restarting failed container
Pod description shows it uses v2.4.1 as well
Containers:
opensearch:
Container ID: containerd://743950cfdef64a76f394416a6a51e9277a610cd5f58cb45b44d281b40e3a8e8a
Image: docker.io/opensearchproject/opensearch:2.4.1
Related: https://github.com/Opster/opensearch-k8s-operator/issues/404
I was able to reproduce this. Looks like the operator starts with the upgrade of the data nodes but somehow at the same time starts a rolling restart of the master nodes due to the changed plugin which then fails due to old image version and new plugin version. Seems the operator is not correctly checking for upgrade-in-progress before starting the rolling restart. Not yet sure where in the code exactly and how to fix it.
As a workaround: For me it worked to wait until the data nodes were finished upgrading and then manually delete (kubectl delete pod) the crashing pods. After that the operator continued with the upgrade as expected.
Thank you @swoehrl-mw, but the workaround doesn't work for me. I upgraded the operator to 2.2.0, then all data nodes to 2.5.0, and tried deleting the master and client pods that are in a CrashLoopBackOff state. But they still use the OS v2.4.1 image and try to install the v2.5.0 exporter plugin (I need this plugin), so they stay stuck in CrashLoopBackOff.
Did some more testing and found the problem: non-data nodepools are not upgraded by the operator; it instead delegates that to the statefulset controller of Kubernetes. This led to the pods being recreated prematurely with a half-old-half-new config (old docker image, new plugin version) and subsequently to the crashes. That also got the operator into an unexpected state, so the upgrade stalled.
I've created a PR (#431) that should fix the problem.
|
gharchive/issue
| 2023-01-30T23:01:49 |
2025-04-01T04:55:29.841072
|
{
"authors": [
"arve0",
"deng47",
"swoehrl-mw"
],
"repo": "Opster/opensearch-k8s-operator",
"url": "https://github.com/Opster/opensearch-k8s-operator/issues/420",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
787138796
|
Unable to log query params or post data
How can we log post data and query params from the request?
Can't with our plugin, as we specifically do not want that detail. You can review the Kong HTTP log plugin and fork our plugin to add custom logic back into the splunk logger to meet your use case.
|
gharchive/issue
| 2021-01-15T19:23:07 |
2025-04-01T04:55:29.847273
|
{
"authors": [
"KedharnathGoud",
"jeremyjpj0916"
],
"repo": "Optum/kong-splunk-log",
"url": "https://github.com/Optum/kong-splunk-log/issues/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
101623501
|
Error: [ngRepeat:dupes] Duplicates in a repeater are not allowed (Fix)
Updated to swagger 0.2.1 and I am getting an Error like this:
Had to change 1 thing in line 665 at /dist/scripts/swagger-ui.js
// parse resources
if (!swagger.tags) {
resources.push({
name: 'default',
open: true
});
map['default'] = 0;
}
to this:
// parse resources
if (swagger.tags) {
resources.push({
name: 'default',
open: true
});
map['default'] = 0;
}
Hi !
This should not be the way to fix the issue ...
Do you mind sending your swagger descriptor, as I can't reproduce using my samples?
http://ntrc-delta.neoteric.eu:9000/api/neodocs/swagger.json
Can't reproduce the issue with your file on the demo: http://orange-opensource.github.io/angular-swagger-ui/
Could you please show how you use the component ?
Actually I am having the very same issue, but only when I use the swagger-ui directive in a partial as part of a single page app. If I put it in the index.html page like your example, it works fine.
should be fixed in 0.2.2, can you confirm?
0.2.2 did fix the ngRepeat:dupes error, but now I have a new issue. Every time I hit the "open/hide", "list operations", or "expand operations" buttons I get two additional duplicate operations in the list (see attached). There should only be one. I also see the "error {}" now, which wasn't there before.
thanks
--greg
swagger doc:
{
  "info": {
    "version": "1.0.0",
    "contact": {"name": "bodinegl@gmail.com"},
    "description": "#### base connectors return the contents of a file\n",
    "license": {
      "url": "http://www.test.com/legal/license-agreements.aspx",
      "name": "Test Inc."
    },
    "title": "Base - Cloud Connectors",
    "copyright": "2015 Test Inc. ALL RIGHTS RESERVED"
  },
  "paths": {
    "/base": {
      "get": {
        "description": "base connector description for get verb",
        "tags": ["base"],
        "summary": "base connector summary for get verb",
        "responses": {"200": {"description": "successful operation"}},
        "produces": ["application/json"]
      }
    }
  },
  "schemes": ["http"],
  "produces": ["application/json"],
  "basePath": "/connector",
  "tags": [{"name": "base", "description": "base connector"}],
  "host": "localhost:8080",
  "swagger": "2.0"
}
when I click the "error {}" I see the following:
[{"level":"error","message":"Can't read from file
http://localhost:8080/connectors/base"}]
but if I go to that link directly I get the swagger doc back like above.
On 21 August 2015 at 09:13, Mathieu Alès notifications@github.com wrote:
should be fixed in 0.2.2, can you confirm ?
I finally reproduced the issue when using 'ngRoute'. If you do so, you can fix the issue by adding reloadOnSearch = false in your route configuration
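For reference, a sketch of that route configuration (the module, path, and template names are illustrative, not from this thread):

```javascript
// Illustrative AngularJS route configuration: reloadOnSearch: false
// prevents the route from reloading when only the query string changes,
// which stops the swagger-ui directive from re-rendering and producing
// duplicate operations.
angular.module('myApp', ['ngRoute', 'swaggerUi'])
  .config(['$routeProvider', function ($routeProvider) {
    $routeProvider.when('/docs', {
      templateUrl: 'docs.html',
      reloadOnSearch: false
    });
  }]);
```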
|
gharchive/issue
| 2015-08-18T10:28:56 |
2025-04-01T04:55:29.866210
|
{
"authors": [
"bSm1le",
"bodinegl",
"mathieuales"
],
"repo": "Orange-OpenSource/angular-swagger-ui",
"url": "https://github.com/Orange-OpenSource/angular-swagger-ui/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1463514917
|
[Bug]: Accessibility List - Item with lines
Prerequisites
[X] I have searched the backlog for duplicate or closed feature requests
Your test device
iPhone 13 pro max
OS Version
iOS 16.1
App version
0.8.0
Describe the issue
Accessibility is well managed but the screen is complex; there is an oversight with the Info "I" button:
Voice Over => allow user to use info button
Selection control => allow user to use info button
The deletion as well as the detail is well managed
Switch Control :
Voice Over :
NB: to activate the options with voice over, the focus must be on the button and slide down with one finger
https://user-images.githubusercontent.com/20354434/203817947-6634cc54-2d03-4da1-ba9b-614609d69a53.MP4
Expected Behavior
Get inspired by the official Apple app "Telephone" on iOS -> "Recents" tab
Example with switch control :
Example with VoiceOver:
I deleted the item (swipe down then tap)
I displayed the detail with the "I" info button (drag down then tap)
I double-clicked on the item to call (double tap)
https://user-images.githubusercontent.com/20354434/203821130-5cc1236a-1734-4994-b18a-4a09d618bb9d.MP4
https://github.com/Orange-OpenSource/ods-ios/issues/36
ok accessibility
|
gharchive/issue
| 2022-11-24T15:31:02 |
2025-04-01T04:55:29.874214
|
{
"authors": [
"Tayebsed93"
],
"repo": "Orange-OpenSource/ods-ios",
"url": "https://github.com/Orange-OpenSource/ods-ios/issues/298",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
148486356
|
BlockQuote element
In addition to having the Break and Heading h1-6 content elements, I would like to add a BlockQuote element, which would map to the <blockquote> HTML element.
I created a PR here: #6773
I'm getting a little concerned that we're rebuilding an HTML editor instead of a layout editor. Until now, elements were structural, containers, or higher-level content blocks. For instance, both hr and headings are structural. This one is an HTML element that isn't structural, and it's not a container in the sense that it will not contain any structure richer than text. It looks like it would be marginally more difficult to use an HTML element with a blockquote inside. This looks like an element that should be in a gallery module rather than in the core distro.
An alternative design that would kill more than one bird with a stone would be to make it easy to configure the outer tag (and if there is one) on all elements, like we do for CSS classes.
Do you mean because a Blockquote element is not structural because it is like a container that can contain other HTML elements?
I can see that the line becomes a little bit blurry when dealing with this type of element, because it would arguably make sense as well to be able to add child elements to a Blockquote element (a <cite> element for example, but also paragraphs and images) using drag & drop.
I'll give this some thought, and raise it as a topic coming Tuesday.
Pretty much. Where do we draw the line? Are we going to add elements for all HTML tags?
From the meeting people seem to agree with a generic container tag where you would select the name of the tag, from its editor. Dropdown list with available tags, and textbox to enter a custom one.
It is important that you can set the tag when creating a new element through the UI inheriting from the generic container element. Currently you cannot inherit from containers.
As discussed today at the meeting:
CssClass is very analogous to a possible Tag property to all tags. Objections to one almost all work for the other. We might want to think about adding exceptions to CssClass, as a consequence. The absence of an outer tag is one possible exception where the CssClass loses meaning.
The text and container elements at least could be enhanced with a new tag property. This would cover the blockquote, cite, and other scenarios.
I'd add that it may even be useful in some cases to suppress the default outer tag entirely, both for text and container.
Action:
New Tag property on Text element to support the BlockQuote issue.
Add Tag property to Canvas element
|
gharchive/issue
| 2016-04-14T21:26:28 |
2025-04-01T04:55:29.883535
|
{
"authors": [
"bleroy",
"jersiovic",
"sebastienros",
"sfmskywalker"
],
"repo": "OrchardCMS/Orchard",
"url": "https://github.com/OrchardCMS/Orchard/issues/6772",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
181106744
|
Orchard.Setup - Blog recipe doesn't create widgets correctly
I thought #7169 and friends would be simple issues but they have caught me out.
My blog recipe doesn't actually work as intended. I checked the admin after it had run and the UI showed the widgets (Archives, Recent) with the correct blog selected but this was just because that was the only value in the list. The actual BlogId on the part is 0 which means when posts are added it doesn't list them.
I can't see any way to pass a blog id in through the widget create RecentBlogPosts command.
I'm thinking instead I should catch the blog id being 0 and attempt to set a sensible default in RecentBlogPostsPartHandler.cs for example:
using Orchard.Blogs.Models;
using Orchard.Blogs.Services;
using Orchard.ContentManagement.Handlers;
using Orchard.Data;
namespace Orchard.Blogs.Handlers {
    public class RecentBlogPostsPartHandler : ContentHandler {
        public RecentBlogPostsPartHandler(
            IRepository<RecentBlogPostsPartRecord> repository,
            BlogService blogService) {
            Filters.Add(StorageFilter.For(repository));
            OnCreated<RecentBlogPostsPart>((context, part) => {
                // If part.BlogId == 0, assume it's being created via a command (the UI shouldn't allow it).
                // The widget create command can't specify a blog id, so we attempt to infer it here:
                // - get the list of blogs ordered by last modified date
                // - set part.BlogId to the last modified blog
            });
        }
    }
}
This should mean that in a recipe as long as the recent/archive parts are created after the blog they want to be assigned to they will be attached to that blog.
//cc @sebastienros before I code it, is this a good approach?
In this case this needs to be a custom command from the blogs module.
Related question: How do we pass the default widgets' text for the default recipe?
how text is passed to widgets
In answer to the related question, there is a /Text command param and in the code it puts it into a BodyPart if it exists:
var text = String.Empty;
if (widget.Has<BodyPart>()) {
if (UseLoremIpsumText) {
text = T(LoremIpsum).Text;
}
else {
if (!String.IsNullOrEmpty(Text)) {
text = Text;
}
}
widget.As<BodyPart>().Text = text;
}
From WidgetCommands.cs.
custom command
Creating a custom command would solve the issue but seems like an inelegant solution because it means that the widget create command will happily make widgets with invalid data.
extending parameters
One option I would like to explore in discussion is making the widget command extensible.
I initially suggested adding a json type block on the end of the command for custom elements, considered doing it with an args... style block but I think the elegant solution would be to make the command arguments extensible.
Individual modules can register their support for extra args and would get passed and use a harvester and delegates and whatnot to manage it all.
Can you see drawbacks to this?
how to handle modules duplicating extended args (priorities?)
how to provide a way for another recipe item, such as recentblogposts wanting to depend on the id of the blog (pass the collection of completedrecipestepcontext's in to the event?)
too complicated solution?
Then we create a way for a widget to opt out of being able to be created via widget create which forces the developer to look for its custom command and stops invalid widgets being created.
Also fixes #1659
|
gharchive/issue
| 2016-10-05T09:37:06 |
2025-04-01T04:55:29.890479
|
{
"authors": [
"rtpHarry",
"sebastienros"
],
"repo": "OrchardCMS/Orchard",
"url": "https://github.com/OrchardCMS/Orchard/issues/7251",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
118898005
|
Feature Request: Take content permissions into account when indexing projections
In Orchard 1, projections don't respect content item permissions. It would be great if content permissions were first class properties on content and having projections support this.
@sebastienros can we make a first class implementation for this?
|
gharchive/issue
| 2015-11-25T18:37:04 |
2025-04-01T04:55:29.892040
|
{
"authors": [
"Jetski5822",
"dcinzona"
],
"repo": "OrchardCMS/Orchard2",
"url": "https://github.com/OrchardCMS/Orchard2/issues/76",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
227583878
|
[1.10.2] Auto-restart of sound system freezing game after some amount of time.
*Mod Version: 3.4.1.0 (this appears on old versions too)
*Forge Version: 12.18.3.2281 (1.10.2)
*Link to crash log: https://pastebin.com/V6YCgra9 (latest.log)
fml-client-latest.zip (the log is just too large; I didn't find any site that could host it)
Description:
After the auto-restart of the sound system, lag begins to appear, and after some amount of time the game just freezes.
And if at that moment I look in Task Manager, I see "java.exe" consuming a very small amount of RAM (15-30 MB).
So, how can I avoid this?
Actually I think the lag is causing the sound system to crash, not the other way around. This is the reason I put it in. In my experience, when the client gets really laggy the streaming music thread in PaulsCode (the underlying sound system/library) loses its mind and crashes without recovery. BTW, I have seen this happen back on 1.7.10 as well as 1.10.2 - haven't seen it yet on 1.11.x.
Questions on the log:
This is the extent of the log?
The beetroot1/beetroot2 in the log - saw that somewhere and don't recall. :\ Google doesn't show anything worthwhile.
Those are the full logs.
I just looked in the log and saw this, omg. I don't know what's causing it.
Ok. For the next BETA I will be adding the following:
Option to disable auto-restart of the sound system
A crash check will occur every 30 seconds or so
If crashed, chat messages will be given regarding what it will do (restart, or tell you to manually restart the client), as well as the current FPS of the client
I do think, though, that the lag problem is something other than Dynamic Surroundings. Unfortunately I can't tell which mod(s) from the log.
I tried playing without Dynamic Surroundings - it froze too, after 2 hours. After 1 hour of gaming I turned on music in MineTunes; maybe that's why it crashed. So you're right - the problem is in the sound system.
And in fml-client-latest I've seen this:
[22:43:03] [Server thread/DEBUG] [FML/]: The world 3ce46ed9 (Mega Leftract 5 44 h) may have leaked: seen 125 times.
Pushed a v3.4.2.0 version that has an option to turn off the auto-restart. As mentioned I do not think it will affect your situation. Wish I could be of more help.
|
gharchive/issue
| 2017-05-10T06:57:03 |
2025-04-01T04:55:29.967274
|
{
"authors": [
"OreCruncher",
"pPkMnh4to8994h"
],
"repo": "OreCruncher/DynamicSurroundings",
"url": "https://github.com/OreCruncher/DynamicSurroundings/issues/86",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1858208522
|
Balancer off chain min BPT check
Do a dry run of how much BPT depositing liquidity should yield, and then pass it along to VaultAdmin's depositToStrategy. This PR also disables the withdraw / deposit functions on the Balancer strategy, since this is never going to be a default strategy.
Things left to do:
make it possible to withdraw using multiple assets with 1 transaction
make it possible to pass min expected assets on withdrawal
TBD:
we need a sanity check in case the strategist is hijacked. We could:
still calculate the price of BPT on chain and leave ~2% of wiggle room when the strategist defines the expected BPT on deposit / assets on withdrawal
limit the number of TXs a strategist can execute in a given time interval -> so circular transactions of depositing / withdrawing to/from the strategy can't drain too many funds
something else?
We've decided to use the VaultValue checker and not rely on off chain calculations
|
gharchive/pull-request
| 2023-08-20T17:34:31 |
2025-04-01T04:55:30.008798
|
{
"authors": [
"sparrowDom"
],
"repo": "OriginProtocol/origin-dollar",
"url": "https://github.com/OriginProtocol/origin-dollar/pull/1768",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1295414435
|
Add key
Hi, there is no key for version 3.7.0.29; please add it.
It's inevitable that some minor versions get missed; it has now been added. The supported versions can be found in https://github.com/Ormicron/Sharp-dumpkey/blob/main/Address.json. PRs are welcome.
|
gharchive/issue
| 2022-07-06T08:09:24 |
2025-04-01T04:55:30.014942
|
{
"authors": [
"Ormicron",
"pxss"
],
"repo": "Ormicron/Sharp-dumpkey",
"url": "https://github.com/Ormicron/Sharp-dumpkey/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
155426709
|
Where to place the configuration file for the live-cd
Hi, I want to make a test ISO based on Debian and I like your tool. I have 2 questions:
1- Will the live user be root?
2- Where should I place the configuration files for the live-cd (like Openbox customization files autostart.sh or menu.xml, etc.) before building: in /etc/skel or /root?
Thanks a lot
When you use this script : https://github.com/Oros42/CustomDebian, yes, by default, the user is root.
You can create a folder CustomDebian/custom_setup/ and put in this folder a file like «adduser.sh» with :
useradd <MyUser>
You can put all your custom conf in CustomDebian/custom_conf/. All the content of this folder is copied into «/» of the live-cd, so you can create CustomDebian/custom_conf/etc/skel/ and CustomDebian/custom_conf/root/
Ok...Thanks a lot for your work! ;)
Good question. I haven't used debian-installer yet.
I think you can put a script in CustomDebian/custom_setup/ with the setup of debian-installer, but I don't know how it works.
If you find the solution, I am interested in adding this to CustomDebian.
I will search for something about it. Another question: how do I enable the contrib and non-free repos? Do I need to edit sources.list in chroot, or can I enable the repos beforehand with a script in custom_setup?
3 options :
create CustomDebian/custom_conf/etc/apt/sources.list, but this file is copied/pasted just before the build of the ISO
edit sources.list in chroot
and, I think the best, create CustomDebian/custom_setup/0source.sh with something like:
echo "deb http://..... contrib non-free" > /etc/apt/sources.list
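A minimal sketch of such a setup script (the mirror URL and release name below are assumptions; CustomDebian does not ship this file):

```shell
#!/bin/sh
# Hypothetical CustomDebian/custom_setup/0source.sh: enable contrib and
# non-free inside the chroot before packages are installed.
cat > /etc/apt/sources.list <<'EOF'
deb http://httpredir.debian.org/debian jessie main contrib non-free
EOF
apt-get update
```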
Thanks a lot for your support!! I will ask you something in the future if I need help ;)
Okay, no problem ;-)
Tried to add a user with your suggestion but it seems it doesn't work. I have another question: is there a way to startx in chroot? Thanks
Sorry, it's adduser for an interactive add and useradd for an automatic add (check the man page for options).
startx in chroot? Euh... I never tried it. After 2 min of searching I found this: http://www.gentoo-wiki.info/HOWTO_startx_in_a_chroot
Thanks for your reply. Is it normal that when I use the rebuild option I get this output in the terminal during the mksquashfs process?
Source directory entry boot already used! - trying boot_1
Source directory entry opt already used! - trying opt_1
Source directory entry initrd.img already used! - trying initrd.img_1
Source directory entry mnt already used! - trying mnt_1
Source directory entry etc already used! - trying etc_1
Source directory entry vmlinuz already used! - trying vmlinuz_1
Source directory entry lib64 already used! - trying lib64_1
Source directory entry media already used! - trying media_1
Source directory entry lib already used! - trying lib_1
Source directory entry srv already used! - trying srv_1
Source directory entry var already used! - trying var_1
Source directory entry usr already used! - trying usr_1
Source directory entry run already used! - trying run_1
Source directory entry home already used! - trying home_1
Source directory entry proc already used! - trying proc_1
Source directory entry tmp already used! - trying tmp_1
Source directory entry sys already used! - trying sys_1
Source directory entry sbin already used! - trying sbin_1
Source directory entry root already used! - trying root_1
Source directory entry dev already used! - trying dev_1
Source directory entry bin already used! - trying bin_1
Oups :-/
I have found some other bugs. I will fix (normally) it this week.
:D Do you have an idea how to make the ISO UEFI-ready?
None of my computers have the fucking UEFI, so I didn't test it.
But if you open (for example) an ISO of Linux Mint, you've got this :
.
├── EFI
│ └── BOOT
│ ├── BOOTx64.EFI
│ └── grubx64.efi
But if you open an ISO of Debian, you see nothing. So I don't know how it works :-/
I have made few corrections on my script :
remove filesystem.squashfs before make new one
fix the exit of chroot (kill all process in chroot)
update the config. Now you can set your sources.list contents
I'm testing your new files with the corrections and now, after a rebuild, it did not boot into the live system with the live user or root...
arf :-/
I make new tests ....
OK, me too ... Maybe it depends on some new package that I installed. I will keep you informed.
New test for me and it works.
My test :
git clone https://github.com/Oros42/CustomDebian.git
cd CustomDebian
sudo ./build_custom_debian.sh new
And run the ISO in virtualbox.
Now I'm testing the same option (new) but with my configuration files, and I will inform you of my result. :)
|
gharchive/issue
| 2016-05-18T06:37:42 |
2025-04-01T04:55:30.027659
|
{
"authors": [
"Oros42",
"gmstyle"
],
"repo": "Oros42/CustomDebianSetup",
"url": "https://github.com/Oros42/CustomDebianSetup/issues/1",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
945456115
|
Fixed timer disappearing on mooch
The timer and the moving bar disappear when a fish bites on mooch because _catchHandled doesn't get reset.
Thanks!
|
gharchive/pull-request
| 2021-07-15T14:36:48 |
2025-04-01T04:55:30.037990
|
{
"authors": [
"Ottermandias",
"arrowkeyuser"
],
"repo": "Ottermandias/GatherBuddy",
"url": "https://github.com/Ottermandias/GatherBuddy/pull/17",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
615371142
|
Wrong data type for "O" in oscillator.h
At lines 30 and 50 in oscillator.h, the variable "O" is defined as an 'unsigned int' instead of an 'int'.
"O" can have negative values, such as when Otto walks (Otto9.cpp line 282).
There is no apparent impact on Arduino/Atmega, but math results are out of whack on ESP32.
Solution: Change the data type from "unsigned int" to "int" for variables "O" and "A" at lines 29, 30, 49, 50 in oscillator.h
Hi @TokenTotem
Thanks, now we have a separate oscillator library for ESP
|
gharchive/issue
| 2020-05-10T11:26:05 |
2025-04-01T04:55:30.041292
|
{
"authors": [
"TokenTotem",
"cparrapa"
],
"repo": "OttoDIY/OttoDIYESP",
"url": "https://github.com/OttoDIY/OttoDIYESP/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2248960234
|
No change is different from negative change
Pull Request Checklist:
[x] This PR addresses an already opened issue (for bug fixes / features)
This PR fixes #1690
[x] Tests for the changes have been added (for bug fixes / features)
[x] (If applicable) Documentation has been added / updated (for bug fixes / features)
[x] CHANGES.rst has been updated (with summary of main changes)
[x] Link to issue (:issue:number) and pull request (:pull:number) has been added
What kind of change does this PR introduce?
In robustness_fractions
Computes the fraction of negative change explicitly and returns it
The agreement fraction is the maximum of the positive, negative or no change fractions.
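A hedged plain-Python sketch of the logic described above (function names are illustrative; xclim's actual implementation operates on xarray datasets):

```python
def change_fractions(changes, tol=0.0):
    """Fractions of members showing positive, negative, and no change."""
    n = len(changes)
    pos = sum(c > tol for c in changes) / n
    neg = sum(c < -tol for c in changes) / n
    none = sum(abs(c) <= tol for c in changes) / n
    return pos, neg, none

def agreement_fraction(changes, tol=0.0):
    """Agreement is the largest of the three change-category fractions."""
    return max(change_fractions(changes, tol))

# Two of four members show negative change, the largest category.
print(agreement_fraction([1.0, -2.0, -3.0, 0.0]))  # 0.5
```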
Does this PR introduce a breaking change?
Yes, agree_frac has changed. However, it now better reflects its definition and usual expectations. And the case where "no change" is the largest group should not be very frequent, it usually happens with zero-bounded indicators.
Other information:
FYI @RondeauG
coverage: 90.192% (+0.002%) from 90.19%
when pulling 0bb653efcb3d6e8cde987834c461b8aebdd8aa7a on fix-robustness-agreefrac
into 2d8726b6c6e6a1ac82befb35ac88238cc89dc618 on main.
|
gharchive/pull-request
| 2024-04-17T18:27:03 |
2025-04-01T04:55:30.055095
|
{
"authors": [
"aulemahal",
"coveralls"
],
"repo": "Ouranosinc/xclim",
"url": "https://github.com/Ouranosinc/xclim/pull/1711",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
369227697
|
rolling n-day max precip; minor modifications to max_1_day
Added max n-day cumulated precip. Minor changes to max_1_day.
Note: strict NaN behavior when using rolling calculations cannot rely on 'skipna=False' as in the max_1_day index, because rolling calculations result in NaNs for the first n-1 values in a series (n = window size)
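To illustrate the caveat, a plain-Python sketch (xclim itself uses xarray's rolling, but the edge behavior is the same):

```python
import math

def rolling_max(values, window):
    """Rolling max that, like xarray/pandas rolling with the default
    min_periods, yields NaN for the first window - 1 positions."""
    out = []
    for i in range(len(values)):
        if i < window - 1:
            out.append(math.nan)  # not enough values in the window yet
        else:
            out.append(max(values[i - window + 1 : i + 1]))
    return out

# The leading NaNs appear regardless of any skipna-style option,
# so strict NaN checks cannot treat them as missing data.
print(rolling_max([1.0, 3.0, 2.0, 5.0], 3))  # [nan, nan, 3.0, 5.0]
```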
Pull Request Test Coverage Report for Build 162
9 of 9 (100.0%) changed or added relevant lines in 1 file are covered.
3 unchanged lines in 1 file lost coverage.
Overall coverage decreased (-0.07%) to 66.118%
Files with Coverage Reduction:
xclim/indices.py - 3 new missed lines - 64.25%
Totals (change from base Build 161): -0.07%
Covered Lines: 281
Relevant Lines: 425
💛 - Coveralls
|
gharchive/pull-request
| 2018-10-11T17:31:58 |
2025-04-01T04:55:30.062270
|
{
"authors": [
"coveralls",
"tlogan2000"
],
"repo": "Ouranosinc/xclim",
"url": "https://github.com/Ouranosinc/xclim/pull/59",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
232590555
|
coinbin transaction problem
My coinb.in address is
1CF7TjzTLAkK5xJDAmHzD5APgQokzRjFMs. I sent .03761935 with a .0004 transaction fee to 14JqcgchhsQQv7LovnvA2ZCBgs14H88znN. I'm not really sure of the tx id; I didn't save it and don't know how to get it. The transaction doesn't show up on the blockchain link from your site but does show up on a different one. It says unconfirmed and it's been 24 hours. Why do all my coinbin transactions take forever while Coinbase goes right through? Also, the address I sent this transaction to changed hours after I sent it. Will it go through? Can I get this back? It was to another wallet. Pls help
Looking at the blockhcain history of your address https://blockchain.info/address/1CF7TjzTLAkK5xJDAmHzD5APgQokzRjFMs it looks like your transactions have been confirmed.
Sometimes transactions can take longer depending on your fee, nothing else. Next time set a higher fee.
All the best.
|
gharchive/issue
| 2017-05-31T14:47:30 |
2025-04-01T04:55:30.067154
|
{
"authors": [
"OutCast3k",
"cooper320"
],
"repo": "OutCast3k/coinbin",
"url": "https://github.com/OutCast3k/coinbin/issues/97",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
216991528
|
Website > LifeCycle hooks: the tutorial mentions Sink.createSink instead of Sink.create
Thanks for pointing this out :) It's fixed now!
|
gharchive/issue
| 2017-03-25T17:54:56 |
2025-04-01T04:55:30.068252
|
{
"authors": [
"LukaJCB",
"rvion"
],
"repo": "OutWatch/purescript-outwatch",
"url": "https://github.com/OutWatch/purescript-outwatch/issues/5",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1713021647
|
File list box doesn't sort items correctly
Seems the sort is case sensitive, counting upper case characters ahead of lowercase:
The only change between the above two images is I renamed 'researchtest' to 'Researchtest'
Correct behavior should sort alphanumerically without regard to case.
For additional reference, this is the correct sort in the Windows 10 file explorer
Internally the Filesystem code relies on std::filesystem::directory_iterator:
https://en.cppreference.com/w/cpp/filesystem/directory_iterator
The iteration order is unspecified, except that each directory entry is visited only once.
Many filesystems will store file info in sorted order, since there may be some lookup and update benefits to doing so. It's best not to rely on an assumption of filenames being already sorted, since there is no guarantee of it.
As for std::sort, you can specify a lambda as an optional third argument for the comparator:
https://en.cppreference.com/w/cpp/algorithm/sort
That can allow you do to a case insensitive sort.
The closest standard library function for string compare is strcmp:
https://en.cppreference.com/w/cpp/string/byte/strcmp
It seems stricmp (case insensitive string compare) is not part of the C++ standard, so is not supported by all compilers. Further, the meaning of case insensitive compare can be a bit ambiguous, depending on the language (think accented characters), or just not apply to languages that have no sense of case.
There are a few suggestions for writing string insensitive compare on Stack Overflow:
https://stackoverflow.com/questions/11635/case-insensitive-string-comparison-in-c
Of course, another option is to simply define the sort order as case sensitive. The way Windows does it is not necessarily "correct", just familiar. I think we would be perfectly justified if we wanted to impose a case sensitive sort here.
With that said, even on Linux, the file explorer and terminal listings show names sorted in a case insensitive manner.
|
gharchive/issue
| 2023-05-17T03:07:43 |
2025-04-01T04:55:30.074443
|
{
"authors": [
"DanRStevens",
"ldicker83"
],
"repo": "OutpostUniverse/OPHD",
"url": "https://github.com/OutpostUniverse/OPHD/issues/1363",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
416733425
|
[BUG] Issue with Huawei P20 Pro & Unity 2018.3
Hello,
I have an issue with Huawei P20 Pro and Unity 2018.3 and your plugin.
On Unity 2017.4.21, with a simple script setting Screen.fullScreen = false,
I change only one thing in your values xml files: it's the ".Light" that I have removed, to have a fully opaque display.
I have this display on the screen :
where with Unity 2018.3.4 I have this :
My script is quite simple :
using UnityEngine;
using TMPro;
public class ScreenDetection : MonoBehaviour
{
public TextMeshProUGUI text1;
public TextMeshProUGUI text2;
private void Awake()
{
Screen.fullScreen = false;
}
// Start is called before the first frame update
void Start()
{
text1.text = "Screen size : " + Screen.width + " // " + Screen.height;
text2.text = "Safe size : " + Screen.safeArea.width + " // " + Screen.safeArea.height;
}
}
Is it something to solve in your plugin or is it a Unity issue?
Is there a way to detect when the device doesn't have physical buttons but only soft buttons drawn directly on the screen, so I can redefine the "safe screen" area? Android seems to get rid of this when I use the Unity variable (probably due to an OS limitation).
thanks in advance @Over17 !
We have the same issues on 2018.3
I found 2 possible solutions. I can give them to you @MPeloquin and submit a patch to @Over17 tomorrow morning!
I edited my previous post because I wrote "LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES" instead of "LAYOUT_IN_DISPLAY_CUTOUT_MODE_NEVER". NEVER natively forces the activity to not use the notch area, instead of letting the app manage it terribly badly.
Same issue here with G6 Play on Pie.
I tried the first one and the second one with no luck on Unity 2018.3.11f1.
If someone made it work, a fork would be nice.
Thanks !
@liszto I tried your patch, and it didn't work for me on a Pixel 3XL.
@Over17 it didn't work in portrait ? Or in landscape ?
I'm having the issue for landscape but currently I found no solution for this :/
I can linked my Android project if you want cause in portrait on all our devices (I didn't have a Pixel 3XL) it works.
Didn't work in either. The stripe is there from the very beginning in portrait.
I'm opening my Android Studio project, and I will drop my entire patch here (it may have changed since my post above).
You scared me with your Pixel 3XL 😨
Same issue as #18 .
@Over17
This is what I'm doing in my project :
package com.manzalab.ubiant.plugins;
import android.os.Build;
import android.os.Bundle;
import android.view.View;
import android.view.WindowManager;
import com.unity3d.player.UnityPlayerActivity;
public class UbiantOverrideUnityActivity extends UnityPlayerActivity
{
public static int statusBarHeight = 0;
public static int softButtonBarHeight = 0;
private static int getLowProfileFlag()
{
return View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY |
View.SYSTEM_UI_FLAG_LAYOUT_STABLE |
View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN |
View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION |
View.SYSTEM_UI_FLAG_HIDE_NAVIGATION |
View.SYSTEM_UI_FLAG_FULLSCREEN;
}
private void showSystemUi()
{
// Works from API level 11
mUnityPlayer.setSystemUiVisibility(mUnityPlayer.getSystemUiVisibility() & ~getLowProfileFlag());
}
private void addUiVisibilityChangeListener()
{
mUnityPlayer.setOnSystemUiVisibilityChangeListener(new View.OnSystemUiVisibilityChangeListener()
{
@Override
public void onSystemUiVisibilityChange(final int visibility)
{
// Whatever changes - force status/nav bar to be visible
showSystemUi();
}
});
}
public static int statusBarHeight()
{
return statusBarHeight;
}
public static int softButtonBarHeight()
{
return softButtonBarHeight;
}
public int getStatusBarHeight()
{
int result = 0;
int resourceId = getResources().getIdentifier("status_bar_height", "dimen", "android");
if (resourceId > 0) {
result = getResources().getDimensionPixelSize(resourceId);
}
return result;
}
public int getSoftButtonsBarHeight()
{
// navigation bar height
int navigationBarHeight = 0;
int resourceId = getResources().getIdentifier("navigation_bar_height", "dimen", "android");
if (resourceId > 0) {
navigationBarHeight = getResources().getDimensionPixelSize(resourceId);
}
return navigationBarHeight;
}
/**
* Dispose of the mUnityPlayer when restarting the app.
* This ensures that when the app starts up again it does not start with stale data.
*/
@Override
protected void onCreate(Bundle savedInstanceState)
{
if (mUnityPlayer != null)
{
mUnityPlayer.quit();
mUnityPlayer = null;
}
statusBarHeight = getStatusBarHeight();
softButtonBarHeight = getSoftButtonsBarHeight();
super.onCreate(savedInstanceState);
getWindow().clearFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN);
if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.P)
{
WindowManager.LayoutParams lp = getWindow().getAttributes();
lp.layoutInDisplayCutoutMode = WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_NEVER;
//Set layout extend to Notch area
getWindow().setAttributes(lp);
}
// Clear low profile flags to apply non-fullscreen mode before splash screen
showSystemUi();
addUiVisibilityChangeListener();
}
}
I forgot to link my values files from my project :
values-14.xml :
<?xml version="1.0" encoding="utf-8"?>
<resources>
<style name="UnityStatusBarTheme" parent="android:Theme.Holo.Light.DarkActionBar" />
<style name="UnityTransparentStatusBarTheme" parent="UnityStatusBarTheme" />
</resources>
values-19.xml :
<?xml version="1.0" encoding="utf-8"?>
<resources>
<style name="UnityStatusBarTheme" parent="android:Theme.Holo.Light.DarkActionBar">
<item name="android:windowBackground">@android:color/black</item>
</style>
<style name="UnityTransparentStatusBarTheme" parent="UnityStatusBarTheme">
<item name="android:windowNoTitle">true</item>
<item name="android:windowTranslucentStatus">true</item>
<item name="android:windowTranslucentNavigation">true</item>
</style>
</resources>
values-21.xml :
<?xml version="1.0" encoding="utf-8"?>
<resources>
<style name="UnityStatusBarTheme" parent="android:Theme.Material.Light.NoActionBar.Fullscreen" >
<item name="android:statusBarColor">@android:color/black</item>
<item name="android:navigationBarColor">@android:color/black</item>
<item name="android:windowBackground">@android:color/black</item>
</style>
<style name="UnityTransparentStatusBarTheme" parent="UnityStatusBarTheme">
<item name="android:windowTranslucentStatus">false</item>
<item name="android:windowTranslucentNavigation">false</item>
</style>
</resources>
values.xml :
<?xml version="1.0" encoding="utf-8"?>
<resources>
<style name="UnityStatusBarTheme" parent="android:Theme.NoTitleBar.Fullscreen"/>
<style name="UnityTransparentStatusBarTheme" parent="UnityStatusBarTheme"/>
</resources>
Okay I identified the change in Unity which caused this behavior - it's the Android notch support. Digging further.
@Over17 any news on your Unity side of this issue ?
Here's the link to the public issue tracker: https://issuetracker.unity3d.com/issues/android-android-9-devices-with-a-notch-are-not-in-full-screen-black-bar-is-visible-at-the-bottom-of-the-screen
Okay looks like I've fixed the bug in Unity. It will take some time to take it into 18.3, 19.1, 19.2 and 19.3.
Awesome!
@Over17 Nice!
@mikavaliviita unfortunately I don't have a workaround - it's a change in native Unity code, and I can't imagine a way to apply it from outside without proper hacking.
The fix has landed into:
2019.3.0a2
2019.2.0a15
2019.1 and 2018.4 pending.
Fix landed into 2019.1.2f1.
Thank you. Is there a release date?
Supposedly next week.
I hope it will land in 2018.4 too #hope
Never lose hope!
Landing into 2018.4.1f1.
|
gharchive/issue
| 2019-03-04T10:29:42 |
2025-04-01T04:55:30.090595
|
{
"authors": [
"MPeloquin",
"Over17",
"QuentinGprd",
"elberto",
"liszto"
],
"repo": "Over17/UnityShowAndroidStatusBar",
"url": "https://github.com/Over17/UnityShowAndroidStatusBar/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
576784037
|
Would you consider providing a script that only builds the ipk?
Would you consider providing a script that only builds the ipk? When updating, I find rebuilding the entire firmware somewhat inconvenient; updating a single ipk would be much simpler and would not require reconfiguring the new firmware.
Agreed, please consider it.
Not for now, I have no need for it myself.
|
gharchive/issue
| 2020-03-06T08:50:07 |
2025-04-01T04:55:30.147677
|
{
"authors": [
"Joseepb",
"P3TERX",
"ppp47"
],
"repo": "P3TERX/Actions-OpenWrt",
"url": "https://github.com/P3TERX/Actions-OpenWrt/issues/102",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
721091335
|
How do I limit the log file size after configuring Docker through Portainer?
I don't know Docker very well and can only handle basic usage. After adding aria2 through Portainer, I found there is no corresponding place to add the '--log-opt max-size=1m' option, and I'm not sure how to do it.
Is it done by adding max-size=1M under Logging > add logging driver option?
I've never used Portainer, so I can't help you with that.
Pasting the command straight into the CLI is much more convenient.
|
gharchive/issue
| 2020-10-14T02:52:00 |
2025-04-01T04:55:30.149205
|
{
"authors": [
"P3TERX",
"q350348361"
],
"repo": "P3TERX/Docker-Aria2-Pro",
"url": "https://github.com/P3TERX/Docker-Aria2-Pro/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
900049061
|
Include subset provenance metadata when using the subset interface
Include subset provenance metadata when using the subset interface to download a subsetted data table. The provenance metadata (TBD) should include the filtering properties that had been used to create the subset.
CSV subset is now returned as a zip file that includes a JSON file containing the filtering properties used for the subset. Ticket added for a second step, which is to normalize the filtering data to whatever we decide (it currently reflects the format used internally by Dex).
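The behavior described above (a CSV plus a JSON provenance file bundled into one zip) can be sketched with the standard library; the function name and the filter-property keys below are illustrative assumptions, not Dex's actual internals.

```python
import io
import json
import zipfile

def make_subset_zip(csv_text, filter_props):
    """Bundle a subsetted CSV with a JSON file recording the
    filtering properties that produced it (hypothetical sketch)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("subset.csv", csv_text)
        zf.writestr("provenance.json", json.dumps(filter_props, indent=2))
    return buf.getvalue()

# Reading the archive back recovers both the data and the provenance record.
data = make_subset_zip("a,b\n1,2\n", {"rows": {"start": 0, "end": 1}})
with zipfile.ZipFile(io.BytesIO(data)) as zf:
    names = sorted(zf.namelist())
    props = json.loads(zf.read("provenance.json"))
```

Normalizing `filter_props` to an agreed schema, as the follow-up ticket describes, would then only change what gets serialized into the provenance file.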
|
gharchive/issue
| 2021-05-24T21:59:49 |
2025-04-01T04:55:30.156513
|
{
"authors": [
"rogerdahl",
"servilla"
],
"repo": "PASTAplus/dex",
"url": "https://github.com/PASTAplus/dex/issues/17",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
250723271
|
Need to download pathway map as .svg, .tiff or .jpeg
Another user request. We seem to have lost the download function
We do have one, and it is working.
|
gharchive/issue
| 2017-08-16T18:46:42 |
2025-04-01T04:55:30.166805
|
{
"authors": [
"ARWattam",
"hyoo"
],
"repo": "PATRIC3/patric3_website",
"url": "https://github.com/PATRIC3/patric3_website/issues/1653",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
173006673
|
Add Experiment view page
global search -> searched for "hypoxia" -> clicked on one of the experiments -> went to
https://www.alpha.patricbrc.org/view/Experiment/354496
The header says "loading..." but it actually never loads the results.
We need Experiment view page, similar to that on production:
https://www.patricbrc.org/portal/portal/patric/SingleExperiment?cType=taxon&cId=2&eid=315319
This page was not implemented. I added the page and made a pull request. This will be included in the next build.
I reviewed the page and it looks good.
|
gharchive/issue
| 2016-08-24T17:01:58 |
2025-04-01T04:55:30.169724
|
{
"authors": [
"maovt",
"mshukla1"
],
"repo": "PATRIC3/patric3_website",
"url": "https://github.com/PATRIC3/patric3_website/issues/902",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1639299040
|
feat(magic): add dryrun config option to only print SQL output
%config PrqlMagic.dryrun=True allows just printing the generated SQL without executing it.
Great idea!
I can't think of a better name...
|
gharchive/pull-request
| 2023-03-24T12:27:00 |
2025-04-01T04:55:30.399386
|
{
"authors": [
"eitsupi",
"max-sixty"
],
"repo": "PRQL/pyprql",
"url": "https://github.com/PRQL/pyprql/pull/159",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
120046479
|
Differentiate between (override) RC and joystick
So far we considered the joystick and the RC the same device via different physical links. It seems like it would be good to differentiate a bit more:
Create a msg/joystick_input.msg file for MAVLink joystick commands
Store the last RC control state. If joystick data comes in and the RC is unchanged, use it.
If the RC control state changes and is in manual, lock out the joystick data (this is intended for override)
If the RC control state is non-manual and the joystick stops, enable RC. This allows flying home in position control when exiting the app.
@DonLakeFlyer Did I cover all corner cases? It sounds a bit ugly, but the best I could come up with.
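The switching rules in the list above can be sketched as a small decision function; the state names and priority order here are assumptions read off the bullet points, not PX4's actual implementation.

```python
def select_input_source(rc_changed, rc_mode, joystick_active, current):
    """Decide whether RC or joystick commands should drive the vehicle.

    rc_changed:      RC sticks or switches moved since the stored state
    rc_mode:         'manual' or 'non-manual' (e.g. position control)
    joystick_active: joystick data is still arriving
    current:         source currently in control ('rc' or 'joystick')
    """
    if rc_changed and rc_mode == "manual":
        return "rc"        # manual RC change locks out the joystick (override)
    if joystick_active and not rc_changed:
        return "joystick"  # RC unchanged, so keep honoring joystick data
    if not joystick_active and rc_mode != "manual":
        return "rc"        # joystick stopped: fly home on RC in position control
    return current

# Joystick stream stops while RC is non-manual: control falls back to RC.
source = select_input_source(False, "non-manual", False, "joystick")
```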
Very good way of functioning. It would probably be a good idea for the pilot and his team to have a way to manually switch it back and forth from QGC. Vasil Petrov
Should there be a way to force control to one side of the equation? Force joystick, or force radio? Default would be both. But maybe at some point someone wants to take over control. Not sure. Will also need some sort of deadband filtering for jitter.
I don't quite get why you need to distinguish things by fllight mode?
Hi @LorenzMeier, is this currently implemented? or is there still a conflict between RC and joystick commanding?
https://github.com/PX4/Firmware/pull/10227 goes in a similar direction
I'm closing this for now as we're overall reworking the inputs and the current way of doing things seems to generally work - RC is quickly becoming a legacy thing and not connected in parallel.
@LorenzMeier @dagar
I don't agree that RC become legacy. And really waited to this feature.
There are few scenarios:
If the system is stable, and became full autonomous it doesn't need RC.
For test flight we get many drones saved because the auto modes had problems and we take control to RC.
On Fully deployed system, where there is no operator that know how to Fly RC. Or the system is beyond of the RC radio signal. The operator might want to take a joystick control.
In case of loss of GPS, the operator can take control to joystick attitude mode and fly in FPV mode.
@BazookaJoe1900 That's all fine, but will you work on it? Otherwise I'm going to keep it closed because for us it is a "wontfix" issue. There are lots of good suggestions for PX4, but we need to prioritize the work of the core dev team and not have issues dangling around that we have no plan to address.
We need to get better at setting expectations, and part of that is saying no to things. If you are willing to go ahead and contribute the implementation for this in the next weeks then we can make sure to support you on it.
|
gharchive/issue
| 2015-12-02T22:06:36 |
2025-04-01T04:55:30.425377
|
{
"authors": [
"BazookaJoe1900",
"DonLakeFlyer",
"LorenzMeier",
"Stifael",
"superware",
"tubeme"
],
"repo": "PX4/Firmware",
"url": "https://github.com/PX4/Firmware/issues/3309",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
329865312
|
Set PWM_MIN for DJI F450 frame
Describe the bug
F450 builds seem to stop props if PWM_MIN is set to 1075. We might need to set it to 1175 or a similar, slightly higher setting.
To Reproduce
Steps to reproduce the behavior:
Switch the drone on and arm it without propellers attached
Tilt it 60 degrees
Expected behavior
The propellers should not stop
Drone (please complete the following information):
DJI F450 frame
Additional context
This can lead to crashes if it happens in-air.
This largely depends on the ESCs. Are you refering to using the original DJI ESCs?
In my experience an ESC calibration is usually needed and should be part of a vehicle setup. Enabling one-shot makes a difference too.
I've come across a lot of people selecting this in PX4 as a generic 450 sized frame. If this PX4 airframe is going to become more specific I suggest we also introduce a range of intentionally generic options.
Output from Dev call: DJI ESC don't do calibration
Hi @LorenzMeier ,
I've dug into this issue a bit. The configuration value invoked by 4011_dji_f450 sets PWM_MIN 1230, so something else is causing your issue.
When 4011_dji_f450 gets called in the startup scripts, it then calls 4001_quad_x which in turn calls rc.mc_defaults. rc.mc_defaults sets PWM_MIN 1075, (only if AUTOCNF == yes). Once rc.mc_defaults completes, 4001_quad_x completes, then 4011_dji_f450 sets PWM_MIN 1230, (again, only when AUTOCNF == yes).
Although the default value of PWM_MIN in the startup scripts is initialized at 1000, I think the cause of this issue might have been either not configuring the airframe correctly, or manually setting that value lower sometime after the airframe was set.
Let me know your thoughts.
-Mark
TODO: verify analysis of @mcsauder by setting f450 config and observe values
I've verified that uploading the current master branch v1.8.0-rc0 firmware, if the F450 airframe is selected PWM_MIN is set to 1230. If, however, the F450-sized quadrotor with CAN (4012_quad_x_can), PWM_MIN remains the default value of 1075. I've also checked F350 and verified the F350 airframe sets PWM_MIN to 1230.
Should set PWM_MIN 1230 perhaps be added to the 4012_quad_x_can config file?
Should set PWM_MIN 1230 perhaps be added to the 4012_quad_x_can config file?
From what I see uavcan does not use PWM_MIN.
Also I'm currently updating the MC PID tuning page and I specifically added PWM_MIN as a precondition: https://bkueng.gitbooks.io/px4-user-guide/content/en/advanced_config/pid_tuning_guide_multicopter.html#precondition
@LorenzMeier given that the default is set to 1230, and your initial reporter changed it to 1075, I don't think we need to change anything. What do you think?
Yep. Let's close this!
|
gharchive/issue
| 2018-06-06T13:22:10 |
2025-04-01T04:55:30.435403
|
{
"authors": [
"LorenzMeier",
"bkueng",
"dagar",
"mcsauder",
"thomasgubler"
],
"repo": "PX4/Firmware",
"url": "https://github.com/PX4/Firmware/issues/9609",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
76032839
|
Fix potential null pointer deref in Mavlink dtor if task_main returns error
LL_APPEND is called just before the loop spins up but various error conditions can cause the task to exit before then. When that happens Mavlink::start_helper calls delete on the instance which tries to prune it from the global list. If this is the first Mavlink instance to attempt starting the list head is null and we hardfault in the Mavlink dtor.
Only call LL_DELETE after checking the list head for a null pointer.
This fixes a startup condition where our boards hang on boot without a USB CDC/ACM connection (mavlink_open_uart fails) but boot successfully when connected to USB.
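The guard this PR adds can be illustrated with a generic singly linked list: check the head for null before pruning, since the failing instance may never have been appended. A Python sketch of the pattern (the actual code uses utlist's LL_APPEND/LL_DELETE macros in C++; the names here are illustrative):

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.next = None

def ll_delete(head, node):
    """Remove node from a singly linked list, returning the new head,
    and tolerate the case where the head is still null because the
    node was never appended (the condition the fix guards against)."""
    if head is None:          # the guard added by the fix
        return None
    if head is node:
        return head.next
    cur = head
    while cur.next is not None and cur.next is not node:
        cur = cur.next
    if cur.next is node:
        cur.next = node.next
    return head

# Destroying an instance that failed before LL_APPEND ever ran:
# the head is still None, so the delete is a harmless no-op
# instead of a null-pointer dereference.
orphan = Node("mavlink0")
head = ll_delete(None, orphan)
```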
Much appreciated. These are some really good contributions!
|
gharchive/pull-request
| 2015-05-13T15:50:28 |
2025-04-01T04:55:30.437336
|
{
"authors": [
"LorenzMeier",
"NaterGator"
],
"repo": "PX4/Firmware",
"url": "https://github.com/PX4/Firmware/pull/2171",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
169317036
|
Remove LPOS.Z reset
As described in https://github.com/PX4/Firmware/issues/4878
You should remove all the resets. The local reference shouldn't be reset ever. I think @jgoppert forgot to remove this line during the refactoring.
|
gharchive/pull-request
| 2016-08-04T08:18:03 |
2025-04-01T04:55:30.438628
|
{
"authors": [
"ecmnet",
"mhkabir"
],
"repo": "PX4/Firmware",
"url": "https://github.com/PX4/Firmware/pull/5228",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
238351112
|
mc_pos_control: Properly constrain xy velocity setpoints
Addresses #7467
Constrains the xy velocity setpoints before they get to the slewrate function.
@nkhoit
Thank you addressing the issue in #7467, however I don't think this is the correct fix. The reason for it is as follow:
1.) the setpoints generated by whatever mode should already be set to the correct limit. This means that for instance in auto, the vel_sp(0:1).length() should never exceed the cruise speed, otherwise there is already done something wrong. The same applies for manual control mode.
2.) The limitation is a safety net that if the above is not done properly, then constrain the velocity to vel_max. This check has to be done at the end of the velocity setpoint generation, otherwise it looses its purpose.
By just looking at the plot here #7467 just tells me that clearly the limitation of vel_max somehow failed.
in the log file that you posted, vel_max_xy was set to 8.
this means that the vel magnitude in xy should never exceed that value, otherwise something is obviously wrong. As you can see from the below plot, the magnitude is above the limit.
@Stifael
I guess the issue is in AUTO, vel_sp(0:1) exceed the cruise velocity.
They're strictly calculated as pos_sp(0:1)-pos(0:1) * PosXY_P.
https://github.com/PX4/Firmware/blob/master/src/modules/mc_pos_control/mc_pos_control_main.cpp#L1682
Should this just be constrained to the cruise speed then?
How about this?
I moved the velocity max constraint back to where it was and just constrained the vel_sp from auto modes to XY_CRUISE.
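The clamp under discussion (limiting the xy setpoint's magnitude rather than each axis independently) can be sketched in plain Python; the function name and signature are illustrative, not the mc_pos_control code.

```python
import math

def constrain_xy_speed(vx, vy, v_max):
    """Scale the horizontal velocity setpoint so its magnitude
    never exceeds v_max, preserving its direction."""
    speed = math.hypot(vx, vy)
    if speed > v_max and speed > 0.0:
        scale = v_max / speed
        return vx * scale, vy * scale
    return vx, vy

# A diagonal setpoint of (6, 8) has magnitude 10; with v_max = 8 it is
# scaled to roughly (4.8, 6.4), keeping direction but capping the speed.
vx, vy = constrain_xy_speed(6.0, 8.0, 8.0)
```

Clamping each component independently would instead bend the direction of a diagonal setpoint, which is why scaling the whole vector is the usual choice.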
Being handled in #7538
|
gharchive/pull-request
| 2017-06-25T00:15:29 |
2025-04-01T04:55:30.443285
|
{
"authors": [
"Stifael",
"nkhoit"
],
"repo": "PX4/Firmware",
"url": "https://github.com/PX4/Firmware/pull/7468",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
854092223
|
[service-deployment] MySQL Service CCE issues
CCE issue list
| Type  | Hostname | CCE-ID | Check item                                 |
|-------|----------|--------|--------------------------------------------|
| Linux | mysql    | U-23   | Disable services vulnerable to DoS attacks |
| MySQL | mysql    | DY-07  | Use secure encryption algorithms           |
| MySQL | mysql    | DY-08  | Enable logging                             |
| MySQL | mysql    | DY-09  | Apply the latest patches                   |
[ enable CCE ] https://github.com/PaaS-TA/PAAS-TA-MYSQL-RELEASE/pull/12
[ enable CCE ] https://github.com/PaaS-TA/OPENPAAS-SERVICE-JAVA-BROKER-MYSQL/pull/1
|
gharchive/issue
| 2021-04-09T02:19:36 |
2025-04-01T04:55:30.496718
|
{
"authors": [
"jinhyojin",
"okpc579"
],
"repo": "PaaS-TA/service-deployment",
"url": "https://github.com/PaaS-TA/service-deployment/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
360375196
|
PB-Assembly
Operating system
Linux, CentOS7
Package name
falcon-kit 1.2.3
pypeflow 2.1.0
Describe the issue
I am running the falcon job in our Moab PBS system, so I don’t have the -W block=T option available. And given this post:https://github.com/adaptivecomputing/torque/issues/268, I don’t think Moab will implement some blocking function in the near future.
So instead of using “pwatcher_type=blocking”, I used “pwatcher_type=fs_based”.
It turned out this option is not well implemented, for example, for some of the tasks, it worked, but for others, it failed.
When it fails, the error message will (always) be something like:
[INFO]CALL:
qdel Pedec92ba939bd0
qdel: illegally formed job identifier: Pedec92ba939bd0
I suspect this means falcon tries to kill an already-killed job after it detects the "run.sh.done" file. Normally a simple re-submission of the same job script will resume the whole pipeline. But I am just wondering whether we could fix this issue so that people don't need to re-submit the same job over and over.
Then I tried to use a combination of “-I -x” in hope to make an equivalent case for -W block=T (see this post: https://stackoverflow.com/questions/5982857/making-qsub-block-until-job-is-done). This time, the job keeps running, however, even the pbs job is “done” (status C in queue), there is no “run.sh.done” file generated in the designated directory. I am not sure whether this is the default behavior, or I hit another bug.
Error message
when using "pwatcher_type=fs_based":
[INFO]CALL:
qdel Pedec92ba939bd0
qdel: illegally formed job identifier: Pedec92ba939bd0
If you really have a blocking qsub call, you will definitely get run.sh.done upon successful completion.
The fs_based process-watcher is very difficult to maintain. Please file an issue at
https://github.com/PacificBiosciences/pypeFLOW/issues
And try to create a simpler example, so we can focus solely on pypeFLOW.
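In the meantime, on a scheduler without a blocking submit, the block-until-done behavior can be emulated by polling; a minimal generic sketch (the `job_is_done` callable is a stand-in for whatever `qstat` query or run.sh.done sentinel check applies, and is an assumption, not pypeFLOW's API):

```python
import time

def wait_for_job(job_is_done, poll_interval=1.0, timeout=3600.0):
    """Block until job_is_done() returns True, emulating `qsub -W block=T`.

    job_is_done: zero-argument callable, e.g. one that parses `qstat`
                 output or checks for a run.sh.done sentinel file.
    Returns True on completion, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if job_is_done():
            return True
        time.sleep(poll_interval)
    return False

# With a checker that reports done on the third poll, the call blocks
# through two sleeps and then returns True.
polls = iter([False, False, True])
finished = wait_for_job(lambda: next(polls), poll_interval=0.01)
```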
|
gharchive/issue
| 2018-09-14T16:35:59 |
2025-04-01T04:55:30.507115
|
{
"authors": [
"pb-cdunn",
"yingzhang121"
],
"repo": "PacificBiosciences/pbbioconda",
"url": "https://github.com/PacificBiosciences/pbbioconda/issues/12",
"license": "BSD-3-Clause-Clear",
"license_type": "permissive",
"license_source": "github-api"
}
|
449800308
|
Use consistent groupIds/packages
Java classes are packaged under io.pckt.*; however, in pom.xml we are using io.microprofile. It would be good to unify them.
We should also consider renaming to com.packt.* as there is no io.pckt.*
@pilhuhn @starksm64 WDYT?
|
gharchive/issue
| 2019-05-29T13:12:20 |
2025-04-01T04:55:30.514420
|
{
"authors": [
"cealsair",
"pavolloffay"
],
"repo": "PacktPublishing/Hands-On-Enterprise-Java-Microservices-with-Eclipse-MicroProfile",
"url": "https://github.com/PacktPublishing/Hands-On-Enterprise-Java-Microservices-with-Eclipse-MicroProfile/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1094582888
|
Error encountered in style transfer notebook (ch11)
The following error shows up after completing the notebook:
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_151841/1024981148.py in <module>
----> 1 out_img = postprocess(opt_img[0]).permute(1,2,0)
      2 show(out_img)
~/miniconda3/envs/c2-vision/lib/python3.9/site-packages/torchvision/transforms/transforms.py in __call__(self, img)
     59     def __call__(self, img):
     60         for t in self.transforms:
---> 61             img = t(img)
     62         return img
     63
~/miniconda3/envs/c2-vision/lib/python3.9/site-packages/torchvision/transforms/transforms.py in __call__(self, img)
    435
    436     def __call__(self, img):
--> 437         return self.lambd(img)
    438
    439     def __repr__(self):
/tmp/ipykernel_151841/2609962387.py in <lambda>(x)
      6 ])
      7 postprocess = T.Compose([
----> 8     T.Lambda(lambda x: x.mul_(1./255)),
      9     T.Normalize(mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225], std=[1/0.229, 1/0.224, 1/0.225]),
     10 ])
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
Some backward compatibility in torch auto tape.. I'm not 100% sure what exactly it is but I changed the cell
from
out_img = postprocess(opt_img[0]).permute(1,2,0)
show(out_img)
to
with torch.no_grad():
out_img = postprocess(opt_img[0]).permute(1,2,0)
show(out_img)
|
gharchive/issue
| 2022-01-05T17:28:13 |
2025-04-01T04:55:30.523371
|
{
"authors": [
"jemzipx",
"sizhky"
],
"repo": "PacktPublishing/Modern-Computer-Vision-with-PyTorch",
"url": "https://github.com/PacktPublishing/Modern-Computer-Vision-with-PyTorch/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1889905860
|
Use the javascript AWS Client instead of Go server
Using the package directly in the app could significantly improve the developer experience and, of course, reduce the code to maintain. I was doing some tests, and it can work, but the counterpoint is that Localstack needs to allow localhost URLs in the CORS settings (You can do it with an environment variable).
This change could open the door to porting the app to Electron.
What do you think? Could It be a good idea?
While I agree with you, the overall design principle was to provide a UI compatible with already existing developer setups. Using Localstack, the dev stack is most likely Docker/container-based, so SQS-Admin is just another container. Electron would require devs to install more software on their local machine. There would also be a need to provide the ability to configure the Localstack host/port, apart from the CORS settings.
I could have provided it all as an SPA without the server component by using the client JS SDK, but that would have meant the SPA would be built during container startup, which is also tricky because you'd lose immutability of the artifact. :)
Yeah, I agree with you, running it inside a container is more user-friendly. Anyway, we can pre-build the SPA, configure the connection inside the app, and persist it in session storage.
|
gharchive/issue
| 2023-09-11T07:59:58 |
2025-04-01T04:55:30.526171
|
{
"authors": [
"AlejandroPerez92",
"PacoVK"
],
"repo": "PacoVK/sqs-admin",
"url": "https://github.com/PacoVK/sqs-admin/issues/626",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1068314066
|
loadImage crashes after zooming an image that contains polygons
This occurs after editing a polygon, or after deleting a polygon and re-creating one by clicking.
Same as #93
Current reproduction: zoom the image (mainly zooming in), edit points (move/delete), then zoom back to near the original size; after that, switching images almost always crashes.
Further confirmation: when the image contains polygons, zoom in until the bounding-box sliders appear, then zoom out until the sliders disappear; switching images at that point crashes.
Not only that: zooming in and out can crash even without the sliders appearing. The crashes come in two kinds: one exits silently without an error, the other hangs (not responding).
It is now confirmed that the exit happens after loadImage completes. Saving after zooming works without error, but whether or not you save, trying to switch images after zooming crashes.
However, print statements show that the code inside loadImage runs to completion normally.
Steps to reproduce:
1. Load an image folder
2. Annotate the first image, then hold Ctrl and scroll the wheel to repeatedly zoom in and out (large scale changes)
3. Save the result
4. Press S/F or click the list to switch to the next image
5. Crash
Tried it and couldn't reproduce; will continue tomorrow.
I think we can close this issue now. Pull the new updates and try to reproduce the issue again; if the app crashes, please give us feedback.
Thank You!
|
gharchive/issue
| 2021-12-01T11:53:30 |
2025-04-01T04:55:30.531341
|
{
"authors": [
"Youssef-Harby",
"geoyee",
"linhandev"
],
"repo": "PaddleCV-SIG/EISeg",
"url": "https://github.com/PaddleCV-SIG/EISeg/issues/90",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
545554653
|
The paddlepaddle version in the Docker image is too old to run
The paddlepaddle version in the Docker image is too old to run, and the container cannot access the network to download packages. How should I solve this?
FROM that image, then reinstall it yourself with pip...
Download the latest Docker image from the paddlepaddle website.
|
gharchive/issue
| 2020-01-06T05:55:51 |
2025-04-01T04:55:30.532703
|
{
"authors": [
"Cabchinoe",
"WilliamEt",
"wics1224"
],
"repo": "PaddlePaddle/DeepSpeech",
"url": "https://github.com/PaddlePaddle/DeepSpeech/issues/410",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
562602179
|
same response time in both 1 core(p2.xlarge) and 8 core(p2.8xlarge) AWS gpu
I am getting the same response time on both the 1-core and 8-core GPU instances.
I am using CUDA 10.1 and cuDNN 7.6.
For the 8-core GPU, before running the Python program I run the following command on Ubuntu:
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
Am I missing any parameter that needs to be set?
Please help.
I am running with the following settings:
parser = argparse.ArgumentParser(description=__doc__)
add_arg = functools.partial(add_arguments, argparser=parser)
# yapf: disable
add_arg('num_samples', int, 1, "# of samples to infer.")
add_arg('beam_size', int, 500, "Beam search width.")
add_arg('num_proc_bsearch', int, 8, "# of CPUs for beam search.")
add_arg('num_conv_layers', int, 2, "# of convolution layers.")
add_arg('num_rnn_layers', int, 3, "# of recurrent layers.")
add_arg('rnn_layer_size', int, 1024, "# of recurrent cells per layer.")
add_arg('alpha', float, 2.5, "Coef of LM for beam search.")
add_arg('beta', float, 0.3, "Coef of WC for beam search.")
add_arg('cutoff_prob', float, 1.0, "Cutoff probability for pruning.")
add_arg('cutoff_top_n', int, 40, "Cutoff number for pruning.")
add_arg('use_gru', bool, True, "Use GRUs instead of simple RNNs.")
add_arg('use_gpu', bool, True, "Use GPU or not.")
add_arg('share_rnn_weights',bool, False, "Share input-hidden weights across "
"bi-directional RNNs. Not for GRU.")
When you use 1 core gpu, did you run:
export CUDA_VISIBLE_DEVICES=0 (or any gpu id)
before running python program?
yes i have run export CUDA_VISIBLE_DEVICES=0 in 1 core gpu.
The largest cost is probably the data pipeline.
|
gharchive/issue
| 2020-02-10T14:35:23 |
2025-04-01T04:55:30.534760
|
{
"authors": [
"lfchener",
"nayanhalder",
"zh794390558"
],
"repo": "PaddlePaddle/DeepSpeech",
"url": "https://github.com/PaddlePaddle/DeepSpeech/issues/424",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1773039805
|
[Model] update PP-ShiTuV2-rec preprocess parser policy
PR types(PR类型)
Description
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. 1 out of 2 committers have signed the CLA.
:white_check_mark: DefTruth
:x: qiuyanjun
qiuyanjun seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
|
gharchive/pull-request
| 2023-06-25T03:50:45 |
2025-04-01T04:55:30.538479
|
{
"authors": [
"CLAassistant",
"DefTruth"
],
"repo": "PaddlePaddle/FastDeploy",
"url": "https://github.com/PaddlePaddle/FastDeploy/pull/2061",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2492144004
|
Please support PaddlePaddle 3.0
When using PaddlePaddle 3.0, the problem in #3127 appears. The error message is the same as in #3127: a cuDNN error.
Hi, this looks like a version-compatibility issue between Paddle and cuDNN, not a PaddleClas problem. PaddleClas itself supports Paddle 3.0. If possible, we recommend installing in a Docker environment.
The issue has no response for a long time and will be closed. You can reopen or new another issue if are still confused.
From Bot
|
gharchive/issue
| 2024-08-28T13:51:12 |
2025-04-01T04:55:30.597749
|
{
"authors": [
"Bobholamovic",
"TingquanGao",
"mikezhuang2022"
],
"repo": "PaddlePaddle/PaddleClas",
"url": "https://github.com/PaddlePaddle/PaddleClas/issues/3230",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1505639105
|
[Question]: When converting a model to static graph with paddle.jit.to_static, how should an unused leading input be specified?
Please describe your question
The model's forward is defined as: def forward(query=None, query_repr=None, title_repr=None, title=None).
In practice:
During training, query and title are fed in and their similarity is returned;
Offline, query is fed in and the query vector is returned;
For online prediction, query_repr and title_repr are fed in and their similarity is returned;
model = paddle.jit.to_static(
    model,
    input_spec=[
        None,  # query_id
        paddle.static.InputSpec(shape=[None, None], dtype="float32"),  # query_embedding
        paddle.static.InputSpec(shape=[None, None], dtype="float32"),  # title_embedding
    ],
)
The question is: when exporting the static model for prediction, what is wrong with the conversion code above?
Which model are you using?
|
gharchive/issue
| 2022-12-21T03:04:25 |
2025-04-01T04:55:30.637672
|
{
"authors": [
"RileyShe",
"w5688414"
],
"repo": "PaddlePaddle/PaddleNLP",
"url": "https://github.com/PaddlePaddle/PaddleNLP/issues/4185",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1653132484
|
[Question]: TypeError: __init__() got an unexpected keyword argument 'paddle_dtype'
Please describe your question
On https://aistudio.baidu.com/aistudio/index I used several NovelAI projects, and all of them raise:
TypeError: __init__() got an unexpected keyword argument 'paddle_dtype'
I can't find a way around this error, and the FAQ has no entry related to 'paddle_dtype'.
The projects I used include:
https://aistudio.baidu.com/aistudio/projectdetail/5866872?contributionType=1
https://aistudio.baidu.com/aistudio/projectdetail/5871361?contributionType=1
https://aistudio.baidu.com/aistudio/projectdetail/5680370?contributionType=1
https://aistudio.baidu.com/aistudio/projectdetail/4890925?contributionType=1
...I also tried several projects related to 芒果自用NovelAI and none of them work; updating the libraries doesn't help either. Sorry, I'm just starting to learn and know very little, so this problem is quite difficult for me. TUT
[2023-04-03 23:23:13,082] [ INFO] - loading configuration file NovelAI_latest_ab21ba3c_paddle/text_encoder/config.json
[2023-04-03 23:23:13,086] [ INFO] - Model config CLIPTextConfig {
"architectures": [
"CLIPTextModel"
],
"attention_dropout": 0.0,
"bos_token_id": 0,
"dropout": 0.0,
"eos_token_id": 2,
"hidden_act": "quick_gelu",
"hidden_size": 768,
"initializer_factor": 1.0,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 77,
"model_type": "clip_text_model",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"paddlenlp_version": null,
"projection_dim": 768,
"return_dict": true,
"vocab_size": 49408
}
TypeError Traceback (most recent call last)
~/ui.py in on_run_button_click(self, b)
111 with self.run_button_out:
112 clear_output()
--> 113 self.pipeline.run(get_widget_extractor(self.widget_opt), task = self.task)
114
115
~/utils.py in run(self, opt, task)
222
223 def run(self, opt, task = 'txt2img'):
--> 224 self.from_pretrained(precision = opt.precision)
225 seed = None if opt.seed == -1 else opt.seed
226
~/utils.py in from_pretrained(self, precision, verbose, force)
178 # text to image
179 # do not directly load with fp16, since certain operators only support fp32
--> 180 self.pipe = StableDiffusionPipeline.from_pretrained(model)
181
182 if vae is not None:
~/diffusers_paddle/pipeline_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
379 # check if the module is in a subdirectory
380 if os.path.isdir(os.path.join(cached_folder, name)):
--> 381 loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
382 else:
383 # else load from the root directory
~/.data/webide/pip/lib/python3.7/site-packages/paddlenlp/transformers/model_utils.py in from_pretrained(cls, pretrained_model_name_or_path, from_hf_hub, subfolder, *args, **kwargs)
483 if cls.constructed_from_pretrained_config():
484 return cls.from_pretrained_v2(
--> 485 pretrained_model_name_or_path, from_hf_hub=from_hf_hub, subfolder=subfolder, *args, **kwargs
486 )
487
~/.data/webide/pip/lib/python3.7/site-packages/paddlenlp/transformers/model_utils.py in from_pretrained_v2(cls, pretrained_model_name_or_path, from_hf_hub, subfolder, *args, **kwargs)
1360 # 3. init the model
1361 init_args = config["init_args"] or ()
-> 1362 model = cls(config, *init_args, **model_kwargs)
1363
1364 loaded_state_dict_keys = list(model_state_dict.keys())
~/.data/webide/pip/lib/python3.7/site-packages/paddlenlp/transformers/utils.py in impl(self, *args, **kwargs)
168 pre_init_func(self, init_func, *args, **kwargs)
169 # keep full configuration
--> 170 init_func(self, *args, **kwargs)
171 # registed helper by
172 if post_init_func:post_init_func
TypeError: __init__() got an unexpected keyword argument 'paddle_dtype'
Hi, this problem is most likely caused by an update to ppdiffusers. I suggest trying ppdiffusers 0.9.0 and seeing whether that works.
Also, I can't see any of the projects you provided; could you share the original ones?
https://aistudio.baidu.com/aistudio/projectdetail/4783600 Could you take a look at this one?
Hi, I found that the current project requires paddlenlp==2.4.7.
You can install it in the project with the command pip install paddlenlp==2.4.7 --user, then restart the kernel!
OK! Thank you very much~ I'll go try it.
|
gharchive/issue
| 2023-04-04T03:59:45 |
2025-04-01T04:55:30.654203
|
{
"authors": [
"JunnYu",
"alice1883"
],
"repo": "PaddlePaddle/PaddleNLP",
"url": "https://github.com/PaddlePaddle/PaddleNLP/issues/5525",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1756321163
|
ASR -- model fine-tuning
When launching an ASR task with the existing command line, some domain-specific proper nouns cannot be recognized (most are recognized as homophones). Does PaddleSpeech support adding personalized training corpora to achieve "customized ASR"? If so, please kindly post a link, thanks.
You can take a look at https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/demos/custom_streaming_asr . Also try hotword intervention; contributions are welcome.
This address can no longer be found (404). Is there any other reference material available?
|
gharchive/issue
| 2023-06-14T08:15:02 |
2025-04-01T04:55:30.680284
|
{
"authors": [
"NLPerxue",
"wudanwei"
],
"repo": "PaddlePaddle/PaddleSpeech",
"url": "https://github.com/PaddlePaddle/PaddleSpeech/issues/3340",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
995636693
|
The model saved by PaddleSeg training has no model.yml; how can it be deployed with PaddleX?
Issue type: model deployment
Issue description
========================
Please describe the problem you encountered here, including your deployment environment, deployment requirements, model type, and application scenario, so developers can respond quickly.
For C++ deployment, see the documentation.
PaddleX's Python deployment does not yet support models trained with PaddleSeg; you can use the interfaces provided by PaddleSeg for deployment.
|
gharchive/issue
| 2021-09-14T06:21:50 |
2025-04-01T04:55:30.683079
|
{
"authors": [
"lyf6",
"will-jl944"
],
"repo": "PaddlePaddle/PaddleX",
"url": "https://github.com/PaddlePaddle/PaddleX/issues/1140",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1773037892
|
Modify COPY-FROM No. 18 static
Modify COPY-FROM No. 18 static
PADDLEPADDLE_PR=54842
@SigureMo, when you have time could you please try dim and ndimension? No matter how I change them directly it won't pass, but switching back to lambda works fine.
The Chinese and English PRs need to link to each other.
|
gharchive/pull-request
| 2023-06-25T03:43:18 |
2025-04-01T04:55:30.691411
|
{
"authors": [
"SigureMo",
"enkilee"
],
"repo": "PaddlePaddle/docs",
"url": "https://github.com/PaddlePaddle/docs/pull/5964",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
391799566
|
create password hash as part of form data
Currently in iron-skillet and other config tools the goal is to set a user account with a new password. The password needs to be converted to an md5 hash before being added to the snippet output.
done via custom jinja2 filters
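For illustration, such a filter body could be sketched like this (a minimal sketch: the function name, template variable, and registration shown in comments are assumptions, not the actual pan-cnc implementation):

```python
import hashlib

def md5_hash(password: str) -> str:
    """Hypothetical filter body: return the hex MD5 digest of a password string."""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

# With Jinja2 available, the function could be registered as a custom filter:
#   env.filters["md5_hash"] = md5_hash
# and then used inside a snippet template as: {{ admin_password | md5_hash }}

print(md5_hash("test"))  # 098f6bcd4621d373cade4e832627b4f6
```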
|
gharchive/issue
| 2018-12-17T16:48:47 |
2025-04-01T04:55:30.729097
|
{
"authors": [
"nembery",
"scotchoaf"
],
"repo": "PaloAltoNetworks/pan-cnc",
"url": "https://github.com/PaloAltoNetworks/pan-cnc/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1615389904
|
New postgres schema based on the production ADCR Oracle DB
PostgreSQL:
schema is now based on the production (ADCR) Oracle schema
Change version to 0.0.13 to match ADCR Oracle
Removed 'SCHEMA' component and added a 'PANDABIGMON' component in pandadb_version
panda_db_init.sh script now also processes PROCEDUREs as well (that were left out before)
Oracle:
Removed SCHEMA component and added the PANDABIGMON component.
I am not 100% sure about the following:
schema/postgres/sqls/patches/next_version.patch.sql - I kept it in as I am not sure that this scheduled job was included or not @tmaeno
I have also updated the pg_PARTITION.sql file from https://raw.githubusercontent.com/PanDAWMS/panda-docs/main/docs/source/database/sql/pg_PARTITION.sql
|
gharchive/pull-request
| 2023-03-08T14:41:19 |
2025-04-01T04:55:30.732534
|
{
"authors": [
"EdwardKaravakis"
],
"repo": "PanDAWMS/panda-database",
"url": "https://github.com/PanDAWMS/panda-database/pull/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
508818005
|
请求api的设置- 版本4.0
这是开发环境的请求地址,我已经设置过测试api,能否直接显示我配置的api域名,在哪里配置
Resolved.
|
gharchive/issue
| 2019-10-18T02:31:03 |
2025-04-01T04:55:30.734980
|
{
"authors": [
"wxbo22"
],
"repo": "PanJiaChen/vue-element-admin",
"url": "https://github.com/PanJiaChen/vue-element-admin/issues/2664",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
116199789
|
Fixed issues
Changed class to "col-lg-10 col-lg-offset-1" in the div container content
Great! Are the issues tracked here?
If yes let me know the ids so we can automatically resolve them when the merge is commited :wink:
I will checkout the PL now :smile:
@joariasl do we need the same changes in the post and tag layouts? :smile:
|
gharchive/pull-request
| 2015-11-10T20:56:16 |
2025-04-01T04:55:30.741348
|
{
"authors": [
"PanosSakkos",
"joariasl"
],
"repo": "PanosSakkos/personal-jekyll-theme",
"url": "https://github.com/PanosSakkos/personal-jekyll-theme/pull/55",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1868269642
|
Provinces of Korean culture didn't receive a core of Korea
In eu4, I played as another country and annexed Korea. Provinces of Korean culture didn't receive a core of Korea in vic2.
Please go to forums, upload affected eu4 save and conversion's log.txt. Some screenshots of the issue would also be helpful.
These github issues are for internal tracking mostly.
|
gharchive/issue
| 2023-08-26T21:13:10 |
2025-04-01T04:55:30.850346
|
{
"authors": [
"Zemurin",
"oliverzzt"
],
"repo": "ParadoxGameConverters/EU4ToVic2",
"url": "https://github.com/ParadoxGameConverters/EU4ToVic2/issues/1001",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
102043879
|
Session token revoked on other devices with password update
Note: I'm using the REST API for my auth implementation as opposed to this example's Cloud Code approach. This may not be an issue in Cloud Code.
I've noticed that with revocable sessions and the workaround of updating a user's password in order to become the user and get the session token, per #7 , my session token is likewise updated which has the effect of invalidating the previous session token that may have been used to log in on other devices.
In the new enhanced sessions blog post, it mentions unique session objects per device, however it seems when logging in from the REST API (and perhaps Cloud Code as well?) there isn't a way to determine which installation referred the login and will just default to updating whatever token is available.
Any ideas on how to create a unique session and or avoid invalidating the current session token that may span across multiple devices? Perhaps there is a way to specify the installationId that the user session should be associated with and as such specify which session token to invalidate upon updating a user's password.
Is your app new? Check the settings page to ensure that revocable sessions are enabled. Additional logins should NOT revoke previous logins (unless your app is not using revocable sessions.)
Thanks. I had revokable sessions enabled, I just missed this setting...
Revoke existing session tokens when user changes password
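For illustration, a revocable-session login via the REST API could be built roughly like this (a minimal sketch: the host, keys, and credentials are placeholders, and the request is constructed but not sent):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_login_request(app_id: str, rest_key: str,
                        username: str, password: str) -> Request:
    """Build (but do not send) a hypothetical revocable-session login request."""
    query = urlencode({"username": username, "password": password})
    return Request(
        "https://api.parse.com/1/login?" + query,
        headers={
            "X-Parse-Application-Id": app_id,  # placeholder credentials
            "X-Parse-REST-API-Key": rest_key,
            "X-Parse-Revocable-Session": "1",  # ask for a revocable session token
        },
    )

req = build_login_request("appId", "restKey", "alice", "s3cret")
print(req.full_url)  # https://api.parse.com/1/login?username=alice&password=s3cret
```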
|
gharchive/issue
| 2015-08-20T02:35:08 |
2025-04-01T04:55:30.871714
|
{
"authors": [
"gfosco",
"kynetiv"
],
"repo": "ParsePlatform/CloudCodeOAuthGitHubTutorial",
"url": "https://github.com/ParsePlatform/CloudCodeOAuthGitHubTutorial/issues/9",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
112783889
|
[query findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) with iOS 7
When I try to use [query findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error)] on iOS 7, it always returns this error: "NSError * domain @ NSCocoaErrorDomain - code 3840"
I think is probably a duplicate of the error I filed yesterday:
https://github.com/ParsePlatform/Parse-SDK-iOS-OSX/issues/448
Duplicate of #448, indeed.
|
gharchive/issue
| 2015-10-22T11:34:14 |
2025-04-01T04:55:30.874343
|
{
"authors": [
"drewlarsen",
"marian19",
"nlutsenko"
],
"repo": "ParsePlatform/Parse-SDK-iOS-OSX",
"url": "https://github.com/ParsePlatform/Parse-SDK-iOS-OSX/issues/449",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
116300409
|
[SKProductsRequest _handleReply:] crash
I'm getting the below crash report through crashlytics.
I'm using Parse through cocoapods. Here's the versions from my Podfile.lock.
Parse (1.7.5.3):
Bolts/Tasks (>= 1.2.0)
All of the crashes are happening on iOS7.
StoreKit
__34-[SKProductsRequest _handleReply:]_block_invoke + 540
Thread : Crashed: com.apple.main-thread
0 libobjc.A.dylib 6704609744 objc_msgSend + 16
1 StoreKit 6552361264 __34-[SKProductsRequest _handleReply:]_block_invoke + 540
2 libdispatch.dylib 6710723616 _dispatch_call_block_and_release + 24
3 libdispatch.dylib 6710723552 _dispatch_client_callout + 16
4 libdispatch.dylib 6710736236 _dispatch_main_queue_callback_4CF + 344
5 CoreFoundation 6503574884 CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE + 12
6 CoreFoundation 6503567524 __CFRunLoopRun + 1452
7 CoreFoundation 6502783800 CFRunLoopRunSpecific + 452
8 GraphicsServices 6597310512 GSEventRunModal + 168
9 UIKit 6553370856 UIApplicationMain + 1156
10 AppName 4296173188 main (main.m:16)
11 libdyld.dylib 6710835872 start + 4
How does this have anything to do with parse? The stack trace doesn't even touch any parse code.
Yup. I would say - since it happens purely on iOS 7 this is an Apple StoreKit.framework bug.
If you have a good repro case - I would more than glad to investigate further and help you out.
I looked into this a bit further. A lot of people are seeing the same issue because the delegate is not being managed correctly.
Take a look here http://stackoverflow.com/questions/24675528/ios-crash-report-skproductsrequest and here, for example: http://stackoverflow.com/questions/3324596/storekit-skproductsrequest-crash.
Could you confirm that you are doing this in your framework?
@ChrisGrant We are doing exactly that in https://github.com/ParsePlatform/Parse-SDK-iOS-OSX/blob/master/Parse/Internal/Product/ProductsRequestHandler/PFProductsRequestHandler.m#L58
Going to send out a Pull Request to make sure we extra safeguard everything here, hopefully it will resolve the problem.
Great thank you!
|
gharchive/issue
| 2015-11-11T09:58:07 |
2025-04-01T04:55:30.882029
|
{
"authors": [
"ChrisGrant",
"hhanesand",
"nlutsenko"
],
"repo": "ParsePlatform/Parse-SDK-iOS-OSX",
"url": "https://github.com/ParsePlatform/Parse-SDK-iOS-OSX/issues/537",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
102033207
|
Remove deprecate Parse-OSX.podspec.
This is no longer needed, since #2 is done.
burninated.gif
lol
|
gharchive/pull-request
| 2015-08-20T00:58:00 |
2025-04-01T04:55:30.883204
|
{
"authors": [
"nlutsenko",
"richardjrossiii"
],
"repo": "ParsePlatform/Parse-SDK-iOS-OSX",
"url": "https://github.com/ParsePlatform/Parse-SDK-iOS-OSX/pull/62",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
133697451
|
fix multiple include
When there are 2 or more include statements, if the current query doesn't contain keys in the first include field, the result will be returned before finding the rest of the include fields.
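The early-return bug can be sketched language-agnostically like this (a minimal sketch in Python for illustration only; the function and argument names are hypothetical, not parse-server's actual code):

```python
def include_all(obj: dict, include_paths: list, fetch) -> dict:
    """Resolve every include path; a missing earlier path must not stop later ones."""
    for path in include_paths:
        if path in obj:
            obj[path] = fetch(obj[path])
        # The buggy behavior was effectively `else: return obj`, which returned
        # the result before the remaining include fields were processed.
    return obj

resolved = include_all({"b": 2}, ["a", "b"], lambda pointer: pointer * 10)
print(resolved)  # {'b': 20}
```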
+1
Looks like a .DS_Store file snuck into your commit. Can you remove that, then squash into a single commit please?
I am sorry for that. This is my first pull request ever and I am thrilled to have an opportunity to contribute.
Cool, congrats on your first PR, and welcome to the open source community!
|
gharchive/pull-request
| 2016-02-15T12:10:58 |
2025-04-01T04:55:30.894357
|
{
"authors": [
"drew-gross",
"flessard",
"mchun"
],
"repo": "ParsePlatform/parse-server",
"url": "https://github.com/ParsePlatform/parse-server/pull/426",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
33433766
|
Make workers configurable within submit_topology.clj
submit_topology is hard-coded to use three workers. Someone might want to change that number.
Closing this issue as we actually did implement the reported issue but there is a separate issue discussed, which is externalization of configuration options via YAML or somesuch. Anyway, not related.
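The externalized configuration that was implemented could look roughly like this (a minimal sketch; the `topology.workers` key name and the default of 3 mirror the issue, but the helper itself is hypothetical):

```python
def worker_count(config: dict, default: int = 3) -> int:
    """Read the worker count from a parsed config mapping, falling back to
    the previously hard-coded value of 3."""
    return int(config.get("topology.workers", default))

print(worker_count({}))                       # 3
print(worker_count({"topology.workers": 8}))  # 8
```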
|
gharchive/issue
| 2014-05-13T20:06:54 |
2025-04-01T04:55:30.895660
|
{
"authors": [
"alexlovelltroy",
"amontalenti"
],
"repo": "Parsely/streamparse",
"url": "https://github.com/Parsely/streamparse/issues/15",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2411587454
|
Modifying log contents in hadoop-tools/hadoop-archive-logs/src/main/java/org/apache/hadoop/tools/HadoopArchiveLogs.java
The following log line LOG.info("Can not find any valid fileControllers."); evaluated against the provided standards:
1. The log line does not include a parameter. It could include the configuration parameter YarnConfiguration.LOG_AGGREGATION_FILE_FORMATS to provide more context.
2. The log line does not include sensitive information.
3. The log message is concise and informative.
4. The log message is not for an exception.
Due to the violation of standard (1), we would recommend a code change to include the configuration parameter in the log message.
Created by Patchwork Technologies.
👎 The additional comment replicates the verbose logging and just makes it default
|
gharchive/pull-request
| 2024-07-16T16:27:05 |
2025-04-01T04:55:30.930818
|
{
"authors": [
"alexmaass"
],
"repo": "Patchwork-Tech/hadoop",
"url": "https://github.com/Patchwork-Tech/hadoop/pull/35",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
614361076
|
Feature: Logging Location via QR Codes
Description:
This feature would allow the integration of QR codes with SafePaths. The idea is that this feature could supplement automatic GPS logging in situations where GPS data might be unavailable or imprecise. Locations which support this feature would use a QR code generator (such as this demo I made) and then display this QR code publicly. People would be able to scan these with their phone. Currently, this supports both deep linking (scanning with an external QR code reader will directly open in the app) and scanning from within the app itself.
Of note, this does not currently support universal linking (a feature which would use a web url to handle the link, which would direct users to the app if they already have it, or to the app store otherwise). I held off on universal linking because there could be privacy concerns there. (I.e. because the QR code has to contain location coordinates as url parameters, technically these would be getting sent to a server somewhere where they could potentially be associated with IP addresses.) Deep linking directly to the app, as I'm doing currently, doesn't have this issue.
Source code for the QR code generator: https://github.com/tyleryasaka/safe-paths-qr-gen
QR code generator demo: https://safepaths-qr-6b479.web.app/
Linked issues:
N/A
Screenshots:
How to test:
1. Testing externally scanned QR codes (deep linking)
For Android, run this command once you have the app running: adb shell am start -W -a android.intent.action.VIEW -d "safepaths://qr/34.56/-123.45" org.pathcheck.covidsafepaths
For iOS, run this command once you have the app running: xcrun simctl openurl booted safepaths://qr/34.56/-123.45
You should see a screen saying that the scan was successful.
Tap the "home" button on this screen to return to the main screen. Now tap the settings icon and export your location history. Examine this file. You should see an entry with coordinates corresponding to the latitude 34.56 and longitude -123.45 (from the dummy url above).
2. Testing QR code scanning within the app
First, go to https://safepaths-qr-6b479.web.app/ and follow the steps to generate a QR code.
Open the app and tap the QR code icon next to the settings icon in the top-right-hand corner of the screen. Scan the QR code generated above (or alternatively, to see the error screen, scan any other QR code). You should see a screen saying that the scan was successful (or that it failed, if you scanned an invalid code).
Tap the "home" button on this screen to return to the main screen. Now tap the settings icon and export your location history. Examine this file. You should see an entry with coordinates corresponding to the location you used when generating the QR code.
@tyleryasaka - Great work, thanks! I've invited you to a slack channel to discuss this feature, but having the implementation basically done makes the discussion a lot easier!
@tremblerz @summetj Thanks for the feedback! I pushed changes to disable flash when scanning QR codes, and moved the QR code scanning button to the settings menu.
I agree that it would be simplest to merge and then iterate on the UI (I'm not a UI expert), as long as the camera permissions issue can be taken care of (as discussed in Slack, we don't want to request camera permissions in the release version which doesn't have this feature).
I like this idea of using abstract identifiers rather than GPS coordinates, and that's essentially what my original project is doing. I went with the GPS coordinates for now because that seemed easiest to integrate with what's already been built. But, adding this feature sounds great to me.
Let me know any further changes I can make to this PR.
Thanks for the review @Patrick-Erichsen . I've addressed your comments and pushed changes.
Follow up question - are you testing the in-app scanning on an actual device?
Follow up question - are you testing the in-app scanning on an actual device?
Yes, I've been testing on my Google Pixel 2. I test what I can in the iOS simulator, but I was not able to figure out how to build it for my actual iPhone device.
Ok, conflicts have now been resolved. I'm just saving the location now by calling NativeModules.SecureStorageManager.importGoogleLocations, which works perfectly well. The only confusing part is that the method name sounds specific to Google imports. I would suggest renaming that method to be generic at a later point, as it does not do anything that is specific to the Google imports as far as I can tell. But I didn't want to touch the native code in this PR.
Closing this for the time being as we rope in UI/UX resources.
|
gharchive/pull-request
| 2020-05-07T21:37:45 |
2025-04-01T04:55:30.941687
|
{
"authors": [
"Patrick-Erichsen",
"summetj",
"tyleryasaka"
],
"repo": "Path-Check/covid-safe-paths",
"url": "https://github.com/Path-Check/covid-safe-paths/pull/778",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
382007385
|
Warnings/Configuration on admin Dashboard
Print some warnings about potential configuration errors, etc...
Currently being worked on in the system-settings branch.
This includes a setup assistant mentioned in #42
Closing in favor of #42
|
gharchive/issue
| 2018-11-18T22:26:55 |
2025-04-01T04:55:30.956126
|
{
"authors": [
"PatrickSachs"
],
"repo": "PatrickSachs/helios",
"url": "https://github.com/PatrickSachs/helios/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|