id (string, length 4-10) | text (string, length 4-2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
2280263136
|
test-branch/execute-job/quoted-authorization/dbbb412a-59ad-4e1c-b21b-819def315ac4
PR for testing be3e3c053f726070187f2f9858f8526871123a64
/canonical/self-hosted-runners/run-workflows be3e3c053f726070187f2f9858f8526871123a64
|
gharchive/pull-request
| 2024-05-06T07:46:54 |
2025-04-01T04:56:15.154131
|
{
"authors": [
"cbartz"
],
"repo": "canonical/repo-policy-compliance",
"url": "https://github.com/canonical/repo-policy-compliance/pull/1458",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1564825167
|
Pending tickets appear for all users
For pending tickets, shouldn't they appear only to their respective queues, so that users from other queues or connections don't see them?
Hello @wammachado,
I'm not sure I understood exactly, but I think it already behaves this way; however, tickets that don't have a queue assigned will always appear to all users.
So, for example:
User Wender is linked to queues A, B and C.
Connection 9999-9999 is linked to queue D.
User Myllena is linked only to queue D.
When Myllena is logged in and opens the pending-tickets tab, she sees the messages waiting in queues A, B, C and D (all queues), even though when a ticket arrives on number 9999-9999 the system already knows those conversations belong to queue D.
Even so, she sees messages waiting in queues that don't belong to her, and that is a problem, because she can end up taking over a conversation that isn't hers.
Here are some example screenshots: the user @.*** does not belong to the Finder IT queue, yet when they log in they see the pending tickets for the Finder IT queue.
[image attachments: 5 screenshots, not rendered]
@wammachado, I can't see the images; I think GitHub doesn't show images attached when replying by e-mail. Could you forward them through GitHub itself?
@Luskan777 I just noticed that the incoming messages really don't have any queue assigned, only a connection. I confused the connection with the queue; the system only identifies the queue if the customer has already been handled by some queue before.
Thanks for the reply.
@Luskan777 Picking this topic back up: is there a way for pending tickets assigned to a specific connection to appear only to the users assigned to that connection?
That is, if a user has no connection assigned, they can see all pending tickets normally.
But if the user is assigned to a specific connection, they only see the pending tickets belonging to that connection.
Hello @wammachado,
I believe not, my friend. From what I could understand of the application, ticket permission is handled only by the queue the ticket is linked to, not by the connection the message is received on. If the ticket isn't linked to any queue, then everyone has permission, regardless of the default connection the user is linked to, at least as far as I know.
Now, if that isn't possible, and I believe it really isn't, it would be an interesting improvement to have. It would turn the connection into a kind of parent entity, so that users linked to connection X would only be allowed to see and work on the tickets that connection X receives, within the queues the user is also assigned to.
I haven't had that need myself; I think ticket permission based on the queue the ticket is linked to works very well.
As long as I link the connections to their respective queues, I will rarely have pending tickets without a queue.
It would indeed be interesting:
1. If the user isn't assigned to any connection, they have access to messages from all connections; but if they are assigned to a connection, they would only see that connection's messages, unless another connection forwarded the message to their department.
2. For example: a support user only has access to the support number's connection, so they can serve the customer, handle the support issue, and then forward it to finance. From then on they no longer have access to the messages unless their connection receives a forward back from finance, because there may be finance-department conversations that support shouldn't see.
In this case, I don't know whether that transfer part would be possible, because it would involve two different connections, that is, two WhatsApp numbers; transferring a conversation from one connection to another wouldn't be possible within the application's infrastructure, since it would mean moving a conversation from one number to another. Transfers are only possible between queues.
In this scenario you would need one general number and separate queues for each department, but that is basically how the application already works.
Looking at it more closely, I'm not sure we have a compelling reason for improvements here. The real problem is that messages with no queue assigned appear to everyone, and that is indeed a problem. It usually happens when you add a connection and don't assign a queue to it, so incoming messages end up without a queue and appear to everyone, pending or not.
However, the application already gives us a way to solve this, which is transferring the ticket to the correct queue. As for pending messages without a queue, as I already mentioned, once you assign the connection to a single queue, all future messages that connection (number) receives will automatically be routed to that queue.
I think I'm having the same problem. I assigned 3 connections to 3 different queues, but when someone sends a message it appears for everyone. Is there any way for a message that person X sends to connection 1 to be seen only by the people assigned to connection 1 and to the queue attached to connection 1?
Understood, that seems right. I'll try to implement it.
It bothers me to hide all tickets without an assigned queue, because some people don't answer the queue-selection bot correctly.
But with @wammachado's suggestion, it's possible to add a filter so that users who have a given connection as their default only see queueless tickets from that connection.
It's useful when WhatsApp numbers have already been published for different departments.
But as @Luskan777 pointed out, it can get confusing if queues are mixed across connections.
TicketsList/index.js
TicketsManager/index.js
I'm testing a frontend approach (sketched below):
If showAll is disabled, only show queueless tickets from the same connection as the user's default.
If showAll is enabled, show everything.
The ticket still comes back in the request to the backend; I just hide it on the frontend.
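A minimal sketch of that filter in TypeScript (field names such as queueId and whatsappId are assumptions for illustration, not the actual whaticket schema):
// Frontend-only filter: queueless tickets are shown only when they belong to
// the user's default connection, unless showAll is enabled.
interface Ticket {
  queueId: number | null;
  whatsappId: number; // connection that received the ticket
}
interface User {
  whatsappId: number | null; // user's default connection, if any
}
function visibleTickets(tickets: Ticket[], user: User, showAll: boolean): Ticket[] {
  if (showAll) return tickets;
  return tickets.filter((ticket) => {
    if (ticket.queueId !== null) return true; // queued tickets follow the normal queue rules
    // Queueless tickets: show only those from the user's default connection.
    return user.whatsappId === null || ticket.whatsappId === user.whatsappId;
  });
}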
|
gharchive/issue
| 2023-01-31T18:43:03 |
2025-04-01T04:56:15.190523
|
{
"authors": [
"Horgun",
"Luskan777",
"codsec",
"wammachado"
],
"repo": "canove/whaticket-community",
"url": "https://github.com/canove/whaticket-community/issues/509",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
114231106
|
User management?
Hi,
I'm unable to find out how we manage users in Huginn as the admin user.
Hey @calimou. We don't currently have a user management UI. You could configure them in the Rails console, which is obviously not ideal.
+1 for this feature.
I've added a bounty to this issue.
I will be working on that as part of my contract with www.kreuzwerker.de
Features we have planned:
List users
Create new users from the admin interface
Edit existing users
Make devise options configurable (lockable variables, confirmable durations, etc.)
Manually de- and reactivate a user (should also stop the agents from running)
One other feature request for something I think would be useful: the ability for admin users to directly make changes to a user's agents.
Awesome @dsander!
@albertsun That is a great idea, I might work on this after we have the initial version merged.
@cantino @dsander It seems like the admin user management panel has already been done, according to the merged PR referenced here. Is this ticket still open because of @albertsun's request?
That should really be moved into a separate issue.
Then I guess this ticket should be closed.
Yup, thanks to @dsander!
|
gharchive/issue
| 2015-10-30T09:24:32 |
2025-04-01T04:56:15.217361
|
{
"authors": [
"Jngai",
"albertsun",
"calimou",
"cantino",
"dsander",
"kirantpatil"
],
"repo": "cantino/huginn",
"url": "https://github.com/cantino/huginn/issues/1121",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1330197978
|
For loop for Map
Hello. I tried to use a Map in a for loop
let keySet = map.keySet();
// doesn't work
for (let key of keySet) {
// ...
}
// doesn't work
for (let key in keySet) {
// ...
}
I have set all the necessary parameters
javetProxyConverter.getConfig().setProxyMapEnabled(true);
javetProxyConverter.getConfig().setProxySetEnabled(true);
Map object
Map<String, Object> map = new HashMap<>() {{
put("x", 1);
put("y", 2);
}};
How can I do this?
I removed
javetProxyConverter.getConfig().setProxyMapEnabled(true);
javetProxyConverter.getConfig().setProxySetEnabled(true);
and everything worked.
But then I don't have access to .keySet(). I need both a for loop over the Map and .keySet(). How can I do that?
Have you tried the bridge converter?
The proxy converter tries to map JS native array, set and object semantics onto their Java counterparts, so getting both the JS and Java behaviors at the same time is very challenging. But you can polyfill the JS object with the Java API.
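As a rough illustration of that polyfill idea, here is a script-side sketch (it assumes the proxy converter exposes the Java keySet() and iterator() methods on the injected map, which should be verified against the Javet docs):
// `map` is assumed to be the injected Java Map, proxied by the proxy converter.
declare const map: any;
// Collect the Java key set into a plain JS array via the Java Iterator API,
// then iterate with a normal for-of loop.
const keys: string[] = [];
const it = map.keySet().iterator();
while (it.hasNext()) {
  keys.push(String(it.next()));
}
for (const key of keys) {
  // ... use key and map.get(key) here
}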
|
gharchive/issue
| 2022-08-05T17:36:50 |
2025-04-01T04:56:15.220444
|
{
"authors": [
"PyXiion",
"caoccao"
],
"repo": "caoccao/Javet",
"url": "https://github.com/caoccao/Javet/issues/176",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1004588266
|
A fatal error has been detected by the Java Runtime Environment
Hello, please see the following error created on a Ubuntu 20.04.2 LTS test server running Wildfly 20 with JDK 11. The OS has 32GB of memory with 50% assigned to Java.
The Javet code version is the latest artifact from GitHub actions for the project, as of today.
I do not have a simple test case; for now I can offer a concise example of the code and the error generated. The method below matches the linked error log, though it is obviously not a full test case.
The logs of my code show "Javet can't setMaxHeapSize, flags are sealed". I didn't notice that in my last comment in the other issue. The monitoring we use for the server doesn't indicate low memory, and the Linux OOM killer is not a factor, though the server is getting hit hard so free memory is reduced. This code is executed many times across three servers via web services that are load balanced.
Thank you for your time and promising project. Please set me straight if this is a matter of a simple coding error. The following code is migrated from J2V8.
private V8Runtime runtime;
public String runMyCalculations(Long deptID, ATUserBO requestUserBO, Date historicalDate, ATUserViewStateDTO viewStateDTO, List requestedGroupStatsFieldList, Map criteria, Map inputData, List<Map<String,Object>> overrides, Boolean saveToFAR, Boolean anonymize, Boolean verboseLogging, String uuid, RAPIDiagnostics rapiDiagnostics) throws Exception {
try {
V8Host v8Host = V8Host.getV8Instance();
V8Flags flags = v8Host.getFlags();
if (!flags.isSealed()) {
flags.setMaxHeapSize(2048);
} else {
logger.error(logPrefix + "Javet can't setMaxHeapSize, flags are sealed");
}
runtime = v8Host.getV8Instance().createV8Runtime();
String calculationOutput = runtime.getExecutor(script).executeString();
} finally {
if (runtime != null) {
logger.debug(logPrefix + "starting release ... ");
runtime.close();
runtime=null;
logger.debug(logPrefix + "done release ... ");
}
}
Link of file hs_err_pid2779254.log
https://drive.google.com/file/d/1iSc4ZVVJ2HjjMGqGG4xkPrZ5_wSSwQ0q/view?usp=sharing
For further clarification, the error seems related to close(). In a nutshell, the above link should make it quick to spot this error in the native code:
R13={method} {0x00007fb3464b98d8} 'closeV8Runtime' '(J)V' in 'com/caoccao/javet/interop/V8Native'
R14=0x00007fb3460cc717: <offset 0x00000000011d0717> in /tmp/javet/2779254/libjavet-v8-linux-x86_64.v.0.9.13.so at 0x00007fb344efc000
R15=0x00007fb38ca4b6d0 points into unknown readable memory: 0x000017f6bc509169 | 69 91 50 bc f6 17 00 00
Thank you for reporting this. It's the first time that Javet was reported to fail with the following call stack.
Stack: [0x00007fb3ad0f9000,0x00007fb3ad1fa000], sp=0x00007fb3ad1f7ab0, free space=1018k
Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libjavet-v8-linux-x86_64.v.0.9.13.so+0x5b7194]
C [libjavet-v8-linux-x86_64.v.0.9.13.so+0x5b7401] v8::Context::SetEmbedderData(int, v8::Local<v8::Value>)+0x21
C [libjavet-v8-linux-x86_64.v.0.9.13.so+0x58fbe9] Javet::V8Runtime::Unregister(v8::Local<v8::Context> const&)+0x69
C [libjavet-v8-linux-x86_64.v.0.9.13.so+0x58f1fe] Javet::V8Runtime::CloseV8Context()+0x80
C [libjavet-v8-linux-x86_64.v.0.9.13.so+0x58f9ea] Javet::V8Runtime::~V8Runtime()+0x2e
C [libjavet-v8-linux-x86_64.v.0.9.13.so+0x58fa50] Javet::V8Runtime::~V8Runtime()+0x1c
C [libjavet-v8-linux-x86_64.v.0.9.13.so+0x56c402] Java_com_caoccao_javet_interop_V8Native_closeV8Runtime+0x64
j com.caoccao.javet.interop.V8Native.closeV8Runtime(J)V+0
j com.caoccao.javet.interop.V8Host.closeV8Runtime(Lcom/caoccao/javet/interop/V8Runtime;)V+42
j com.caoccao.javet.interop.V8Runtime.close(Z)V+20
j com.caoccao.javet.interop.V8Runtime.close()V+13
Possible Root Cause
I suspect there might be an OOM issue as you pointed out. Could you follow the recommendations listed below to help me further analyze what the root cause was?
Recommendations
V8Flags
V8Flags can only be set before any V8 runtime is created. Once set, the settings apply to all V8 runtimes created until the library is unloaded or the JVM is shut down.
V8 GC
The sample code seems to be fine to me. I wonder if you could insert one line of code.
} finally {
if (runtime != null) {
logger.debug(logPrefix + "starting release ... ");
runtime.lowMemoryNotification(); // Enforce a GC.
runtime.close();
runtime=null;
logger.debug(logPrefix + "done release ... ");
}
The major difference between J2V8 and Javet in this case is that J2V8 enforces a GC per invocation whereas Javet doesn't. Javet's position is that applications should get more flexibility in balancing performance and resource usage. In low-pressure scenarios, skipping the GC gives applications better performance. However, that may cause issues similar to what happened in your high-pressure case, so enforcing the GC may help. Also, applications may call V8Flags.setExposeGC(true) to expose gc() in JavaScript so that the JavaScript code can decide when to call gc().
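As a small script-side illustration of that last option (a sketch that assumes the embedder called V8Flags.setExposeGC(true) before creating the runtime, which is what makes a global gc() available to scripts):
// gc() only exists because the embedder exposed it via V8Flags.setExposeGC(true);
// otherwise this call would fail at runtime.
declare function gc(): void;
declare function processItem(item: unknown): void; // hypothetical allocation-heavy work
function runBatch(items: unknown[]): void {
  items.forEach(processItem);
  gc(); // the script decides when to pay the GC cost, e.g. after each batch
}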
Javet Engine Pool
Creating ad-hoc V8 runtimes puts much more pressure on both the JVM and the C++ runtime than the Javet engine pool does, especially when the application is on the edge of exhausting system resources. The Javet engine pool caches V8 runtimes and can be configured to perform GC in a daemon thread. As V8 is single-threaded, if applications explicitly call the GC, the GC blocks the application thread; in your case that could be a considerable performance penalty. By using the Javet engine pool, that performance penalty is gone and applications usually don't need to call close() explicitly any more. Instead, the actual close() is called when the Javet engine pool is being closed. Please refer to this doc for detail.
As you are using Spring, Spring detects that the Javet engine pool implements AutoCloseable and closes it automatically when the Spring application is exiting. Please refer to this doc for detail.
Java GC
I noticed ShenandoahGC was turned on. I suspect the parallel GC might be slightly incompatible with the way Javet works. Could you try some other GC algorithms to see if this issue can be reproduced? Also, could you try -Xlog:gc to analyze the GC log?
-XX:ConcGCThreads=2 -XX:ParallelGCThreads=2 -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC
Support Channel
I suspect there may be trickier things in your application, some of which might damage the memory model. The major reason could be that Javet brings some new concepts and patterns compared to J2V8. I strongly encourage Javet users to read How to Think in Javet? and refactor the code accordingly.
I've helped many application developers complete the migration from J2V8 to Javet. The most efficient route I recommend is to reach out on Discord (for global developers) or WeChat (for Chinese developers). I look forward to seeing you on Discord.
|
gharchive/issue
| 2021-09-22T18:14:00 |
2025-04-01T04:56:15.233324
|
{
"authors": [
"caoccao",
"robertlazarski"
],
"repo": "caoccao/Javet",
"url": "https://github.com/caoccao/Javet/issues/93",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2422164114
|
ci: Add Java snippet checker
Similar to the CDS snippet syntax checker, add a Java version. It only checks for syntax errors using a Java parser written in JavaScript. No code is compiled at any point.
Unrelated build failure:
Found 1 broken link(s) to internal targets in 1 source(s):
in: /about/features
Unresolved hash link: /advanced/odata #odata-v2-proxy-node
|
gharchive/pull-request
| 2024-07-22T07:34:40 |
2025-04-01T04:56:15.234987
|
{
"authors": [
"bugwelle"
],
"repo": "cap-js/docs",
"url": "https://github.com/cap-js/docs/pull/1125",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2100512035
|
[IOS] [FirebaseAnalytics][I-ACS013000] Screen parameter value must be of type NSString: (nil)
Describe the bug
I'm facing an issue related to
FirebaseAnalytics.setScreenName({
screenName: 'The screen name',
});
What I see in logs
10.15.0 - [FirebaseAnalytics][I-ACS013000] Screen parameter value must be of type NSString: (nil)
10.15.0 - [FirebaseAnalytics][I-ACS031028] Received invalid class for screen: (nil)
To Reproduce
Steps to reproduce the behavior:
Run setScreenName method with some string value
You won't see an error in your app, but you will see this error in the Xcode logs. And eventually you won't receive the screen_view event.
Expected behavior
The library shouldn't have this error.
Screenshots
Smartphone (please complete the following information):
Device: iPhone 12 Pro
OS: 17.2.1
Additional context
Android handles this without any errors.
Also, I had a chance to debug this a little bit and maybe it will be useful:
I removed nameOverride to check if it can resolve the issue.
Same problem here. I tried downgrading this plugin to 4.0 and running pod install, and got the same result.
If someone can help us 🙏
|
gharchive/issue
| 2024-01-25T14:25:36 |
2025-04-01T04:56:15.241893
|
{
"authors": [
"Sulorb",
"edshkliaruk"
],
"repo": "capacitor-community/firebase-analytics",
"url": "https://github.com/capacitor-community/firebase-analytics/issues/175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
97456658
|
Locking your gem version for Capistrano v2 does not work!
In my project's Gemfile I have set:
gem 'capistrano', '~> 2.15'
I also tried bundle install. My Gemfile.lock is:
GEM
remote: http://rubygems.org/
specs:
capistrano (2.15.6)
highline
net-scp (>= 1.0.0)
net-sftp (>= 2.0.0)
net-ssh (>= 2.0.14)
net-ssh-gateway (>= 1.1.0)
highline (1.7.2)
magentify (0.1.0)
capistrano (~> 2, >= 2.5.10)
net-scp (1.2.1)
net-ssh (>= 2.6.5)
net-sftp (2.1.2)
net-ssh (>= 2.6.5)
net-ssh (2.9.2)
net-ssh-gateway (1.2.0)
net-ssh (>= 2.6.5)
PLATFORMS
ruby
DEPENDENCIES
capistrano (~> 2.15.6)
magentify
But it still tries to use v3.
cap
(Backtrace restricted to imported tasks)
cap aborted!
NoMethodError: undefined method `map' for :except:Symbol
/Users/ccarnell/Sites/xxx/Capfile:3:in `load'
/Users/ccarnell/Sites/xxxCapfile:3:in `<top (required)>'
(See full trace by running task with --trace)
cap -v
Capistrano Version: 3.4.0 (Rake Version: 10.2.2)
gem list
*** LOCAL GEMS ***
addressable (2.3.6)
bigdecimal (1.2.0)
bundle (0.0.1)
bundler (1.6.0)
capifony (2.8.1)
capistrano (3.4.0, 2.15.6, 2.15.5)
capistrano-composer (0.0.6)
capistrano-ext (1.2.1)
capistrano-file-permissions (0.1.1)
capistrano-maintenance (0.0.3)
capistrano-rsync (1.0.2)
capistrano-scm-gitcopy (0.0.8)
capistrano-symfony (0.4.0)
capistrano-withrsync (0.1.2)
capistrano-wp (0.4.10)
capistrano_rsync_with_remote_cache (2.4.0)
CFPropertyList (2.2.8)
chunky_png (1.3.1)
colored (1.2)
colorize (0.7.7)
compass (0.12.6)
erubis (2.7.0)
fssm (0.2.10)
highline (1.7.2, 1.7.1, 1.6.21)
i18n (0.6.9)
inifile (3.0.0)
io-console (0.4.2)
json (1.7.7)
launchy (2.4.2)
libxml-ruby (2.6.0)
lunchy (0.7.0)
magentify (0.1.0, 0.0.7)
minitest (4.3.2)
net-scp (1.2.1, 1.1.2)
net-sftp (2.1.2)
net-ssh (2.9.2, 2.7.0)
net-ssh-gateway (1.2.0)
nokogiri (1.5.6)
psych (2.0.0)
puppet-lint (1.1.0)
railsless-deploy (1.1.3)
rake (10.2.2, 0.9.6)
rdoc (4.0.0)
ruby-progressbar (1.4.1)
sass (3.2.19)
sqlite3 (1.3.7)
sshkit (1.3.0)
term-ansicolor (1.3.0)
test-unit (2.0.0.0)
tins (1.0.1)
Hey @craigcarnell
I don't know anything about this, but the top result on Google for that error pointed me to this issue, which points to a bug in Magentify up to version 0.7:
http://stackoverflow.com/questions/20754060/capistrano-undefined-method-map
Are you picking up an old version of Magentify somehow?
Could you post the full stack trace that results from cap --trace?
Do you need to call bundle exec cap instead of just cap?
Whatever is going on is not a Capistrano bug. You didn't post your command line, but if you are not running bundle exec ________________ then you have no guarantees that your shell is finding the binaries as defined in your bundle!
|
gharchive/issue
| 2015-07-27T12:54:02 |
2025-04-01T04:56:15.260026
|
{
"authors": [
"craigcarnell",
"leehambley",
"robd"
],
"repo": "capistrano/capistrano",
"url": "https://github.com/capistrano/capistrano/issues/1471",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1090068634
|
Ruby 3.1: Can't run rubocop on capistrano repo
Steps to reproduce
Install Ruby 3.1.
In the capistrano repo, run bundle install.
Run bundle exec rubocop
Expected behavior
Rubocop should run successfully.
Actual behavior
$ ruby -v
ruby 3.1.0p0 (2021-12-25 revision fb4df44d16) [x86_64-darwin20]
$ bundle exec rubocop
wrong number of arguments (given 5, expected 1)
ruby/3.1.0/psych.rb:323:in `safe_load'
ruby/gems/3.1.0/gems/rubocop-0.48.1/lib/rubocop/config_loader.rb:192:in `yaml_safe_load'
ruby/gems/3.1.0/gems/rubocop-0.48.1/lib/rubocop/config_loader.rb:175:in `load_yaml_configuration'
Seems like the solution is to upgrade to a more recent version of rubocop.
|
gharchive/issue
| 2021-12-28T18:45:43 |
2025-04-01T04:56:15.262348
|
{
"authors": [
"mattbrictson"
],
"repo": "capistrano/capistrano",
"url": "https://github.com/capistrano/capistrano/issues/2096",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
172525762
|
When and how are the deployment collections created in MongoDB?
I was experimenting with the Jenkins plugin for deployment, but I found that the required collections were not present in the DB, so the deployment data was never saved. Will the Jenkins plugin work for the deployment collector without running a deployment collector first?
Yes, the Jenkins plugin for deployment will work; the API actually creates a collector in the background.
|
gharchive/issue
| 2016-08-22T18:48:28 |
2025-04-01T04:56:15.264624
|
{
"authors": [
"raynaya",
"tabladrum"
],
"repo": "capitalone/Hygieia",
"url": "https://github.com/capitalone/Hygieia/issues/761",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
377495410
|
Document how to use with GCP
Would be great to have a docs section for how to get started with the Google Cloud Platform plugin similar to the Azure section of the docs.
agreed, we need a getting started with gcp section.
|
gharchive/issue
| 2018-11-05T17:06:54 |
2025-04-01T04:56:15.265482
|
{
"authors": [
"Graham42",
"kapilt"
],
"repo": "capitalone/cloud-custodian",
"url": "https://github.com/capitalone/cloud-custodian/issues/3107",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1077041261
|
recognize singlequote strings
Not sure if the syntax is correct.
The skip=+\\\\\|\\"+ part you have on double-quoted strings is quite confusing.
But anyway, it would be nice to also recognize single-quoted strings as strings.
You're right.
Thanks for your contribution!
|
gharchive/pull-request
| 2021-12-10T16:50:53 |
2025-04-01T04:56:15.289830
|
{
"authors": [
"afreakk",
"cappyzawa"
],
"repo": "cappyzawa/starlark.vim",
"url": "https://github.com/cappyzawa/starlark.vim/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
439602892
|
Downgrade React Redux
Opening this to get a Netlify build to compare perf.
Seems to be about the same order of magnitude perf for the marquee step case.
|
gharchive/pull-request
| 2019-05-02T13:52:08 |
2025-04-01T04:56:15.306211
|
{
"authors": [
"captbaritone"
],
"repo": "captbaritone/webamp",
"url": "https://github.com/captbaritone/webamp/pull/773",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
219010922
|
chore(test): replace es6-weak-map with core-js
Changed
Usage of es6-weak-map (used for tests to support PhantomJS) with core-js/modules/es6.weak-map.
Testing / Reviewing
Testing should make sure no test is broken.
👍 LGTM
👍
|
gharchive/pull-request
| 2017-04-03T17:49:11 |
2025-04-01T04:56:15.311689
|
{
"authors": [
"asudoh",
"chrisdhanaraj",
"hellobrian"
],
"repo": "carbon-design-system/carbon-components",
"url": "https://github.com/carbon-design-system/carbon-components/pull/15",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1147030030
|
fix(percy): fix for percy false positives
Related Ticket(s)
No related issue
Description
This is another attempt to fix the false positives in percy snapshots.
Changelog
Changed
Add cy.wait before every percy snapshot (see the sketch after this list)
Updated translation fixture to be the pre-transformed output
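A minimal sketch of that pattern in a Cypress spec (the route, wait duration, and snapshot name are illustrative assumptions):
// Wait briefly before taking a Percy snapshot so late-loading content settles
// and does not produce visual-diff false positives.
describe('homepage', () => {
  it('matches the visual baseline', () => {
    cy.visit('/');                // illustrative route
    cy.wait(1000);                // let animations/async content settle
    cy.percySnapshot('homepage'); // provided by @percy/cypress
  });
});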
@kennylam @annawen1 @ariellalgilmore @emyarod @IgnacioBecerra this looks much more stable now, I think this is ready for review.
|
gharchive/pull-request
| 2022-02-22T15:09:11 |
2025-04-01T04:56:15.313906
|
{
"authors": [
"jeffchew"
],
"repo": "carbon-design-system/carbon-for-ibm-dotcom-nextjs-test",
"url": "https://github.com/carbon-design-system/carbon-for-ibm-dotcom-nextjs-test/pull/200",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2038336608
|
[Module not found]: Compiling error when using DDSLightboxVideoPlayerContainer
Description
For our project, we use the DDSLightboxVideoPlayerContainer by importing it like so:
import DDSLightboxVideoPlayerContainer from '@carbon/ibmdotcom-web-components/es/components-react/lightbox-media-viewer/lightbox-video-player-container'
and we use it like so:
<DDSLightboxVideoPlayerContainer video-id={src} open={open} />
All works well when using "@carbon/ibmdotcom-web-components": "^1.37.0".
However, when we update to "@carbon/ibmdotcom-web-components": "^2.0.0", we encounter this error:
Module not found: Can't resolve '../mixins/on'
https://nextjs.org/docs/messages/module-not-found
Import trace for requested module:
./node_modules/@carbon/ibmdotcom-web-components/lib/components-react-node/lightbox-media-viewer/lightbox-video-player-container.js
./components/LightboxMediaViewer/index.tsx
./app/(home)/HomeLeadSpace/index.tsx
○ Compiling /(home)/page ...
⨯ ./node_modules/@carbon/web-components/lib/globals/wrappers/createReactCustomElementType.js:12:0
any thoughts on what might be causing this issue?
Screenshot
Component(s) impacted
DDSLightboxVideoPlayerContainer / C4DLightboxVideoPlayerContainer
Browser
Chrome
Carbon for IBM.com version
2.0.0
Severity
Severity 1 = The design is broken in a critical way that blocks users from completing tasks or damages the brand. Affects major functionality, no workaround.
Application/website
ibm.com/quantum
Package
@carbon/ibmdotcom-web-components
CodeSandbox example
(the storybook is using 1.38)
Steps to reproduce the issue (if applicable)
No response
Release date (if applicable)
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
[X] I checked the current issues for duplicate issues
Facing similar issue for Footer component:
unhandledRejection Error: Cannot find module '/Users/spandan.banerjee/Projects/DSTM/node_modules/@carbon/ibmdotcom-web-components/lib/components-react-node/footer/footer-container'
at createEsmNotFoundErr (node:internal/modules/cjs/loader:1171:15)
at finalizeEsmResolution (node:internal/modules/cjs/loader:1159:15)
at resolveExports (node:internal/modules/cjs/loader:584:14)
at Module._findPath (node:internal/modules/cjs/loader:658:31)
at Module._resolveFilename (node:internal/modules/cjs/loader:1120:27)
at /Users/spandan.banerjee/Projects/DSTM/node_modules/next/dist/server/require-hook.js:55:36
at Module._load (node:internal/modules/cjs/loader:975:27)
at Module.require (node:internal/modules/cjs/loader:1225:19)
at mod.require (/Users/spandan.banerjee/Projects/DSTM/node_modules/next/dist/server/require-hook.js:65:28)
at require (node:internal/modules/helpers:177:18) {
type: 'Error',
code: 'MODULE_NOT_FOUND',
path: '/Users/spandan.banerjee/Projects/DSTM/node_modules/@carbon/ibmdotcom-web-components/package.json'
}
also same problems when using c4d image
|
gharchive/issue
| 2023-12-12T18:27:23 |
2025-04-01T04:56:15.323003
|
{
"authors": [
"bluephoeniiix",
"sban2009",
"techtolentino"
],
"repo": "carbon-design-system/carbon-for-ibm-dotcom",
"url": "https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues/11221",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1112853889
|
[npm vulnerabilities]: dependency marked needs to be update
Description
There is a high npm vulnerability associated with the package marked: ^2.0.0.
https://github.com/carbon-design-system/carbon-for-ibm-dotcom/blob/0026aa45d9033f049902fbc790bdb33797a90bb7/packages/utilities/package.json#L56
npm requires marked version 4.0.10 or greater, in which the vulnerability is patched.
Component(s) impacted
N/A
Browser
No response
Carbon for IBM.com version
1.11.0
Severity
Severity 4 = The problem is not visible to or noticeable to an average user. Affects minor functionality, no workaround needed.
Application/website
cloud.ibm.com/docs
Package
@carbon/ibmdotcom-web-components
CodeSandbox example
N/A
Steps to reproduce the issue (if applicable)
No response
Release date (if applicable)
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
[X] I checked the current issues for duplicate issues
Hi @markkulube! We have a PR open for this https://github.com/carbon-design-system/carbon-for-ibm-dotcom/pull/8037
@ariellalgilmore thanks.
Closing this issue, as the resolution is in PR #8037.
|
gharchive/issue
| 2022-01-24T16:15:26 |
2025-04-01T04:56:15.329752
|
{
"authors": [
"ariellalgilmore",
"markkulube"
],
"repo": "carbon-design-system/carbon-for-ibm-dotcom",
"url": "https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues/8130",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
491407034
|
feat(tutorial): complete step 3
Closes #
{{short description}}
Changelog
New
{{new thing}}
Changed
{{change thing}}
Removed
{{removed thing}}
Hi, I have updated my code. Please have a look again. Thanks!
Also, how about we update step 3 of the tutorial? It's just a suggestion, because I found other students still make mistakes on this step.
|
gharchive/pull-request
| 2019-09-10T01:53:39 |
2025-04-01T04:56:15.332317
|
{
"authors": [
"ammilly"
],
"repo": "carbon-design-system/carbon-tutorial",
"url": "https://github.com/carbon-design-system/carbon-tutorial/pull/2638",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
680354716
|
fix(connect): look at store in initialization
Related Ticket(s)
Refs #3589.
Description
This change ensures that <dds-footer-container>, etc. properties are populated from the Redux store when the component is initialized, in addition to reacting to changes in the Redux store.
The lack of such a mechanism caused empty footer data when a new instance of <dds-footer-container> was created after the Redux store had already loaded all data from the service.
Changelog
New
"Store to component props" logic at component initialization in <dds-footer-container>, etc.
@asudoh fyi, I opened up PR https://github.com/carbon-design-system/ibm-dotcom-library/pull/3595 that has update snapshot files to fix the CI issues.
Superseded by: #3595
|
gharchive/pull-request
| 2020-08-17T16:17:04 |
2025-04-01T04:56:15.361541
|
{
"authors": [
"asudoh",
"jeffchew"
],
"repo": "carbon-design-system/ibm-dotcom-library",
"url": "https://github.com/carbon-design-system/ibm-dotcom-library/pull/3590",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2434534477
|
Datagrid tooltip component hover issue
Package
Carbon for IBM Products
Description
Create a Datagrid table.
Add a tooltip component next to the header name.
Expected: on hover of the tooltip, the tooltip description should show up.
Current behavior: tooltip hover doesn't show anything.
https://github.com/user-attachments/assets/19b69d10-1bda-4086-abb8-77217e2484fe
Component(s) impacted
Datagrid
Browser
No response
@carbon/ibm-products (previously @carbon/ibm-cloud-cognitive) version
"@carbon/ibm-products": "2.40.0",
Suggested Severity
Severity 2 = Aspects of design is broken, and impedes users in a significant way, but there is a way to complete their tasks. Affects major functionality, has a workaround.
Product/offering
NA
CodeSandbox or Stackblitz example
https://codesandbox.io/p/sandbox/datagridfilter-forked-26hqww?file=%2Fsrc%2FExample.js%3A41%2C25
Steps to reproduce the issue (if applicable)
No response
Release date (if applicable)
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
[X] I checked the current issues for duplicate issues
There is one more issue: when we have a component rendered as a header and we try to resize, there is no ellipsis. Video attached below for reference.
https://github.com/user-attachments/assets/e0eb2076-3660-4d1a-9713-1ec738331cf1
This tooltip hover works with autoAlign in Chrome but not in Safari.
Hi @marinamas, can you try using the autoAlign prop now? We've made some dependency updates which have included @carbon/reacts support for this.
We've been investigating TanStack Table, a third-party, open-source offering that provides extensive data table capabilities surpassing what our Carbon Datagrid offers. It provides much more flexibility and customization. TanStack Table is headless, which means it can easily be added alongside the Datagrid component in your product or application. The benefits of more flexibility for product teams and less maintenance for Carbon make it a win-win. Lastly, it is available in multiple frameworks including React and Web Components, so it provides an option to non-React product teams.
For these reasons, we have decided to transition from building our own custom table component to an example-based approach with TanStack Table. Datagrid will still exist in our library for existing teams, but we are announcing the deprecation* of the Datagrid component in the v2.54.0 release so teams can begin to work through the transition. Details about how to use both Datagrid and TanStack together can be found here.
*Deprecation means that no new features will be added however sev 1 and sev 2 bugs will be supported.
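For reference, a minimal headless TanStack Table setup in React looks roughly like this (the row shape, column definitions, and rendered markup are illustrative assumptions, not taken from the Carbon docs):
import {
  createColumnHelper,
  flexRender,
  getCoreRowModel,
  useReactTable,
} from '@tanstack/react-table';
// Illustrative row shape; real applications define their own.
type Person = { name: string; role: string };
const columnHelper = createColumnHelper<Person>();
const columns = [
  columnHelper.accessor('name', { header: 'Name' }),
  columnHelper.accessor('role', { header: 'Role' }),
];
export function PeopleTable({ data }: { data: Person[] }) {
  // TanStack Table is headless: it computes the table model, and the consumer
  // renders the markup (for example with Carbon table styles or components).
  const table = useReactTable({ data, columns, getCoreRowModel: getCoreRowModel() });
  return (
    <table>
      <thead>
        {table.getHeaderGroups().map((hg) => (
          <tr key={hg.id}>
            {hg.headers.map((h) => (
              <th key={h.id}>{flexRender(h.column.columnDef.header, h.getContext())}</th>
            ))}
          </tr>
        ))}
      </thead>
      <tbody>
        {table.getRowModel().rows.map((row) => (
          <tr key={row.id}>
            {row.getVisibleCells().map((cell) => (
              <td key={cell.id}>{flexRender(cell.column.columnDef.cell, cell.getContext())}</td>
            ))}
          </tr>
        ))}
      </tbody>
    </table>
  );
}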
|
gharchive/issue
| 2024-07-29T06:08:00 |
2025-04-01T04:56:15.370919
|
{
"authors": [
"ljcarot",
"marinamas",
"matthewgallo"
],
"repo": "carbon-design-system/ibm-products",
"url": "https://github.com/carbon-design-system/ibm-products/issues/5743",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
53620601
|
Update Android SDK with latest source code
Guys,
First of all, thank you for your awesome API.
I realize that some EXTRAs are not available in the latest SDK (3.2.0), but they are available in the source code (https://github.com/card-io/card.io-Android-source).
Properties like EXTRA_HIDE_CARDIO_LOGO, EXTRA_SCAN_INSTRUCTIONS, etc.
Would you update the build with this new source code, please?
Regards
We're going to update the official build with a 4.0 release, but we're doing some update work in sync with that. Once the package is ready, we'll release it.
Hi Tom
Here are my updated results:
"RESULT_SCAN_SUPPRESSED" is a result code passed to the onActivityResult() method, not something we can pass to CardIOActivity. This means the "capture image only" mode is not available; you can only scan a payment card.
Even though we can set the EXTRA_USE_CARDIO_LOGO param, this version can only switch between the "card.io" logo and the "PayPal" logo, not hide them.
Best Regards
Xiao Lu
Thank you Tom,
We will wait. Do you have an estimate of a new release date?
Yeah, I'm interested in the release date too.
|
gharchive/issue
| 2015-01-07T10:58:06 |
2025-04-01T04:56:15.381896
|
{
"authors": [
"andresmafra",
"shawnlu123",
"tomwhipple"
],
"repo": "card-io/card.io-Android-SDK",
"url": "https://github.com/card-io/card.io-Android-SDK/issues/38",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
224331710
|
I get it, thank you
General information
SDK/Library version:
iOS Version and Device:
Integration type and version:
Issue description
You might take a look at the code/fork and discussion at https://github.com/card-io/card.io-iOS-source/issues/88
|
gharchive/issue
| 2017-04-26T03:14:58 |
2025-04-01T04:56:15.384246
|
{
"authors": [
"josharian",
"xiesha"
],
"repo": "card-io/card.io-iOS-SDK",
"url": "https://github.com/card-io/card.io-iOS-SDK/issues/250",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2446312671
|
🛑 mainnet - API is down
In 0a10a8f, mainnet - API (https://api.beta.explorer.cardano.org/api/v1/epochs) was down:
HTTP code: 200
Response time: 672 ms
Resolved: mainnet - API is back up in 66f6662 after 14 minutes.
|
gharchive/issue
| 2024-08-03T11:41:13 |
2025-04-01T04:56:15.386945
|
{
"authors": [
"cf-web3-team-integrations"
],
"repo": "cardano-foundation/cf-explorer-status",
"url": "https://github.com/cardano-foundation/cf-explorer-status/issues/316",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
854597540
|
Non-repeatable build / broken dependency propagation
I'm not sure exactly what's up here, but cargo2nix appears broken.
I've been building in CI, running to successful completion. Checking out master and issuing nix-build pulls everything from the binary cache, with no build, as expected.
However:
echo " " >> ./crates/xrfdc/Cargo.toml
cargo clean
cargo generate-lockfile
cargo2nix -f
nix-build .
results in the error below. Cargo is presumably unhappy with the metadata within encoding_rs.
I haven't bottomed out the cause yet (otherwise there would be an accompanying PR).
The fact that this has built successfully before suggests there is an ordering-type bug at play here, or that cargo is emitting metadata which renders the build invalid.
Compiling async-graphql v2.8.2 (/build/async-graphql-2.8.2)
{
"reason": "compiler-message",
"package_id": "xrfdc 0.1.0 (path+file:///build/source)",
"target": {
"kind": [
"lib"
],
"crate_types": [
"lib"
],
"name": "xrfdc",
"src_path": "/build/source/src/lib.rs",
"edition": "2018",
"doc": true,
"doctest": true,
"test": true
},
"message": {
"rendered": "error[E0460]: found possibly newer version of crate `encoding_rs` which `async_graphql` depends on
--> src/schema.rs:2:5
|
2 | use async_graphql::*;
| ^^^^^^^^^^^^^
|
= note: perhaps that crate needs to be recompiled?
= note: the following crate versions were found:
crate `encoding_rs`: /nix/store/md0bfvsf39ch8vki443wsib7sc44vcjd-crate-encoding_rs-0.8.28/lib/libencoding_rs.rlib
crate `async_graphql`: /nix/store/akgk0qdl4fzn32mhpg54zwjgxh7nzc9a-crate-async-graphql-2.8.1/lib/libasync_graphql.rlib
",
"children": [
{
"children": [],
"code": null,
"level": "note",
"message": "perhaps that crate needs to be recompiled?",
"rendered": null,
"spans": []
},
{
"children": [],
"code": null,
"level": "note",
"message": "the following crate versions were found:
crate `encoding_rs`: /nix/store/md0bfvsf39ch8vki443wsib7sc44vcjd-crate-encoding_rs-0.8.28/lib/libencoding_rs.rlib
crate `async_graphql`: /nix/store/akgk0qdl4fzn32mhpg54zwjgxh7nzc9a-crate-async-graphql-2.8.1/lib/libasync_graphql.rlib",
"rendered": null,
"spans": []
}
],
"code": {
"code": "E0460",
"explanation": null
},
"level": "error",
"message": "found possibly newer version of crate `encoding_rs` which `async_graphql` depends on",
"spans": [
{
"byte_end": 31,
"byte_start": 18,
"column_end": 18,
"column_start": 5,
"expansion": null,
"file_name": "src/schema.rs",
"is_primary": true,
"label": null,
"line_end": 2,
"line_start": 2,
"suggested_replacement": null,
"suggestion_applicability": null,
"text": [
{
"highlight_end": 18,
"highlight_start": 5,
"text": "use async_graphql::*;"
}
]
}
]
}
}
{
"reason": "compiler-message",
"package_id": "xrfdc 0.1.0 (path+file:///build/source)",
"target": {
"kind": [
"lib"
],
"crate_types": [
"lib"
],
"name": "xrfdc",
"src_path": "/build/source/src/lib.rs",
"edition": "2018",
"doc": true,
"doctest": true,
"test": true
},
"message": {
"rendered": "error: aborting due to previous error
",
"children": [],
"code": null,
"level": "error",
"message": "aborting due to previous error",
"spans": []
}
}
{
"reason": "build-finished",
"success": false
}
cargo generate-lockfile
cargo2nix -f
This step is expected to be non-reproducible as the package index will be consulted. Still, the build should succeed if cargo can succeed (Definitely point out if this is not the case).
Among search results, this link seems to point in the right direction:
It produces a crate with incompatible symbol names (a "different version"). There's a "strict version hash" derived by hashing a representation of the structure of a crate's entire external interface. That hash value is appended to every symbol name as part of the name mangling process.
A mechanism I could believe to be at work here is Nix evaluation thinking a dependency doesn't need a rebuild when it actually does. The compiler catches the error due to the wrong interface hashes. To simplify: we might be ignoring part of the input when writing the Nix expressions. If this theory is correct, cleaning out the store paths should result in Nix not finding the (broken) cached rlib and the build succeeding. Can you give this a shot?
You can find the paths that might need rebuilding using a similar workflow:
nix-store -q --deriver /nix/store/32d9pc56axxrc2830q5cz1iwifjbar2m-crate-cargo2nix-0.9.0-bin
nix show-derivation /nix/store/ddwgw69slmx1ajdcyjrj2z53c4kfw3ml-crate-cargo2nix-0.9.0.drv | grep '/nix.*crate.*drv'
"/nix/store/ddwgw69slmx1ajdcyjrj2z53c4kfw3ml-crate-cargo2nix-0.9.0.drv": {
"/nix/store/3inhij8bmn3qzpn1m18j2f48bvwlil2r-crate-anyhow-1.0.35.drv": [
"/nix/store/3l1la0qrsgwyzx6nvpxlb8kz8l98jgi0-crate-semver-0.9.0.drv": [
"/nix/store/9lid9ypd34zxyp697ajx92qwn9bkzshf-crate-tera-1.5.0.drv": [
"/nix/store/ixv0163kk08nyjwjk22p2z990z4335wy-crate-cargo-0.48.0.drv": [
"/nix/store/jjkvilw9283576lxysgkyrng2v5myf33-crate-tempfile-3.1.0.drv": [
"/nix/store/js8lidv52qbdflgnjrqbv9hnhairsd66-crate-colorify-0.2.3.drv": [
"/nix/store/l5l4anxvgpj6kxvp9h3g2dgi37glabql-crate-serde-1.0.118.drv": [
"/nix/store/n08g1s2v3kgcir8z06f45hn2a32ybm0k-crate-toml-0.5.7.drv": [
"/nix/store/qyivqx0j3g4gikbf7y9gn1wfan1xl0ds-crate-pathdiff-0.2.0.drv": [
"/nix/store/vgdpa0vxdw1yh88xbgmbdvijhm2mdl95-crate-cargo-platform-0.1.1.drv": [
To confirm, the build does not succeed. I'll have a go with what you've suggested; thanks for the pointer. It'll be next week now before I get a chance.
Assuming cargo build works but nix build fails. Please correct me if that's not the case.
Yes, cargo works fine, nix does not. I can simplify the failure case to:
git checkout master
# confirm build is as CI - everything is pulled from binary cache
nix-build .
# Invalidate build for 1 of the crates in workspace
echo " " >> ./crates/xrfdc/Cargo.toml
nix-build .
The resultant error message is then on a downstream crate:
found possibly newer version of crate `cvt` which `xrfclk` depends on
So here we have crates xrfdc and xrfclk, which are both dependencies of a downstream binary crate. xrfdc builds correctly with the whitespace-modified toml.
I can confirm that deleting all store paths, then rebuilding with nix-build . --option substitute false results in a successful build
Guessing that you've already implemented flushing the CI cache as a workaround for now.
Based on first example, there's strong evidence that the newly generated Cargo.nix did not react to the "modified" Cargo.toml.
Question is, how does the inconsequential change to the Cargo.toml result in rustc deciding that the rlib is no longer good...
Our workaround was not a subtle one: CI is ignoring all entries in the binary cache containing our crates. We warm it up by building hello-world with the cache for all cross-compilation targets, to avoid rebuilding the universe.
I'm equally baffled. I don't understand the internals of cargo sufficiently to know where to inspect the metadata.
My leading (only) candidate for this is a single binary crate, which shares many of the dependencies of the failing ones. It's the only override aside from some fairly straightforward sys crates that include build inputs on the libraries. We have
my_bin = pkgs.rustBuilder.rustLib.makeOverride {
name = "my_bin";
overrideAttrs = drv: {
# Rust flags were being ignored: this is a workaround.
configureCargo = ''
mkdir -p .cargo
cargo_config=${./.cargo/config.toml}
cp $cargo_config .cargo/$(stripHash $cargo_config)
'';
};
};
This is being done to set linkage flags for static cross-compilation. I'm unaware of any other way to inject these in cargo2nix; I'd love to be shown to be incorrect.
[target."aarch64-unknown-linux-musl"]
linker = "aarch64-unknown-linux-musl-ld"
rustflags = ["-C", "target-feature=+crt-static", "-C", "link-arg=-s"]
Building the dependencies for this target will use different flags than building those dependencies for other targets, which, if encoded in metadata, would cause what I'm seeing if those dependencies were reused. I don't have a great handle on what metadata cargo stores, but I'd believe link flags are amongst it.
Does this seem likely? If so, what would be a better method for injecting these rust flags for this target and its deps?
We've come to hit this issue as well. We use crates2nix however.
@Fuuzetsu Is there a related issue in that project? Also, are you doing cross compilation?
@Fuuzetsu Is there a related issue in that project? Also, are you doing cross compilation?
No issue on cargo2nix that I know of.
w.r.t. reproduction, sadly it's a closed source project that has been working well up until recently when we hit this issue. We have a workspace with maybe like 20 crates (not at PC to check) and when we added one more, the new crate is having the problem. I found this ticket by Googling.
We are not doing cross compilation.
Sorry, first line meant to say crate2nix, not cargo2nix
@Fuuzetsu Are you able to share how you invoke crate2nix?
My working assumption for this error is its something in the manner in which I'm setting the linkage flags not propagating correctly. However, that could be completely unrelated, and there is some more fundamental bug here.
The way this first manifested itself for us was very similar to what your describe - we've 15 or so crates in a workspace (closed source), and adding a new one caused this problem.
We're not doing anything strange with the crates I think: I am adding things like buildInputs to a bunch and setting some env vars to the others but once we get past that, it's fairly vanilla. I'm not messing with linking (except for a single crate where I use extraLinkFlags to deal with https://github.com/NixOS/nixpkgs/issues/119382).
I don't really think it's an issue with how we're invoking the builder nor with what you're doing. I initially was blaming non-determinism of rustc but it doesn't explain why changing the whitespace in the upstream crate makes it work sometimes... The issue happening when a crate is added is also very strange: they ought to be just separate derivations, I don't understand why they would be affecting each other.
Honestly I'm a bit lost as to how to deal with this. I was going to try and examine what's happening during the build in detail when I got some time but I haven't gotten to that point yet. If we have a way to replicate the issue then change some whitespace and have a working build, it should presumably be possible to examine what rustc is doing during the build and find the difference...
Oh, I wanted to add something. One thing that differs between, say, my machine where the build works, my coworker's machine where the build failed, and CI where the build failed, is the codegen-units rustc parameter, which basically comes from the environment. This parameter is very suspicious and I think can easily lead to different build results. This could explain why it builds on one machine but fails in CI. I have not yet tried tuning NIX_BUILD_CORES, but I wonder if it would work to set it to, say, 1 in CI.
I think you may have something with codegen-units. I hadn't considered this, but our errors started around the time we bumped our CI machines from 8 to 16 cores.
I'm not sure of the exact correlation, because the CI infrastructure was done out of band from our commit history. Perhaps codegen-units should be a build parameter in cargo2nix / crate2nix.
The docs suggest this is not the case though:
I'm not sure what you mean by this. The defaults aren't used in nix builds. See build-crate.nix and configure-crate.nix.
For now I'm going to send an MR to nixpkgs to make codegen-units configurable. It will need changes in crate2nix and cargo2nix to make use of it, of course, but we have to start somewhere.
Cargo2nix uses cargo to do the building
Ah, I see. I'm still not sure what you mean by "this is not the case" in this situation, but as I understand it, this means cargo2nix is probably using the value of 16, as release builds are non-incremental by default as of a few Rust versions ago (and nix builds can't be incremental anyway, so one would hope it's using non-incremental settings regardless).
I guess the patch to nixpkgs doesn't matter for cargo2nix, and for your use case you should just either patch cargo2nix to set the flag explicitly or set it in Cargo.toml, assuming it gets honoured properly.
We started doing codegen-units=1 in our codebase recently in hopes that it'd fix the issue. Sadly, I just hit this again even with codegen-units=1 so it doesn't make the issue go away. I guess this needs some real debugging ;/
Need to look at the drv's between success & failure. In the tree of dependencies, there's got to be some extra drv's that show up (along with old ones) or else the impurity is a non-determinism that's leaking in, meaning the intensional store winds up with multiple results at the same nix path. It's a great help to spot the propagation of differences in the drvs
0.9.1 has fixed a behavior where trivial features were leaking down into dependencies. This might actually help the behavior.
After reading more into Fuuzetsu's experience, it's pretty clear that this issue is with Rust producing non-deterministic binary outputs. In general, we can't protect from non-determinism. I guess we could recommend an override workflow for the specific crate to always trigger a rebuild. A workaround is the right answer if there's unavoidable non-determinism in the Rust dependency or some rustc behavior.
Yes, rustc 1.55 seems OK and issues there lie with macro crates that produce non-deterministic output.
In rustc 1.56, there are issues with the compiler (rather, LLVM) itself in general, so I would wait for 1.57 where there are a bunch of fixes.
There's a nix config flag that you can set to rebuild all packages multiple times and check hashes: this can help catch non-deterministic packages ahead of time, hopefully before they go into your binary caches etc. Another is to override your rust deps with preferLocalBuild or whatever it is.
For us, we are just on 1.55 for now.
so I would wait for 1.57 where there are bunch of fixes.
@Fuuzetsu Did you see any fixes land as expected?
so I would wait for 1.57 where there are bunch of fixes.
@Fuuzetsu Did you see any fixes land as expected?
Yes, there were a bunch. We ran into another issue but it turned out to be a derive crate producing code from a macro in non-deterministic way. It's looking like we're going to be upgrading from 1.55 to 1.57, skipping the broken 1.56.
You can see some issues getting fixed in https://github.com/rust-lang/rust/issues/90301 – it's all LLVM stuff.
Then I think it's appropriate to handle this issue as errata, noting it in the common issues on the README but not attempting to hunt it down as a nix or cargo2nix issue. With no objections, I'll include this note in my open PR and close this issue on merge.
Then I think it's appropriate to handle this issue as errata, noting it in the common issues on the README but not attempting to hunt it down as a nix or cargo2nix issue. With no objections, I'll include this note in my open PR and close this issue on merge.
Yes, there is nothing on the nix side that's broken or that one can do beyond something like "disable binary caches because binaries are randomly not compatible". Definitely nothing cargo2nix or crate2nix or anything can do (except something crazy like enabling preferLocalBuild or similar).
|
gharchive/issue
| 2021-04-09T14:51:51 |
2025-04-01T04:56:15.435132
|
{
"authors": [
"Fuuzetsu",
"ollie-etl",
"psionic-k"
],
"repo": "cargo2nix/cargo2nix",
"url": "https://github.com/cargo2nix/cargo2nix/issues/184",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2080714271
|
Carla 0.9.15 additional map seems to still be the 0.9.14 link
In the tag 0.9.15, the additional maps download still seems to point to the old 0.9.14 link:
https://carla-releases.s3.us-east-005.backblazeb2.com/Linux/AdditionalMaps_0.9.14.tar.gz
issue fixed
|
gharchive/issue
| 2024-01-14T12:44:23 |
2025-04-01T04:56:15.445810
|
{
"authors": [
"zczjx"
],
"repo": "carla-simulator/carla",
"url": "https://github.com/carla-simulator/carla/issues/7066",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2526948668
|
Failed to find XercesC (missing: XercesC_VERSION)
CARLA version: 0.9.13
Platform/OS: Ubuntu 20.04
Problem you have experienced: Failed to find XercesC (missing: XercesC_VERSION)
What you expected to happen: building PythonAPI
Steps to reproduce: make PythonAPI
Other information (documentation you consulted, workarounds you tried): I have Xerces 3.2.5 installed instead of 3.2.3 and changed Setup.sh and BuildOSM2ODR.bat accordingly.
Error message that i got:
CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message):
Failed to find XercesC (missing: XercesC_VERSION)
Call Stack (most recent call first):
/usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:393 (_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake-3.16/Modules/FindXercesC.cmake:99 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
CMakeLists.txt:71 (find_package)
-- Configuring incomplete, errors occurred!
See also "/media/michal/7eeeb763-63f0-425a-88b3-041b93352d7a/carla/Build/libosm2dr-build/CMakeFiles/CMakeOutput.log".
make: *** [Util/BuildTools/Linux.mk:152: osm2odr] Error 1
Hi,
In your situation, I would try using Carla 0.9.15.2
There are many different bug fixes since version 0.9.13, and one of them is a changed download link.
If you can't change to Carla 0.9.15.2, I could look at the problem at a later time.
Hi, thanks for responding.
I can't use the 0.9.15 version, as I need to use a ROS Bridge afterwards, and this is configured for the 0.9.13 version. Unless there is a way of making it work on 0.9.15 that I am not aware of?
Hi,
open the following file
path/to/carla/Utils/BuildTools/Setup.sh
and search for this line
line 432
XERCESC_REPO=https://ftp.cixug.es/apache//xerces/c/3/sources/xerces-c-${XERCESC_VERSION}.tar.gz
and change it to
XERCESC_REPO=https://archive.apache.org/dist/xerces/c/3/sources/xerces-c-${XERCESC_VERSION}.tar.gz
The download link has changed, and I hope this is the only problem.
|
gharchive/issue
| 2024-09-15T14:51:49 |
2025-04-01T04:56:15.452273
|
{
"authors": [
"PatrickPromitzer",
"SirMichcik"
],
"repo": "carla-simulator/carla",
"url": "https://github.com/carla-simulator/carla/issues/8139",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
746346277
|
Agent support for all scenarios in one class
Currently, when trying to run all the scenarios of one class, manual_control.py needs to be run separately whenever a scenario in the class is prepared. Are there any existing methods to keep the agent waiting for the ego vehicle while the other scenarios are being prepared? If not, can you please add this feature?
OS: [Windows 10]
CARLA Version [CARLA 0.9.10]
Python version [3.9.0]
Scenario_runner Version [0.9.10]
That's currently not supported as right now the manual control automatically shuts down when the ego is destroyed. In order to add your feature, some kind of reset should have to be implemented, but I don't think it is as easy as it sounds. I'm sorry but there are no plans to do so right now, but feel free to do so and do a PR :slightly_smiling_face:
That's currently not supported as right now the manual control automatically shuts down when the ego is destroyed. In order to add your feature, some kind of reset should have to be implemented, but I don't think it is as easy as it sounds. I'm sorry but there are no plans to do so right now, but feel free to do so and do a PR
@glopezdiest Thank you for the response. Will do a PR for this and will also try this from my side.
|
gharchive/issue
| 2020-11-19T08:07:44 |
2025-04-01T04:56:15.455520
|
{
"authors": [
"TeeManiac",
"glopezdiest"
],
"repo": "carla-simulator/scenario_runner",
"url": "https://github.com/carla-simulator/scenario_runner/issues/688",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1893687996
|
Disable drive activity LED by default
For compatibility with Pico W, disable use of Pico LED by default.
The W uses GPIO 25 for SPI communication to the CYW43439. In the old days, the Pico default LED pin (PICO_DEFAULT_LED_PIN in pico-sdk\src\boards\include\boards\pico.h) was 25.
You can still enable the drive activity light by putting something like
add_compile_definitions(USE_LED=1)
in CMakeLists.txt, for example.
Relevant to #15 and #74
|
gharchive/pull-request
| 2023-09-13T04:08:58 |
2025-04-01T04:56:15.458363
|
{
"authors": [
"carlk3"
],
"repo": "carlk3/no-OS-FatFS-SD-SPI-RPi-Pico",
"url": "https://github.com/carlk3/no-OS-FatFS-SD-SPI-RPi-Pico/pull/87",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
266347674
|
HitBTC error in live trading
As soon as backfilling completes after running the live trade command:
An error occurred { Error: hitbtc GET http://api.hitbtc.com/api/1/trading/balance?nonce=1508296556989&apikey=YOUR-API-KEY 401 Unauthorized {"error":{"code":1002,"message":"Authorisation failed","description":""}}
at response.text.then.text (/app/node_modules/ccxt/ccxt.js:675:19)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7) constructor: [Function: AuthenticationError] }
HitBTC API is down! unable to call getBalance, retrying in 10s
Also when backfilling before live trading it tries to backfill the data from all time, so 1391 days or so for BTC-USD.
You didn't enter your API key.
It's throwing a 401 (Unauthorized).
http://api.hitbtc.com/api/1/trading/balance?nonce=1508296556989&apikey=YOUR-API-KEY (From the error you're having. Note the YOUR-API-KEY)
You can change your API Key in conf.js
Also, backfilling 1000+ days is normal.
My API key is in there. @nedievas is submitting a PR to fix the problem. Also, no, it's not normal to backfill 1000+ days of data; it's completely unnecessary for live trading.
|
gharchive/issue
| 2017-10-18T03:20:44 |
2025-04-01T04:56:15.471994
|
{
"authors": [
"avery1227",
"brucetus"
],
"repo": "carlos8f/zenbot",
"url": "https://github.com/carlos8f/zenbot/issues/642",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1573253867
|
Remove direct reference to OSRE vs GSoC
Realized that this link is supposed to also be on the GSoC application and think it would be confusing if it talked about OSRE and not GSoC. To simplify just referred to everything more generally ("summer contributors") and changed OSRE admin to Org admin.
I only wanted to change this one which is directly linked as part of the GSoC application. If accepted then we can revert this page back to make them consistent with the other pages. Just trying not to confuse/muddle GSoC application.
|
gharchive/pull-request
| 2023-02-06T20:47:09 |
2025-04-01T04:56:15.483617
|
{
"authors": [
"slieggi"
],
"repo": "carlosmalt/ucsc-ospo",
"url": "https://github.com/carlosmalt/ucsc-ospo/pull/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
444525373
|
TEST Fail || HELP
After training, I get this message when I try to test:
(tf_gpu) S:\Pagmer\DCGAN\DCGAN-tensorflow-master\DCGAN-tensorflow-master>python main.py --dataset Doors --input_height=250 --crop
{'G_img_sum': <absl.flags._flag.BooleanFlag object at 0x000001A603246F98>,
'batch_size': <absl.flags._flag.Flag object at 0x000001A601E230F0>,
'beta1': <absl.flags._flag.Flag object at 0x000001A601EA23C8>,
'checkpoint_dir': <absl.flags._flag.Flag object at 0x000001A603246978>,
'ckpt_freq': <absl.flags._flag.Flag object at 0x000001A603246E10>,
'crop': <absl.flags._flag.BooleanFlag object at 0x000001A603246AC8>,
'data_dir': <absl.flags._flag.Flag object at 0x000001A6032467B8>,
'dataset': <absl.flags._flag.Flag object at 0x000001A6032466A0>,
'epoch': <absl.flags._flag.Flag object at 0x000001A67BFD8320>,
'export': <absl.flags._flag.BooleanFlag object at 0x000001A603246BA8>,
'freeze': <absl.flags._flag.BooleanFlag object at 0x000001A603246C18>,
'h': <tensorflow.python.platform.app._HelpFlag object at 0x000001A60324D048>,
'help': <tensorflow.python.platform.app._HelpFlag object at 0x000001A60324D048>,
'helpfull': <tensorflow.python.platform.app._HelpfullFlag object at 0x000001A60324D0B8>,
'helpshort': <tensorflow.python.platform.app._HelpshortFlag object at 0x000001A60324D128>,
'input_fname_pattern': <absl.flags._flag.Flag object at 0x000001A603246710>,
'input_height': <absl.flags._flag.Flag object at 0x000001A6021F9208>,
'input_width': <absl.flags._flag.Flag object at 0x000001A603246518>,
'learning_rate': <absl.flags._flag.Flag object at 0x000001A600727898>,
'max_to_keep': <absl.flags._flag.Flag object at 0x000001A603246CC0>,
'out_dir': <absl.flags._flag.Flag object at 0x000001A603246828>,
'out_name': <absl.flags._flag.Flag object at 0x000001A6032468D0>,
'output_height': <absl.flags._flag.Flag object at 0x000001A603246588>,
'output_width': <absl.flags._flag.Flag object at 0x000001A603246630>,
'sample_dir': <absl.flags._flag.Flag object at 0x000001A6032469E8>,
'sample_freq': <absl.flags._flag.Flag object at 0x000001A603246D68>,
'train': <absl.flags._flag.BooleanFlag object at 0x000001A603246A20>,
'train_size': <absl.flags._flag.Flag object at 0x000001A601EA2B00>,
'visualize': <absl.flags._flag.BooleanFlag object at 0x000001A603246B38>,
'z_dim': <absl.flags._flag.Flag object at 0x000001A603246EB8>,
'z_dist': <absl.flags._flag.Flag object at 0x000001A603246F60>}
2019-05-15 18:50:27.450608: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-05-15 18:50:27.609271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7465
pciBusID: 0000:01:00.0
totalMemory: 8.00GiB freeMemory: 6.64GiB
2019-05-15 18:50:27.614137: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-15 18:50:28.136045: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-15 18:50:28.140202: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-05-15 18:50:28.141478: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-05-15 18:50:28.142812: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6389 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Variables: name (type shape) [size]
generator/g_h0_lin/Matrix:0 (float32_ref 100x524288) [52428800, bytes: 209715200]
generator/g_h0_lin/bias:0 (float32_ref 524288) [524288, bytes: 2097152]
generator/g_bn0/beta:0 (float32_ref 512) [512, bytes: 2048]
generator/g_bn0/gamma:0 (float32_ref 512) [512, bytes: 2048]
generator/g_h1/w:0 (float32_ref 5x5x256x512) [3276800, bytes: 13107200]
generator/g_h1/biases:0 (float32_ref 256) [256, bytes: 1024]
generator/g_bn1/beta:0 (float32_ref 256) [256, bytes: 1024]
generator/g_bn1/gamma:0 (float32_ref 256) [256, bytes: 1024]
generator/g_h2/w:0 (float32_ref 5x5x128x256) [819200, bytes: 3276800]
generator/g_h2/biases:0 (float32_ref 128) [128, bytes: 512]
generator/g_bn2/beta:0 (float32_ref 128) [128, bytes: 512]
generator/g_bn2/gamma:0 (float32_ref 128) [128, bytes: 512]
generator/g_h3/w:0 (float32_ref 5x5x64x128) [204800, bytes: 819200]
generator/g_h3/biases:0 (float32_ref 64) [64, bytes: 256]
generator/g_bn3/beta:0 (float32_ref 64) [64, bytes: 256]
generator/g_bn3/gamma:0 (float32_ref 64) [64, bytes: 256]
generator/g_h4/w:0 (float32_ref 5x5x3x64) [4800, bytes: 19200]
generator/g_h4/biases:0 (float32_ref 3) [3, bytes: 12]
discriminator/d_h0_conv/w:0 (float32_ref 5x5x3x64) [4800, bytes: 19200]
discriminator/d_h0_conv/biases:0 (float32_ref 64) [64, bytes: 256]
discriminator/d_h1_conv/w:0 (float32_ref 5x5x64x128) [204800, bytes: 819200]
discriminator/d_h1_conv/biases:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_bn1/beta:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_bn1/gamma:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_h2_conv/w:0 (float32_ref 5x5x128x256) [819200, bytes: 3276800]
discriminator/d_h2_conv/biases:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_bn2/beta:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_bn2/gamma:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_h3_conv/w:0 (float32_ref 5x5x256x512) [3276800, bytes: 13107200]
discriminator/d_h3_conv/biases:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_bn3/beta:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_bn3/gamma:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_h4_lin/Matrix:0 (float32_ref 524288x1) [524288, bytes: 2097152]
discriminator/d_h4_lin/bias:0 (float32_ref 1) [1, bytes: 4]
Total size of variables: 62093700
Total bytes of variables: 248374800
[] Reading checkpoints... ./out\20190515.185027 - data - Doors\checkpoint
[] Failed to find a checkpoint
Traceback (most recent call last):
File "main.py", line 147, in
tf.app.run()
File "C:\ProgramData\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "main.py", line 124, in main
raise Exception("Checkpoint not found in " + FLAGS.checkpoint_dir)
Exception: Checkpoint not found in ./out\20190515.185027 - data - Doors\checkpoint
Does anyone know what's up?
Hi. I also faced the exact same problem. Can anyone suggest how to resolve this?
Thanks in advance.
By default, the checkpoint is saved every 200 epochs. If you train during less epochs, no checkpoint will be saved and you won't be able to test your generator.
You can control the checkpoint frequency with --ckpt_freq and the number of iterations with --epoch.
Hope it helps!
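For example, a training run along these lines should write a checkpoint early enough to test afterwards (flag names taken from the flag dump above; the dataset name and sizes are just placeholders):
python main.py --dataset Doors --input_height=250 --crop --train --epoch 300 --ckpt_freq 50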
Thanks for the answer!
Is there a possibility to get different test results? I'm only getting the same one.
@Shnoogy I am facing the same issue. The generated images are always almost the same, with minor changes like pixel intensity. I have a small dataset of only 40 images; it may be the cause (overfitting?) but I'm not sure. Do you have a small dataset too? I tried to change the option of visualize() to 0 but it didn't resolve the issue. (#204)
Hi,
I am still not able to test. I trained with 300 epochs and still got the exception:
Checkpoint not found in ./out/20190712.103742 - data - face/checkpoint
What do I do now? Please help me.
Command to test was: python main.py --input_height 96 --input_width 96 --output_height 96 --output_width 96 --dataset face --crop --epoch 300 --input_fname_pattern ".jpg*"
Command to train was: python main.py --input_height 96 --input_width 96 --output_height 96 --output_width 96 --dataset face --crop --train --epoch 300 --input_fname_pattern ".jpg*"
Did anyone get the solution? I tried training with 250 epochs but am still getting the same error.
No, I tried with 300 epochs; it's still not working.
What I did was "hardcoded" the saved checkpoint file into line 122 of main.py file and set visualization to True, and now I can generate test images
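For anyone else trying this, a hedged sketch of what that hardcoding might look like (variable names follow the flag dump and traceback above; the exact shape of the dcgan.load() call around line 122 is an assumption, so check your own main.py):
FLAGS.visualize = True
FLAGS.checkpoint_dir = "./out/20190515.185027 - data - Doors/checkpoint"  # point at your own run's folder
if not dcgan.load(FLAGS.checkpoint_dir)[0]:  # return shape of load() is an assumption
    raise Exception("Checkpoint not found in " + FLAGS.checkpoint_dir)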
It's throwing a syntax error when I am hardcoding it.
This is my checkpoint location. What should I write inside dcgan.load()?
/root/Desktop/DCGAN-tensorflow-master/out/20190712.163251 - data - face - x96.z100.uniform_signed.y96.b6
I write inside dcgan.load()
/root/Desktop/DCGAN-tensorflow-master/out/20190712.163251 - data - face - x96.z100.uniform_signed.y96.b6
Add /checkpoint after - x96.z100.uniform_signed.y96.b6, for example:
/root/Desktop/DCGAN-tensorflow-master/out/20190712.163251 - data - face - x96.z100.uniform_signed.y96.b6/checkpoint
The checkpoint file should point to the last checkpoint, which is saved after every "x" number of iterations.
Hi,
I did this, but it still fails to test.
The error screenshot is attached in this email.
Also, when I run the command to test, other blank checkpoint folders are created.
Please help me with this.
Hi,
I think the error lies in a mismatch between the current graph and the graph from the checkpoint:
File "/root/Desktop/DCGAN-tensorflow-master/model.py", line 547, in load
    self.saver.restore(self.sess, os.path.join(checkpoint_dir, ckpt_name))
File "/root/anaconda2/envs/venv/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1322, in restore
    err, "a mismatch between the current graph and the graph")
tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint.
Original error: Assign requires shapes of both tensors to match. lhs shape= [100,8192] rhs shape= [100,18432]
[[node save/Assign_38 (defined at /root/Desktop/DCGAN-tensorflow-master/model.py:161) ]]
Errors may have originated from an input operation. Input Source operations connected to node save/Assign_38: generator/g_h0_lin/Matrix (defined at /root/Desktop/DCGAN-tensorflow-master/ops.py:99)
Is there any problem with input size or something?
send me the screenshot
Hi, attached the screenshot, plus sent it over Gmail too.
Hi,
Please find the screenshot attached.
Moreover, I am using TensorFlow version 1.14.0. I don't think it will be a problem.
please check the size of train and test images
It's about 100 images for training. I have not used a folder for test images.
What is the minimum number of images I should keep for training? I kept epochs = 300. Is there any need for a test images folder? I think it will be generated automatically.
The dimensions of the images are already given in the command terminal.
I don't know where the damn problem is.
Please help me.
Hi
I am using 100 images for training with 300 epochs. Still fails to test
I found that the problems happened in main.py. Specifically with these lines:
FLAGS.out_dir = os.path.join(FLAGS.out_dir, FLAGS.out_name)
FLAGS.checkpoint_dir = os.path.join(FLAGS.out_dir, FLAGS.checkpoint_dir)
FLAGS.sample_dir = os.path.join(FLAGS.out_dir, FLAGS.sample_dir)
Notice how the first os.path.join concatenates a directory and a folder name, while the latter two concatenate two directories. On MacOS, I was getting two directories jammed together into one long directory that didn't exist.
I commented out these lines and added my own string manipulation to get it working, but it's messy. I doubt others are using a Mac, so my recommendation is: add some printouts of FLAGS.checkpoint_dir and FLAGS.sample_dir before the checkpoint check in the code. See if what you're getting is actually the directory you want.
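A minimal sketch of those printouts, placed just before the checkpoint check in main.py (the flag names come from the snippet above):
print("out_dir:        ", FLAGS.out_dir)
print("checkpoint_dir: ", FLAGS.checkpoint_dir)
print("sample_dir:     ", FLAGS.sample_dir)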
Hi I am using 100 images for training with 300 epochs. Still fails to test
I met similar problems as you. Have you got the solution?
Thanks
|
gharchive/issue
| 2019-05-15T16:10:14 |
2025-04-01T04:56:15.562611
|
{
"authors": [
"JJisbug",
"Muhammad057",
"Shnoogy",
"ani16",
"guillaumefrd",
"matak07",
"trespassermax"
],
"repo": "carpedm20/DCGAN-tensorflow",
"url": "https://github.com/carpedm20/DCGAN-tensorflow/issues/339",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
134182641
|
No good documentation
I am unable to use this as there is no documentation and the issue talking about it being in the code is just as unhelpful.
The code itself is pretty self explanatory. Just have a look at send, getThreadInfo, getUnread and listen.
Don't be lazy.
When I run listen() it just stays there, it prints the data to the screen but doesn't actually advance my code to do anything with said data.
It doesn't seem to me you have any experience in coding (python, at least). Study up a bit on classes and overriding them
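For reference, the pattern being alluded to is roughly the following (a hedged sketch only -- the handler name on_message and its signature are assumptions, so check the hooks that listen() calls in client.py):
from fbchat import Client

class EchoBot(Client):
    def on_message(self, mid, author_id, author_name, message, metadata):
        # react to the incoming data here instead of only printing it
        print(author_name, message)

bot = EchoBot("email@example.com", "password")
bot.listen()  # blocks and dispatches incoming messages to the overridden handler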
Yes I do. I have several years, you just have no documentation.
|
gharchive/issue
| 2016-02-17T05:25:07 |
2025-04-01T04:56:15.565986
|
{
"authors": [
"PidgeyL",
"nicolas-martin",
"pingpong1109"
],
"repo": "carpedm20/fbchat",
"url": "https://github.com/carpedm20/fbchat/issues/29",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
945412013
|
Training Status should be "Failed" rather than "Ask to Repeat"
This follows up on #1808
While the training status for the three checkout steps (DC/LC/SWC Homework, Demo, and Discussion) should be Asked to Repeat rather than Failed, we do want the Training to be Failed rather than Asked to Repeat.
I think this would require an update such as
SELECT *
FROM workshops_trainingprogress
WHERE requirement_id = 1
AND state = 'a';
UPDATE workshops_trainingprogress
SET state = 'f'
WHERE requirement_id = 1
AND state = 'a';
@maneesha I'm not 100% clear on what is required by this ticket. Do you want me to run 1-off query and change values in production database?
Yes, this reflects a change to data, not a change to code.
Done. I'll upload the first query results to you on Slack.
|
gharchive/issue
| 2021-07-15T14:01:20 |
2025-04-01T04:56:15.570249
|
{
"authors": [
"maneesha",
"pbanaszkiewicz"
],
"repo": "carpentries/amy",
"url": "https://github.com/carpentries/amy/issues/2001",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1666342264
|
Swapped IDs
A4
R DA SOCIEDADE 95 010087 and 010088
These two stops actually have their IDs swapped:
South stop R DA SOCIEDADE 95 with ID 010087 in the GTFS
North stop R DA SOCIEDADE 95 with ID 010088 in the GTFS
Four more with swapped IDs:
Estr Lagoa da Cheia Agropecuária 010049 and 010050
Estr Lagoa da Cheia 610010051 and 010052
It is more practical to swap them directly in the GTFS, which also requires swapping the directions of the lines that serve these stops.
Hello @DiogoBaptista98, I believe this error has already been fixed. If it has not, please re-open the issue. Thank you!
|
gharchive/issue
| 2023-04-13T12:40:09 |
2025-04-01T04:56:15.578793
|
{
"authors": [
"DiogoBaptista98",
"joao-vasconcelos"
],
"repo": "carrismetropolitana/gtfs",
"url": "https://github.com/carrismetropolitana/gtfs/issues/53",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2201530506
|
🛑 Back Office is down
In f3e500f, Back Office (https://bo.cartelmatic.com/) was down:
HTTP code: 503
Response time: 630 ms
Resolved: Back Office is back up in 0f22c2d after 19 minutes.
|
gharchive/issue
| 2024-03-22T02:05:21 |
2025-04-01T04:56:15.603551
|
{
"authors": [
"nib216"
],
"repo": "cartelmatic/upptime",
"url": "https://github.com/cartelmatic/upptime/issues/454",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
844191825
|
Problems with 3D local SLAM
Hi, I am having problems with the 3D local SLAM tuning. I am using 2x VLP-16 and a Microstrain 3dm-gx5-45. The main issue that I have is that the local trajectory slightly oscillates, as you can see in the picture below. My configuration is the following:
options = {
map_builder = MAP_BUILDER,
trajectory_builder = TRAJECTORY_BUILDER,
map_frame = "map",
tracking_frame = "microstrain_imu_link",
published_frame = "base_link",
odom_frame = "odom",
provide_odom_frame = true,
publish_frame_projected_to_2d = false,
use_odometry = false,
use_nav_sat = false,
use_landmarks = false,
num_laser_scans = 0,
num_multi_echo_laser_scans = 0,
num_subdivisions_per_laser_scan = 1,
num_point_clouds = 2,
lookup_transform_timeout_sec = 0.2,
submap_publish_period_sec = 0.3,
pose_publish_period_sec = 5e-3,
trajectory_publish_period_sec = 30e-3,
rangefinder_sampling_ratio = 1.,
odometry_sampling_ratio = 1.,
fixed_frame_pose_sampling_ratio = 1.,
imu_sampling_ratio = 1.,
landmarks_sampling_ratio = 1.,
}
--TRAJECTORY BUILDER PARAMS
--Providing full 360 scans with two lidars (and per-point stamp)
TRAJECTORY_BUILDER_3D.num_accumulated_range_data = 6
--Setting up a voxel filter to reduce the density of the data
VOXEL_SIZE = 0.05
TRAJECTORY_BUILDER_3D.voxel_filter_size = VOXEL_SIZE
MAP_BUILDER.use_trajectory_builder_3d = true
MAP_BUILDER.num_background_threads = 12
POSE_GRAPH.optimization_problem.huber_scale = 5e2
POSE_GRAPH.optimize_every_n_nodes = 200
POSE_GRAPH.constraint_builder.sampling_ratio = 0.03
POSE_GRAPH.optimization_problem.ceres_solver_options.max_num_iterations = 20
POSE_GRAPH.constraint_builder.min_score = 0.62
POSE_GRAPH.constraint_builder.global_localization_min_score = 0.66
Changing the num_accumulated_range_data from 2 to 6 helped reduce the oscillation, but it is still present. Thanks for your help!
Sounds like a tuning question - if you are using the ROS velodyne driver have you tried setting npackets to 1? That would provide ~75 packets per rotation to Cartographer, so you would need to increase TRAJECTORY_BUILDER_3D.num_accumulated_range_data appropriately, the original authors had it set to 160 for two velodynes.
I am working offline with recorded data and I can't test this right now; I will try this as soon as I can. However, I fail to understand how this should produce a different effect on the mapping, since it should be equivalent to providing full 360º scans with per-point timestamps, right?
Thanks for your help!
|
gharchive/issue
| 2021-03-30T07:53:09 |
2025-04-01T04:56:15.619282
|
{
"authors": [
"halops",
"tyhowell"
],
"repo": "cartographer-project/cartographer",
"url": "https://github.com/cartographer-project/cartographer/issues/1825",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
728742985
|
Incorrect scan matching during a robot rotation
My lua options are defined as follows:
map_frame = "map",
tracking_frame = "base_link",
published_frame = "base_link",
odom_frame = "odom",
provide_odom_frame = false,
publish_frame_projected_to_2d = false,
use_pose_extrapolator = true,
use_odometry = false,
Currently, I just plug my RPLidar A2 into my PC and test the map, without an odometry/IMU sensor. If I rotate the lidar scanner, I can see that the topic /scan_matched_points2 somehow drifts and distorts the map generation. Please see the picture below (red is the laser scanner, green is /scan_matched_points2):
In the demo from the Deutsches Museum bagfile, I don't see such a drift. What did I do wrong, given that the laser frame is the same as the robot frame (base_link)? Is it because I don't have an odometry sensor? If I change the lua setting to odom_frame = "base_link", I see the same result.
Thanks!
tuning is tricky and maybe you find a way to improve it if you play around long enough.
However, rotations are extremely hard to handle (for almost any SLAM) without odometry info or an IMU. Adding either of these will help you deal with this much easier.
tuning is tricky and maybe you find a way to improve it if you play around long enough.
However, rotations are extremely hard to handle (for almost any SLAM) without odometry info or an IMU. Adding either of these will help you deal with this much easier.
Thank @johuber for the hint. I'll get an IMU sensor then.
|
gharchive/issue
| 2020-10-24T08:51:03 |
2025-04-01T04:56:15.623224
|
{
"authors": [
"johuber",
"ywiyogo"
],
"repo": "cartographer-project/cartographer_ros",
"url": "https://github.com/cartographer-project/cartographer_ros/issues/1534",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1422947501
|
kctrl: Allow users to watch for changes while using dev
kctrl dev could have a flag that asks it to watch the resources it is deploying for changes (maybe --watch).
kctrl would perform a dev reconciliation whenever it detects changes.
Vote on this request
This is an invitation to the community to vote on issues, to help us prioritize our backlog. Use the "smiley face" up to the right of this comment to vote.
👍 "I would like to see this addressed as soon as possible"
👎 "There are other more important things to focus on right now"
We are also happy to receive and review Pull Requests if you want to help working on this issue.
@100mik the proposed feature would be really useful! Could we reopen the issue?
|
gharchive/issue
| 2022-10-25T19:14:07 |
2025-04-01T04:56:15.625634
|
{
"authors": [
"100mik",
"ThomasVitale"
],
"repo": "carvel-dev/kapp-controller",
"url": "https://github.com/carvel-dev/kapp-controller/issues/954",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
769857446
|
fix :set default tab or last used tab
Fix the default tab display.
New feature: open the last used tab by default.
Replaced by: https://github.com/casbin/casbin-forum/pull/96
|
gharchive/pull-request
| 2020-12-17T10:50:41 |
2025-04-01T04:56:15.653690
|
{
"authors": [
"hsluoyz",
"noneback"
],
"repo": "casbin/casbin-forum",
"url": "https://github.com/casbin/casbin-forum/pull/95",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
907526054
|
docs: Add false collapsed parameter.
Signed-off-by: ErikQQY 2283984853@qq.com
@hsluoyz Plz review
|
gharchive/pull-request
| 2021-05-31T14:35:24 |
2025-04-01T04:56:15.665593
|
{
"authors": [
"ErikQQY"
],
"repo": "casdoor/casdoor-website",
"url": "https://github.com/casdoor/casdoor-website/pull/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1727179818
|
Casdoor as SAML SP with keycloak login not work
Hi! I tried to use Casdoor as a SAML SP with different Keycloak versions, and Casdoor can't log in.
In Keycloak 12.0.4 (version of our test environment)
Create realm
Configure client connection
Download metadata files in moq auth format
Create new provider in casdoor
Try to login
In Keycloack 21.1.1 (latest version running in docker with start-dev parameter)
create realm
Configure client connection
Copy metadata SP from realm settings
Create new provider in casdoor
Try to login
I also tried to change the client ID from full path to relative /api/acs, but that didn't help
@leo220yuyaodog
@yamaritta please try Keycloak v6.0.1. Higher versions are unavailable.
|
gharchive/issue
| 2023-05-26T08:48:58 |
2025-04-01T04:56:15.672258
|
{
"authors": [
"hsluoyz",
"leo220yuyaodog",
"yamaritta"
],
"repo": "casdoor/casdoor",
"url": "https://github.com/casdoor/casdoor/issues/1894",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1570351018
|
Electrum protocol support
ord could gain even more traction if it could use the Electrum protocol instead of bitcoin RPC.
Unless I'm mistaken, I don't think the electrum protocol is suitable for this type of application. ord needs to process all transactions of all blocks to index sats. Electrum is mostly used for fast lookups of address balances and transactions, and is very slow at serving blocks (if it serves them at all, I think most servers will only serve headers).
Thanks for the issue! @brandonblack Can you clarify what you're looking for? Is the desire some kind of light client? Or the ability to use public servers to download block data?
Sorry, I'm just getting up to speed on ord here, and I think what I'm getting toward is alluded to in this comment on the overall redb sync performance: https://github.com/casey/ord/issues/1377#issuecomment-1418005263
In short, I think we have < 10 years until fragmentation in the sat space makes fully indexing all ordinal sat ranges out of the reach of most users, so we should probably plan for that future now. To do so, we'd want to produce a lighter weight ord which has probably 2 modes:
When directly connected to a full -txindex bitcoin node, it indexes all inscriptions to date (along with potentially some other rare stats of interest).
When connected to an Electrum server, it only supports queries for specific sats or UTXOs, e.g. "make sure I'm not sending something rare by sending this transaction", and "inscribe this sat and tell me what its number was, then index it for the future".
Sorry, I'm just getting up to speed on ord here, and I think what I'm getting toward is alluded to in this comment on the overall redb sync performance: #1377 (comment)
In short, I think we have < 10 years until fragmentation in the sat space makes fully indexing all ordinal sat ranges out of the reach of most users, so we should probably plan for that future now. To do so, we'd want to produce a lighter weight ord which has probably 2 modes:
1. When directly connected to a full `-txindex` bitcoin node, it indexes all inscriptions to date (along with potentially some other rare stats of interest).
2. When connected to an Electrum server, it only supports queries for specific sats or UTXOs, eg. "make sure I'm not sending something rare by sending this transaction", and "inscribe this sat and tell me what it's number was, then index it for future".
This issue seems somewhat related to issue https://github.com/casey/ord/issues/1501 since both suggest a need for additional code modularization (which I think would be required to support point 2 above).
Additionally, this privacy-focused project might be of interest if Electrum is used: https://github.com/chris-belcher/electrum-personal-server
I'm not keen on introducing a dependency on the electrum protocol, or re-implementing it ourself. I think this is really about "make syncing fast" and "create new security trade-offs", and I think those things can be supported without introducing a dependency on electrum. Users already have a hard time getting bitcoind running and connecting to bitcoind, and getting ord running correctly. I don't want to add functionality where getting things running requires configuring an additional service, or which requires complex configuration, so you need Docker or something like it to get started.
In short, I think we have < 10 years until fragmentation in the sat space makes fully indexing all ordinal sat ranges out of the reach of most users, so we should probably plan for that future now.
Years away is a long time! We definitely have bigger fish to fry, like making the current sync model fast.
I'm going to convert this to a discussion so that conversation can continue!
|
gharchive/issue
| 2023-02-03T19:42:26 |
2025-04-01T04:56:15.680815
|
{
"authors": [
"andrewtoth",
"brandonblack",
"casey",
"tyjvazum"
],
"repo": "casey/ord",
"url": "https://github.com/casey/ord/issues/1491",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1581143131
|
Expansion of CONTRIBUTING
Still requires documentation for Translation/Localization efforts and a PR template.
I'll convert to Draft since it's not ready for review yet
It absolutely should be a draft; I have no knowledge of any translation/localization work currently in progress and would love an assist if you know who is doing so (assuming it is being done).
I have a PR template (based on GitHub template) that I typically use, can be seen on #1669. If this is acceptable as a starting point I will include it in documentation for now.
Localization and PR templates were put in backlog. Documentation ready, but a proof-read or grammatical review would be appreciated.
|
gharchive/pull-request
| 2023-02-12T06:33:07 |
2025-04-01T04:56:15.682981
|
{
"authors": [
"Psifour",
"raphjaph"
],
"repo": "casey/ord",
"url": "https://github.com/casey/ord/pull/1698",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2449627026
|
fix: fix delete vector bug
The previous deletion logic was by index, which caused a bug when deleting after sorting.
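A small illustration of the failure mode (plain Python, not the project's actual code): when the index is captured against the original order, sorting first makes that index point at a different element.
items = ["b", "c", "a"]
idx = items.index("a")   # 2 in the original order
items.sort()             # ["a", "b", "c"]
del items[idx]           # removes "c" -- the wrong element
Deleting by a stable identifier (ID or value) instead of by position avoids this.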
@Sherlocksuper what about deletion for other objects?
|
gharchive/pull-request
| 2024-08-05T23:15:52 |
2025-04-01T04:56:15.686977
|
{
"authors": [
"Sherlocksuper",
"hsluoyz"
],
"repo": "casibase/casibase",
"url": "https://github.com/casibase/casibase/pull/936",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1883242599
|
Kotlin standard library errors with kte and Gradle continuous build
Using 3.1.0, we are having a great experience with:
TemplateEngine.createPrecompiled
and also with
val codeResolver = DirectoryCodeResolver(CWD.server("src/main/jte"))
TemplateEngine.create(codeResolver, Paths.get("build/jte-classes"), ContentType.Html).apply {
setBinaryStaticContent(true)
}
Precompiled runs great in prod, the DirectoryCodeResolver gives us great near-instant hotreload locally, with both .jte and .kte.
The only problem we have is with Gradle continuous build. We have a hotreload setup that uses continuous build to restart the server anytime our business logic changes. And for some reason, it is erroring out on .kte templates like so:
rootCause=gg.jte.TemplateException: Failed to compile template, error at pages/Admin/adminTrialShow.kte:1
@file:Suppress("ktlint")
/Users/ntwigg/Documents/dev/diffplugdotcom/server/build/jte-classes-73184/gg/jte/generated/ondemand/pages/Admin/JteadminTrialShowGenerated.kt:1:7
Reason: Cannot access built-in declaration 'kotlin.Suppress'. Ensure that you have a dependency on the Kotlin standard library
at gg.jte.compiler.kotlin.KotlinClassCompiler.compile(KotlinClassCompiler.java:45)
at gg.jte.compiler.TemplateCompiler.precompileClasses(TemplateCompiler.java:122)
at gg.jte.compiler.TemplateCompiler.precompile(TemplateCompiler.java:94)
at gg.jte.compiler.TemplateCompiler.load(TemplateCompiler.java:50)
at gg.jte.TemplateEngine.lambda$resolveTemplateOnDemand$0(TemplateEngine.java:354)
at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916)
at gg.jte.TemplateEngine.resolveTemplateOnDemand(TemplateEngine.java:347)
at gg.jte.TemplateEngine.resolveTemplate(TemplateEngine.java:337)
at gg.jte.TemplateEngine.render(TemplateEngine.java:228)
It's strange because the same code works great when we run it without continuous build. Not necessarily a JTE issue, but I wonder if setNoStdlib here ought to be false instead of true?
https://github.com/casid/jte/blob/f53fb9584e7d8526ebc7efba581efad378d3782f/jte-kotlin/src/main/java/gg/jte/compiler/kotlin/KotlinClassCompiler.java#L25-L28
Hmm, the documentation says Don't automatically include the Kotlin/JVM stdlib and Kotlin reflection into the classpath, which sounded good to me when I initially added Kotlin support. However, I'm not a heavy Kotlin or Gradle user, so that was rather a gut decision than a well-backed one from experience.
Maybe you could try to change this line, make a snapshot build and see if it is working afterwards?
Hello @casid
I tried to set the noStdLib to false and rebuild. It did not fix the issue.
|
gharchive/issue
| 2023-09-06T05:19:33 |
2025-04-01T04:56:15.692003
|
{
"authors": [
"casid",
"nedtwigg",
"ylemoigne"
],
"repo": "casid/jte",
"url": "https://github.com/casid/jte/issues/271",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
231501462
|
add widget for access_logging
Cherry-pick #256 with no conflicts
looks like an indentation issue on this and the original, otherwise 👍 lgtm
I fixed the spacing here. I'll submit a new PR to fix the indent on develop
|
gharchive/pull-request
| 2017-05-26T01:19:38 |
2025-04-01T04:56:15.693381
|
{
"authors": [
"dereklwood",
"mattwuenschel"
],
"repo": "caskdata/cdap-ambari-service",
"url": "https://github.com/caskdata/cdap-ambari-service/pull/257",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
109858439
|
Update README.md with latest net-ssh changes
This is temporary until everything can be modified to use bundle install and bundle exec and tested with packaging.
:+1: lgtm. create https://issues.cask.co/browse/COOPR-781 for the permanent fix
|
gharchive/pull-request
| 2015-10-05T18:34:36 |
2025-04-01T04:56:15.703545
|
{
"authors": [
"dereklwood",
"wolf31o2"
],
"repo": "caskdata/coopr",
"url": "https://github.com/caskdata/coopr/pull/962",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
107398264
|
update synology-cloud-station to 3.2-3484
This pull request updates synology-cloud-station to the latest version 3.2-3484.
Thank you for the contribution. It was merged directly as 9bdac110d5629b299d3a0818b790a6ae3c10eb98 to keep commit history cleaner. Your contribution is still credited to you.
|
gharchive/pull-request
| 2015-09-20T15:30:47 |
2025-04-01T04:56:15.704532
|
{
"authors": [
"victorpopkov",
"wuman"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/13897",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
144071313
|
updated sabnzbd (1.0.0)
new build from sabnzbd for osx, see https://forums.sabnzbd.org/viewtopic.php?f=8&t=20459
Thank you for the contribution. It was merged directly as 6f45f6264548589a87c0e831dbafb44cb365bfff to keep commit history cleaner. Your contribution is still credited to you.
|
gharchive/pull-request
| 2016-03-28T20:13:09 |
2025-04-01T04:56:15.705782
|
{
"authors": [
"elnappo",
"victorpopkov"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/20131",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
279145197
|
Add cutter 1.0
After making all changes to the cask:
[x] brew cask audit --download {{cask_file}} is error-free.
[x] brew cask style --fix {{cask_file}} reports no offenses.
[x] The commit message includes the cask’s name and version.
Additionally, if adding a new cask:
[x] Named the cask according to the [token reference].
[x] brew cask install {{cask_file}} worked successfully.
[x] brew cask uninstall {{cask_file}} worked successfully.
[x] Checked there are no [open pull requests] for the same cask.
[x] Checked the cask was not already refused in [closed issues].
[x] Checked the cask is submitted to [the correct repo].
audit for cutter: warning
- possible duplicate, cask token conflicts with Homebrew core formula: https://github.com/Homebrew/homebrew-core/blob/master/Formula/cutter.rb
Error: audit failed for 1 cask: cutter
Formula is unrelated.
Thanks for the PR!
|
gharchive/pull-request
| 2017-12-04T20:52:28 |
2025-04-01T04:56:15.709699
|
{
"authors": [
"commitay",
"ndaprela"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/41645",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
53236974
|
Add BusyCal
How does this look? The trial version now functions as the full version (see also #5779).
Looks perfect. Thanks you.
@ndr-qef, my pleasure. I'm just trying to figure out why Installer.app is still running after install. Just ran this on a different machine and noticed. I don't think I've seen this behaviour in any other casks I've installed, would this be something with a known workaround?
Also after running brew cask uninstall busycal I'm still seeing some kind of busted BusyCal.app in /Applications. Maybe this needs some special treatment for both install and uninstall.
You are right; I can reproduce the behavior in a Mavericks VM. Moreover, the pkg installer opens a Finder window at /Applications, which is likely undesirable from our perspective.
I didn't notice the Finder window, I see it now as well. That must be part of the installer scripts because I'm seeing it when running the installer manually. When running the installer manually it does exit Installer.app at the end.
|
gharchive/pull-request
| 2015-01-02T04:20:39 |
2025-04-01T04:56:15.712803
|
{
"authors": [
"Cottser",
"ndr-qef"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/8582",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1031389642
|
Mainnet error frequency rule
Mainnet error frequency rule
At least 1 events occurred between 2021-10-19 17:28 UTC and 2021-10-19 17:43 UTC
@timestamp: 2021-10-19T17:43:59.197Z
@version: 1
_id: aRKnmXwBaqnV04emJGsu
_index: mainnet-casper-node-rs-2021.10.19
_type: _doc
agent: {
"ephemeral_id": "36b182fe-7e77-4b51-be80-6706dc9ef001",
"hostname": "casperlabs-casper-node-1",
"id": "51290c0e-7161-481e-a829-4a1f549fa3e6",
"name": "casperlabs-casper-node-1",
"type": "filebeat",
"version": "7.10.1"
}
cl-sre-metadata: {
"env": "mainnet",
"node_hostname": "casperlabs-casper-node-1",
"node_version": "1.0.0"
}
cl_log_source: casper-node
cloud: {
"account": {
"id": "107469540579"
},
"availability_zone": "us-east-2a",
"image": {
"id": "ami-0d5d9d301c853a04a"
},
"instance": {
"id": "i-0b3992bfd47f941b1"
},
"machine": {
"type": "t3.xlarge"
},
"provider": "aws",
"region": "us-east-2"
}
ecs: {
"version": "1.6.0"
}
fields: {
"message": "could not send response to request down oneshot channel"
}
host: {
"architecture": "x86_64",
"containerized": false,
"hostname": "casperlabs-casper-node-1",
"id": "ec2a247454e5d7806dde28f3b3a5db74",
"ip": [
"10.22.5.226",
"fe80::d4:57ff:feb3:ec46"
],
"mac": [
"02:d4:57:b3:ec:46"
],
"name": "casperlabs-casper-node-1",
"os": {
"codename": "bionic",
"family": "debian",
"kernel": "4.15.0-1051-aws",
"name": "Ubuntu",
"platform": "ubuntu",
"version": "18.04.3 LTS (Bionic Beaver)"
}
}
input: {
"type": "log"
}
level: ERROR
log: {
"file": {
"path": "/var/log/casper/casper-node.log"
},
"offset": 36265235
}
log_filename: casper-node.log
message: {"timestamp":"Oct 19 17:43:58.676","level":"ERROR","fields":{"message":"could not send response to request down oneshot channel"},"target":"casper_node::effect"}
num_hits: 3
num_matches: 1
tags: [
"beats_input_codec_plain_applied",
"syslog_parsed"
]
target: casper_node::effect
timestamp: Oct 19 17:43:58.676
Closing this for now, as this is expected to not be an issue anymore once #2207 is fixed.
|
gharchive/issue
| 2021-10-20T13:06:52 |
2025-04-01T04:56:15.726978
|
{
"authors": [
"marc-casperlabs",
"piotr-dziubecki"
],
"repo": "casper-network/casper-node",
"url": "https://github.com/casper-network/casper-node/issues/2240",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
988163688
|
[Request] Responses returning expressions for lists by using Json Path
Hello, my friends! I'd like to suggest expressions returning lists that are values of JSON attributes. For example: I have a JSON object in the request that contains an attribute with a list as its value. The response contains the same list, so in order not to hardcode it, I would like to return the request's list in the response too, as you can see in the examples below:
Request:
"items": [
{
"sku": "Gasolina Superior",
"name": "Gasolina Superior",
"value": 10000,
"quantity": 1
}
]
In the response, I tried the following expression on Castlemock:
"items": ${BODY_JSON_PATH(expression="$.salebasket.items[*]")
The expression I got by using this website, but the response I'm getting when I make the request to Castlemock is the following:
You can see that it returns not a list, but the first object in it. If I pass two objects in the list, the same occurs:
Is there a way to return the list as it is passed in the request? If there isn't, what do you guys think of making this possible?! It would be a lot appreciated.
I searched in the expression docs, but didn't find something, so I'm opening this request.
Thank you very much!
Hi @wedla.
Thanks for reporting this! Great issue report!
I took a look at this and I can see that Castle Mock only returns one value. However, I have updated this so that it can retrieve multiple values. The fix will be part of the next release (which I hope can be released during the weekend).
Thanks!
|
gharchive/issue
| 2021-09-04T00:01:53 |
2025-04-01T04:56:15.748010
|
{
"authors": [
"karldahlgren",
"wedla"
],
"repo": "castlemock/castlemock",
"url": "https://github.com/castlemock/castlemock/issues/408",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
112869526
|
Integrate CACM collection
The CACM collection is small enough that we can include it in the repository... so we can have indexing/retrieval experiments completely integrated with the system.
Won't integrate CACM in this repo.
Instead, we put it in another repo: https://github.com/castorini/Anserini-data/tree/master/CACM to avoid possible license issues.
|
gharchive/issue
| 2015-10-22T19:05:01 |
2025-04-01T04:56:15.749522
|
{
"authors": [
"Peilin-Yang",
"lintool"
],
"repo": "castorini/Anserini",
"url": "https://github.com/castorini/Anserini/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
386233581
|
Replicate Anserini Core18 experiments
Can we have someone outside the project replicate the Anserini TREC 2018 CORE runs?
You'll need to be an active participant in TREC 2018 because otherwise you won't have access to the topics and qrels...
Fetch the topics and qrels per https://github.com/castorini/Anserini/blob/master/docs/experiments-core18.md#Retrieval
Change the collection path at
https://github.com/castorini/Anserini/blob/master/src/main/resources/regression/core18.yaml#L16
Run
nohup python src/main/python/run_regression.py --collection core18 --index >& log.core18 &
@isoboroff if you have some time?
Sure, it worked fine with only the two changes you mentioned:
2018-11-30 21:42:43,052 - regression_test - INFO - ==========Verifying Results==========
2018-11-30 21:42:43,175 - regression_test - INFO - {"actual": 0.2487, "collection": "core18", "expected": 0.2487, "metric": "map", "model": "bm25", "topic": "All Topics"}
2018-11-30 21:42:43,251 - regression_test - INFO - {"actual": 0.364, "collection": "core18", "expected": 0.364, "metric": "p30", "model": "bm25", "topic": "All Topics"}
2018-11-30 21:42:43,354 - regression_test - INFO - {"actual": 0.2911, "collection": "core18", "expected": 0.2911, "metric": "map", "model": "bm25+rm3", "topic": "All Topics"}
2018-11-30 21:42:43,431 - regression_test - INFO - {"actual": 0.4087, "collection": "core18", "expected": 0.4087, "metric": "p30", "model": "bm25+rm3", "topic": "All Topics"}
2018-11-30 21:42:43,536 - regression_test - INFO - {"actual": 0.2919, "collection": "core18", "expected": 0.2919, "metric": "map", "model": "bm25+ax", "topic": "All Topics"}
2018-11-30 21:42:43,612 - regression_test - INFO - {"actual": 0.4033, "collection": "core18", "expected": 0.4033, "metric": "p30", "model": "bm25+ax", "topic": "All Topics"}
2018-11-30 21:42:43,713 - regression_test - INFO - {"actual": 0.2504, "collection": "core18", "expected": 0.2504, "metric": "map", "model": "ql", "topic": "All Topics"}
2018-11-30 21:42:43,787 - regression_test - INFO - {"actual": 0.362, "collection": "core18", "expected": 0.362, "metric": "p30", "model": "ql", "topic": "All Topics"}
2018-11-30 21:42:43,896 - regression_test - INFO - {"actual": 0.2754, "collection": "core18", "expected": 0.2754, "metric": "map", "model": "ql+rm3", "topic": "All Topics"}
2018-11-30 21:42:43,970 - regression_test - INFO - {"actual": 0.3773, "collection": "core18", "expected": 0.3773, "metric": "p30", "model": "ql+rm3", "topic": "All Topics"}
2018-11-30 21:42:44,075 - regression_test - INFO - {"actual": 0.2976, "collection": "core18", "expected": 0.2976, "metric": "map", "model": "ql+ax", "topic": "All Topics"}
2018-11-30 21:42:44,151 - regression_test - INFO - {"actual": 0.4067, "collection": "core18", "expected": 0.4067, "metric": "p30", "model": "ql+ax", "topic": "All Topics"}
2018-11-30 21:42:44,151 - regression_test - INFO - All Tests Passed!
Awesome, thanks @andrewyates !
sure, #521
|
gharchive/issue
| 2018-11-30T15:28:33 |
2025-04-01T04:56:15.758593
|
{
"authors": [
"andrewyates",
"lintool"
],
"repo": "castorini/Anserini",
"url": "https://github.com/castorini/Anserini/issues/492",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2324141353
|
Attachment seeker when run by spatial transformer not finding any attachments
I have a feeling that, with the new logic, the attachment tables are not being added to the geodatabase, so the attachment seeker is not finding anything to extract. Double-check this is the case and fix it.
Change has been made for this in the commit: https://github.com/cat-cfs/twobilliontoolkit/commit/8024ce6be6ccc93738c219eab226aaf5221dc87d
Changes include:
fixed the way the gdb path was joined in the arc export function, which was causing it to not function properly; it now extracts attachments
👍
|
gharchive/issue
| 2024-05-29T19:53:40 |
2025-04-01T04:56:15.760698
|
{
"authors": [
"AnthonyRodway"
],
"repo": "cat-cfs/twobilliontoolkit",
"url": "https://github.com/cat-cfs/twobilliontoolkit/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1563271827
|
Test individual archivers
Background
Right now the individual archivers (inheriting from AbstractDatasetArchiver) are largely untested. There's been discussion in #43 about the appropriate way to test these archivers. In the long run, we will want to have new archives tested through nightly builds to fully verify the entire workflow. This, however, will likely be future work once we are actually generating new archives regularly, and have decided how we want to integrate those archives into PUDL. In the immediate future, there are likely helpful tests that we can create at the unit/integration level to perform some basic sanity checking.
Scope
This issue will only track the development of unit/integration tests, and will leave full integration into the nightly builds process for future work. Things we can check for at this level include:
[ ] Are all of the links we attempt to download files from valid?
[ ] Is the structure of the archive what is expected by PUDL (partitions are formatted correctly, and filenames look like what we'd expect)? A rough sketch of a check for the first item is included after this list.
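As a purely illustrative example, a unit-level test for the link-validity item could simply issue HEAD requests against the candidate download links. The URL, test name, and use of pytest-asyncio below are assumptions for the sketch, not the actual pudl-archiver API:
import asyncio
import aiohttp
import pytest

CANDIDATE_LINKS = [
    "https://www.example.com/data/archive-2023.zip",  # placeholder URL, not a real partition
]

async def _head_ok(session: aiohttp.ClientSession, url: str) -> bool:
    # True if the server answers the HEAD request with a non-error status.
    async with session.head(url, allow_redirects=True) as resp:
        return resp.status < 400

@pytest.mark.asyncio
async def test_download_links_are_valid():
    # Check every candidate link concurrently and fail if any returns an error.
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(_head_ok(session, url) for url in CANDIDATE_LINKS))
    assert all(results), "one or more download links returned an error status"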
@zschira Are there unit or integration tests that you imagine falling under this issue that wouldn't be covered by what's described in #70?
|
gharchive/issue
| 2023-01-30T21:55:51 |
2025-04-01T04:56:15.778307
|
{
"authors": [
"zaneselvans",
"zschira"
],
"repo": "catalyst-cooperative/pudl-archiver",
"url": "https://github.com/catalyst-cooperative/pudl-archiver/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1350138990
|
Are some CEMS UTC datetimes off by an hour?
In the CEMS transformation step, the reported operating hour is converted from local time to UTC time based on each plant's local timezone reported in plants_entity_eia. However, I am wondering if in some cases we are currently changing the data to the wrong UTC time. For the Open Grid Emissions project, we convert datetime_utc back to datetime_local based on each plant's reported local timezone from plants_entity_eia so that we can assign a monthly report_date to each column that can be used to match the CEMS data with the monthly totals reported in EIA-923. Because we want a complete timeseries for each unit-month, we drop any observations in unit-months with incomplete hourly data. During this process, I've found that in many cases, incomplete months contain only a single hourly value at the very end of the month. This made me suspicious (why would a plant only report the last hour of a month?), and made me wonder if maybe the datetime is off by an hour.
I would have to look into specific examples to see if this is actually the case, but here is my hypothesis about what might be happening:
Because the raw CEMS data is reported by state-month, I am wondering if the op_hour column represents the local time for each plant, or a uniform local time for the entire state. In states that span multiple time zones (AZ, OR, ID, ND, SD, NE, KS, TX, KY, TN, IN, FL), this could make a difference. If the reported operating hour is uniform for the entire state, then the transform step would need to first convert local reported time to local plant time, then to UTC.
To check into this I need to:
Examine specific examples of unit-months with only a single hour reported at the end of the month
Find out where these units are located
Look at the raw CEMS data to see if these single hours are present there
Try to look into the CEMS documentation about what the operating hour represents
Other possible explanations:
Maybe some issue with DST?
Is the version of plants_entity_eia used to convert to UTC different from the one I'm using to convert back?
See https://github.com/singularity-energy/open-grid-emissions/issues/194
So after looking into this, I discovered that the reason I was seeing months with single-hour data points for some units is that the raw CEMS files from the FTP site report all data in local standard time (not local prevailing time). So when these data are converted to local prevailing time, there are cases (generally for plants that only report during the ozone season and have data through Sept 30 at 11pm) when there is a single hour of data for October 1 at midnight in local prevailing time.
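To make the conversion concrete, here is a minimal pandas sketch of the idea (not the actual pudl.transform.epacems code): because the source stamps are in local standard time, a single fixed UTC offset per plant is enough, with no DST handling.
import pandas as pd

def local_standard_to_utc(op_datetime: pd.Series, std_utc_offset_hours: int) -> pd.Series:
    # Shift naive local-standard-time stamps to UTC by removing the fixed offset.
    # std_utc_offset_hours is e.g. -5 for Eastern Standard Time; since the source
    # data never observes DST, a constant offset is sufficient.
    return (op_datetime - pd.Timedelta(hours=std_utc_offset_hours)).dt.tz_localize("UTC")

# Example: 2021-07-01 13:00 EST (UTC-5 even in July) maps to 2021-07-01 18:00 UTC.
stamps = pd.Series(pd.to_datetime(["2021-07-01 13:00:00"]))
print(local_standard_to_utc(stamps, -5))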
I also did not find any evidence that there were a lot of plants located in the states that have 2 time zones that were off by an hour. However, I still want to double check with EPA the process they use for converting timezone data for each plant.
I accidentally closed this issue when I just meant to comment so re-opening.
One thing I'm noticing is that in pudl.transform.epacems, it assumes that the reported operating hour is the plant's local standard time (rather than the state's local standard time). This is something I want to confirm with EPA before closing this.
Ugh. Timezones. 😵💫
According to email communication with the EPA CAMPD team, "OP_HOUR represents each plant's local standard time. Therefore, the OP_HOUR data, including the FTP datasets, correspond to each facility's local standard time even if the data set is for a state which uses two different time zones."
This confirms that the current transformation steps for the CEMS data are correct.
Whew!
|
gharchive/issue
| 2022-08-24T23:53:20 |
2025-04-01T04:56:15.785969
|
{
"authors": [
"grgmiller",
"zaneselvans"
],
"repo": "catalyst-cooperative/pudl",
"url": "https://github.com/catalyst-cooperative/pudl/issues/1864",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1065058381
|
pydoc.locate cannot locate modules in local dir
Sometimes pydoc.locate fails to locate modules in the local directory, probably because the working directory changes to something else.
HotFix:
PYTHONPATH=.:$PYTHONPATH catalyst-dl run -C config.yml
https://github.com/python/cpython/blob/main/Lib/pydoc.py#L1721 -> https://github.com/python/cpython/blob/af9ee57b96cb872df6574e36027cc753417605f9/Lib/importlib/_bootstrap.py#L1167
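A minimal Python sketch of the same workaround, applied from code instead of the environment (the dotted module path below is hypothetical):
import os
import sys
from pydoc import locate

def locate_local(path: str):
    # Ensure the current working directory is importable before resolving the
    # dotted name; equivalent to PYTHONPATH=.:$PYTHONPATH on the command line.
    cwd = os.getcwd()
    if cwd not in sys.path:
        sys.path.insert(0, cwd)
    return locate(path)

# Example: resolves my_experiment.model.Net if my_experiment/ lives in the cwd.
obj = locate_local("my_experiment.model.Net")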
|
gharchive/issue
| 2021-11-27T14:46:48 |
2025-04-01T04:56:15.794627
|
{
"authors": [
"bagxi"
],
"repo": "catalyst-team/hydra-slayer",
"url": "https://github.com/catalyst-team/hydra-slayer/issues/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1997093015
|
[Discussion] Extension's Structure Proposal
This issue is to discuss the extension structure.
With the first proposal I try to take into consideration all the ideas in #1, the metadata in other standards as suggested in #3, and some issues reported in nwb-schema:
(1) https://github.com/NeurodataWithoutBorders/nwb-schema/issues/538
(2) https://github.com/NeurodataWithoutBorders/nwb-schema/issues/517
(3) https://github.com/NeurodataWithoutBorders/nwb-schema/issues/431
(4) https://github.com/NeurodataWithoutBorders/nwb-schema/issues/406
(5) https://github.com/NeurodataWithoutBorders/nwb-schema/issues/343
(6) https://github.com/NeurodataWithoutBorders/pynwb/issues/1736
NB:
The schemes are still WIP: I will add more details on a written form later
Reference arrows have been left out of the schema for a more compact visualization (they will be included in the final version)
OptogeneticSeries link to Microscope and LightSource represented in the schema for the MicroscopySeries
Ophys Acquisition
Optogenetics
it also takes into account the extensions for patterned photostimulation: https://github.com/catalystneuro/ndx-holographic-stimulation, https://github.com/histedlab/ndx-photostim
I have been meaning to start this discussion for the last month, but it was not until last week that I met with the members of the Cladinin group.
Recently, the Cladinin lab came up with a proposal for doing imaging registration to standard atlases called Bifrost. While the details of how to do registration are complex, I think that we can move the NWB format further by having at least the ability to express the most basic of spatial coordinate transformations: the affine transform.
In fMRI neuroimaging the transformation is used to express how the data in voxel space (i.e. the data as it is) could be transformed into real-world coordinates. A good summary can be found in the following tutorial, but the idea is simple: every image/video has a matrix of the following kind:
That matrix is used to express the rotations and translations that would be necessary to map the data as it is (the voxel or pixel space) into lab coordinates.
Right now, I think that the current elements of the ImagingPlane map to the following elements of the Affine matrix formulation for expressing real world coordinates:
origin_coords: maps to the translation part of the matrix
grid_spacing: maps to the diagonal part of the matrix (the scaling).
As you can see, we are missing the ability to express rotations, which usually affect the non-diagonal terms of the matrix (and also the diagonal). That is, having the concept of the affine matrix will generalize to our fields and expand the expressive power of the proposed ImagingSpace concept. I think this is a useful concept to have.
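Just as an illustration of the mapping described above (not part of any proposed schema), a rough numpy sketch: origin_coords fills the translation column, grid_spacing fills the diagonal (scaling), and a rotation matrix would fill the off-diagonal terms that the current ImagingPlane fields cannot express.
import numpy as np

def build_affine(origin_coords, grid_spacing, rotation=np.eye(3)):
    # 4x4 homogeneous affine mapping voxel indices to world coordinates.
    affine = np.eye(4)
    affine[:3, :3] = rotation @ np.diag(grid_spacing)  # scaling plus optional rotation
    affine[:3, 3] = origin_coords                      # translation
    return affine

def voxel_to_world(affine, ijk):
    # Apply the affine to a voxel index (i, j, k).
    return (affine @ np.append(ijk, 1.0))[:3]

# Example: 2 um isotropic spacing, origin at (10, 20, 30) um, no rotation.
A = build_affine(origin_coords=[10.0, 20.0, 30.0], grid_spacing=[2.0, 2.0, 2.0])
print(voxel_to_world(A, [5, 5, 5]))  # prints [20. 30. 40.]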
Another concept that maybe needs refinement is the use and scope of the reference frame field. Currently we have reference_frame, which expresses, as a description, what the origin is (e.g. Bregma). For example, here is the origin as Bregma in the Paxinos atlas.
Which is what I think inspired the field at the time. However, in neuroimaging it seems more common to agree that the axes are given anatomically (right-left, posterior-anterior, inferior-superior) and then there are different conventions for the signs of the given axes. Check this image:
That is, if we are going to use the reference_frame field or something else as a description of what the axes mean, maybe we should update the string to contain a set of working examples, as the current one is not very descriptive in my opinion.
With regard to the light source, what are the intended values to record under power and intensity? Is it the power/intensity of the light source itself, or the power/intensity measured under the objective, which in most cases is the most meaningful to record? I assume a description of this would be added in the doc, but maybe it could be made more explicit by, for example, considering names like source_power or target_power/power_at_target.
For the topic of coordinates a good discussion can be found here:
https://github.com/ome/ngff/pull/138
|
gharchive/issue
| 2023-11-16T15:12:54 |
2025-04-01T04:56:15.808479
|
{
"authors": [
"alessandratrapani",
"ehennestad",
"h-mayorquin"
],
"repo": "catalystneuro/ndx-microscopy",
"url": "https://github.com/catalystneuro/ndx-microscopy/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2458876503
|
Deathemonic rofi launcher does not close when esc is pressed
I'm using the Deathemonic launcher bin on Hyprland 0.41.2-3.
If I open it with my shortcut (super + R), it does not close until I open an app, even when Esc is pressed.
If this is a Rofi issue I'm very much ready to go report it there, but I don't really know what the Deathemonic bins even are, so I'm asking here first.
It can be closed with alt+f1; it seems like you can change it in bin/launcher.
Oh yep, I see. I might suggest possibly removing that line (so that it's consistent with the runner) or adding a documentation mention for that. I'm willing to draft a small PR for either of those.
Try using rofi-wayland instead of the regular rofi for wayland compositors.
This was on rofi-wayland. Using normal rofi through XWayland didn't work very well for me.
|
gharchive/issue
| 2024-08-10T03:36:54 |
2025-04-01T04:56:15.840130
|
{
"authors": [
"Iris-TheRainbow",
"NotMugil",
"xirzo"
],
"repo": "catppuccin/rofi",
"url": "https://github.com/catppuccin/rofi/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1203796341
|
🛑 Artfos ERP is down
In 1245aac, Artfos ERP (https://intranet.artfos.com.ar/equipo/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Artfos ERP is back up in b4955fc.
|
gharchive/issue
| 2022-04-13T21:12:56 |
2025-04-01T04:56:15.847115
|
{
"authors": [
"cavalicenti"
],
"repo": "cavalicenti/upptime",
"url": "https://github.com/cavalicenti/upptime/issues/294",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1213778275
|
🛑 Janax Soft is down
In 9b558ae, Janax Soft (https://www.janaxsoftware.com.ar/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Janax Soft is back up in 0f9a227.
|
gharchive/issue
| 2022-04-24T21:48:33 |
2025-04-01T04:56:15.849561
|
{
"authors": [
"cavalicenti"
],
"repo": "cavalicenti/upptime",
"url": "https://github.com/cavalicenti/upptime/issues/700",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
335432272
|
NextPath is broken on Postgres
Description
https://discourse.cayley.io/t/path-save-lost-some-data-when-using-sql-backends/1192
Steps to reproduce the issue:
Initialize a clean PG instance
Load test data
Execute the query:
g.V("<dani>", "<bob>").Save("<follows>", "target").All()
Received results:
{"id":"bob","target":"fred"}
{"id":"dani","target":"greg"}
Expected results:
{"id":"bob","target":"fred"}
{"id":"dani","target":"bob"}
{"id":"dani","target":"greg"}
Environment details:
Backend database: postgres
Related to a new optimizer that flattens NodesFrom. Because of this, NextPath stops working - it's not implemented for the base Select iterator.
|
gharchive/issue
| 2018-06-25T14:29:20 |
2025-04-01T04:56:15.854363
|
{
"authors": [
"dennwc"
],
"repo": "cayleygraph/cayley",
"url": "https://github.com/cayleygraph/cayley/issues/724",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
582414926
|
chore: bdd-sbhg-1584371985 to 0.0.1
chore: Promote bdd-sbhg-1584371985 to version 0.0.1
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approvers:
If they are not already assigned, you can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
|
gharchive/pull-request
| 2020-03-16T16:03:11 |
2025-04-01T04:56:15.885186
|
{
"authors": [
"cjxd-bot-test"
],
"repo": "cb-kubecd/environment-pr-183-2-boot-vault-gke-staging",
"url": "https://github.com/cb-kubecd/environment-pr-183-2-boot-vault-gke-staging/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
595548304
|
chore: bdd-spring-1586226233 to 0.0.1
chore: Promote bdd-spring-1586226233 to version 0.0.1
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approvers:
If they are not already assigned, you can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
|
gharchive/pull-request
| 2020-04-07T02:37:34 |
2025-04-01T04:56:15.888943
|
{
"authors": [
"cjxd-bot-test"
],
"repo": "cb-kubecd/environment-pr-195-26-boot-gke-production",
"url": "https://github.com/cb-kubecd/environment-pr-195-26-boot-gke-production/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
613660412
|
chore: bdd-nh-1588804450 to 0.0.1
chore: Promote bdd-nh-1588804450 to version 0.0.1
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approvers:
If they are not already assigned, you can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
|
gharchive/pull-request
| 2020-05-06T22:50:32 |
2025-04-01T04:56:15.919099
|
{
"authors": [
"cjxd-bot-test"
],
"repo": "cb-kubecd/environment-pr-509-22-bdd-frontend-staging",
"url": "https://github.com/cb-kubecd/environment-pr-509-22-bdd-frontend-staging/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
567767224
|
Implement apS function
Hi!
I implemented the apS function from the paper. I wasn't sure if it was deliberately omitted or not? Maybe because you had in mind modelling rigid and non-rigid Selectives separately, or perhaps there was some other reason?
If you think this might be a useful contribution, let me know and I'll expand on the testing a little further before submitting something for review. But if not, don't feel obliged: I'm just satisfying my own curiosities!
Sorry, I completely missed this until now!
@icmurray Sorry for the slow review. This looks great. Is there a reason why it’s still a draft PR? Are you still working on it?
@cb372 thanks for reviewing!
I left the PR in a draft state because I had the niggling feeling that something was missing! But it's not come to me, so I guess either it's good enough or it wasn't important.
My drive-by, unsolicited suggestion: assume good intentions and merge.
"Merge first and ask questions later", if you will. 😄
|
gharchive/pull-request
| 2020-02-19T18:46:55 |
2025-04-01T04:56:15.921841
|
{
"authors": [
"cb372",
"dwijnand",
"icmurray"
],
"repo": "cb372/cats-selective",
"url": "https://github.com/cb372/cats-selective/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1643815878
|
Can't write Assert.xyz for a Map where the expected value of a key is 'null'
6.14.3
Expected behavior
The provided test case shouldn't fail - at least I think it should not fail.
Actual behavior
The test case fails.
Is the issue reproducible on runner?
[x] NetBeans
Test case sample
@Test
public void thisSucceeds() {
var actual = new HashMap<String, String> ();
actual.put("key", "1");
var expected = new HashMap<String, String>();
expected.put("key", "1");
Assert.assertEqualsDeep(actual, expected);
}
@Test
public void thisFailsButShouldAlsoSucceed() {
var actual = new HashMap<String, String> ();
actual.put("key", null);
var expected = new HashMap<String, String>();
expected.put("key", null);
Assert.assertEqualsDeep(actual, expected);
}
Problem [Assert.assertEqualsDeep, Assert.java#L2063]
The method tries to retrieve the data type of the expected value via expectedValue.getClass(), which fails since the expected value of the key is null.
The latest released version of TestNG is 7.7.1. Please retry using that and let us know if the problem exists.
Sorry, can't test 7.7.1 since swapping the libs shipped with NetBeans 17 (6.14.3) does not allow me to run any tests...
Exception in thread "main" org.testng.TestNGException: Couldn't find a constructor in class org.testng.reporters.VerboseReporter
at org.testng.internal.objects.InstanceCreator.newInstance(InstanceCreator.java:57)
at org.testng.ITestObjectFactory.newInstance(ITestObjectFactory.java:10)
at org.testng.TestNG.setListenerClasses(TestNG.java:679)
at org.testng.TestNG.configure(TestNG.java:1519)
at org.testng.TestNG.privateMain(TestNG.java:1405)
at org.testng.TestNG.main(TestNG.java:1378)
The method assertEqualsDeep(Map<?, ?> actual, Map<?, ?> expected, String message) has changed from 6.14.3 to 7.7.1 but both contain the expectedValue.getClass() call without checking that expectedValue is not null.
Method:
https://github.com/cbeust/testng/blob/89dc5845fcb46c26af187e50ea907a7382d06e72/testng-asserts/src/main/java/org/testng/Assert.java#L2143
https://github.com/cbeust/testng/blob/89dc5845fcb46c26af187e50ea907a7382d06e72/testng-asserts/src/main/java/org/testng/Assert.java#L2164
This method has the same problem, altough its much more unlikely that a set will contain a 'null' value...
https://github.com/cbeust/testng/blob/89dc5845fcb46c26af187e50ea907a7382d06e72/testng-asserts/src/main/java/org/testng/Assert.java#L2044
https://github.com/cbeust/testng/blob/89dc5845fcb46c26af187e50ea907a7382d06e72/testng-asserts/src/main/java/org/testng/Assert.java#L2063
I've created a Maven test project with NetBeans and added TestNG 7.7.1 and my test case... and it's running fine 😂 Don't know why my Ant project throws exceptions at me when I swapped the libs, but Maven works fine, and the test case too.
Objects.equals(...) in 7.7.1 handles the map stuff (and also null values) while back in 6.13.4 there was just actual == expected so execution continues and reaches the problematic code section.
closed as not reproducible/fixed in 7.7.1
@mictru - Thank you for the confirmation. Since you mention ant, am guessing that somewhere in your libs there perhaps is TestNG already available (and its an older version) or maybe you have a jar in your libs that contains a shaded TestNG jar within it. All these are just naive guesses since I have never used ant as a build tool.
|
gharchive/issue
| 2023-03-28T12:10:08 |
2025-04-01T04:56:15.951751
|
{
"authors": [
"krmahadevan",
"mictru"
],
"repo": "cbeust/testng",
"url": "https://github.com/cbeust/testng/issues/2891",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
70665470
|
Drop ant from the project
As the project clearly moved to Maven a long time ago, I think the Ant references and files could be removed from the project.
Of course, useful Ant targets must be moved to Maven.
+1
Agreed, ant needs to go.
Done
|
gharchive/issue
| 2015-04-24T11:27:09 |
2025-04-01T04:56:15.953993
|
{
"authors": [
"FibreFoX",
"cbeust",
"juherr"
],
"repo": "cbeust/testng",
"url": "https://github.com/cbeust/testng/issues/650",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
77986977
|
Is this project finished?
I tried registration but I got an error.
Error:
An exception occurred while executing 'INSERT INTO fos_user (username, username_canonical, email, email_canonical, enabled, salt, password, last_login, locked, expired, expires_at, confirmation_token, password_requested_at, roles, credentials_expired, credentials_expire_at) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)' with params ["oguzhan", "oguzhan", "oguzhantasc@gmail.com", "oguzhantasc@gmail.com", 1, "gmtlydkij6gckwkggk00cgoswgs0osc", "bhm55APgIomOiNvDLJt9LhFVJIkq7e8FNA1llWW\/U7f2+mFbIfYDWO3iFJTQ6uLF00YblBB3Y\/0bUHzUG49x5w==", null, 0, 0, null, null, null, "a:1:{i:0;s:11:\"ROLE_LEADER\";}", 0, null]:
Hi,
That's already published, but the error is because the file doesn't exist in your copy. You have to add app/config/parameters.yml with your own config. I'm adding the SQL queries, therefore you have to update your MySQL DB.
Hello,
This project is complete, but the parameters.yml file is missing from the files you have. You need to open your own parameters.yml file, write your own settings into it, and update your database.
Hi again, this error is on your site - http://www.gorev.li -
Let's check it out together; username: testt, pass: 12345, and we can test together on this link.
http://www.gorev.li/dashboard/#&togetherjs=7r8u6E04Qc
Log-in after click this link http://www.gorev.li/dashboard/#&togetherjs=7r8u6E04Qc
nice work :+1:
Lastly, if you fix this area then everything is nice. http://hizliresim.com/mP3nyP
|
gharchive/issue
| 2015-05-19T08:13:11 |
2025-04-01T04:56:15.996358
|
{
"authors": [
"ccali14",
"oguzhantasci"
],
"repo": "ccali14/Gorevli-Project",
"url": "https://github.com/ccali14/Gorevli-Project/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1645838540
|
Add support for properties
Hi, great tool, thank you for your work !
Would it be possible to also search for property terms ?
I see in term_collector.py that you filter out PropertyClass terms:
if not isinstance(ontology_term, PropertyClass) and ontology_term is not Thing and ontology_term is not Nothing:
Would it be possible to provide arguments to control this behavior ?
Hi @nleguillarme — Thank you, we are glad to hear that!
Yes, we can certainly support search for property terms. To help us design a suitable solution, would you mind giving us a general idea of your use case? For example, are there scenarios where you would want to search for only properties, or would an argument to search property terms (in addition to classes) suffice?
Thank you for the feedback, this is very helpful. We are working on a solution which will be included in a next release. I'll update you here once we have a pre-release implementation that could be experimented with.
Hello,
We have added the functionality as requested in the newest pre-release (2.2.0). Please take a look and let us know if the functionality and documentation is sufficient. We will add it to PyPi in the next release, if we have your approval.
Hello
I've just tested it and it seems to work as expected.
Thank you for adding this functionality.
|
gharchive/issue
| 2023-03-29T13:39:59 |
2025-04-01T04:56:16.000915
|
{
"authors": [
"nleguillarme",
"paynejason",
"rsgoncalves"
],
"repo": "ccb-hms/ontology-mapper",
"url": "https://github.com/ccb-hms/ontology-mapper/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1678273219
|
chatgpt bugbounty usage
Adding repo github prompts for bugbounty and pentesting to experimental list
Thank you for the fantastic contribution.
Thank you for the repository
|
gharchive/pull-request
| 2023-04-21T10:17:07 |
2025-04-01T04:56:16.021466
|
{
"authors": [
"cckuailong",
"tin-z"
],
"repo": "cckuailong/awesome-gpt-security",
"url": "https://github.com/cckuailong/awesome-gpt-security/pull/1",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1902346004
|
How to set vina score to dock complex with metals?
I heard that there is a scoring function that can be used for metal ion (Mg, Se, Zn...) docking in AutoDock Vina, but I couldn't find it in the manual.
Is there anything to do with v = Vina(sf_name='vina')?
The vina scoring function, which is the default (e.g., v = Vina()), handles metals. There are zinc-specific potentials, but those are for the autodock4 scoring function.
Though the default scoring function can deal with metals, the protein.pdb cannot be converted into a pdbqt file if it contains Zn, Mg...
For example, 1ew9_pocket.pdb contains Zn and Mg, and the error is:
/ADFRsuite_x86_64Linux_1.0/bin/prepare_receptor -r 1ew9_pocket.pdb -A hydrogens -o 1ew9_pocket.pdbqt
' ' apparently composed of not std residues. Deleting
Traceback (most recent call last):
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/AutoDockTools/Utilities24/prepare_receptor4.py", line 216, in <module>
dict=dictionary)
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/AutoDockTools/MoleculePreparation.py", line 558, in __init__
version=version, delete_single_nonstd_residues=delete_single_nonstd_residues)
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/AutoDockTools/MoleculePreparation.py", line 124, in __init__
self.repairMol(mol, self.repair_type_list)
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/AutoDockTools/MoleculePreparation.py", line 174, in repairMol
self.newHs = self.addHydrogens(mol)
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/AutoDockTools/MoleculePreparation.py", line 187, in addHydrogens
HB.addHydrogens(mol)
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/MolKit/hydrogenBuilder.py", line 61, in addHydrogens
babel.assignHybridization(mol.allAtoms)
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/PyBabel/atomTypes.py", line 136, in assignHybridization
self.valence_three()
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/PyBabel/atomTypes.py", line 236, in valence_three
elif self.count_free_ox(a) >= 2: a.babel_type="Cac"
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/PyBabel/atomTypes.py", line 167, in count_free_ox
self.count_heavy_atoms(bonded_atom) == 1:
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/PyBabel/atomTypes.py", line 157, in count_heavy_atoms
if bonded_atom.babel_type[0] == 'H': count = count + 1
File "/data2/rjli/ADFRsuite-1.0/ADFRsuite_x86_64Linux_1.0/CCSBpckgs/MolKit/molecule.py", line 409, in __getattr__
raise AttributeError('member %s not found'%member)
AttributeError: member babel_type not found
If so, how can we use vina to dock protein with metals? We even cannot get their pdbqt files!
@diogomart Dear developer, I ran into the same issue. When there are metal ions such as Mg, ADFR can't convert the structure into a pdbqt file.
So it is impossible to use Vina to dock a complex with metals.
Hi @Kerro-junior and @Dadiao-shuai,
You should be able to get past the issue if you are not adding hydrogens in prepare_receptor. Please see an example here: https://github.com/ccsb-scripps/AutoDock-Vina/issues/241#issuecomment-1734266302
If you think your problem is structure-specific, it would be really helpful if you could share your files so others can reproduce the error.
|
gharchive/issue
| 2023-09-19T07:07:47 |
2025-04-01T04:56:16.120307
|
{
"authors": [
"Dadiao-shuai",
"Kerro-junior",
"RJ-Li",
"diogomart",
"rwxayheee"
],
"repo": "ccsb-scripps/AutoDock-Vina",
"url": "https://github.com/ccsb-scripps/AutoDock-Vina/issues/243",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1686401259
|
Update pom for release
Fixes #32
Changes made in small increments for better review.
Please squash all changes before merging.
@aalmiray - sorry, it looks like this may need a rebase now.
It's OK. I'll take care of it in a few.
|
gharchive/pull-request
| 2023-04-27T08:42:24 |
2025-04-01T04:56:16.234221
|
{
"authors": [
"aalmiray",
"afrittoli"
],
"repo": "cdevents/sdk-java",
"url": "https://github.com/cdevents/sdk-java/pull/37",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
142538257
|
How to test books locally?
After editing the py files under books, I'd like to test locally whether the desired book can be generated successfully. How should I go about this?
Install the GAE SDK; it comes with a local debugging environment. Open the KindleEar project with it, then add module-worker.yaml in the launcher's extra flags and you can debug. Alternatively, you can use dev_appserver.py to open the local project's app.yaml/module-worker.yaml; it's the same thing, and I always use dev_appserver.py. As for sending out the generated books, there are several options: either set up a local SMTP debug server, or use the sendmail (ported from Linux) approach.
I use the local SMTP debug server approach.
If that feels like too much trouble, just upload to the Google servers and debug by uploading a few more times.
I think the quota is something like 1000 uploads per day, which is plenty for you.
OK, thanks, I'll just upload and debug then :joy:
@cdhigh Since I want to modify the frontend, I tried to debug KindleEar locally with the Launcher. With the configuration you described I can access it, but the page shows "not found", and the log shows:
INFO 2016-04-28 15:11:45,813 module.py:787] worker: "GET / HTTP/1.1" 404 9
What is going on? Is there anything else I need to configure?
The worker module and the default module listen on different ports; you entered the worker module's port number, which is why you got the error.
@cdhigh I see, thanks for the pointer, it's sorted now.
|
gharchive/issue
| 2016-03-22T03:38:07 |
2025-04-01T04:56:16.255341
|
{
"authors": [
"cdhigh",
"miaowm5",
"runbing"
],
"repo": "cdhigh/KindleEar",
"url": "https://github.com/cdhigh/KindleEar/issues/274",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1901951312
|
feat(deps): upgrade runtime dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-runtime-dependencies-1.x" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 1403
Questions ?
Please refer to the Backport tool documentation
|
gharchive/pull-request
| 2023-09-19T00:07:26 |
2025-04-01T04:56:16.262925
|
{
"authors": [
"cdk8s-automation"
],
"repo": "cdk8s-team/cdk8s-cli",
"url": "https://github.com/cdk8s-team/cdk8s-cli/pull/1403",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1888781507
|
chore(deps): upgrade configuration
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-configuration-k8s-26-main" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 2835
Questions ?
Please refer to the Backport tool documentation
|
gharchive/pull-request
| 2023-09-09T15:05:50 |
2025-04-01T04:56:16.265561
|
{
"authors": [
"cdk8s-automation"
],
"repo": "cdk8s-team/cdk8s-plus",
"url": "https://github.com/cdk8s-team/cdk8s-plus/pull/2835",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
953382121
|
chore(deps): upgrade dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-dependencies" workflow
AWS CodeBuild CI Report
CodeBuild project: GitHubPipelineAutoBuildProj-XK8ItpxnSow5
Commit ID: 0dd0ca3aae177ff66b685b2cfab6ca19e8104166
Result: SUCCEEDED
Build Logs (available for 30 days)
Powered by github-codebuild-logs, available on the AWS Serverless Application Repository
|
gharchive/pull-request
| 2021-07-27T00:05:37 |
2025-04-01T04:56:16.268853
|
{
"authors": [
"aws-cdk-automation"
],
"repo": "cdklabs/aws-delivlib",
"url": "https://github.com/cdklabs/aws-delivlib/pull/952",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
929608979
|
Enhance survey filters in Django admin
Summary | Résumé
Trello
https://trello.com/c/eDGRMZDD/868-refine-filters-for-survey-send-feature
New filters on registrant including new columns
Default hardcoded 14 day filter
Refine filters for survey send feature
|
gharchive/pull-request
| 2021-06-24T21:01:41 |
2025-04-01T04:56:16.297155
|
{
"authors": [
"dorianjp"
],
"repo": "cds-snc/covid-alert-portal",
"url": "https://github.com/cds-snc/covid-alert-portal/pull/654",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1637504262
|
chore: disable Salesforce feature flag
Summary
Update the Staging environment variables to disable the Salesforce feature flag. This is being done to confirm that the API and admin still work as expected with the feature disabled.
What happens when your PR merges?
Prefix the title of your PR:
fix: - tag main as a new patch release
feat: - tag main as a new minor release
BREAKING CHANGE: - tag main as a new major release
[MANIFEST] - tag main as a new patch release and deploy to production
chore: - use for changes to non-app code (ex: GitHub actions)
Alternatively, change the VERSION file - this will not create a new tag, but rather will release the tag in VERSION to production.
What are you changing?
[ ] Releasing a new version of Notify
[ ] Changing kubernetes configuration
Provide some background on the changes
Give details ex. Security patching, content update, more API pods etc
If you are releasing a new version of Notify, what components are you updating
[ ] API
[ ] Admin
[ ] Documentation
[ ] Document download API
Checklist if releasing new version:
[ ] I made sure that the changes are as expected in Notify staging
[ ] I have checked if the docker images I am referencing exist
[ ] api lambda (requires Notification-Production / AdministratorAccess login)
[ ] api k8s
[ ] admin
[ ] documentation
[ ] document download API
Checklist if making changes to Kubernetes:
[ ] I know how to get kubectl credentials in case it catches on fire
After merging this PR
[ ] I have verified that the tests / deployment actions succeeded
[ ] I have verified that any affected pods were restarted successfully
[ ] I have verified that I can still log into Notify production
[ ] I have verified that the smoke tests still pass on production
[ ] I have communicated the release in the #notify Slack channel.
Diff
$ make diff-staging
Archive: .previous.env.zip
inflating: .env
Archive: .env.zip
inflating: .env
36c36
< FF_SALESFORCE_CONTACT=true
---
> FF_SALESFORCE_CONTACT=false
|
gharchive/pull-request
| 2023-03-23T13:02:08 |
2025-04-01T04:56:16.306568
|
{
"authors": [
"patheard"
],
"repo": "cds-snc/notification-manifests",
"url": "https://github.com/cds-snc/notification-manifests/pull/1549",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1420973794
|
Modify Templates Table to Include UpdatedAt Field
Acceptance Criteria
[ ] templates table includes an UpdatedAt timestamp field which is updated when the record is modified.
[ ] modify return from database so that this field is available in the form client
[ ] clean migration with no loss of data
@thiessenp-cds I think this was implemented as part of the work you have done around /myforms. Is that correct? If yes, then we could close the ticket.
Yup, updatedAt was added to the Prisma templates file. So you can mark that off as complete :)
|
gharchive/issue
| 2022-10-24T14:58:10 |
2025-04-01T04:56:16.309092
|
{
"authors": [
"Moro-Code",
"craigzour",
"thiessenp-cds"
],
"repo": "cds-snc/platform-forms-client",
"url": "https://github.com/cds-snc/platform-forms-client/issues/1168",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2436116595
|
A few broken text for Logic
Description
There are some areas with broken text around the logic functionality - see screenshots.
Steps to reproduce
Area one:
Go to "Add a rule"
Scroll down to the bottom of the Modal
See error
Area two:
Go to the Branching tab
See error in panel
Details
Browser: Chrome
Operating system: Mac
Environment: Staging
Language: English
Expected behaviour
Plain language text
Screenshots or videos
When you open up the modal:
Navigating to a Page in Logic view:
Opening up a new form and navigating to the Logic tab:
A few more instances:
|
gharchive/issue
| 2024-07-29T18:52:45 |
2025-04-01T04:56:16.314705
|
{
"authors": [
"Abi-Nada",
"anikbrazeau"
],
"repo": "cds-snc/platform-forms-client",
"url": "https://github.com/cds-snc/platform-forms-client/issues/4090",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1365567424
|
[Snyk] Upgrade @aws-sdk/client-sqs from 3.74.0 to 3.150.0
This PR was automatically created by Snyk using the credentials of a real user.Snyk has created this PR to upgrade @aws-sdk/client-sqs from 3.74.0 to 3.150.0.
:information_source: Keep your dependencies up-to-date. This makes it easier to fix existing vulnerabilities and to more quickly identify and fix newly disclosed vulnerabilities when they affect your project.
The recommended version is 31 versions ahead of your current version.
The recommended version was released 23 days ago, on 2022-08-15.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open upgrade PRs.
For more information:
🧐 View latest project report
🛠 Adjust upgrade PR settings
🔕 Ignore this dependency or unsubscribe from future upgrade PRs
Closed by #1001
|
gharchive/pull-request
| 2022-09-08T05:45:55 |
2025-04-01T04:56:16.319641
|
{
"authors": [
"bryan-robitaille",
"sastels"
],
"repo": "cds-snc/platform-forms-client",
"url": "https://github.com/cds-snc/platform-forms-client/pull/1009",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
285199383
|
nature.com
apparently it was not using all of nature , only "open search" in the past , so..
https://www.nature.com/search?q=autism&title=autism&order=relevance&date_range=last_30_days
https://www.nature.com/search?q=autism&title=autism&order=date_desc
https://github.com/ceberous/AutismMastadonBot/commit/409783bd9398df18ce0b9cd95819e610bdf9dd30
|
gharchive/issue
| 2017-12-30T12:03:40 |
2025-04-01T04:56:16.327317
|
{
"authors": [
"ceberous"
],
"repo": "ceberous/AutismMastadonBot",
"url": "https://github.com/ceberous/AutismMastadonBot/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1709114502
|
🛑 tanecni-divadlo.cz is down
In 8b979ca, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 3382909.
|
gharchive/issue
| 2023-05-14T23:43:38 |
2025-04-01T04:56:16.330317
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/13977",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1709912428
|
🛑 tanecni-divadlo.cz is down
In 730ca20, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in c777d88.
|
gharchive/issue
| 2023-05-15T11:38:30 |
2025-04-01T04:56:16.333290
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/14000",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1714403697
|
🛑 tanecni-divadlo.cz is down
In 47f2f0c, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 71e6c50.
|
gharchive/issue
| 2023-05-17T18:08:41 |
2025-04-01T04:56:16.336247
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/14119",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1716215237
|
🛑 tanecni-divadlo.cz is down
In 77d7bca, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 59b0f8f.
|
gharchive/issue
| 2023-05-18T20:23:36 |
2025-04-01T04:56:16.339391
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/14179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1763983884
|
🛑 tanecni-divadlo.cz is down
In db9df2f, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 946a7bb.
|
gharchive/issue
| 2023-06-19T18:08:37 |
2025-04-01T04:56:16.342322
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/15828",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1783512930
|
🛑 tanecni-divadlo.cz is down
In e0d63c1, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in ffdb4de.
|
gharchive/issue
| 2023-07-01T04:58:41 |
2025-04-01T04:56:16.345539
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/16476",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1790926041
|
🛑 tanecni-divadlo.cz is down
In c56dfaf, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 118ce13.
|
gharchive/issue
| 2023-07-06T06:48:38 |
2025-04-01T04:56:16.348503
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/16734",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1796509159
|
🛑 tanecni-divadlo.cz is down
In 30161d1, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 9a425c2.
|
gharchive/issue
| 2023-07-10T10:58:21 |
2025-04-01T04:56:16.351747
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/16946",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1829072855
|
🛑 tanecni-divadlo.cz is down
In 39caa36, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 346fa79.
|
gharchive/issue
| 2023-07-31T12:22:26 |
2025-04-01T04:56:16.354730
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/17977",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1841720252
|
🛑 tanecni-divadlo.cz is down
In 29d7d87, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in ab86fb8.
|
gharchive/issue
| 2023-08-08T17:27:29 |
2025-04-01T04:56:16.357695
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/18411",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1863331873
|
🛑 tanecni-divadlo.cz is down
In a0bcee0, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in a122df8 after 474 days, 17 hours, 41 minutes.
|
gharchive/issue
| 2023-08-23T13:23:27 |
2025-04-01T04:56:16.360704
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/19201",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1865956425
|
🛑 tanecni-divadlo.cz is down
In 0fea204, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 520
Response time: 10334 ms
Resolved: tanecni-divadlo.cz is back up in b2700e8 after 476 days, 2 hours, 32 minutes.
|
gharchive/issue
| 2023-08-24T22:07:42 |
2025-04-01T04:56:16.363922
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/19277",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1870126876
|
🛑 tanecni-divadlo.cz is down
In 318521b, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 21812a2 after 479 days, 21 hours, 22 minutes.
|
gharchive/issue
| 2023-08-28T17:02:55 |
2025-04-01T04:56:16.366927
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/19479",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1921363987
|
🛑 Bali 2017 is down
In b29855b, Bali 2017 (https://bali-2017.cebre.us/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Bali 2017 is back up in 9ca43e6 after 3 minutes.
|
gharchive/issue
| 2023-10-02T07:08:32 |
2025-04-01T04:56:16.369482
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/20311",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1501303829
|
🛑 tanecni-divadlo.cz is down
In 034140f, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 64cd8f1.
|
gharchive/issue
| 2022-12-17T09:56:45 |
2025-04-01T04:56:16.372471
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/4673",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|