id
stringlengths 4
10
| text
stringlengths 4
2.14M
| source
stringclasses 2
values | created
timestamp[s]date 2001-05-16 21:05:09
2025-01-01 03:38:30
| added
stringdate 2025-04-01 04:05:38
2025-04-01 07:14:06
| metadata
dict |
---|---|---|---|---|---|
12748537
|
jsonp and U+2028/U+2029
As per this web page, there are cases in which a few extra unicode characters need \u escapes. Any chance of adding that?
Generally, anybody home?
Bump.
|
gharchive/issue
| 2013-04-03T11:39:11 |
2025-04-01T04:34:55.001498
|
{
"authors": [
"benson-basis",
"juanibiapina"
],
"repo": "lloyd/yajl",
"url": "https://github.com/lloyd/yajl/issues/99",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
1527946382
|
Revert(web): Mark package as side-effect free
This reverts commit 67d900c7df45c601f9a54f0a3a6b1f1342978582.
My bad did not check enough that the sideEffect in the web package tree-shakes most of the code.
Also thinking about that any default state without using JS should not be broken anyway.
spirit-web-entry has clearly a smaller size when using sideEffects free and I did not notice. Sorry for that.
|
gharchive/pull-request
| 2023-01-10T20:12:45 |
2025-04-01T04:34:55.143827
|
{
"authors": [
"literat"
],
"repo": "lmc-eu/spirit-design-system",
"url": "https://github.com/lmc-eu/spirit-design-system/pull/631",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
282796469
|
Error while loading datta
I was using a 0.6.7 dockerized cerebro to manage five 5.6.2 Elasticsearch nodes but I was getting continuous "Error while loading data" messages.
I have updated the dockerized cerebro to the lasted version: 0.7.2 but I still have the same messages.
Just I can ask for nodes status but I'm not able to perform any other operation.
I can connect to other ElasticSearch without those problems.
How could I troubleshoot this problem?
Thanks and kind regards
We are not even able to ask for the created snapshots
We are creating a daily snapshot of some of our indexes.
Regards
@sonfrau Can you see what the error message says? If you click on the message that should expand. If not, could you share the logs?
In the application.log I'm getting this messages:
2017-12-18 08:28:57,557 - [ERROR] - from application in ForkJoinPool-2-worker-1
Error processing request [path: /overview, body: {"host":"******","username":"******","password":"***********"}]
org.asynchttpclient.exception.RemotelyClosedException: Remotely closed
[error] application - Error processing request [path: /snapshots, body: {"host":"******","username":"******","password":"***********"}]
play.api.libs.json.JsResultException: JsResultException(errors:List((,List(ValidationError(List(error.expected.jsarray),WrappedArray())))))
at play.api.libs.json.JsReadable$$anonfun$2.apply(JsReadable.scala:23)
at play.api.libs.json.JsReadable$$anonfun$2.apply(JsReadable.scala:23)
at play.api.libs.json.JsResult$class.fold(JsResult.scala:73)
at play.api.libs.json.JsError.fold(JsResult.scala:13)
at play.api.libs.json.JsReadable$class.as(JsReadable.scala:21)
Could I give you further information?
@sonfrau It should be safe to change the log level, although I must admit this might not make a lot of difference.
Do you have any error logs on your ES(for the first error posted)?
And for the second, can you give me the output of running these requests against your cluster:
/_cat/indices?format=json
/_snapshot
Good morning,
Ok, then I'm not going to raise the level.
on our ES, I don't see any special error. I share with you last logs, some of them, at the same time we were trying to perform actions on cerebro072
[2017-12-18T08:22:33,251][INFO ][o.e.c.r.a.AllocationService] [Node04] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[index-transactions-2017.12.18][3], [index-transactions-2017.12.18][2]] ...]).
[2017-12-18T08:22:33,539][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [index-transactions-2017.12.18/YPPZ0I1HT2W4V8JjOn5-lw] create_mapping [logevent]
[2017-12-18T08:22:33,543][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [index-core-development-2017.12.18/P04DiEi-Shedd2h6n5USdg] create_mapping [index-core-development]
[2017-12-18T08:23:01,587][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [gateway-2017.12.18/022QYhN6QzGZE_xEHvNT4w] create_mapping [quote]
[2017-12-18T08:23:02,981][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [gateway-2017.12.18/022QYhN6QzGZE_xEHvNT4w] update_mapping [quote]
[2017-12-18T08:23:24,339][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [index-dev-core-cluster-2017.12.18/h0tFvY9kRIuRblKUUDtuaA] update_mapping [fluentd]
[2017-12-18T08:25:53,590][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [index-transactions-2017.12.18/YPPZ0I1HT2W4V8JjOn5-lw] update_mapping [logevent]
[2017-12-18T08:28:48,861][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [gateway-2017.12.18/022QYhN6QzGZE_xEHvNT4w] create_mapping [organizations]
[2017-12-18T08:28:51,002][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [index-common-services-cluster-2017.12.18/wXwmnZeETYefS6WtIz6qmg] update_mapping [fluentd]
[2017-12-18T08:43:36,671][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [index-2017.12.18/4PwY78qbStKoeygDxHw2eg] update_mapping [logevent]
[2017-12-18T08:46:45,947][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [index-2017.12.18/4PwY78qbStKoeygDxHw2eg] update_mapping [logevent]
[2017-12-18T08:54:39,818][INFO ][o.e.c.m.MetaDataCreateIndexService] [Node04] [index-2017.12.18] creating index, cause [auto(bulk api)], templates [template_1, template-index], shards [5]/[2], mappings [_default_]
[2017-12-18T08:55:35,304][INFO ][o.e.c.r.a.AllocationService] [Node04] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[index-2017.12.18][4]] ...]).
[2017-12-18T08:55:35,540][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [index-2017.12.18/MgGjNhO9R-6HoZRdzrscRA] create_mapping [logevent]
[2017-12-18T09:36:48,535][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [index-audit-2017.12.18/j4e-aqO6RBOek--Ig2gHRg] update_mapping [logevent]
[2017-12-18T09:46:49,153][INFO ][o.e.c.m.MetaDataMappingService] [Node04] [index-audit-2017.12.18/j4e-aqO6RBOek--Ig2gHRg] update_mapping [logevent]
I attach the screenshot output gotten of your second request
I guess it's enough for you that detail or you need the whole output.
Thanks and kind regards.
You should know that there is an Nginx Proxy between cerebro072 and our ES cluster.
We need that Proxy to protect our ES cluster.
We have the same infraestructure to protect other ES cluster but we haven't detected the same behaviour.
Regards
While we were checking this issue we have seen for first time a right output
That is to say that cerebro072 is able to show ES cluster. May be there is a timeout response problem, don't you think?
May be the response is to big and there isn't time enough to receive all data.
May be there is some say to raise that timeout response. What do you think?
Sorry, I hadn't seen your first message: "@sonfrau Can you see what the error message says? If you click on the message that should expand. If not, could you share the logs?"
This is the detail I get:
When I ask for overview:
Error while loading data
{
"error": "Failure for [_stats/docs,store]"
}
```When I ask for the snapshots:
Error loading repositories
{
"error": "JsResultException(errors:List((,List(ValidationError(List(error.expected.jsarray),WrappedArray())))))"
}
Thanks!
|
gharchive/issue
| 2017-12-18T08:17:40 |
2025-04-01T04:34:55.154794
|
{
"authors": [
"andrewkcarter",
"lmenezes",
"sonfrau"
],
"repo": "lmenezes/cerebro",
"url": "https://github.com/lmenezes/cerebro/issues/247",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2713922875
|
Typescript pipelines
I see pipeline "Code" nodes can have code written in Python. Will it be possible to write "Code" nodes in typescript?
@canadaduane Yes! We're actively working on it. Is there a specific reason to use TS over Python or are you just more comfortable with TS?
I was exploring/contrasting Lmnr and Dify. Currently our team of ~10 engineers is typescript-focused due to our building a user-friendly nextjs app for students and teachers.
interesting, I didn't know you could write code in Dify. Are you just building a prototype or you are considering using either Dify or Laminar pipeline builder for production use case?
|
gharchive/issue
| 2024-12-03T03:56:21 |
2025-04-01T04:34:55.159962
|
{
"authors": [
"canadaduane",
"skull8888888"
],
"repo": "lmnr-ai/lmnr",
"url": "https://github.com/lmnr-ai/lmnr/issues/251",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
156103518
|
Standardize Display of Spectrographs
All the shorter codes display at the same size (at least on my browser/screen), same width, same typeface, and each individual box representing a section is the same size. There seems to be a cutoff on length, however, because the longer codes scrunch the width and rearrange the number of boxes per row. Can the longer codes (IA, NY1876) match the shorter on width and box size and just run longer down the page?
I'm kind of surprised by this, because I thought I had it working the way
you want it to be instead of the way you describe. Can you please take a
screenshot of two that aren't the way you expect? I just want to make sure
where the problem lies.
On Saturday, May 21, 2016, Kellen Funk notifications@github.com wrote:
All the shorter codes display at the same size (at least on my
browser/screen), same width, same typeface, and each individual box
representing a section is the same size. There seems to be a cutoff on
length, however, because the longer codes scrunch the width and rearrange
the number of boxes per row. Can the longer codes (IA, NY1876) match the
shorter on width and box size and just run longer down the page?
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub
https://github.com/lmullen/civil-procedure-codes/issues/34
--
Lincoln Mullen, http://lincolnmullen.com
Assistant Professor, Department of History & Art History
George Mason University
The third shot is good, just to be clear.
Okay good. It's working like I expect. There are a standard number of cells
in each row. It's only compressing the width because of the height. All I
will need to do is set a figure height bigger for the longer codes. Thanks.
On Saturday, May 21, 2016, Kellen Funk notifications@github.com wrote:
The third shot is good, just to be clear.
—
You are receiving this because you commented.
Reply to this email directly or view it on GitHub
https://github.com/lmullen/civil-procedure-codes/issues/34#issuecomment-220780089
--
Lincoln Mullen, http://lincolnmullen.com
Assistant Professor, Department of History & Art History
George Mason University
The widths should be corrected now. If you think there are any that are not correct, let me know. A few might have a bit of excess height, but that just affects the white space around the plot and can be easily cropped. In addition, I've put more cells in each row: now there are 50, which makes it easier to count how many sections are in a code.
|
gharchive/issue
| 2016-05-21T14:04:02 |
2025-04-01T04:34:55.168965
|
{
"authors": [
"kfunk074",
"lmullen"
],
"repo": "lmullen/civil-procedure-codes",
"url": "https://github.com/lmullen/civil-procedure-codes/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1287475543
|
Splitpayments: floating point percentage for split value
Fixes #673
Follow up on PR #679 by @exitfiat
added the necessary changes to allow for having float in the percentage in splitpayments extension
LGTM
Please run make format once again :)
|
gharchive/pull-request
| 2022-06-28T14:53:16 |
2025-04-01T04:34:55.170496
|
{
"authors": [
"callebtc",
"talvasconcelos"
],
"repo": "lnbits/lnbits-legend",
"url": "https://github.com/lnbits/lnbits-legend/pull/690",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
342649751
|
[ext] Add visit #/mop-up question to Namibia Form
@onlyjsmith commented on Mon Jul 16 2018
External feedback source: https://github.com/disarm-platform/user-requests-and-feedback/issues/83
Exact questions TBD - can put in blocked for now
Done
|
gharchive/issue
| 2018-07-19T09:27:57 |
2025-04-01T04:34:55.228401
|
{
"authors": [
"TNgidi"
],
"repo": "locational/douma-app",
"url": "https://github.com/locational/douma-app/issues/457",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
729700586
|
GEOMESA-2937 Fix schema management index flag checks
Split up GeoMesaDataStore test classes
Remove accidental check-in of application conf file
Signed-off-by: Emilio Lahr-Vivaz elahrvivaz@ccri.com
except where noted, the other changes are just splitting out the different data store tests into 3 files
|
gharchive/pull-request
| 2020-10-26T16:10:05 |
2025-04-01T04:34:55.231545
|
{
"authors": [
"elahrvivaz"
],
"repo": "locationtech/geomesa",
"url": "https://github.com/locationtech/geomesa/pull/2640",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
128344865
|
GEOMESA-1063 API for discovering available feature types and converters
Added some methods for accessing feature types and converters that
are available on the classpath
All files name "geomesa-data.conf" on the classpath will be loaded.
Each file is expected to have a root element geomesa.converters
with one key corresponding to the name of the data type. Under the
data type name should be sft and converter definitions.
Signed-off-by: Anthony Fox anthonyfox@ccri.com
FYI example confs here; https://github.com/locationtech/geomesa/blob/master/geomesa-tools/conf/application.conf
@jahhulbert-ccri good catch. Ok, seems like we need to do a bit of disambiguation. @elahrvivaz check out https://github.com/geomesa/gm-data
I like the idea of having definitions in modular jar files available on the classpath. Especially if we have the wrapper classes generated and dropped into those jars. I think we also need the ability to load and modify definitions at runtime - not from the classpath. Anyone have thoughts around this?
We 100% need to load and modify while running in addition to the classpath so that we can support interactive type/converter creation without restarting the containers...
Currently we can do it via a separate jar file like this and reference them by name...which i'm going to probably change to include dropdown of known types.
https://github.com/jahhulbert-ccri/geomesa-nifi/tree/master/geomesa-nifi-resources/src/main/resources
We should be able to dynamically drop in jars on the classpath right now - you just have to have a file named application.conf in your jar, and we will pick it up with ConfigFactory.load:
https://github.com/typesafehub/config#standard-behavior
Also, I added a 'name' to converters, so we can match them up with the sft using that.
for runtime changes, we could use zookeeper distributed caching. we already have that mostly done in whiptail, hooked up to a rest api, but we could make it part of the command line tools instead.
closing for additional work...copied branch here: https://github.com/jahhulbert-ccri/geomesa/tree/fcr_discover_api
|
gharchive/pull-request
| 2016-01-23T18:35:36 |
2025-04-01T04:34:55.238073
|
{
"authors": [
"anthonyccri",
"elahrvivaz",
"jahhulbert-ccri"
],
"repo": "locationtech/geomesa",
"url": "https://github.com/locationtech/geomesa/pull/780",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2149725240
|
creating a document from the file tree does not always automatically open it
On macOS
Appears to be fixed
|
gharchive/issue
| 2024-02-22T18:46:11 |
2025-04-01T04:34:55.244756
|
{
"authors": [
"smailbarkouch"
],
"repo": "lockbook/lockbook",
"url": "https://github.com/lockbook/lockbook/issues/2541",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1601908044
|
Is this project still maintained? :)
Hi @logicabrity ,
we tried to reach out to you regarding PyPI name aeon via the given mail mail@marc-antonio.de
Also now we opened an issue at PyPI according to PEP 541 to request the aeon handle: https://github.com/pypi/support/issues/2639
But if you can see this here and respond it would be convenient!
Thanks a lot!
Please delete this or remove my e-mail address.
communication has been established, so closing this issue for now
|
gharchive/issue
| 2023-02-27T20:35:40 |
2025-04-01T04:34:55.298432
|
{
"authors": [
"aiwalter",
"logicabrity"
],
"repo": "logicabrity/aeon-legacy",
"url": "https://github.com/logicabrity/aeon-legacy/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
115078327
|
Migrate to Jekyll 3
May have to deal with collections/portfolio in a new way
collections seem ok, but the syntax highlighter needs work. What had been wrapped using whitespace: pre-wrap doesn't do that anymore. I don't know when it stopped working
Ah, it seems I'm getting an additional block.
Old version:
New version:
|
gharchive/issue
| 2015-11-04T15:28:58 |
2025-04-01T04:34:55.308193
|
{
"authors": [
"logista"
],
"repo": "logista/btsite2015",
"url": "https://github.com/logista/btsite2015/issues/88",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1550156764
|
🛑 PotusTV is down
In e0fafbb, PotusTV (https://tv.potus.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: PotusTV is back up in 05dfba1.
|
gharchive/issue
| 2023-01-20T00:36:21 |
2025-04-01T04:34:55.310496
|
{
"authors": [
"logos914"
],
"repo": "logos914/potus-estado",
"url": "https://github.com/logos914/potus-estado/issues/256",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1605577810
|
🛑 Gestión de Proyectos is down
In dd6ce7e, Gestión de Proyectos (https://p.potus.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Gestión de Proyectos is back up in 293262d.
|
gharchive/issue
| 2023-03-01T19:56:54 |
2025-04-01T04:34:55.312807
|
{
"authors": [
"logos914"
],
"repo": "logos914/potus-estado",
"url": "https://github.com/logos914/potus-estado/issues/334",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1866071571
|
🛑 Gestión de Proyectos is down
In 7bb4a18, Gestión de Proyectos (https://p.potus.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Gestión de Proyectos is back up in e6f0565 after 331 days, 19 hours.
|
gharchive/issue
| 2023-08-25T00:45:33 |
2025-04-01T04:34:55.315297
|
{
"authors": [
"logos914"
],
"repo": "logos914/potus-estado",
"url": "https://github.com/logos914/potus-estado/issues/595",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1284907987
|
🛑 PotusTV is down
In abbe4df, PotusTV (https://tv.potus.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: PotusTV is back up in a024c29.
|
gharchive/issue
| 2022-06-26T13:04:03 |
2025-04-01T04:34:55.317524
|
{
"authors": [
"logos914"
],
"repo": "logos914/potus-estado",
"url": "https://github.com/logos914/potus-estado/issues/62",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
210239203
|
Is it compatible with logstash 5.x?
Currently I'm trying to wire up everything
Just downloaded latest available logstash 5.2.1 and trying to run logstash-plugin install --no-verify logstash-input-perfmon as suggested in docs, but got:
Installing logstash-input-perfmon
Plugin version conflict, aborting
ERROR: Installation Aborted, message: Bundler could not find compatible versions for gem "logstash-codec-plain":
In snapshot (Gemfile.lock):
logstash-codec-plain (= 3.0.2)
In Gemfile:
logstash-input-s3 (>= 0) java depends on
logstash-mixin-aws (>= 0) java depends on
logstash-codec-plain (>= 0) java
# same block repeats bazillion times
Running `bundle update` will rebuild your snapshot from scratch, using only the gems in your Gemfile, which may resolve the conflict.
Bundler could not find compatible versions for gem "logstash-core":
In snapshot (Gemfile.lock):
logstash-core (= 5.2.1)
In Gemfile:
logstash-core-plugin-api (>= 0) java depends on
logstash-core (= 5.2.1) java
logstash-input-perfmon (>= 0) java depends on
logstash-core (< 2.0.0, >= 1.4.0) java
logstash-core (>= 0) java
Running `bundle update` will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.
unfortunately there is literally no chance for me to get ruby onto server and for me it seems that it wont help just because of version restrictions
Fixed with https://github.com/logstash-plugins/logstash-input-perfmon/pull/5
|
gharchive/issue
| 2017-02-25T14:54:28 |
2025-04-01T04:34:55.351032
|
{
"authors": [
"NickMRamirez",
"mac2000"
],
"repo": "logstash-plugins/logstash-input-perfmon",
"url": "https://github.com/logstash-plugins/logstash-input-perfmon/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1166280054
|
Batch API calls
This is a first implementation for batching API calls as per discussion here https://github.com/logtail/logback-logtail/issues/4.
Some notes:
Batch size is configures using <batchSize>200</batchSize>, see logback-batch-test.xml
Some tests require an explicit flush() in order to force the sending.
There's a need for a Thread.sleep(2000) in LogtailAppenderBatchConfigSizeTest
We recommend not suggesting queueSize in the documentation as to not create confusion with batchSize
@cosmin-marginean Hi Cos, thanks a lot for the PR! There seem to be tests failing - do you want to tackle this or should the team here take a look? 🙌
It's ok, I can have a look this afternoon, quite likely to do with me coding against jdk11 or something
@gyfis I couldn't get this to break, but I see some 401 - Unauthorized. in the tests - are these tests using any "LOGTAIL_INGEST_KEY" env setting?
https://github.com/logtail/logback-logtail/runs/5510436756?check_suite_focus=true
@cosmin-marginean Ah, sorry, we do have a secret token on the repo which I think is not accessible by PRs. Let me test this locally real quick...
@cosmin-marginean Hi again Cos, sorry, the tests pass fine locally :tada:
The PR looks good, too. I wonder if there's a way to register an exit event to flush any logs that are in the queue - this is so that this library doesn't discard logs. I think with this you could then drop the Thread.sleep and flush. This property seems like a good start: logging.register-shutdown-hook. Do you know if it's possible to register this somehow in our appender, to clean up after ourselves?
That's a very good point @gyfis. Leave this with me, I'll do some experiments these days, as this will indeed be a problem beyond the tests - we don't want messages lost on app shutdown in production.
Appreciate it! Tag me here or at hello@logtail.com if you'd like any help :pray: Thanks a lot!
@gyfis I've added a flush() on stop() which seems to do the job: https://github.com/logtail/logback-logtail/pull/5/commits/04088535760ad2a0e0e46cfb3f6bf4364c886e27
I couldn't quite figure out how to write a test for this without actually querying the logs in the remote system, so the only way I could validate this was to send 5 messages (for a batch size of 10), and test with and without flush in stop() and look in the console/dashboard. The messages appear in Logtail when this is on.
We can't really use a shared variable either as a second test (checking if the messages were flushed) would probably be run in a separate instance, so checking the number of API calls (like we do in other tests) won't help much.
Let me know if you think there's a better way to test this.
@cosmin-marginean Hi Cos, thanks for the update!
Ideally (acknowledging I'm out of depth here), I'd like to see a mocked server-appender pair that we could start, stop, and inspect mid-run. Logzio seems to be doing something similar with their MockLogzioBulkListener used e.g. in the serverCrashTest, but I understand that's quite a lot of code to write.
Let me know if you want to tackle this too - otherwise I'll make one more review, test locally, and merge after!
@gyfis That would be ideal, yes, I agree. I personally don't have the time at the moment to go into the depths of this unfortunately, but I'd be happy to review some work on this at some point if required.
Thanks!
The code looks good - thanks again Cos!
|
gharchive/pull-request
| 2022-03-11T10:59:05 |
2025-04-01T04:34:55.372780
|
{
"authors": [
"cosmin-marginean",
"gyfis"
],
"repo": "logtail/logback-logtail",
"url": "https://github.com/logtail/logback-logtail/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
681852564
|
Cannot create slack endpoint
Hi, I'm new to your provider and configured the same way you have in the example.
Versions:
mac OSX 10.15.5
provider version: 1.1.3 and 1.1.4
main.tf:
provider "logzio" {
api_token = var.api_token
base_url = "https://api-nl.logz.io"
}
resource "logzio_endpoint" "slack_some-alerts" {
title = "slack_some-alerts"
description = "Slack Integration for some-alerts"
endpoint_type = "slack"
slack {
url = var.slack_url_some-alerts
}
}
tfstate:
{
"version": 4,
"terraform_version": "0.12.26",
"serial": 1,
"lineage": "32ef805d-404b-5ebf-7836-1062a57c124b",
"outputs": {},
"resources": []
}
Error:
Error: rpc error: code = Unavailable desc = transport is closing
panic: unhandled endpoint type slack
2020-08-19T14:27:39.336+0100 [DEBUG] plugin.terraform-provider-logzio:
2020-08-19T14:27:39.336+0100 [DEBUG] plugin.terraform-provider-logzio: goroutine 15 [running]:
2020-08-19T14:27:39.336+0100 [DEBUG] plugin.terraform-provider-logzio: github.com/jonboydell/logzio_terraform_provider/logzio.endpointFromResourceData(0xc00030e9a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
2020-08-19T14:27:39.336+0100 [DEBUG] plugin.terraform-provider-logzio: /Users/jon.boydell/go/src/github.com/jonboydell/logzio_terraform_provider/logzio/resource_endpoint.go:251 +0xae3
2020-08-19T14:27:39.336+0100 [DEBUG] plugin.terraform-provider-logzio: github.com/jonboydell/logzio_terraform_provider/logzio.resourceEndpointCreate(0xc00030e9a0, 0x1b71640, 0xc000480ee0, 0x2, 0x26e5aa0)
2020-08-19T14:27:39.336+0100 [DEBUG] plugin.terraform-provider-logzio: /Users/jon.boydell/go/src/github.com/jonboydell/logzio_terraform_provider/logzio/resource_endpoint.go:260 +0x43
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: github.com/hashicorp/terraform/helper/schema.(*Resource).Apply(0xc00018ee80, 0xc0003ba640, 0xc00027a9e0, 0x1b71640, 0xc000480ee0, 0xc00027c301, 0xc0005b3428, 0x1b009a0)
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: /Users/jon.boydell/go/pkg/mod/github.com/hashicorp/terraform@v0.12.6/helper/schema/resource.go:287 +0x3b4
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: github.com/hashicorp/terraform/helper/schema.(*Provider).Apply(0xc00018ef80, 0xc000287a58, 0xc0003ba640, 0xc00027a9e0, 0xc0005a82c8, 0xc0005b8078, 0x1b02860)
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: /Users/jon.boydell/go/pkg/mod/github.com/hashicorp/terraform@v0.12.6/helper/schema/provider.go:285 +0x18f
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: github.com/hashicorp/terraform/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc0000be4a0, 0x1ec80a0, 0xc0003be210, 0xc0000c4a20, 0xc0000be4a0, 0xc0003be210, 0xc0002f4bd0)
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: /Users/jon.boydell/go/pkg/mod/github.com/hashicorp/terraform@v0.12.6/helper/plugin/grpc_provider.go:885 +0x894
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: github.com/hashicorp/terraform/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x1c251c0, 0xc0000be4a0, 0x1ec80a0, 0xc0003be210, 0xc0003ba370, 0x0, 0x1ec80a0, 0xc0003be210, 0xc0002f8000, 0x24a)
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: /Users/jon.boydell/go/pkg/mod/github.com/hashicorp/terraform@v0.12.6/internal/tfplugin5/tfplugin5.pb.go:3217 +0x23e
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: google.golang.org/grpc.(*Server).processUnaryRPC(0xc000001e00, 0x1ed4280, 0xc000564480, 0xc000164500, 0xc000384c30, 0x26bad20, 0x0, 0x0, 0x0)
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: /Users/jon.boydell/go/pkg/mod/google.golang.org/grpc@v1.18.0/server.go:966 +0x470
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: google.golang.org/grpc.(*Server).handleStream(0xc000001e00, 0x1ed4280, 0xc000564480, 0xc000164500, 0x0)
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: /Users/jon.boydell/go/pkg/mod/google.golang.org/grpc@v1.18.0/server.go:1245 +0xd25
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc00003c0a0, 0xc000001e00, 0x1ed4280, 0xc000564480, 0xc000164500)
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: /Users/jon.boydell/go/pkg/mod/google.golang.org/grpc@v1.18.0/server.go:685 +0x9f
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: created by google.golang.org/grpc.(*Server).serveStreams.func1
2020-08-19T14:27:39.337+0100 [DEBUG] plugin.terraform-provider-logzio: /Users/jon.boydell/go/pkg/mod/google.golang.org/grpc@v1.18.0/server.go:683 +0xa1
2020/08/19 14:27:39 [DEBUG] logzio_endpoint.some-alerts: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2020/08/19 14:27:39 [TRACE] <root>: eval: *terraform.EvalMaybeTainted
2020/08/19 14:27:39 [TRACE] EvalMaybeTainted: logzio_endpoint.some-alerts encountered an error during creation, so it is now marked as tainted
2020/08/19 14:27:39 [TRACE] <root>: eval: *terraform.EvalWriteState
2020/08/19 14:27:39 [TRACE] EvalWriteState: removing state object for logzio_endpoint.some-alerts
2020/08/19 14:27:39 [TRACE] <root>: eval: *terraform.EvalApplyProvisioners
2020/08/19 14:27:39 [TRACE] EvalApplyProvisioners: logzio_endpoint.some-alerts has no state, so skipping provisioners
2020/08/19 14:27:39 [TRACE] <root>: eval: *terraform.EvalMaybeTainted
2020/08/19 14:27:39 [TRACE] EvalMaybeTainted: logzio_endpoint.some-alerts encountered an error during creation, so it is now marked as tainted
2020/08/19 14:27:39 [TRACE] <root>: eval: *terraform.EvalWriteState
2020/08/19 14:27:39 [TRACE] EvalWriteState: removing state object for logzio_endpoint.some-alerts
2020/08/19 14:27:39 [TRACE] <root>: eval: *terraform.EvalIf
2020/08/19 14:27:39 [TRACE] <root>: eval: *terraform.EvalIf
2020/08/19 14:27:39 [TRACE] <root>: eval: *terraform.EvalWriteDiff
2020/08/19 14:27:39 [TRACE] <root>: eval: *terraform.EvalApplyPost
2020-08-19T14:27:39.338+0100 [DEBUG] plugin: plugin process exited: path=/Users/rpablo/.terraform.d/plugins/terraform-provider-logzio pid=61927 error="exit status 2"
2020/08/19 14:27:39 [ERROR] <root>: eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2020/08/19 14:27:39 [ERROR] <root>: eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2020/08/19 14:27:39 [TRACE] [walkApply] Exiting eval tree: logzio_endpoint.some-alerts
2020/08/19 14:27:39 [TRACE] vertex "logzio_endpoint.some-alerts": visit complete
2020/08/19 14:27:39 [TRACE] dag/walk: upstream of "provider.logzio (close)" errored, so skipping
2020/08/19 14:27:39 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2020/08/19 14:27:39 [TRACE] dag/walk: upstream of "root" errored, so skipping
2020/08/19 14:27:39 [TRACE] statemgr.Filesystem: not making a backup, because the new snapshot is identical to the old
2020/08/19 14:27:39 [TRACE] statemgr.Filesystem: no state changes since last snapshot
2020/08/19 14:27:39 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2020/08/19 14:27:39 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2020/08/19 14:27:39 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock
2020-08-19T14:27:39.353+0100 [DEBUG] plugin: plugin exited
Hi @r1ckr
the endpoint type should be Slack with capital S, we will fix the readme soon.
Would be possible to ignore the casing on those type of parameters? That will be way less error prone
That's a great idea @r1ckr , we will add it to our roadmap. You are more than welcome to open a PR for that as well
Cheers @yyyogev ! Where would that PR be needed? in the terraform client or the provider?
At the provider, in the resource_endoint
Done at https://github.com/logzio/terraform_provider_logzio/pull/50
|
gharchive/issue
| 2020-08-19T13:44:24 |
2025-04-01T04:34:55.379303
|
{
"authors": [
"r1ckr",
"yyyogev"
],
"repo": "logzio/logzio_terraform_provider",
"url": "https://github.com/logzio/logzio_terraform_provider/issues/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
265040256
|
Fix month typo
:)
hahaha thanks! :)
|
gharchive/pull-request
| 2017-10-12T18:25:36 |
2025-04-01T04:34:55.380972
|
{
"authors": [
"felippenardi",
"loiane"
],
"repo": "loiane/loiane.github.io",
"url": "https://github.com/loiane/loiane.github.io/pull/28",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1011484982
|
External bluetooth support
Select Add-On (place a lowercase x for the relevant add-on)
[x ] Xiaomi Mi Scale
Is your feature request related to a problem? Please describe.
I use an ESP32 to read the data from the scale. It returns the weight, and the impedance as a sensor.
Is it possible to use this sensor data as an alternative source ?
Describe the solution you'd like
A clear and concise description of what you want to happen.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here.
Aside from pretty much rebuilding the add-on to cater for an external source (not even sure how I'd pull that data), I don't think it'll be possible.
It would probably be easier to edit the code on the ESP than rewriting this one.
Thanks for the suggestion though
|
gharchive/issue
| 2021-09-29T22:03:06 |
2025-04-01T04:34:55.424958
|
{
"authors": [
"Korte68",
"lolouk44"
],
"repo": "lolouk44/hassio-addons",
"url": "https://github.com/lolouk44/hassio-addons/issues/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1001964726
|
[QUESTION] What is Longhorn nodeSelector?
What is nodeSelector for a Longhorn storage class?
Is it string or a Key-Value pair?
Is it lable for Kubernetes node or is it an annotation for a Kubernetes node?
It looks like it is a string in the example in Longhorn documentation. But in Kubernetes a nodeSelector is a key-value pair!
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: longhorn
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
numberOfReplicas: "3"
staleReplicaTimeout: "2880"
nodeSelector: "storage,fast"
I marked some nodes in my K8S cluster with label longhorn=storage-node.
How to use only these nodes for Longhorn volumes?
It's a string slice type []string.
Please check this doc.
But from your use case, I guess you would want this doc.
The question has been addressed, close this issue/question.
Feel free to open a new issue or question if you have further ones.
|
gharchive/issue
| 2021-09-21T07:03:13 |
2025-04-01T04:34:55.444173
|
{
"authors": [
"jenting",
"mchudinov"
],
"repo": "longhorn/longhorn",
"url": "https://github.com/longhorn/longhorn/issues/3038",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1087288429
|
[BUG] Restore from backup attempts to attach to disabled node
Describe the bug
There is a node in maintenance after malfunctioning (and thrashed some volumes). The node is cordoned in k8s, and drained. The node is marked unschedulable in Longhorn and marked for eviction. When restoring from backup in the UI, the disabled node is selected for restoration and thus the volume results in a faulted state.
Expected behavior
A disabled node shouldn't be selected for restoring a backup.
Log or Support bundle
Might be useful:
longhorn-support-bundle_4095c298-0e54-4f98-bf29-443349b40df5_2021-12-23T01-03-13Z.zip
Environment
Longhorn version: 1.2.3
Installation method (e.g. Rancher Catalog App/Helm/Kubectl): Helm
Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s
Number of management node in the cluster: 3
Number of worker node in the cluster: 0
Node config
OS type and version: Ubuntu
CPU per node: 12-16
Memory per node: 64GB
Disk type(e.g. SSD/NVMe): NVME
Network bandwidth between the nodes: 10G
Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): Baremetal
Number of Longhorn volumes in the cluster: 10-20
Additional context
The operator accidentally hard-rebooted a node after accidentally upgrading k8s from 1.21 to 1.22.
When restoring from backup in the UI, the disabled node is selected for restoration and thus the volume results in a faulted state.
Do you mean you're able to attach the volume to the disabled node on the UI or the replica will be scheduled to the disabled node?
I guess you're saying that when attaching the volume on the UI, the node list on the UI includes the SchedulingDisabled node, right?
In the UI, I went to backups and selected a backup to restore from. It automatically selected the disabled node and attempted to attach it. It obviously failed because the node is disabled.
Do you mean volume pvc-833b8091-00e2-4043-8dbb-f0e33a85b8f2 is attached to the disabled node donkey during the restoring?
For Longhorn, scheduling means if the replica can be allocated to the node/disk. Hence schedule disabling does not block the volume being attached to it.
As for why the volume can be attached to a drained node, it is caused by bug #3459. The fix will be included in v1.2.4 and v1.3.0.
The reason why the volume becomes faulted is mentioned in the following logs. In brief, Longhorn somehow fails to lock the backup volume (miss the locking file in the S3):
2021-12-23T01:57:26.646129608+01:00 [pvc-833b8091-00e2-4043-8dbb-f0e33a85b8f2-r-3c90bb20] time="2021-12-23T00:57:26Z" level=warning msg="failed to load lock backupstore/volumes/ca/60/pvc-833b8091-00e2-4043-8dbb-f0e33a85b8f2/locks/lock-fe8382a0b3f1431a.lck on backupstore reason failed to get object: backups/backupstore/volumes/ca/60/pvc-833b8091-00e2-4043-8dbb-f0e33a85b8f2/locks/lock-fe8382a0b3f1431a.lck response: {\n AcceptRanges: \"bytes\",\n Body: <nil>,\n CacheControl: \"max-age=60\",\n ContentLength: 218,\n ContentType: \"application/xml\"\n} error: AWS Error: NoSuchKey <nil>\n404 tx00000000000002d09cb67-0061c3c976-1495d704-ams3c\n" pkg=backupstore
2021-12-23T01:57:26.753951614+01:00 [pvc-833b8091-00e2-4043-8dbb-f0e33a85b8f2-r-3c90bb20] time="2021-12-23T00:57:26Z" level=warning msg="Failed to initiate the backup restore, will do revert and cleanup then."
2021-12-23T01:57:26.649483691+01:00 [pvc-833b8091-00e2-4043-8dbb-f0e33a85b8f2-r-690b2052] time="2021-12-23T00:57:26Z" level=warning msg="Failed to initiate the backup restore, will do revert and cleanup then."
Nice! Thanks for fixing this!
|
gharchive/issue
| 2021-12-23T01:02:38 |
2025-04-01T04:34:55.452998
|
{
"authors": [
"jenting",
"shuo-wu",
"withinboredom"
],
"repo": "longhorn/longhorn",
"url": "https://github.com/longhorn/longhorn/issues/3454",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1616720416
|
[BUG]
The longhorn UI is unavailable
longhorn runs on k8s. This morning my server crashed. harbor and k8s components were manually rolled back once, and longhornUI is no longer available.
I checked the longhorn ui pod log which is shown below
10.244.0.0 - - [09/Mar/2023:08:40:11 +0000] "GET /v1/ws/1s/events HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:40:29 +0000] "GET /v1/nodes? HTTP/1.1" 499 0 "https://longhorn.xyz10.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:40:29 +0000] "GET /v1/events? HTTP/1.1" 499 0 "https://longhorn.xyz10.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:40:29 +0000] "GET /v1/volumes? HTTP/1.1" 499 0 "https://longhorn.xyz10.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:40:29 +0000] "GET /v1/ws/1s/volumes HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:40:31 +0000] "GET /v1/ws/1s/volumes HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:40:50 +0000] "GET /v1/ws/1s/nodes HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:40:52 +0000] "GET /v1/ws/1s/nodes HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:41:11 +0000] "GET /v1/ws/1s/events HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:41:29 +0000] "GET /v1/ws/1s/events HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:41:30 +0000] "GET /v1/ws/1s/volumes HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:41:31 +0000] "GET /v1/ws/1s/volumes HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:41:50 +0000] "GET /v1/ws/1s/nodes HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0" 10.244.0.0 - - [09/Mar/2023:08:41:52 +0000] "GET /v1/ws/1s/nodes HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0
My k8s cluster environment has two nodes. In previous use, I mounted volumes of 100g on two nodes respectively. UI was unavailable after the cluster collapsed, and I found insufficient space when I needed to use pvc for deploying other services.
The k8s cluster version is v1.23.12 and the longhorn version is 1.2.4.
After removing all the pvc volumes, I reinstalled longhorn, still not available, and in the process found two restart in Longhorn-Manager, then I deleted and rebuilt the pod, and the UI returned to normal
|
gharchive/issue
| 2023-03-09T08:50:33 |
2025-04-01T04:34:55.457295
|
{
"authors": [
"ShuHaoSong"
],
"repo": "longhorn/longhorn",
"url": "https://github.com/longhorn/longhorn/issues/5519",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
470830196
|
Adding nodes
Hi, I have been testing with Longhorn for quite a while now and happy with the longhorn UI adding new nodes functionality. My question is, apart from using Longhorn UI, are there any other ways that I could define the structure of my nodes (with raw device), maybe using yaml, adding the nodes and device when deploying longhorn?
Hi @thbeh ,
There are multiple features related here:
In the next release, users can choose which new nodes would be used automatically by Longhorn and which directory they would use as the default data path. See #582 and #583 .
The settings of the two options when deploying longhorn will be covered by #623 . But I guess what you want is to have node added automatically in the disabled state then update Longhorn configuration after you deployed Longhorn.
We will have a Longhorn CLI to help to configure Longhorn with the command line, we're tracking it at #613 .
Longhorn currently doesn't support raw devices. Users need to format the device and mount it on the node before Longhorn can use it.
Yes, I believe #623 make sense and something similar below (from glusterfs/heketi) would be good -
{
"node": {
"hostnames": {
"manage": [
"k8s-w3"
],
"storage": [
"192.168.34.13"
]
},
"zone": 1
},
"devices": [
"/data1",
"/data2",
"/data3"
]
}
And this config (json or yaml) could be pick up during longhorn deployment.
Thanks
@thbeh I think what you need is more than #623 . #623 only dealt with the global setting of Longhorn, not node configuration. And there is no way to configure node if the node is not there. So sounds like you would need #613 to help with the node configuration (which is a common feature request we've received recently).
|
gharchive/issue
| 2019-07-21T22:08:25 |
2025-04-01T04:34:55.462469
|
{
"authors": [
"thbeh",
"yasker"
],
"repo": "longhorn/longhorn",
"url": "https://github.com/longhorn/longhorn/issues/643",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2327678899
|
[BUG] Unable to attach or mount volumes after systemctl restart k3s
Describe the bug
Pods with mounted pvc get stuck and cant start after the kubelet is restarted and longhorn is helm chart v1.4.2 version.
Pods stuck in longhorn-system also:
share-manager-pvc
Can't update helm to 1.5.x because of this, probably
To Reproduce
systemctl restart k3s
try to restart start pod and mount a pvc
Expected behavior
Pod succesfully starts and pvc mounts
Support bundle for troubleshooting
Can't create support bundle since pod cant start longhorn-support-bundle-manager-support-bundle
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned longhorn-system/longhorn-support-bundle-manager-support-bundle-2024-05-31ts2vh5
Warning FailedMount 81s (x5 over 10m) kubelet Unable to attach or mount volumes: unmounted volumes=[kube-api-access-jnbv6], unattached volumes=[kube-api-access-jnbv6]: timed out waiting for the condition
Environment
Longhorn version: helm chart v1.4.2
Impacted volume (PV): unmounted volumes=[host-proc lib-modules kube-api-access-nh4h8 host-dev host-sys]
Installation method (e.g. Rancher Catalog App/Helm/Kubectl): helm
Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s v1.26.4+k3s1
Number of control plane nodes in the cluster: 1
Number of worker nodes in the cluster: 1 (same node as control)
Node config
OS type and version: Ubuntu 22.04.2 LTS (Jammy Jellyfish)
Kernel version: 5.15.0-71-generic #78-Ubuntu SMP Tue Apr 18 09:00:29 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
CPU per node: 32
Memory per node: 64Gb
Disk type (e.g. SSD/NVMe/HDD): SSD
Network bandwidth between the nodes (Gbps): 10Gbps
Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): VMWare
Number of Longhorn volumes in the cluster: 13
A support bundle is appriciated.
@derekbit
Can't create support bundle since pod cant start longhorn-support-bundle-manager-support-bundle
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned longhorn-system/longhorn-support-bundle-manager-support-bundle-2024-05-31ts2vh5
Warning FailedMount 81s (x5 over 10m) kubelet Unable to attach or mount volumes: unmounted volumes=[kube-api-access-jnbv6], unattached volumes=[kube-api-access-jnbv6]: timed out waiting for the condition
|
gharchive/issue
| 2024-05-31T11:57:59 |
2025-04-01T04:34:55.470389
|
{
"authors": [
"derekbit",
"paf91"
],
"repo": "longhorn/longhorn",
"url": "https://github.com/longhorn/longhorn/issues/8680",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1616722131
|
feat(lep): recurring filesystem trim design
https://github.com/longhorn/longhorn/issues/5186
NIT: May need to handle deleteSnapMode as well since it's meaningless to this.
@shuo-wu can you explain a little more?
@mantissahz Please review this as well.
This LEP in general looks good to me and straightforward.
I mean, the new field introduced by https://github.com/longhorn/longhorn-manager/pull/1708#event-8708770632.
I would like to clean up old snapshots that exceeds the retain number before create a new snapshot mandatorily because it is an abnormal situation. (so there is no new field introduced) as mentioned in https://github.com/longhorn/longhorn-manager/pull/1708#issuecomment-1463152193
|
gharchive/pull-request
| 2023-03-09T08:51:38 |
2025-04-01T04:34:55.473728
|
{
"authors": [
"c3y1huang",
"innobead",
"mantissahz"
],
"repo": "longhorn/longhorn",
"url": "https://github.com/longhorn/longhorn/pull/5520",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1930599885
|
feat: notifications
This PR adds some notification options about new posts.
I thought it would be useful to make it as simple as possible, that is, only the main settings for notifications - without signing up for them when creating a post/thread.
What do you guys think about the current types of notifications? Is it too much, or is something missing?
To do:
add a link to open the discussion/post from email in the browser
provide a link to opt out of notifications for a given thread in email
add a queue system for sending emails (probably based on the DB since this is our only available option?)
But what I have listed above, I will add in separate PRs in the future.
I think those three settings are perfect. And I always found it annoying that MyBB asked you if you wanted to subscribe on every thread. Should be a global thing like you have it here.
As for queue - I agree that's the way to go, and I'd keep that simple and db driven also. There's been a couple stabs at implementing a queue for CI in the past, but I've never had the time to dive into their implementations and clean things up and take it across the finish line.
MGatner had his libraries in a separate organization and it looks like everything is in place: https://github.com/tattersoftware
Thanks for the feedback! We'll see how it goes, but it's good to know we're on the right track with notifications.
MGatner had his libraries in a separate organization and it looks like everything is in place: https://github.com/tattersoftware
Thanks for the feedback! We'll see how it goes, but it's good to know we're on the right track with notifications.
Doh! You're right. It's been awhile and I'm worn out today lol.
Thanks for the feedback! We'll see how it goes, but it's good to know we're on the right track with notifications.
One thing I think we might consider adding down the road is a "Mute" button on a thread incase we're tired of hearing about a particular topic.
One thing I think we might consider adding down the road is a "Mute" button on a thread incase we're tired of hearing about a particular topic.
Good point. Adding this feature at the page level, not just via email, should not be a problem.
|
gharchive/pull-request
| 2023-10-06T16:42:28 |
2025-04-01T04:34:55.494336
|
{
"authors": [
"lonnieezell",
"michalsn"
],
"repo": "lonnieezell/forum-example",
"url": "https://github.com/lonnieezell/forum-example/pull/91",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1751547334
|
Create /one passed in post request
Is there a reason why you are appending a "/one" to the URL for the creation handler?
https://github.com/loopback4/ra-data-lb4/blob/next/src/index.ts#L154
@delkant
Refer to #1
|
gharchive/issue
| 2023-06-11T18:30:47 |
2025-04-01T04:34:55.529357
|
{
"authors": [
"ckoliber",
"delkant"
],
"repo": "loopback4/ra-data-lb4",
"url": "https://github.com/loopback4/ra-data-lb4/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
59954703
|
RequestParams calls String.format without locale.
In ibrary/src/main/java/com/loopj/android/http/RequestParams.java
calling String.format("%d", 0) when your locale is set to for instance Farsi, produces non-machine readable output.
String.format(Locale.US, ... ) should be used in cases where machine readable output is needed.
Closing as merged
|
gharchive/issue
| 2015-03-05T13:57:49 |
2025-04-01T04:34:55.540208
|
{
"authors": [
"Zlo",
"smarek"
],
"repo": "loopj/android-async-http",
"url": "https://github.com/loopj/android-async-http/issues/820",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
108029419
|
getHttpClient().getParams() deprecated
I have updated the library and I found this function is deprecated. I just want to know what is the substitution for this function. I'm using like below:
client.cancelRequests(context, true);
// when cancel requesst, the redirection will be shut down too. So, open the redirection again.
client.getHttpClient().getParams().setParameter(ClientPNames.ALLOW_CIRCULAR_REDIRECTS, true);
It is deprecated by upstream, no need to switch code now.
Also to allow circular redirects, I'd suggest you to use official API AsyncHttpClient.setEnableRedirects(final boolean enableRedirects, final boolean enableRelativeRedirects, final boolean enableCircularRedirects)
Which will shield you from deprecation messages for now.
Code: https://github.com/loopj/android-async-http/blob/master/library/src/main/java/com/loopj/android/http/AsyncHttpClient.java#L591
|
gharchive/issue
| 2015-09-23T23:48:21 |
2025-04-01T04:34:55.542205
|
{
"authors": [
"captainbupt",
"smarek"
],
"repo": "loopj/android-async-http",
"url": "https://github.com/loopj/android-async-http/issues/969",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1028260785
|
🛑 services is down
In cb01116, services (https://services.k8s.it/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: services is back up in 90fa8d0.
|
gharchive/issue
| 2021-10-17T08:52:30 |
2025-04-01T04:34:55.563390
|
{
"authors": [
"lorenzogirardi"
],
"repo": "lorenzogirardi/status",
"url": "https://github.com/lorenzogirardi/status/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
976856931
|
[SMTP] Cannot send notification to ....
Info
Uptime Kuma Version: 1.3.2
Using Docker?: Yes
Hi, thanks for this fantastic software.
When service goes down, I don't receive the mail notification and I got the following error in the console :
Cannot send notification to myaddress@example.com (not my real mail address, of course)
When I make the SMTP configuration test or when a service is back UP, I receive the mail.
I use the docker logs command to see the logs.
Is there a way to see more verbose logs ?
Info
Uptime Kuma Version: 1.3.2
Using Docker?: Yes
Hi, thanks for this fantastic software.
When service goes down, I don't receive the mail notification and I got the following error in the console :
Cannot send notification to myaddress@example.com (not my real mail address, of course)
When I make the SMTP configuration test or when a service is back UP, I receive the mail.
I use the docker logs command to see the logs.
Is there a way to see more verbose logs ?
It looked like due to the network problem of the server where you are hosting Uptime Kuma, because it cannot connect to your SMTP server and your monitoring service.
But why this error only for down status alert ?
I have simulated down situation with one host (local web container). My network works perfectly.
Since I did not log the detail of error in current latest stable version, I just update the nightly version which log the error.
louislam/uptime-kuma:nightly-amd64
Apparently, my mail provider doesn't like when service goes down 😑
response: '550 5.2.0 Spam message rejected'
I will contact them
Thanks a lot for your help.
Apparently, my mail provider doesn't like when service goes down 😑
response: '550 5.2.0 Spam message rejected'
I will contact them
Thanks a lot for your help.
lol, funny. Maybe they don't like "DOWN"
|
gharchive/issue
| 2021-08-23T10:04:40 |
2025-04-01T04:34:55.582991
|
{
"authors": [
"Salamafet",
"louislam"
],
"repo": "louislam/uptime-kuma",
"url": "https://github.com/louislam/uptime-kuma/issues/242",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2552614035
|
Open maintainance links in new tab
🛡️ Security Policy
[X] I agree to have read this project Security Policy
Description
Currently urls are rendered as a href and are clickable, but if you click on them you move away from uptimekuma.
I would be great if these are rendered with target="_blank" to open in a new tab.
👟 Reproduction steps
Create a manual maintainance with a link to another site e.g. https://example.com
👀 Expected behavior
Link opens in a new tab
😓 Actual Behavior
Link opens in current tab
🐻 Uptime-Kuma Version
1.23.13
💻 Operating System and Arch
Docker
🌐 Browser
Chrome
🖥️ Deployment Environment
not relevant
📝 Relevant log output
No response
If you don't mind can you tell a little bit about how to find this security policy as I've tried to find it in the repo but it is showing something like this
I am a beginner contributor that's why I just need a little bit help
I am willing to work on this issue ... as I have stron foundation on front-End technologies @kub3let can you please assign me this work .. looking forward to your response
Thank you:)
okay sure @CommanderStorm
@kub3let can you tell me in what part of uptime kuma maintainance you insert the link?
@VarPDev sure it's in the description field, to replicate:
https://status.your_domain.com/maintenance
Foo bar
We are under maintenance please check https://example.com/ for more information
@kub3let
The problem is that kuma use marked library to convert text in html.
Also passing example the marker library doesn't work.
So I think you have to open a issue to marker library and when they solve this, you can open a link in other tab using this syntax:
example
I don't think changing marked.js upstream is the right approach here.
It does exposed the rendering functions so can be adjusted in uptimekuma, e.g.
// Create a new renderer instance
const renderer = new marked.Renderer();
// Override the link function
renderer.link = function(href, title, text) {
const link = `<a href="${href}"${title ? ` title="${title}"` : ''} target="_blank" rel="noopener noreferrer">${text}</a>`;
return link;
};
// Initialize marked with the custom renderer
marked.setOptions({
renderer: renderer
});
// Example Markdown input
const markdownInput = '[Google](https://www.google.com)';
// Parse the markdown
const htmlOutput = marked(markdownInput);
console.log(htmlOutput);
Another approach would be a js document load handler which updates all a href under .shadow-box > .content, but that's a more a hack and should be used as last option.
If we prefer that by default it should be put to all links target="_blank" then it is not the right approach.
But I would prefer to be able to choose whether to have a brank link or not.
Since the marked library already supports the use of html but has a bug on creating the link I would open a ticket there.
Currently marked if you pass him this link: example
He creates an html like this: exampleSo maybe it is wrong how marked creates the html.
what do you think?
Striping the tags is most likely a security feature, e.g. strip js etc. from it. So that's probably why target get's removed.
Either way I don't think people should need to write html with target blank etc., they should just paste an http link and it should be rendered accordingly with target=blank.
Limiting it only to the maintenance content makes sense, but I don't think it hurts doing it globally as well.
|
gharchive/issue
| 2024-09-27T10:47:24 |
2025-04-01T04:34:55.594211
|
{
"authors": [
"SowmikDey",
"VarPDev",
"kub3let",
"mahenoorsalat"
],
"repo": "louislam/uptime-kuma",
"url": "https://github.com/louislam/uptime-kuma/issues/5130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2166336466
|
correct one keyword
⚠️⚠️⚠️ Since we do not accept all types of pull requests and do not want to waste your time. Please be sure that you have read pull request rules:
https://github.com/louislam/uptime-kuma/blob/master/CONTRIBUTING.md#can-i-create-a-pull-request-for-uptime-kuma
Tick the checkbox if you understand [x]:
[x] I have read and understand the pull request rules.
Description
correct one keyword in japanese.
Fixes #(issue)
Type of change
Please delete any options that are not relevant.
Other
Checklist
[x] My code follows the style guidelines of this project
[ ] I ran ESLint and other linters for modified files
[x] I have performed a self-review of my own code and tested it
[ ] I have commented my code, particularly in hard-to-understand areas (including JSDoc for methods)
[x] My changes generates no new warnings
[ ] My code needed automated testing. I have added them (this is optional task)
Screenshots (if any)
Please do not use any external image service. Instead, just paste in or drag and drop the image here, and it will be uploaded automatically.
Language files cannot be touched in any pull requests, because they easily create merge conflicts with Weblate.
Please update it on https://weblate.kuma.pet/projects/uptime-kuma/uptime-kuma/ instead
|
gharchive/pull-request
| 2024-03-04T09:09:42 |
2025-04-01T04:34:55.599459
|
{
"authors": [
"CommanderStorm",
"FlyingFeng2021"
],
"repo": "louislam/uptime-kuma",
"url": "https://github.com/louislam/uptime-kuma/pull/4548",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
924227230
|
try catch for writeStagedState serialize
Even though there was ".catch" on this line, it only caught errors from setItem, so an error from serialize would be completely unhandled.
This will at least give a more helpful error message than this raw error:
RangeError: Out of memory
at stringify([native code])
at writeStagedState(node_modules/redux-persist/lib/createPersistoid.js:90:48)
at processNextKey(node_modules/redux-persist/lib/createPersistoid.js:78:7)
at _callTimer(node_modules/react-native/Libraries/Core/Timers/JSTimers.js:130:7)
at apply(node_modules/react-native/Libraries/Core/Timers/JSTimers.js:383:7)
at __callFunction(node_modules/react-native/Libraries/BatchedBridge/MessageQueue.js:416:27)
at fn(node_modules/react-native/Libraries/BatchedBridge/MessageQueue.js:109:12)
at __guard(node_modules/react-native/Libraries/BatchedBridge/MessageQueue.js:364:9)
at value(node_modules/react-native/Libraries/BatchedBridge/MessageQueue.js:108:10)
at value([native code])
Note that this needs some testing..
|
gharchive/pull-request
| 2021-06-17T18:40:01 |
2025-04-01T04:34:55.610955
|
{
"authors": [
"SethArchambault"
],
"repo": "loveland/redux-persist",
"url": "https://github.com/loveland/redux-persist/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
722135095
|
gzip: stdin: not in gzip format
what i did was just :
./build.sh 8.9.2 linux-x64
This happens when there's a problem downloading the source code for one of the dependencies, perhaps due to a temporary networking problem.
@lovell can you confirm if you get same issue because I'm trying for 2 days. i used different machines and network
I'm trying for 2 days. i used different machines and network.
You'll need to work out where the lin.sh script is failing. The log output just before the failure should indicate how far it had got.
provide a ready-to-use lambda layer
Please see https://github.com/lovell/sharp/issues/1772
|
gharchive/issue
| 2020-10-15T08:52:43 |
2025-04-01T04:34:55.613604
|
{
"authors": [
"Dramex",
"lovell"
],
"repo": "lovell/sharp-libvips",
"url": "https://github.com/lovell/sharp-libvips/issues/68",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
226844269
|
Add authentication
This is a fixed version of @ekho's commit, closes #6.
Ref: https://github.com/ekho/jenkins_exporter/commit/b8afe37dea6c996c73c24a996b6ae3e044b9b6c3
@SqiSch Any chance to get this merged?
@SqiSch 👍
/ping @lovooit
|
gharchive/pull-request
| 2017-05-07T09:42:13 |
2025-04-01T04:34:55.627683
|
{
"authors": [
"edganiukov",
"fhemberger"
],
"repo": "lovoo/jenkins_exporter",
"url": "https://github.com/lovoo/jenkins_exporter/pull/10",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1589138147
|
[doc] Add human-readable names and descriptions to remaining blocks
Add a human-readable name as well as one-line and one-paragraph descriptions to the remaining hardware blocks, including those specific to a top (ast, sensor_ctrl, alert_handler, and rv_plic).
Draft document here: https://docs.google.com/document/d/1nX0GiXE6W5PEEGGwiCCToCwq19nCBpVRDtmN5f_0paQ/edit
What about primitives? And what about TL-UL?
I had a quick look for one_line_desc in the top specific blocks and they didn't have them.
hw/top_earlgrey/ip/ast/data/ast.hjson
hw/top_earlgrey/ip/sensor_ctrl/data/sensor_ctrl.hjson
hw/ip_templates/alert_handler/data/alert_handler.hjson.tpl
hw/ip_templates/rv_plic/data/rv_plic.hjson.tpl
They appear to exist in the internal doc: https://docs.google.com/document/d/1nX0GiXE6W5PEEGGwiCCToCwq19nCBpVRDtmN5f_0paQ. It looks like the red and purple ones have been put in the repo and the others have not.
|
gharchive/issue
| 2023-02-17T10:34:46 |
2025-04-01T04:34:55.631073
|
{
"authors": [
"HU90m",
"andreaskurth"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/issues/17307",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1649345931
|
[cryptolib, doc] Make a coding conventions document for the cryptolib.
Description
See discussion on https://github.com/lowRISC/opentitan/pull/17762
The cryptolib has a few highly specific coding conventions that it would be good to document and link on the website. Things to include:
Module ID assignments
How to use the cryptolib-specific status_t constructs
Code organization (e.g. isolating the top-level API from other internal code)
I won't cover things like hardening methods, since those are not cryptolib-specific and apply to code in e.g. ROM as well.
Also see: https://github.com/lowRISC/opentitan/pull/18673#issuecomment-1553871991
|
gharchive/issue
| 2023-03-31T13:19:17 |
2025-04-01T04:34:55.633875
|
{
"authors": [
"jadephilipoom"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/issues/17775",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2548670081
|
[rom_ext] Migrate to ECDSA verification of application firmware
Migrate to ECDSA verification of application firmware.
Currently, the ROM_EXT uses RSA3K keys to validate application firmware, but we want to use ECDSA verification instead and eliminate the use of RSA keys.
[ ] Migrate the ES ROM_EXT to using ECDSA keys
[ ] Cherry-pick ECDSA changes from master branch.
[ ] Integrate ECDSA sigverify into the ROM_EXT.
[ ] Change FPGA application keys to ECDSA keys.
[ ] Change SiVAL application keys to ECDSA keys.
[ ] Change ProdA application keys to ECDSA keys.
[ ] Change ProdC application keys to ECDSA keys.
[ ] Eliminate RSA application keys from the earlgrey_es_sival branch.
[ ] Cherry-pick the ROM_EXT changes and new keys to master
[ ] Cherry-pick the ROM_EXT changes and new keys to earlgrey_1.0.0
#24544
#24643
|
gharchive/issue
| 2024-09-25T18:08:09 |
2025-04-01T04:34:55.637537
|
{
"authors": [
"cfrantz"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/issues/24641",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1202586127
|
[spi_device] Payload Start Index (CSR + Logic)
This commit revised the last_written_payload_idx to payload_start_idx.
The payload_start_idx indicates the first pointer of the written
payload. It usually shows 0. If the attached SPI host system issues more
than 256B, this may show the non-zero value. SW should read from
start_idx to get the correct value in that case.
Note: Please review last three commits. first 8 commits are for overflow event PR.
That's good idea. Let me merge and address it . Thanks!
On Apr 12, 2022, at 5:25 PM, tjaychen @.***> wrote:
@tjaychen commented on this pull request.
In hw/ip/spi_device/rtl/spid_upload.sv https://github.com/lowRISC/opentitan/pull/12088#discussion_r848969611:
@@ -292,37 +293,41 @@ module spid_upload
.dst_pulse_o (sys_payloadptr_clr_posedge)
);
// last_written_payloadptr: in contrast to payloadptr,
// last_written_payloadptr provides a location that the HW lastly wrote to
// the payload buffer.
// payload_start_idx: in contrast to payloadptr,
this is a really minor point, but instead of maintaining both the payload idx and ptr, could we get away with the idx and a mirror bit? (kind of like fifo's). so when you first exceed 256, we could just set that bit to 1 to indicate the depth is now always 256B and that the idx should keep incrementing.
I might have missed some other point that makes this scheme not doable. Again, only a minor point, lgtm otherwise.
—
Reply to this email directly, view it on GitHub https://github.com/lowRISC/opentitan/pull/12088#pullrequestreview-940277313, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAJDG3XOYJ3OG2D32QOD7RTVEYH6RANCNFSM5TI56KYQ.
You are receiving this because you authored the thread.
That's good idea. Let me merge and address it . Thanks!
Nope I can't. CI does not pass :(
lgtm!
do you need any help looking at the sw build failures @eunchan? or does this just need a rebase?
Ah. I fixed one in DIF, but missed other (unittest). I will fix today. Thanks!
|
gharchive/pull-request
| 2022-04-13T00:08:12 |
2025-04-01T04:34:55.644773
|
{
"authors": [
"eunchan",
"tjaychen"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/12088",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
515852581
|
[sw/vendor/coremark] use util/vendor_hw.py for coremark
Add sw/vendor/coremark.vendor.hjson and one patch in
sw/vendor/patches/coremark to use util/vendor_hw.py script to replace
the previous version of coremark that had been copied in by hand.
Commit message auto-generated by util/vendor_hw.py -c below:
Update coremark to eembc/coremark@21d473a
Update code from upstream repository
https://github.com/eembc/coremark.git to revision
21d473aae1f11d52ea592a8685734be2209aa66f
@tjaychen This PR cannot be updated any more since we made our repository public. Would you mind opening a new PR with this change (and the review comment addressed)? Please let me know if there's something we can help with.
|
gharchive/pull-request
| 2019-11-01T01:08:07 |
2025-04-01T04:34:55.647664
|
{
"authors": [
"imphil",
"tjshep"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/750",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
53306465
|
Unable to change profile picture
Can't change profile picture from initial photo
Thanks for filing! Yeah, unfortunately the avatar picture comes from gravatar or twitter, and there's no image upload capability built in.
See: https://github.com/TelescopeJS/Telescope/issues/356
Workaround in case anyone else is stuck on this:
Add an email that has a gravatar associated with it to your profile. It will pull in that image automatically.
|
gharchive/issue
| 2015-01-03T15:19:14 |
2025-04-01T04:34:55.650424
|
{
"authors": [
"karn09",
"litnerdy",
"lpatmo"
],
"repo": "lpatmo/cb",
"url": "https://github.com/lpatmo/cb/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
360638201
|
Add GitHub Pages for Better Documentation
I feel the one area that is lacking is really good mature documentation. If the documentation can be better explained with even more examples, I believe the use of this library will explode. Google will also put this library higher in its rankings.
My first recommendation is to setup GitHub Pages and migrate all the documentation there. Break down the documentation into sections that drill down into details.
Secondly, I think it would be a good exercise to add a lot more documentation within the library to increase the quality of the intellisense. Right now if I want to add an indicator or study what other indicators are doing, I find myself in base classes that I'm not sure what their purpose is.
If we want more contributors, we need to increase the quality of documentation.
That's a great idea, @irperez . But i'm quite new to Github Pages, would you mind helping me to set up the Github Pages, and tell me what i need to prepare for the documentation? I am willing to help 😄
@lppkarl I see you setup the pages. I've gone on and got started with some setup. But lots more to do. Its something we can build on and add more detail.
See #67
@irperez Much appreciated for your help. I'll take a look of the pages, and add the details later.
|
gharchive/issue
| 2018-09-16T13:11:13 |
2025-04-01T04:34:55.658314
|
{
"authors": [
"irperez",
"lppkarl"
],
"repo": "lppkarl/Trady",
"url": "https://github.com/lppkarl/Trady/issues/66",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1706657449
|
[PARSING] Enhance request parser
Improve parsing with state-based switch cases (nginx-like);
Add parsing for chunked body requests;
Add exceptions for Bad Request (400), Entity Too Large (413), Not Implemented (501) and HTTP Version Not Supported (505).
TODO: Integrate exceptions with webserver, so the response can be assembled properly.
LGTM!
|
gharchive/pull-request
| 2023-05-11T22:31:13 |
2025-04-01T04:34:55.663427
|
{
"authors": [
"araggohnxd",
"lrcouto"
],
"repo": "lrcouto/webserv",
"url": "https://github.com/lrcouto/webserv/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
565157945
|
Please update
[var allElementsClassName = "_2HS9r";]-----> update this to
[var allElementsClassName = "_2vikl";]-----> this
whatsapp got update for several week ago
and also change this
var input = document.getElementsByClassName(inputMessageClassName + " copyable-text selectable-text")[0];
change 0 to 1 like this
var input = document.getElementsByClassName(inputMessageClassName + " copyable-text selectable-text")[0];
fixed thanks 👍
|
gharchive/issue
| 2020-02-14T07:40:32 |
2025-04-01T04:34:55.665534
|
{
"authors": [
"almahdiamiry",
"kindvy",
"lreiner"
],
"repo": "lreiner/Whatsapp-Message-Spammer",
"url": "https://github.com/lreiner/Whatsapp-Message-Spammer/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
355750622
|
matchSnapshot should allow to test entire file snapshot instead of just the first document
In my scenario, I have multiple documents being declared in the same file.
So my file goes...
my-project.yaml
--- deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
...
--- service
apiVersion: v1
kind: Service
metadata:
...
--- configMap
apiVersion: v1
kind: ConfigMap
metadata:
...
But when I try to test snapshot of this file with matchSnapshot, I always get the deployment object but not the service and configMap objects snapshot. So I can never test snapshots of the service and configMaps.
I know the best practice and workaround to this is to separate the documents into different files but before I go with that approach, is there any way we can add that feature to this plugin easily? I am new to go and I don't have a faintest idea on how to write a new plugin so when I looked at the code, I couldn't understand.
Any way to work around this? I have a helm chart that defines two related deployments in the same file with a range. I don't want to split them up into separate files because it'll be lots of copy/pasting when we want to change things in both of them.
Close due to archiving repository.
Sorry for not presenting so long. I've been working on another project and don't have time to for helm-unittest.
Please consider other working forks like quintush/helm-unittest.
|
gharchive/issue
| 2018-08-30T20:55:43 |
2025-04-01T04:34:55.669033
|
{
"authors": [
"RaviDasari",
"lrills",
"phsteve"
],
"repo": "lrills/helm-unittest",
"url": "https://github.com/lrills/helm-unittest/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1275907845
|
Fix typos
Description
This PR is fix some typos.
Completed Tasks
[x] I have read the Contributing Guide.
[x] The pull request is complete (implemented / written).
[x] Git commits have been cleaned up (squash WIP / revert commits).
[-] I wrote tests and ran bundle exec rake locally (if code is attached to PR).
Thanks for the changes. This will land in the next release.
|
gharchive/pull-request
| 2022-06-18T23:25:52 |
2025-04-01T04:34:55.701522
|
{
"authors": [
"lsegal",
"ydah"
],
"repo": "lsegal/yard",
"url": "https://github.com/lsegal/yard/pull/1446",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1732755292
|
Update flow.rst
This patch makes searching for TODO comments with current ticket number an explicit step in the development workflow.
@ktlim would you be able to review this?
|
gharchive/pull-request
| 2023-05-30T18:23:11 |
2025-04-01T04:34:55.715257
|
{
"authors": [
"arunkannawadi"
],
"repo": "lsst-dm/dm_dev_guide",
"url": "https://github.com/lsst-dm/dm_dev_guide/pull/618",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1010769019
|
Porting Lasair to new hardware (i.e. Somerville)
[x] Build porting plan, understand what is time critical, assign tasks
[ ] Build ingestion pipeline on new hardware
[ ] Build history files with 4 years data
[ ] Start ingestion
[ ] Ingest history files
[ ] Cleaning up
Ken and I made a plan for this
https://lsst-uk.atlassian.net/wiki/spaces/LUSC/pages/2786164737/Porting+Lasair+to+new+Somerville
We can talk about it on Monday
The cycle 5 activities are still under discussion BTW
23/MAR/22 Lasair meeting
agreed carries into cycle-6
|
gharchive/issue
| 2021-09-29T10:16:09 |
2025-04-01T04:34:55.720393
|
{
"authors": [
"RoyWilliams",
"tms-epcc"
],
"repo": "lsst-uk/lasair-lsst",
"url": "https://github.com/lsst-uk/lasair-lsst/issues/96",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2324859588
|
🛑 SoHard 网盘 is down
In 345fad0, SoHard 网盘 (http://pan.lsy223622.com:2236) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SoHard 网盘 is back up in f8b4675 after 15 minutes.
|
gharchive/issue
| 2024-05-30T06:55:55 |
2025-04-01T04:34:55.726474
|
{
"authors": [
"lsy223622"
],
"repo": "lsy223622/status",
"url": "https://github.com/lsy223622/status/issues/1456",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1458644814
|
🛑 Mirror-爬 is down
In 3844c96, Mirror-爬 (https://ncov.zhouym.tech/) was down:
HTTP code: 404
Response time: 941 ms
Resolved: Mirror-鱼露 is back up in 497c0c8.
|
gharchive/issue
| 2022-11-21T21:58:23 |
2025-04-01T04:34:55.728962
|
{
"authors": [
"lsy223622"
],
"repo": "lsy223622/xdncov-mirror-status",
"url": "https://github.com/lsy223622/xdncov-mirror-status/issues/11347",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1513114578
|
🛑 Mirror-爬 is down
In 35d6609, Mirror-爬 (https://ncov.zhouym.tech/) was down:
HTTP code: 404
Response time: 942 ms
Resolved: Mirror-鱼露 is back up in d1bfec1.
|
gharchive/issue
| 2022-12-28T19:56:32 |
2025-04-01T04:34:55.731313
|
{
"authors": [
"lsy223622"
],
"repo": "lsy223622/xdncov-mirror-status",
"url": "https://github.com/lsy223622/xdncov-mirror-status/issues/15337",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1235417184
|
🛑 Mirror-没有女朋友跨年的群傻逼 is down
In 81f01bc, Mirror-没有女朋友跨年的群傻逼 (https://x.ksfu.top/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Mirror-木生 is back up in a126fec.
|
gharchive/issue
| 2022-05-13T15:51:44 |
2025-04-01T04:34:55.733676
|
{
"authors": [
"lsy223622"
],
"repo": "lsy223622/xdncov-mirror-status",
"url": "https://github.com/lsy223622/xdncov-mirror-status/issues/1845",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1315715386
|
🛑 Mirror-爬 is down
In bad49ff, Mirror-爬 (https://ncov.zhouym.tech/) was down:
HTTP code: 404
Response time: 1412 ms
Resolved: Mirror-鱼露 is back up in b305bc5.
|
gharchive/issue
| 2022-07-23T18:08:18 |
2025-04-01T04:34:55.736187
|
{
"authors": [
"lsy223622"
],
"repo": "lsy223622/xdncov-mirror-status",
"url": "https://github.com/lsy223622/xdncov-mirror-status/issues/2761",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1332744193
|
🛑 Mirror-爬 is down
In 0385c78, Mirror-爬 (https://ncov.zhouym.tech/) was down:
HTTP code: 404
Response time: 1818 ms
Resolved: Mirror-鱼露 is back up in afb49f7.
|
gharchive/issue
| 2022-08-09T05:43:46 |
2025-04-01T04:34:55.738723
|
{
"authors": [
"lsy223622"
],
"repo": "lsy223622/xdncov-mirror-status",
"url": "https://github.com/lsy223622/xdncov-mirror-status/issues/4128",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1339535801
|
🛑 Mirror-爬 is down
In 56b2f6d, Mirror-爬 (https://ncov.zhouym.tech/) was down:
HTTP code: 404
Response time: 771 ms
Resolved: Mirror-鱼露 is back up in 14e62a8.
|
gharchive/issue
| 2022-08-15T21:32:49 |
2025-04-01T04:34:55.741049
|
{
"authors": [
"lsy223622"
],
"repo": "lsy223622/xdncov-mirror-status",
"url": "https://github.com/lsy223622/xdncov-mirror-status/issues/4717",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1428153703
|
🛑 Mirror-爬 is down
In 25615f7, Mirror-爬 (https://ncov.zhouym.tech/) was down:
HTTP code: 404
Response time: 692 ms
Resolved: Mirror-鱼露 is back up in fb91fdb.
|
gharchive/issue
| 2022-10-29T08:27:51 |
2025-04-01T04:34:55.743360
|
{
"authors": [
"lsy223622"
],
"repo": "lsy223622/xdncov-mirror-status",
"url": "https://github.com/lsy223622/xdncov-mirror-status/issues/9374",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
173223715
|
FFImageLoading is unable to generate image.
On Android I have an ImageViewAsync inside of an ListView that I am loading an image into as follows:
ImageService.Instance.LoadUrl(item.CoverPhoto)
.Retry(3,200)
.ErrorPlaceholder(placeholder.ToString(), ImageSource.CompiledResource)
.LoadingPlaceholder (placeholder.ToString (), ImageSource.CompiledResource)
.Error (exception =>
{
#if DEBUG
Debugger.Break ();
#endif
});
The error action here occasionally fires as I'm scrolling around the ListView with a generic exception and the message says "FFImageLoading is unable to generate image." The exception has no stack trace. I can give you the call stack when the application pauses on Debugger.Break();
TaskExtensions.AnonymousMethod__0(System.Exception exception)
FFImageLoading.Work.ImageLoaderTaskBase.()
Java.Lang.Thread.RunnableImplementor.Run()
Java.Lang.IRunnableInvoker.n_Run(System.IntPtr jnienv, System.IntPtr native__this)
object.3b1001b5-a37b-4833-ae9e-3ebf9fc57b0e (arg0, arg1)
I have a short list only 30 items or so. All the images seem to download fine, but it seems there is a problem loading the image into the ImageViewAsync. Testing on a physical device, Moto G running Android 5.1, but getting reports on other phones and OS versions too. Any ideas as to why this would be?
Hi @rusty21,
I can't reproduce it. Can you attach a sample project?
Thanks
2.1.8-pre-150 nugets released. It contains many Android stability & performance fixes. Do you still experience it? If you do, please reopen this issue.
|
gharchive/issue
| 2016-08-25T14:46:37 |
2025-04-01T04:34:55.755310
|
{
"authors": [
"daniel-luberda",
"rusty21"
],
"repo": "luberda-molinet/FFImageLoading",
"url": "https://github.com/luberda-molinet/FFImageLoading/issues/293",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2362149854
|
🛑 Školský šport is down
In c073c01, Školský šport (https://skolskysport.sk/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Školský šport is back up in 9cfaff5 after 5 minutes.
|
gharchive/issue
| 2024-06-19T11:42:07 |
2025-04-01T04:34:55.757822
|
{
"authors": [
"lubosm"
],
"repo": "lubosm/minedusk-uptime",
"url": "https://github.com/lubosm/minedusk-uptime/issues/2082",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1724931073
|
🛑 Slovenský historický ústav v Ríme is down
In 20948f7, Slovenský historický ústav v Ríme (http://www.shur.sk/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Slovenský historický ústav v Ríme is back up in 2fe33f5.
|
gharchive/issue
| 2023-05-25T01:08:04 |
2025-04-01T04:34:55.760184
|
{
"authors": [
"lubosm"
],
"repo": "lubosm/minedusk-uptime",
"url": "https://github.com/lubosm/minedusk-uptime/issues/842",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1514615310
|
"Authorize Twitter" button does authorization and then nothing
"Authorize Twitter" button does authorization, it redirects to Twitter and I click "Authorize App", but then I get back to the same page and nothing changed - including the "Authorize Twitter" button which still sits there. When I click it now, however, nothing happens. Did I do the "Publish your handle" part wrong? (What exactly does that even mean, though? A link to the mastodon page, or @user@host, or what exactly will work? That seems to be a bit unclear.) Did the authorization itself even work? In overall, the clearness of the instructions and more importantly the feedback on which step I did correctly or didn't yet, seems a bit lacking.
I'm seeing the same thing. The Authorize Twitter button seems to work but when I return nothing happens. Fedifinder worked fine a couple of weeks ago and the old version still works for me. Tried it in Edge and Firefox, same result in both browsers.
|
gharchive/issue
| 2022-12-30T16:07:10 |
2025-04-01T04:34:55.761889
|
{
"authors": [
"alanta",
"ell1e"
],
"repo": "lucahammer/fedifinder",
"url": "https://github.com/lucahammer/fedifinder/issues/200",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
391146035
|
implement header encryption
Fixes #1335.
Codecov Report
Merging #1672 into master will decrease coverage by 0.1%.
The diff coverage is 73.98%.
@@ Coverage Diff @@
## master #1672 +/- ##
=========================================
- Coverage 87.06% 86.96% -0.1%
=========================================
Files 97 97
Lines 6213 6265 +52
=========================================
+ Hits 5409 5448 +39
- Misses 573 581 +8
- Partials 231 236 +5
Impacted Files
Coverage Δ
internal/protocol/packet_number.go
100% <ø> (ø)
:arrow_up:
internal/wire/extended_header.go
93.33% <0%> (ø)
:arrow_up:
internal/wire/header.go
88.46% <100%> (+0.23%)
:arrow_up:
internal/handshake/crypto_setup.go
72.12% <47.06%> (-0.64%)
:arrow_down:
internal/handshake/initial_aead.go
70% <63.64%> (-5.76%)
:arrow_down:
session.go
73.49% <70%> (-0.45%)
:arrow_down:
internal/handshake/aead.go
86.36% <88.24%> (-2.53%)
:arrow_down:
packet_unpacker.go
90.32% <93.33%> (+3.66%)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update df22a9e...a638185. Read the comment docs.
|
gharchive/pull-request
| 2018-12-14T15:04:45 |
2025-04-01T04:34:55.774474
|
{
"authors": [
"codecov-io",
"marten-seemann"
],
"repo": "lucas-clemente/quic-go",
"url": "https://github.com/lucas-clemente/quic-go/pull/1672",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1058966513
|
Need for Speed Most wanted script is not recognize Origin NFS13.exe
Hello,
i dunno where to post the issue, because there is not link in Nuclues GUI for report issue, i only would youtube video about game and Nuclues and here are comment disable for some bad reason.. So im reporting it here.
I own game on Origin and script is not able regoznice its NFS13.exe so im unable to run it with Nucleus, other games which i tried are working.
I wonde in genel not about this issues:
is somewhere something like requests for adding addition yet unsupported games?
Would be possible add some mode to run to 2 different games on computer which would use Nucleus setup? If some childern want to play at same time, but 2 different games with 2 controllers, 1 controller per game? So far i have to use virtual machines for that.
Its still broken, i saw some script updates, but its not fixing this issue.
|
gharchive/issue
| 2021-11-19T22:27:44 |
2025-04-01T04:34:55.776850
|
{
"authors": [
"ruthan"
],
"repo": "lucasassislar/nucleuscoop",
"url": "https://github.com/lucasassislar/nucleuscoop/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2685290160
|
correct way of nesting flexes
I'm trying to create a pane/docking system with egui_flex 0.1.1. What I'm trying to achieve is:
a group of outer flexes that determines the layout of the panes
each pane will display its content, which may use its own flexes but shouldn't affect the size of the panes
Here is a sample of my code:
Flex::vertical().show(ui, |flex| {
flex.add_flex(egui_flex::item().grow(0.4), Flex::horizontal(), |flex| {
flex.add_simple(egui_flex::item().grow(0.3), |ui| {
self.ui_pane(ui, PaneContent::Empty);
});
flex.add_widget(egui_flex::item(), Separator::default().vertical());
flex.add_simple(egui_flex::item().grow(0.7), |ui| {
self.ui_pane(ui, PaneContent::Empty);
});
});
flex.add_widget(egui_flex::item(), Separator::default().horizontal());
flex.add_simple(egui_flex::item().grow(0.6), |ui| {
self.ui_pane(ui, PaneContent::Empty);
});
});
For some reason the contents keep stretching beyond the boundary of the window itself. I can't figure out what I did wrong just by looking at the examples. I'd appreciate any help.
grow(0.4) will just mean that if there is more space available than needed this item will grow with a factor of 0.4 and the other one with a factor of 0.6. If you want to use this for relative spacing you could give the items a basis of 0, I think then it should work as expected.
Thank you! It worked...for a flash second, and then the executable crashes:
thread 'main' panicked at D:\ProgramData\rust.cargo\registry\src\index.crates.io-6f17d22bba15001f\egui-0.29.1\src\ui.rs:897:9:
assertion failed: 0.0 <= width
Though for that instant it does look like things are arranged correctly. Here is the updated code:
Flex::vertical().show(ui, |flex| {
flex.add_flex(
egui_flex::item().grow(0.4).basis(0.0),
Flex::horizontal(),
|flex| {
flex.add_simple(egui_flex::item().grow(0.3).basis(0.0), |ui| {
self.ui_pane(ui, PaneContent::Empty);
});
flex.add_widget(egui_flex::item(), Separator::default().vertical());
flex.add_simple(egui_flex::item().grow(0.7).basis(0.0), |ui| {
self.ui_pane(ui, PaneContent::Empty);
});
},
);
flex.add_widget(egui_flex::item(), Separator::default().horizontal());
flex.add_simple(egui_flex::item().grow(0.6).basis(0.0), |ui| {
self.ui_pane(ui, PaneContent::Empty);
});
});
|
gharchive/issue
| 2024-11-23T04:06:30 |
2025-04-01T04:34:55.784867
|
{
"authors": [
"Pentalimbed",
"lucasmerlin"
],
"repo": "lucasmerlin/hello_egui",
"url": "https://github.com/lucasmerlin/hello_egui/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2526774673
|
[Request] Family Icon
Icon name
Family Icon
Use cases
When applying for a visa and there's "dependants" section for the applicant to add their children
Anytime there's a family sharing section in the UI
Anytime there's parental control section in the UI
Design ideas
Something like this:
Checklist
[X] I have searched if someone has submitted a similar issue before and there weren't any. (Please make sure to also search closed issues, as this issue might already have been resolved.)
[X] I have searched existing icons to make sure it does not already exist and I didn't find any.
[X] I am not requesting a brand logo and the art is not protected by copyright.
[X] I am not requesting an icon that includes religious, political imagery or hate symbols.
[X] I have provided appropriate use cases for the icon(s) requested.
Open lucide studio
|
gharchive/issue
| 2024-09-15T07:04:44 |
2025-04-01T04:34:55.804078
|
{
"authors": [
"TheMikeyRoss",
"jguddas"
],
"repo": "lucide-icons/lucide",
"url": "https://github.com/lucide-icons/lucide/issues/2456",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
1422134501
|
Update husky, automatically install husky on pnpm i
Fixes #836
Awesome I had this issue as well, very annoying.
Sorry, guys, I ended up here completely by accident
But this changes are not enough for migration to version 8
You should move your scripts from package.json into separate files in .husky folder
Here's more info https://typicode.github.io/husky/#/?id=migrate-from-v4-to-v8
@oceandrama Thanks for letting us know!
|
gharchive/pull-request
| 2022-10-25T09:14:28 |
2025-04-01T04:34:55.806420
|
{
"authors": [
"ericfennis",
"oceandrama",
"wojtekmaj"
],
"repo": "lucide-icons/lucide",
"url": "https://github.com/lucide-icons/lucide/pull/847",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
994674676
|
New error using the new update.
[2021-09-13 11:39:11,114] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.5.1, git-hash=unknown, git-branch=unknown
[2021-09-13 11:39:11,216] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed groups
[2021-09-13 11:39:11,216] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed model parallel group with size 1
[2021-09-13 11:39:11,216] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed expert parallel group with size 1
[2021-09-13 11:39:11,217] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert data parallel process group with ranks: [0]
[2021-09-13 11:39:11,217] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [0]
[2021-09-13 11:39:11,240] [INFO] [engine.py:198:init] DeepSpeed Flops Profiler Enabled: False
Traceback (most recent call last):
File "train_dalle.py", line 497, in
config_params=deepspeed_config,
File "/home/valterjordan/DALLE-pytorch/dalle_pytorch/distributed_backends/distributed_backend.py", line 152, in distribute
**kwargs,
File "/home/valterjordan/DALLE-pytorch/dalle_pytorch/distributed_backends/deepspeed_backend.py", line 162, in _distribute
**kwargs,
File "/home/valterjordan/miniconda3/envs/dalle_env/lib/python3.7/site-packages/deepspeed/init.py", line 141, in initialize
config_params=config_params)
File "/home/valterjordan/miniconda3/envs/dalle_env/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 204, in init
self.training_dataloader = self.deepspeed_io(training_data)
File "/home/valterjordan/miniconda3/envs/dalle_env/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1188, in deepspeed_io
data_parallel_rank=data_parallel_rank)
File "/home/valterjordan/miniconda3/envs/dalle_env/lib/python3.7/site-packages/deepspeed/runtime/dataloader.py", line 52, in init
rank=data_parallel_rank)
File "/home/valterjordan/miniconda3/envs/dalle_env/lib/python3.7/site-packages/torch/utils/data/distributed.py", line 87, in init
self.num_samples = math.ceil(len(self.dataset) / self.num_replicas) # type: ignore
TypeError: object of type 'Processor' has no len()
Issue still priests but disappears downgrading webdatasets really strange.
Still getting the issue as well. @jaoeded what version of webdataset did you use to fix it?
Still getting the issue as well. @jaoeded what version of webdataset did you use to fix it?
webdataset==0.1.62
Sadly this too can cause some errors but it erases the TypeError: object of type 'Processor' has no len() for the most part not totally tho.
I guess we can just monkey patch the len dunder method...
https://github.com/lucidrains/DALLE-pytorch/pull/366
https://github.com/microsoft/DeepSpeed/issues/1371
just a note:
deepspeed supports IterableDataset https://github.com/microsoft/DeepSpeed/blob/86dd6a6484a4c3aa8a04fc7e7e6c67652b09dad5/deepspeed/runtime/engine.py#L1141
webdataset exposes an IterableDataset https://github.com/webdataset/webdataset
iterable dataset do not have a __len__ method, only __iter__
So I'm wondering if the problem could be coming from the use of webdataset and deepspeed here that would be incorrect in some way ?
for example this call https://github.com/lucidrains/DALLE-pytorch/blob/main/train_dalle.py#L392 to torch.utils.data.distributed.DistributedSampler seems suspicious and unrelated with deepspeed
@rom1504 All I know is that maintaining a DeepSpeed compatible codebase has been an utter nightmare since day one. Interop with deep speed breaks something fairly frequently. As such my motivation to fix these things "properly" is pretty much non-existent.
I agree that it likely has something to do with the data sampler; but I didn't want to just remove that as it seems to be explicitly for handling the multi-GPU scenario with DeepSpeed I believe?
I guess @robvanvolt might be interested to have a look at this code since it's related with his deepspeed issue
Hey, the DistributedSampler is indeed unrelated to DeepSpeed, it's actually for Horovod. I should have documented this, sorry about that.
I have an intuition about the issue here (something about the dataset returned by DeepSpeed) but need to see whether I can fix it tomorrow (I'm also not very up-to-date with the code base). Otherwise after Wednesday is the earliest time.
Sorry for the brevity.
I had a quick look at this; I'm not yet familiar with WebDatasets, so maybe you can answer this more easily.
Why is it important to use a wds.WebLoader? Can't we pass the wds.WebDataset to distr_backend.distribute to let DeepSpeed handle data loading with its distr_dl (and removing this if-branch accordingly)?
Sorry, I know you probably answered this already during all the testing and implementing.
The problem is that PyTorch's sampling strategy does not work with IterableDatasets; see the open issue here: https://github.com/pytorch/pytorch/issues/28743.
The only change we need is to pass None here when ENABLE_WEDATASET is True.
Is this issue safe to close now?
@janEbert @afiaka87 @rom1504 I'm closing this issue janEbert's pr seems to have fixed it. reopen if needed.
One last note feel free to try the pr yourselves. if it does not work feel free to reopen it worked for me.
|
gharchive/issue
| 2021-09-13T09:41:25 |
2025-04-01T04:34:55.826222
|
{
"authors": [
"afiaka87",
"janEbert",
"jaoeded",
"js12312",
"rom1504"
],
"repo": "lucidrains/DALLE-pytorch",
"url": "https://github.com/lucidrains/DALLE-pytorch/issues/359",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1233943273
|
Image Editing
Do you have any suggestions for generating images by not only relying on text, but on a masked image, as open describes in their blog https://openai.com/dall-e-2/?
You need to train it with inpainting task. In particular, the Decoder Unet needs to be able to take in a mask input to be concatenated with masked image on the channel dimension to predict the original image.
I think this would be a nice addition at some point, if I do anything on this regard will let you know :)
It would be good to train an all in one model where the model inpaints as needed or also do full image generation by simply giving a full zero mask.
Agree, it would be a complementary task so doing both tasks at the same time should likely not hurt the overall performance.
So if one model is trained for both tasks at the same time, we would need to do noised_image + empty masked image + empty mask during normal training, noised_image + masked_original_image + mask during inpainting pertaining.
Not 100% sure, but do we have to add noise to the entire image, or is it enough just to add noise to the masked part? Not exactly the most scientific resource but check the video on https://openai.com/dall-e-2/ timestamp 2:37 "monkey paying taxes". It seems like they input an image where only the masked part is noised.
During training, only x is noised or denoised and masked image and mask used directly from the various pieces of openai GLIDE and DDPM code if I understand it correctly.
yea, depending on some circumstances next week, i could build this, let's leave this open
It would be good to train an all in one model where the model inpaints as needed or also do full image generation by simply giving a full zero mask.
yup, this is the most ideal case :)
It would be good to train an all in one model where the model inpaints as needed or also do full image generation by simply giving a full zero mask.
yup, this is the most ideal case :)
Alternatively can you not just finetune the generation model for inpainting you simply would have just to change the input layer the rest of the weights in the network you should be able to take over.
i think i'm going to aim for integrating this technique https://github.com/andreas128/RePaint it is a pretty recent paper, but the results look good. can use this resampler technique for both dalle2 and imagen
ok it is done https://github.com/lucidrains/dalle2-pytorch#inpainting
|
gharchive/issue
| 2022-05-12T12:54:59 |
2025-04-01T04:34:55.833097
|
{
"authors": [
"Mut1nyJD",
"egeozsoy",
"lucidrains",
"xiankgx"
],
"repo": "lucidrains/DALLE2-pytorch",
"url": "https://github.com/lucidrains/DALLE2-pytorch/issues/89",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
818999697
|
Edge features thrown out
Hi, thanks for this implementation!
I was wondering if the pytorch-geometric implementation of this architecture is throwing the edge features out by mistake, as seen here
https://github.com/lucidrains/egnn-pytorch/blob/1b8320ade1a89748e4042ae448626652f1c659a1/egnn_pytorch/egnn_pytorch.py#L148-L151
Or maybe my understanding is wrong?
Cheers,
@josejimenezluna oh yes, you are correct https://github.com/lucidrains/egnn-pytorch/releases/tag/0.0.14
Closing this 👍
|
gharchive/issue
| 2021-03-01T15:50:31 |
2025-04-01T04:34:55.835411
|
{
"authors": [
"josejimenezluna",
"lucidrains"
],
"repo": "lucidrains/egnn-pytorch",
"url": "https://github.com/lucidrains/egnn-pytorch/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
948731013
|
🛑 Cam-85 is down
In ed44e9c, Cam-85 (http://lucien.kerl.io:85) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Cam-85 is back up in 271668a.
|
gharchive/issue
| 2021-07-20T14:47:57 |
2025-04-01T04:34:55.838994
|
{
"authors": [
"lucienkerl"
],
"repo": "lucienkerl/status",
"url": "https://github.com/lucienkerl/status/issues/235",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1731488630
|
🛑 Cam-85 is down
In 394aff1, Cam-85 (http://lucien.kerl.io:85) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Cam-85 is back up in ec890de.
|
gharchive/issue
| 2023-05-30T04:32:48 |
2025-04-01T04:34:55.841267
|
{
"authors": [
"lucienkerl"
],
"repo": "lucienkerl/status",
"url": "https://github.com/lucienkerl/status/issues/975",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
670513424
|
Luckperms Error when add plugin on BungeeCord
06:23:16 [SEVERE] [LuckPerms] Exception occurred whilst loading data for de45bdc2-2bf3-487b-b497-4924787c97c7 - SharkblackFr
06:23:16 [SEVERE] java.lang.NullPointerException
06:23:16 [SEVERE] at me.lucko.luckperms.common.plugin.util.AbstractConnectionListener.loadUser(AbstractConnectionListener.java:67)
06:23:16 [SEVERE] at me.lucko.luckperms.bungee.listeners.BungeeConnectionListener.lambda$onPlayerLogin$0(BungeeConnectionListener.java:90)
06:23:16 [SEVERE] at net.md_5.bungee.scheduler.BungeeTask.run(BungeeTask.java:63)
06:23:16 [SEVERE] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
06:23:16 [SEVERE] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
06:23:16 [SEVERE] at java.lang.Thread.run(Thread.java:748)
06:23:16 [SEVERE] Task BungeeTask(sched=net.md_5.bungee.scheduler.BungeeScheduler@609db43b, id=21, owner=me.lucko.luckperms.bungee.LPBungeeBootstrap@3943a2be, task=me.lucko.luckperms.bungee.listeners.BungeeConnectionListener$$Lambda$190/123817119@7194b156, delay=0, period=0, running=true) encountered an exception
java.lang.NullPointerException
at me.lucko.luckperms.bungee.listeners.BungeeConnectionListener.lambda$onPlayerLogin$0(BungeeConnectionListener.java:103)
at net.md_5.bungee.scheduler.BungeeTask.run(BungeeTask.java:63)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
the server version and in 1.14.4 and the bungeecord utilidé is the latest version.
Concerning luckperms we use the version Bungee (LuckPerms-Bungee-5.1.84.jar) and spigot (LuckPerms-Bukkit-5.1.84.jar)
please help me as soon as possible because I can no longer add bungeecord plugins using luckperms currently
thanks in advance for your help
The BungeeCord LoginEvent is being called with a null connection, this is likely a bug with your version of BungeeCord. Ensure you are using the latest build, or perhaps try using Waterfall instead. (https://papermc.io/downloads#Waterfall)
|
gharchive/issue
| 2020-08-01T04:36:23 |
2025-04-01T04:34:55.843999
|
{
"authors": [
"SharkblackFr",
"lucko"
],
"repo": "lucko/LuckPerms",
"url": "https://github.com/lucko/LuckPerms/issues/2518",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1684766275
|
TTS doesn't work, example project doesn't compile
Hi. I can't open up the example project, I downloaded the archive from the releases section of the Example Project github page, when trying to open or compile I receive error:
1>C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170\Microsoft.MakeFile.Targets(44,5): error MSB3073: The command "C:\UE_5.1\Engine\Build\BatchFiles\Build.bat PluginsManagementEditor Win64 Development -Project="C:\Users\f4xw\Downloads\UEAzSpeechSampleProject-main\UEAzSpeechSampleProject-main\PluginsManagement.uproject" -WaitMutex -FromMsBuild" exited with code 6.
1>Done building project "PluginsManagement.vcxproj" -- FAILED.
Can't find any info or tutorials.
How does the TTS work if I just use Text To Speech with default settings node? Do I need to attach it to an audio component or something?
Also where to find the list of available voices? Can I leave None or Default?
Hi! Sorry for the delay!
Are there more information in the logs?
You can find the supported languages here: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support
The text to speech task will automatically convert the audio to sound wave and play in an internal audio component. But if you want to get the sound wave and play later or modify the settings, you can use the Text to Sound Wave task. :)
You will need to setup the default voice name and language in Project Settings -> Plugins -> AzSpeech
|
gharchive/issue
| 2023-04-26T10:42:13 |
2025-04-01T04:34:55.849004
|
{
"authors": [
"faxcorp",
"lucoiso"
],
"repo": "lucoiso/UEAzSpeech",
"url": "https://github.com/lucoiso/UEAzSpeech/issues/200",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
650870859
|
the diagram of the network structure? I
Hello, can you provide an efficentnet-b3 SSD diagram of the network structure? I don’t understand this structure.
Please references to the paper: https://arxiv.org/pdf/1905.11946.pdf
|
gharchive/issue
| 2020-07-04T10:54:36 |
2025-04-01T04:34:55.853952
|
{
"authors": [
"gengpengs",
"lufficc"
],
"repo": "lufficc/SSD",
"url": "https://github.com/lufficc/SSD/issues/155",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
352481395
|
Config returns variables on Android but returns empty on iOS
What have I done:
$ yarn add react-native-config
$ react-native link react-native-config
// 2nd line, add a new apply:
apply from: project(':react-native-config').projectDir.getPath() + "/dotenv.gradle"
"react-native": "0.55.4"
"react-native-config": "^0.11.5"
https://github.com/luggit/react-native-config/issues/187#issuecomment-395585247
Helped me solve this!
|
gharchive/issue
| 2018-08-21T10:35:46 |
2025-04-01T04:34:55.860507
|
{
"authors": [
"Martian2Lee"
],
"repo": "luggit/react-native-config",
"url": "https://github.com/luggit/react-native-config/issues/283",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1564573429
|
Can not read .env file on Android.
Hi there.
I have a problem with the installation of RNC.
It works like a charm on IOS but I can not read the env variables on Android. I checked this repo's installation guide many times and matched the introduction step by step.
I've read many issues related to my problem but nothing helped me.
I'll let you know the implementation, may you can help me.
My RN & RNC version:
"react-native-config": "^1.5.0", "react-native": "0.70.6".
My gradle file:
project.ext.envConfigFiles = [
debug: ".env",
release: ".env",
]
apply from: project(':react-native-config').projectDir.getPath() + "/dotenv.gradle"
...
def enableProguardInReleaseBuilds = false
...
defaultConfig {
minSdkVersion rootProject.ext.minSdkVersion
targetSdkVersion rootProject.ext.targetSdkVersion
versionCode 142
versionName "0.1.38"
buildConfigField "boolean", "IS_NEW_ARCHITECTURE_ENABLED", isNewArchitectureEnabled().toString()
//React-native-config needs this line because we have flavors
resValue "string", "build_config_package", "com.ifsguide.app"
// codepush -> https://github.com/microsoft/react-native-code-push/issues/1961
resValue 'string', "CODE_PUSH_APK_BUILD_TIME", String.format("\"%d\"", System.currentTimeMillis())
if (isNewArchitectureEnabled()) {
// We configure the CMake build only if you decide to opt-in for the New Architecture.
externalNativeBuild {
cmake {
arguments "-DPROJECT_BUILD_DIR=$buildDir",
"-DREACT_ANDROID_DIR=$rootDir/../node_modules/react-native/ReactAndroid",
"-DREACT_ANDROID_BUILD_DIR=$rootDir/../node_modules/react-native/ReactAndroid/build",
"-DNODE_MODULES_DIR=$rootDir/../node_modules",
"-DANDROID_STL=c++_shared"
}
}
if (!enableSeparateBuildPerCPUArchitecture) {
ndk {
abiFilters (*reactNativeArchitectures())
}
}
}
}
...
buildTypes {
debug {
signingConfig signingConfigs.debug
}
release {
// Caution! In production, you need to generate your own keystore file.
// see https://reactnative.dev/docs/signed-apk-android.
signingConfig signingConfigs.release
minifyEnabled enableProguardInReleaseBuilds
proguardFiles getDefaultProguardFile("proguard-android.txt"), "proguard-rules.pro"
}
}
flavorDimensions "version"
productFlavors {
production {
dimension "version"
applicationId "com.ifsguide.app"
resValue "string", "app_name", "IFS Guide"
ndk {
abiFilters "armeabi-v7a", "x86", "arm64-v8a", "x86_64"
}
}
staging {
dimension "version"
applicationId "com.ifsguide.app.staging"
resValue "string", "app_name", "IFSG-Staging"
versionName defaultConfig.versionName + "-Staging"
ndk {
abiFilters "armeabi-v7a", "x86", "arm64-v8a", "x86_64"
}
}
}
...
And, also this my BuildConfig.java file from /build/generated/source/buildConfig/staging/debug/com/ifsguide/BuildConfig.java.
/**
* Automatically generated file. DO NOT MODIFY
*/
package com.ifsguide;
public final class BuildConfig {
public static final boolean DEBUG = Boolean.parseBoolean("true");
public static final String APPLICATION_ID = "com.ifsguide.app.staging";
public static final String BUILD_TYPE = "debug";
public static final String FLAVOR = "staging";
public static final int VERSION_CODE = 142;
public static final String VERSION_NAME = "0.1.38-Staging";
// Field from default config.
public static final boolean IS_NEW_ARCHITECTURE_ENABLED = false;
// Field from default config.
public static final String RELEASE_MODE = "staging";
}
My .env file in the root of the project:
RELEASE_MODE=staging
So, as you can see, we have the env variable in the generated BuildConfig.java file but IDK why I can not access the variable on Android.
I'm having the same issue, but it only started happening when I upgraded to RN from 0.70.6 to 0.71.1.
I have the same problem here. IOS works, but not in android :/
I can confirm same issue happens on Android. iOS works fine.
react-native: 0.71.1
react-native-config: 1.5.0
/build/generated/source/buildConfig/staging/debug/com/ifsguide/BuildConfig.java is not empty and fields generated correctly. Calling Config from JS side returns empty object.
Same issue here
same RN 0.71.1
The issue that the topic starter has is because the res value of build_config_package in the gradle file
//React-native-config needs this line because we have flavors
resValue "string", "build_config_package", "com.ifsguide.app"
is incorrect.
The package name in BuildConfig is
/**
* Automatically generated file. DO NOT MODIFY
*/
package com.ifsguide;
The gradle file should therefore contain
//React-native-config needs this line because we have flavors
resValue "string", "build_config_package", "com.ifsguide"
The issue that the topic starter has is because the res value of build_config_package in the gradle file
//React-native-config needs this line because we have flavors
resValue "string", "build_config_package", "com.ifsguide.app"
is incorrect.
The package name in BuildConfig is
/**
* Automatically generated file. DO NOT MODIFY
*/
package com.ifsguide;
The gradle file should therefore contain
//React-native-config needs this line because we have flavors
resValue "string", "build_config_package", "com.ifsguide"
Thanks!
after updating build_config_package to make it same with package_id, it works now
Using version: "react-native-config": "1.4.11" Work in Android and iOS!!
-keep class com.xcule.BuildConfig { *; }
Please check and confirm proguard is enabled / not.
I have checked it and it does not work anyway 😞 react-native@0.71.3"
Adding the following line to the start of the file /android/app/build.gradle solved my problem:
apply from: project(":react-native-config").projectDir.getPath() + "/dotenv.gradle"
I have the same issue, I have added
apply from: project(":react-native-config").projectDir.getPath() + "/dotenv.gradle"
to my app/build.gradle, I have added in the proguard rules, I have checked my package name is in my AndroidManifest, I have added in resValue "string", "build_config_package", "---SNIP-ID---" to default config. I can see my variables showing in BuildConfig.Java but js side empty object in android.
I found (what seemed to be) my issue. I had two different bundle ids across ios and android. As soon as I made them the same it seemed to fix it.
Adding the following line to the start of the file /android/app/build.gradle solved my problem: apply from: project(":react-native-config").projectDir.getPath() + "/dotenv.gradle" If it is totally necessary I consider the official set up documentation should include it. Thanks!
The solution worked for me too.
Adding the following line to the start of the file /android/app/build.gradle solved my problem: apply from: project(":react-native-config").projectDir.getPath() + "/dotenv.gradle" If it is totally necessary I consider the official set up documentation should include it. Thanks!
This worked for me also, thanks
下記設定で読み込めるようになりました、ありがとうございました。
apply from: project(":react-native-config").projectDir.getPath() + "/dotenv.gradle"
The issue that the topic starter has is because the res value of build_config_package in the gradle file
//React-native-config needs this line because we have flavors
resValue "string", "build_config_package", "com.ifsguide.app"
is incorrect.
The package name in BuildConfig is
/**
* Automatically generated file. DO NOT MODIFY
*/
package com.ifsguide;
The gradle file should therefore contain
//React-native-config needs this line because we have flavors
resValue "string", "build_config_package", "com.ifsguide"
perfect solution ❤️
I can confirm same issue happens on Android. iOS works fine.
react-native: 0.71.1 react-native-config: 1.5.0
/build/generated/source/buildConfig/staging/debug/com/ifsguide/BuildConfig.java is not empty and fields generated correctly. Calling Config from JS side returns empty object.
same issue
Thanks @curthipster , I missed this step in Anvanced Android Setup:
defaultConfig {
...
resValue "string", "build_config_package", "YOUR_PACKAGE_NAME_IN_ANDROIDMANIFEST_XML"
}
Also what I did previously was to get it dynamically from .env like so:
resValue "string", "build_config_package", project.env.get("APP_ID")
but what worked is:
resValue "string", "build_config_package", "com.test.myapp"
When you were used code push with resValue "string", "CODE_PUSH_APK_BUILD_TIME", String.format("\"%d\"", System.currentTimeMillis()), you must add new line like below:
defaultConfig {
...
resValue "string", "CODE_PUSH_APK_BUILD_TIME", String.format("\"%d\"", System.currentTimeMillis())
// added for react-native-config
resValue "string", "build_config_package", your_package_name
}
Thanks a lot above comments.
|
gharchive/issue
| 2023-01-31T15:48:56 |
2025-04-01T04:34:55.879685
|
{
"authors": [
"CDBridger",
"DovletAmanov",
"EmreDereli",
"IsharaD-Swivel",
"LukasMod",
"Yangeok",
"adsalihac",
"akinlekan28",
"curthipster",
"daviseares",
"gerardcastell",
"hpelitebook745G2",
"ikmz0104",
"justin-tay",
"marcelxsilva",
"mohammad-goldast",
"vatsalshah7556",
"xiongxiongjiang"
],
"repo": "luggit/react-native-config",
"url": "https://github.com/luggit/react-native-config/issues/729",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1305216788
|
⚠️ TiendaEnLinea has degraded performance
In 8e444d0, TiendaEnLinea (https://www.fahorro.com/) experienced degraded performance:
HTTP code: 200
Response time: 5999 ms
Resolved: TiendaEnLinea performance has improved in 7f1e83e.
|
gharchive/issue
| 2022-07-14T19:43:39 |
2025-04-01T04:34:55.891004
|
{
"authors": [
"luitz"
],
"repo": "luitz/fda-uptime",
"url": "https://github.com/luitz/fda-uptime/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1010164112
|
Letters are hidden when indenting with tabs and having listchars
Hey! Sorry if the title sounds kinda confusing. I have files indented with tabs looking like this (first described here):
I have this nvim config:
local opt = vim.opt
opt.history = 1000
opt.swapfile = false
opt.updatetime = 750
opt.undofile = true
opt.guicursor = ''
opt.cursorline = true
opt.cursorcolumn = true
opt.number = true
opt.relativenumber = true
opt.wrap = false
opt.splitbelow = true
opt.splitright = true
opt.colorcolumn = { '80' }
opt.showtabline = 2
opt.encoding = 'utf-8'
opt.hidden = true
opt.termguicolors = true
vim.g.mapleader = ','
require("plugins")
Indent-blankline config:
vim.opt.list = true
vim.opt.listchars = {
eol = "$"
}
require('indent_blankline').setup {
show_end_of_line = true,
filetype_exclude = {'text', 'help', 'markdown', 'dashboard'},
show_trailing_blankline_indent = false,
char_highlight_list = {
'IndentBlanklineIndent1',
'IndentBlanklineIndent2',
'IndentBlanklineIndent3',
'IndentBlanklineIndent4',
'IndentBlanklineIndent5',
'IndentBlanklineIndent6'
}
}
As you can see I have a simple setup. If I remove vim.opt.listchars it works:
Any help?:( The solution may be simple but at this point I don't know what it could be.
If I remove vim.opt.listchars it works:
This is because you were overwriting the listchars without specifying replacements for tabs. Without anything set for tabs, vim will display ^I, which is only two columns wide regardless of your tabstop setting. Since the plugin still creates the virtual text based on tabstop it will be too long and override actual text.
The vim defaults for listchars are:
vim.opt.listchars = {
tab = "> ",
trail = "-",
nbsp = "+",
}
So just add your eol to that table and it should work as expected again.
Alternatively you could also just set vim.opt.tabstop = 2.
Hey that worked! thanks! I was a bit confusing thinking that "tab" option was somehow set by this plugin and thus showing the indentation marks even when I didn't have it in my listchars. But it seems to be a different functionality, right? Although I read the help and couldn't figured out too much what > is...
*lcs-tab*
tab:xy[z] Two or three characters to be used to show a tab.
The third character is optional.
tab:xy The 'x' is always used, then 'y' as many times as will
fit. Thus "tab:>-" displays:
>
>-
>--
etc.
tab:xyz The 'z' is always used, then 'x' is prepended, and
then 'y' is used as many times as will fit. Thus
"tab:<->" displays:
>
<>
<->
<-->
etc.
When "tab:" is omitted, a tab is shown as ^I.
Thank you again!
Although I read the help and couldn't figured out too much what > is
> doesn't have any meaning. It is just the character Neovim displays for tabs by default.
I'll open this issue again, the plugin should handle this more gracefully.
And I also need to update the readme, the examples there should not overwrite all listchars.
Hey @lukas-reineke thanks for the improvement. I'm getting this error though: packer.nvim: Error running config for indent-blankline.nvim: /usr/share/nvim/runtime/lua/vim/_meta.lua:170: E474: Invalid argument
config is as follows:
vim.opt.list = true
vim.opt.listchars:append("eol:$")
require('indent_blankline').setup {
show_end_of_line = true,
filetype_exclude = {'text', 'help', 'markdown', 'dashboard'},
show_trailing_blankline_indent = false,
char_highlight_list = {
'IndentBlanklineIndent1',
'IndentBlanklineIndent2',
'IndentBlanklineIndent3',
'IndentBlanklineIndent4',
'IndentBlanklineIndent5',
'IndentBlanklineIndent6'
}
}
vim.opt.listchars:append("eol:$") is standard Neovim. That has nothing to do with this plugin.
Not sure why it is broken for you. Maybe try updating Neovim?
It should be
vim.opt.listchars:append({ eol = "$" })
and only works since Neovim v0.5.1, which was released 2 days ago.
vim.opt.listchars:append("eol:$") works fine for me on 0.5.0, 0.5.1 and latest master
|
gharchive/issue
| 2021-09-28T20:12:29 |
2025-04-01T04:34:55.919871
|
{
"authors": [
"Daxtorim",
"LuisxSullivaN",
"lukas-reineke"
],
"repo": "lukas-reineke/indent-blankline.nvim",
"url": "https://github.com/lukas-reineke/indent-blankline.nvim/issues/241",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
598911168
|
Support for Tab based indentation
Will this support tab based indented source files? If not, is there a plan to support the feature?
Tabs should work fine.
This screenshot uses these settings
set listchars=eol:↴
set listchars+=tab:>-
let g:indent_blankline_space_char = '-'
let g:indent_blankline_char = '>'
Line 3, 5 and 8 are empty.
Let me know if you have any problems.
@lukas-reineke Is it possible to not show the first indent level when using tabs like g:indent_blankline_show_first_indent_level = v:false does for spaces?
g:indent_blankline_show_first_indent_level works for tabs as well.
@lukas-reineke Oh you're right, that's my bad. I had this line left in my config which made it appear as if the option wasn't working:
set list listchars=tab:▏\
So 97% of code I edit is space indented, but I'd like to be able to tell at a glance when code has tabs. My listchars is vim.opt.listchars = "tab:→ ,extends:»,precedes:«,trail:·,nbsp:◆"
What happens with indent-blankline configured as
safeRequire("indent_blankline").setup({
char = "▏",
is that the first char of my tab list (→) is replaced with the ▏ which is totally reasonable but this means that it becomes impossible to tell whether the indent is done with a tab or spaces since the second tab listchar is space.
Any ideas? Do you think we would require code change to allow indent-blankline to be able to get out of the way when a tab is used?
You can set a different character for tab in version 3. Please wait until it is released, or try it out with the v3 branch.
That's spectacular news thank you.
@lukas-reineke v3 is wild! It can correctly apply indent markers for differently indented regions in the same file! Holy crap...
|
gharchive/issue
| 2020-04-13T14:10:31 |
2025-04-01T04:34:55.925714
|
{
"authors": [
"Melkster",
"lukas-reineke",
"ratheesh",
"unphased"
],
"repo": "lukas-reineke/indent-blankline.nvim",
"url": "https://github.com/lukas-reineke/indent-blankline.nvim/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1059082342
|
Remove figma ds
Working on replacing components from react-figma-plugin-ds with components in the package
Pull Request Test Coverage Report for Build 1483982410
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 75.862%
Totals
Change from base Build 1480215251:
0.0%
Covered Lines:
430
Relevant Lines:
546
💛 - Coveralls
|
gharchive/pull-request
| 2021-11-20T07:15:07 |
2025-04-01T04:34:55.936241
|
{
"authors": [
"coveralls",
"lukasoppermann"
],
"repo": "lukasoppermann/design-tokens",
"url": "https://github.com/lukasoppermann/design-tokens/pull/163",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
323099218
|
Improve the currency selection option
It supports selecting or searching for a currency to enable. You can search and press enter to quickly enable. You can press backspace to delete the previous addition. You can press the x to delete previous additions. KMD and CHIPS cannot be deleted.
I really like it but I'm not sure how usable it's going to be with a large amount of coins. This is only with 9 coins enabled and it already becomes a bit unclear that it's a select input:
If a user had 20 or 30 coins enabled the drop down selection might be off the visible page.
One potential solution could be to keep the select component but not list all the currencies inside it, list them below instead. What do you think?
If a user had 20 or 30 coins enabled the drop down selection might be off the visible page.
That's a good point. I've set a max height on it now, so that it scrolls the content if there are a lot of currencies. I've set it to a low number so you can test. I'll increase it if accepted so that it doesn't need scrolling quite as early.
That's much better 👌
|
gharchive/pull-request
| 2018-05-15T07:34:18 |
2025-04-01T04:34:55.940122
|
{
"authors": [
"lukechilds",
"sindresorhus"
],
"repo": "lukechilds/hyperdex",
"url": "https://github.com/lukechilds/hyperdex/pull/219",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
643824182
|
EfficientNet-b0 slower than Resnet-50?
I wrote a small script to check inference and training speed of efficientnet vs. Resnets. According to these results, the efficient-b0 is measurably slower than resnet-50 ?!? Am I doing something wrong?
Using version 0.4.1 (GPU V100)
=== Checking effnetb0
inference time=19.62038330078125
train time=70.4931103515625
=== Checking resnet18
inference time=3.7844888305664064
train time=20.64025634765625
=== Checking resnet50
inference time=11.5154638671875
train time=56.6721044921875
Using version 1.4.0 (local GPU M2200)
=== Checking effnetb0
inference time=45.6470166015625
train time=215.383828125
=== Checking resnet18
inference time=11.30126220703125
train time=57.34619140625
=== Checking resnet50
inference time=27.5243359375
train time=151.6120703125
//EDIT:
probably related to https://github.com/lukemelas/EfficientNet-PyTorch/issues/19
XXX
Script is below
import torchvision
import efficientnet_pytorch
import torch
models = {}
models['effnetb0'] = efficientnet_pytorch.EfficientNet.from_pretrained("efficientnet-b0")
models['resnet18'] = torchvision.models.resnet18(pretrained=False)
models['resnet50'] = torchvision.models.resnet50(pretrained=False)
device = 'cuda'
t_in = torch.randn(1, 3, 224, 224, device=device)
for name, model in models.items():
# set model to eval
model.to(device)
print("=== Checking {}".format(name))
# do warmup
print("do warmup")
for i in range(100):
t_out = model.forward(t_in)
# do inference
print("measure inference")
model.eval()
navrg = 100
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
for i in range(navrg):
t_out = model.forward(t_in)
end.record()
torch.cuda.synchronize()
t_in_ms = start.elapsed_time(end)
print("inference time={}".format(t_in_ms/navrg))
# train time
print("measure train")
model.train()
navrg = 100
optimizer = torch.optim.Adam(model.parameters())
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
for i in range(navrg):
t_out = model.forward(t_in)
loss = torch.mean(t_out)**2
loss.backward()
optimizer.step()
end.record()
torch.cuda.synchronize()
t_in_ms = start.elapsed_time(end)
print("train time={}".format(t_in_ms/navrg))
Yes, this is a known issue with PyTorch. Hopefully the fp32 depthwise convolutions will be fast in PyTorch soon (it's on the roadmap I believe, maybe even in 1.6 or 1.7).
Closing this because it's the save as #19 (I believe, correct me if wrong).
Is this addressed?
My B0 is still slower than ResNet50 by a factor of ~2.
I'm on Torch=1.10.0+cu113
Still having this problem with pytorch 2
|
gharchive/issue
| 2020-06-23T13:03:25 |
2025-04-01T04:34:55.954153
|
{
"authors": [
"cgebbe",
"lukemelas",
"seann999",
"siarez"
],
"repo": "lukemelas/EfficientNet-PyTorch",
"url": "https://github.com/lukemelas/EfficientNet-PyTorch/issues/195",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
294077944
|
Fix installer args, remove double quotes in foobar2000
The foobar2000 installer file don't work to enclose output path in double quotes.
This commit remove double quotes around $dir .
Well crap. You’re right. See http://nsis.sourceforge.net/Docs/Chapter3.html
I broke it so I should fix it. What a drag.
|
gharchive/pull-request
| 2018-02-03T03:01:02 |
2025-04-01T04:34:55.982002
|
{
"authors": [
"akiakishitai",
"rasa"
],
"repo": "lukesampson/scoop-extras",
"url": "https://github.com/lukesampson/scoop-extras/pull/774",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
406126360
|
[Request] Add IPFS-Cluster, et al
Home page : https://cluster.ipfs.io/
Project page : https://github.com/ipfs/ipfs-cluster
Download page : https://dist.ipfs.io/#ipfs-cluster-ctl, https://dist.ipfs.io/#ipfs-cluster-service
IPFS-Cluster consists of two commands, ipfs-cluster-ctl and ipfs-cluster-service. It helps create and manage IPFS clusters.
The downloaded files are .zip archives that contains a stand-alone .exe binary.
This issue belongs in the Main bucket. Unfortunately, we cannot move this issue on this end, so it is being closed.
If this is still an issue, please create a new issue in the Main bucket via https://github.com/ScoopInstaller/Main/issues/new?body=Moved+from+https%3A%2F%2Fgithub.com%2Flukesampson%2Fscoop%2Fissues%2F3076
|
gharchive/issue
| 2019-02-03T20:19:51 |
2025-04-01T04:34:55.985497
|
{
"authors": [
"NatoBoram",
"rasa"
],
"repo": "lukesampson/scoop",
"url": "https://github.com/lukesampson/scoop/issues/3076",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1529567706
|
Publish to Maven central
Currently to download dependencies users needs to compile and publish to local Maven repository or use limited targets from Jitpack.
Jitpack is not working anymore, currently the only option is to install via local Maven repository.
|
gharchive/issue
| 2023-01-11T19:29:44 |
2025-04-01T04:34:55.990327
|
{
"authors": [
"lukwol"
],
"repo": "lukwol/cmnav",
"url": "https://github.com/lukwol/cmnav/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
67207203
|
Adding helper classes for positioning items
It would be nice to have some helper classes in CSS to place elements like headlines, text or buttons on some stardard positions like left, right, middle, top and bottom.
+1
Hi! There is already some helpers.
clearfix, float-right, float-left, float-none, or flexbox could help to position the element or the vertical alignment.
text-left, text-center, text-right are there to manage the alignment of text & inline-block elements.
The helpers should work (combined).
|
gharchive/issue
| 2015-04-08T20:00:38 |
2025-04-01T04:34:56.002340
|
{
"authors": [
"digitalcraftsman",
"jsam",
"malexandre"
],
"repo": "lumapps/lumX",
"url": "https://github.com/lumapps/lumX/issues/240",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
962241564
|
Extras list
Listassa voisi olla mahdollisuus mahdollisuus muokata olemassa olevien extrojen tietoja. Nyt käyttäjä joutuu poistamaan ja luomaan uudelleen extran jos haluaa muokata yksittäistä kohtaa niissä. Lisäksi lisääminen ei anna herjaa jos saman niminen extra lisätään, vaan toast ilmoittaa onnistuneesta lisäyksestä, vaikka todellisesti uutta extraa ei lisätty.
Lisätty validaatio samannimisille lisäpalveluille.
|
gharchive/issue
| 2021-08-05T22:26:06 |
2025-04-01T04:34:56.010445
|
{
"authors": [
"pontushed",
"tsa-dom"
],
"repo": "lumawelhot/Luma-varaukset",
"url": "https://github.com/lumawelhot/Luma-varaukset/issues/133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
348427102
|
Broken with current version of Dash
The structure of the dumped docset seems to have changed. Still investigating...
FYI @alex-swiftify
Thanks @sebastianludwig for your efforts!
Just a side question, is the extracted docset supposed to contain uncompressed Apple Help files in HTML format?
I’m looking for anway to generate an uncompressed version of Apple HTML help for our internal needs.
Yes, exactly this is how it used to work. Haven't figured out yet how it's working now or how to generate the HTML files. Will keep you posted.
@sebastianludwig, I have found that Dash no longer transfers the DocSet in HTML format since around Jan 2018 (but rather uncompresses the DocSet on the iOS device):
https://github.com/Kapeli/Dash-iOS/issues/66
I need uncompressed version(s) of Apple HTML Help files for our internal needs, and just got some help from Dash developer and a custom build of the "Apple Docs Helper" utility.
If you feel like you will benefit from that, I can share my findings with you - email me at alex(at)swiftify.com .
|
gharchive/issue
| 2018-08-07T18:00:14 |
2025-04-01T04:34:56.042450
|
{
"authors": [
"alex-swiftify",
"sebastianludwig"
],
"repo": "lurado/BetterAppleDocsets",
"url": "https://github.com/lurado/BetterAppleDocsets/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2126252903
|
add preallocated and indexed MSMs
Upstream fixes for https://github.com/lurk-lab/grumpkin-msm/pull/13, just matching the names and benchmarking
!gpu-benchmark
Closing in favor of https://github.com/lurk-lab/arecibo/pull/374
|
gharchive/pull-request
| 2024-02-08T23:59:31 |
2025-04-01T04:34:56.044151
|
{
"authors": [
"huitseeker",
"winston-h-zhang"
],
"repo": "lurk-lab/arecibo",
"url": "https://github.com/lurk-lab/arecibo/pull/306",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1267239270
|
build error. how to think about this problem error ==> if(${status} STREQUAL "-")
$ cmake -S. -Bbuild -D'CMAKE_INSTALL_PREFIX=[./]'
-- Using existing generated toolchain
-- Using toolchain file: /home/kt/Camera_Test/depthai-core-main/build/generated/toolchain.cmake
-- [hunter] Calculating Toolchain-SHA1
-- [hunter] Calculating Config-SHA1
-- [hunter] HUNTER_ROOT: /home/kt/.hunter
-- [hunter] [ Hunter-ID: cb0ea1f | Toolchain-ID: a3a48bb | Config-ID: 07e4a3f ]
-- [hunter] NLOHMANN_JSON_ROOT: /home/kt/.hunter/_Base/cb0ea1f/a3a48bb/07e4a3f/Install (ver.: 3.9.1)
-- [hunter] XLINK_ROOT: /home/kt/.hunter/_Base/cb0ea1f/a3a48bb/07e4a3f/Install (ver.: luxonis-2021.4.2-develop)
-- [hunter] BZIP2_ROOT: /home/kt/.hunter/_Base/cb0ea1f/a3a48bb/07e4a3f/Install (ver.: 1.0.8-p0)
-- [hunter] FP16_ROOT: /home/kt/.hunter/_Base/cb0ea1f/a3a48bb/07e4a3f/Install (ver.: luxonis-0.0.0)
-- [hunter] LIBARCHIVE-LUXONIS_ROOT: /home/kt/.hunter/_Base/cb0ea1f/a3a48bb/07e4a3f/Install (ver.: hunter-3.5.2)
-- [hunter] SPDLOG_ROOT: /home/kt/.hunter/_Base/cb0ea1f/a3a48bb/07e4a3f/Install (ver.: 1.8.2)
-- [hunter] ZLIB_ROOT: /home/kt/.hunter/_Base/cb0ea1f/a3a48bb/07e4a3f/Install (ver.: 1.2.11-p2)
-- [hunter] BACKWARD_ROOT: /home/kt/.hunter/_Base/cb0ea1f/a3a48bb/07e4a3f/Install (ver.: 1.6)
-- [hunter] LIBNOP_ROOT: /home/kt/.hunter/_Base/cb0ea1f/a3a48bb/07e4a3f/Install (ver.: 1.0-ec8f75a)
CMake Error at shared/depthai-shared.cmake:38 (string):
string sub-command SUBSTRING requires four arguments.
Call Stack (most recent call first):
CMakeLists.txt:162 (include)
CMake Error at shared/depthai-shared.cmake:39 (if):
if given arguments:
"STREQUAL" "-"
Unknown arguments specified
Call Stack (most recent call first):
CMakeLists.txt:162 (include)
-- Configuring incomplete, errors occurred!
See also "/home/kt/Camera_Test/depthai-core-main/build/CMakeFiles/CMakeOutput.log".
See also "/home/kt/Camera_Test/depthai-core-main/build/CMakeFiles/CMakeError.log".
@jaiminlee did you git clone or downloaded the sources as zip? If the latter, use the "prepackaged" release zip / tar, which includes the necessary git submodules
I'm running into the same problem. I've downloaded this depthai-core-v2.16.0.zip file from the releases page.
Running the following command (from the README.md):
cmake -S. -Bbuild -D'BUILD_SHARED_LIBS=ON'
cmake --build build
And i'm getting the following output:
-- Generating new toolchain...
-- Using toolchain file: /home/dwffls/edward/ros2_ws/src/sensors/oak_camera/include/depthai-core/build/generated/toolchain.cmake
-- The CXX compiler identification is GNU 11.2.0
-- The C compiler identification is GNU 11.2.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Setting build type to 'Release' as none was specified.
-- Found Git: /usr/bin/git (found version "2.34.1")
-- [hunter] Calculating Toolchain-SHA1
-- [hunter] Calculating Config-SHA1
-- [hunter] HUNTER_ROOT: /home/dwffls/.hunter
-- [hunter] [ Hunter-ID: cb0ea1f | Toolchain-ID: 3f80026 | Config-ID: 07e4a3f ]
-- [hunter] NLOHMANN_JSON_ROOT: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install (ver.: 3.9.1)
-- [hunter] XLINK_ROOT: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install (ver.: luxonis-2021.4.2-develop)
-- [hunter] BZIP2_ROOT: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install (ver.: 1.0.8-p0)
-- [hunter] FP16_ROOT: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install (ver.: luxonis-0.0.0)
-- [hunter] LIBARCHIVE-LUXONIS_ROOT: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install (ver.: hunter-3.5.2)
-- [hunter] SPDLOG_ROOT: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install (ver.: 1.8.2)
-- [hunter] ZLIB_ROOT: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install (ver.: 1.2.11-p2)
-- [hunter] BACKWARD_ROOT: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install (ver.: 1.6)
-- [hunter] LIBNOP_ROOT: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install (ver.: 1.0-ec8f75a)
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Could NOT find libdw (missing: LIBDW_LIBRARY LIBDW_INCLUDE_DIR)
-- Could NOT find libbfd (missing: LIBBFD_LIBRARY LIBBFD_INCLUDE_DIR)
-- Could NOT find libdwarf (missing: LIBDWARF_LIBRARY LIBDWARF_INCLUDE_DIR LIBELF_LIBRARY LIBELF_INCLUDE_DIR)
-- Found Backward: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install/lib/backward
-- Found nlohmann_json: /home/dwffls/.hunter/_Base/cb0ea1f/3f80026/07e4a3f/Install/lib/cmake/nlohmann_json/nlohmann_jsonConfig.cmake (found suitable version "3.9.1", minimum required is "3.6.0")
CMake Error at shared/depthai-shared.cmake:38 (string):
string sub-command SUBSTRING requires four arguments.
Call Stack (most recent call first):
CMakeLists.txt:162 (include)
CMake Error at shared/depthai-shared.cmake:39 (if):
if given arguments:
"STREQUAL" "-"
Unknown arguments specified
Call Stack (most recent call first):
CMakeLists.txt:162 (include)
-- Configuring incomplete, errors occurred!
See also "/home/dwffls/edward/ros2_ws/src/sensors/oak_camera/include/depthai-core/build/CMakeFiles/CMakeOutput.log".
By commenting out the following lines, I am able to build the library as normal:
depthai-shared.cmake: line 38-41
depthai-bootloader-shared.cmake: line 32-35
@dwffls
I've created a PR addressing this: #514
Do you mind giving it a try?
Output of your command is:
On branch main
Your branch is up to date with 'origin/main'.
nothing to commit, working tree clean
/home/dwffls/edward
true
And it is correct that the "edward" folder is also a git repository. Gonna try the PR right now
Checkoud out #514 and got the git submodule update --init --recursive error on the first run and after running the command, the build succeeded.
Should be fixed now! Thanks for the quick help!
|
gharchive/issue
| 2022-06-10T08:36:42 |
2025-04-01T04:34:56.128669
|
{
"authors": [
"dwffls",
"jaiminlee",
"themarpe"
],
"repo": "luxonis/depthai-core",
"url": "https://github.com/luxonis/depthai-core/issues/502",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
706768427
|
API enhancement: Feature tracking
I could really use Feature Tracking in my cloud change project. Hope you can find time to come up with that. Thanks.
Yes, we are planning on this for delivery with the KickStarter in December. It will be IMU-assisted.
Oh and I just found our other issue on this: https://github.com/luxonis/depthai/issues/146 So I'll close this one and ping you on that one @Soar-RTD so you can follow it.
|
gharchive/issue
| 2020-09-23T00:12:09 |
2025-04-01T04:34:56.150236
|
{
"authors": [
"Luxonis-Brandon",
"Soar-RTD"
],
"repo": "luxonis/depthai",
"url": "https://github.com/luxonis/depthai/issues/201",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1838280608
|
still could not find response key
#588 this bug is commited to version 0.5.0? I install trl==0.5.0. But it still raise error
RuntimeError: Could not find response key [835, 13291, 29901, 13] in token IDs tensor([ 1, 13866, 338, 385, 15278, 393, 16612, 263, 3414, 29892,
3300, 2859, 411, 385, 1881, 393, 8128, 4340, 3030, 29889,
29871, 13, 13, 2277, 29937, 2799, 582, 1953, 29901, 13,
10994, 599, 445, 881, 367, 11105, 287, 13, 13, 2277,
29937, 13291, 29901, 13, 29902, 505, 451, 1063, 11105, 287,
5149, 29889])
from trl import DataCollatorForCompletionOnlyLM
from transformers import GenerationConfig, LlamaForCausalLM, LlamaTokenizer
import torch
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.pad_token_id = 0
response_template = "### Response:\n"
data_collator = DataCollatorForCompletionOnlyLM(response_template, tokenizer=tokenizer, mlm=False)
text1 = """\n\n### Instructions:\nHello all
this should be masked\n\n### Response:\nI have not been masked correctly."""
text2 = """\n\n### Instructions:\nThis is another longer text that should also be masked. This text is significantly longer than the previous one.\n\n### Response:\nI have not been masked correctly."""
encoded_text1 = tokenizer(text1)
encoded_text2 = tokenizer(text2)
examples = [encoded_text1, encoded_text2]
batch = data_collator(examples)
for i in range(2):
labels = batch["labels"][i]
ast_pad_idx = np.where(labels == -100)[0][-1]
result_text = tokenizer.decode(batch["input_ids"][i, last_pad_idx + 1 :])
self.assertEqual(result_text, "I have not been masked correctly.")
I am also getting the same error. I tried setting padding_site to right. That does not change the outcome.
thanks
|
gharchive/issue
| 2023-08-06T17:12:48 |
2025-04-01T04:34:56.193668
|
{
"authors": [
"LopezGG",
"moseshu"
],
"repo": "lvwerra/trl",
"url": "https://github.com/lvwerra/trl/issues/619",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
121580459
|
lxc edit, push and pull on stopped containers
lxc edit, push and pull currently only work for running containers.
It should be possible to edit files when a container is not running. Eg. after lxc init, before starting.
lxc init <image> <container name>
lxc file edit newcontainer1/etc/network/interfaces
lxc start <container name>
Actually not so easy. We need to add file pull/push to the container interface and the LXC container backend, then add some new code to forkgetfile and forkputfile to chroot into the container rootfs and pull/push the file.
Done
|
gharchive/issue
| 2015-12-10T21:23:42 |
2025-04-01T04:34:56.240941
|
{
"authors": [
"pgassmann",
"stgraber"
],
"repo": "lxc/lxd",
"url": "https://github.com/lxc/lxd/issues/1394",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
838352928
|
Unable to add second node in lxd cluster
Hi,
Created lxd master and working fine,Try to create slave in another machine.
commands
sudo zpool create disk2_pool /dev/sdb
sudo zfs create disk2_pool/clusterData
While adding second node cluster,getting error..
lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=ps]:
What IP address or DNS name should be used to reach this node? [default=slave_ip]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: master_ip
Cluster fingerprint: 4c67c80c1bb8d6fda4583927b088
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local": disk2_pool/clusterData
Choose "zfs.pool_name" property for storage pool "local": disk2_pool
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to create storage pool 'local': The source must match zfs.pool_name if specified
Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to create storage pool 'local': The source must match zfs.pool_name if specified
You are using disk2_pool/clusterData as the source, and disk2_pool for zfs.pool_name. You need to use disk2_pool for both.
$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
lxd 896M 524K 895M - - 0% 0% 1.00x ONLINE -
$ lxc storage create zfs zfs source=lxd zfs.pool_name=lxd
Storage pool zfs created
Thank you so much,Now it is working
|
gharchive/issue
| 2021-03-23T05:07:47 |
2025-04-01T04:34:56.246278
|
{
"authors": [
"Kumar6295",
"monstermunchkin"
],
"repo": "lxc/lxd",
"url": "https://github.com/lxc/lxd/issues/8595",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
998939439
|
Support for moving instances and custom volumes between projects
We currently have server-side support for moving instances or custom volumes between storage pools.
But we don't currently have support for moving between projects. Instead the client handles it as copy+delete.
We should expand the POST operation on both instances and custom volumes to support moving between projects and have LXD itself handle the copy+delete.
One thing to be careful about here is project access. We'll need to call the access checker to ensure that the requestor is allowed access to the target project.
@stgraber Can I be assign to this one?
Yep!
@stgraber, PR almost done but still I'm not sure what do you mean by "access checker". Which mechanism is it?
You can call rbac.UserHasPermission(r, targetProject, "manage-containers") which will return true if the user is allowed instance creation on the target project. For custom volumes, you'd want the same but using "manage-storage-volumes" instead.
|
gharchive/issue
| 2021-09-17T04:53:31 |
2025-04-01T04:34:56.249162
|
{
"authors": [
"presztak",
"stgraber"
],
"repo": "lxc/lxd",
"url": "https://github.com/lxc/lxd/issues/9235",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
347583757
|
以下几个问题
1.邮箱不能有中划线。(但是很多人的域名和邮箱加了中划线)
2.局域网会被限制登录,貌似被别人尝试很多次了,所有人都不能登录了。
第一个问题我看下;第二个问题可能是密码输入错误次数太多,15分钟后请重试。
第二个问题,是局域网其他人也不能登录。是密码输错了两次就这样。
限制分两种:1. 用户限制;2.ip限制;
用户限制就是同一个用户同一时间段登录错误次数太多,可能多台机器都在用这个用户登录,这样这个用户就不能再登录了,15分钟应该就可以了。
另外,这个限制可以后台配置是否启用。有好几个用户提到这个问题了。
默认配置的应该是5 次
哦,好的,谢谢!
最近会发布个新版本,把这个限制是否启用 做成后台可配置的,请关注。
好的,谢谢!一直关注中,你这个系统做的挺好的,我很喜欢。
谢谢夸奖!
不用谢,是真的,我还推荐给了几个朋友,我朋友也觉得挺好,还说Jira的中国式体验,因为很多人都用破解版的Jira,有你这东西了,大家都换了,挺好。
比jira轻,用起来很简单,有什么问题欢迎提出,一起让他变得很好吧。
好的,没问题,我反正现在会用,我多体验,有体验不足或者bug类的,我会在这里提出来。
|
gharchive/issue
| 2018-08-04T03:49:26 |
2025-04-01T04:34:56.256706
|
{
"authors": [
"26597925",
"lxerxa"
],
"repo": "lxerxa/actionview",
"url": "https://github.com/lxerxa/actionview/issues/19",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
202073045
|
导入后所有UILabel的tap手势全部失效
如题 - -
我这边不能复现,且我的只是扩展,并没有全局修改某方法啊?怎么会一导入所有的label手势就没用了呢?
又下了一次
重新导入后解决 - -
|
gharchive/issue
| 2017-01-20T07:39:07 |
2025-04-01T04:34:56.300585
|
{
"authors": [
"Jokemac",
"lyb5834"
],
"repo": "lyb5834/YBAttributeTextTapAction",
"url": "https://github.com/lyb5834/YBAttributeTextTapAction/issues/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
492967534
|
core: crash when sending RetryPolicy
We're seeing the following crash when setting a RetryPolicy on outbound requests. This appears to be an assertion failure in upstream Envoy that's being triggered because of how we're calling the async_client.
Was fixed in https://github.com/lyft/envoy-mobile/pull/511
|
gharchive/issue
| 2019-09-12T18:42:44 |
2025-04-01T04:34:56.314276
|
{
"authors": [
"rebello95"
],
"repo": "lyft/envoy-mobile",
"url": "https://github.com/lyft/envoy-mobile/issues/434",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.