id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
662974440
|
Release of CmdStan 2.24.0
Summary:
2.24 Release Notes
@serban-nicusor-toptal the just-tagged cmdstan 2.24-rc1 does not include the dependencies like Stan, Stan Math, etc....
I'm uploading the archive now, takes a bit sorry ... slow internet
I just downloaded the archive and now it's 33MB... that's good, but the extracted files land in a directory "cmdstan-"... can that be renamed to "cmdstan-2.24.0rc1" please?
Thanks.
Sure, one moment. Must have been a typo.
@wds15 Should be all fine now.
Now it is "cmdstan-v2.24.0-rc1" while it should be "cmdstan-2.24.0-rc1" as it usually is, but maybe just leave it as is for now. Next time.
No worries, my bad forgot to remove the v as a prefix. Will re-upload in a moment
CmdStan 2.24.0
Features:
CmdStan User's Guide now online - updated and revised
Utility command stansummary allows user-specified quantile reporting,
improved output format, better command-line options handling
Many makefile improvements:
precompiled headers
detect compiler option dependencies
improved messaging
Bugfixes:
Generated quantities for models with non-scalar parameters
|
gharchive/issue
| 2020-07-21T12:40:12 |
2025-04-01T04:35:57.221398
|
{
"authors": [
"mitzimorris",
"serban-nicusor-toptal",
"wds15"
],
"repo": "stan-dev/cmdstan",
"url": "https://github.com/stan-dev/cmdstan/issues/911",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1449157351
|
Update wanted ebooks list
I see that sometimes they're deleted and sometimes they're commented out, so I have a 50% chance of choosing correctly. Of course, I also have a 50% chance of choosing incorrectly…
Typically commented out when someone is actively working on it, and deleted when it’s released 🙂
Well, then, I was right, which happens far less than 50% of the time. lol (And I wasn't sure, since this one wasn't commented out while I was working on it.)
Thanks!
|
gharchive/pull-request
| 2022-11-15T04:44:48 |
2025-04-01T04:35:57.244682
|
{
"authors": [
"acabal",
"robinwhittleton",
"vr8hub"
],
"repo": "standardebooks/web",
"url": "https://github.com/standardebooks/web/pull/203",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
505358529
|
Update build system to use java 11
We use java 8 because at the time, bazel required java 8 to bootstrap. Nowadays they use java 11, and java 8 is EOL in Flathub.
(Alternatively, we could stop building the whole of tensorflow, and only build tensorflow-lite...)
Fixed by #51
|
gharchive/issue
| 2019-10-10T15:47:48 |
2025-04-01T04:35:57.263562
|
{
"authors": [
"gcampax"
],
"repo": "stanford-oval/almond-gnome",
"url": "https://github.com/stanford-oval/almond-gnome/issues/50",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
639033060
|
Categorical HM integration with Spatial kernel implementations
For example, can we do something like:
def innerProdMatMult(tileA: SRAM2[T], tileB: SRAM2[T], tileC: SRAM2[T]): Void = {
  Foreach(...){...} // fill tileC using inner products
}
def outerProdMatMult(tileA: SRAM2[T], tileB: SRAM2[T], tileC: SRAM2[T]): Void = {
  Foreach(...){...} // fill tileC using outer products
}
Accel {
  ...
  innerProdMatMult(tileA, tileB, tileC) (innerProdMatMult, outerProdMatMult)
  ...
}
This may be problematic with Spatial's staging. Maybe you can use blackboxes as containers for kernel switching since those won't get dropped. Maybe there can be a generic BlackBoxUse node and store the possible drop-in replacements as Parameter metadata
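The general idea of one logical operator backed by several drop-in kernels can be sketched outside of Spatial (plain Python, purely illustrative of the concept rather than of Spatial's staging):

# Plain-Python sketch: one logical operator backed by several drop-in
# implementations, with the choice left to a tuner / HM-style search.
def inner_prod_matmult(a, b):
    # C[i][j] = dot(row i of A, column j of B)
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def outer_prod_matmult(a, b):
    # C = sum over k of the outer product of A's k-th column and B's k-th row
    n, m, p = len(a), len(b), len(b[0])
    c = [[0] * p for _ in range(n)]
    for k in range(m):
        for i in range(n):
            for j in range(p):
                c[i][j] += a[i][k] * b[k][j]
    return c

MATMULT_VARIANTS = [inner_prod_matmult, outer_prod_matmult]

def matmult(a, b, variant=0):
    # `variant` is the parameter a tuner would search over.
    return MATMULT_VARIANTS[variant](a, b)

print(matmult([[1, 2]], [[3], [4]], variant=1))  # [[11]]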
To add some more motivation: Aetherling (https://github.com/David-Durst/embeddedHaskellAetherling) has multiple options for the same operation with different performance-resources trade-offs. This feature would allow treating both options as a single operator that can be configured by HM,
|
gharchive/issue
| 2020-06-15T17:55:15 |
2025-04-01T04:35:57.265517
|
{
"authors": [
"David-Durst",
"mattfel1"
],
"repo": "stanford-ppl/spatial",
"url": "https://github.com/stanford-ppl/spatial/issues/309",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
506328634
|
data/Animals.csv file not found
when running on colab
https://colab.research.google.com/github/stared/thinking-in-tensors-writing-in-pytorch/blob/master/3 Linear regression.ipynb#scrollTo=s5gdu-mRRiwo
@jlj0n3s Right now the workaround is to manually upload the files from https://github.com/stared/thinking-in-tensors-writing-in-pytorch/tree/master/data.
I will look into other options (e.g. hard-coding the path).
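As an alternative to manual upload, a notebook cell can fetch the file directly from the repository (a minimal sketch; the raw.githubusercontent.com URL is inferred from the repo layout linked above):

import os
import urllib.request

# Download Animals.csv into the data/ directory the notebook expects.
os.makedirs("data", exist_ok=True)
url = ("https://raw.githubusercontent.com/stared/"
       "thinking-in-tensors-writing-in-pytorch/master/data/Animals.csv")
urllib.request.urlretrieve(url, "data/Animals.csv")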
Solved!
|
gharchive/issue
| 2019-10-13T13:55:26 |
2025-04-01T04:35:57.277360
|
{
"authors": [
"jlj0n3s",
"stared"
],
"repo": "stared/thinking-in-tensors-writing-in-pytorch",
"url": "https://github.com/stared/thinking-in-tensors-writing-in-pytorch/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1378270544
|
Add property to enable/disable registration of the "old" DocsAPI in-Coordination implementation
Currently the system property stargate.rest.enableV1 (SYSPROP_ENABLE_SGV1_REST) exists to allow either enabling or disabling registration of the Coordinator co-located (running within the Coordinator) REST API implementation. The idea is to allow disabling (and possibly re-enabling) this endpoint dynamically during the transition to Stargate V2.
A similar property should be added for the Documents API for the same reason: the most likely use case is to disable registration at first, and only after validating the goodness of the new implementation (running in production for a while) actually remove the old implementation.
Superseded by #2106, closing.
|
gharchive/issue
| 2022-09-19T17:11:33 |
2025-04-01T04:35:57.282027
|
{
"authors": [
"tatu-at-datastax"
],
"repo": "stargate/stargate",
"url": "https://github.com/stargate/stargate/issues/2081",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1579587900
|
Fix changelog generation
It's impossible to keep the changelog updated. Merges between v1 and main are impossible to handle. We need to come up with a working solution.
Here is a small idea of how this can be done:
Main thing: keep the changelogs for the versions separated
CHANGELOG_V1.md exists in the v1 and main branches (merged)
CHANGELOG_V2.md exists only in the main branch
Update the generation so that we only include tags with the version prefix; thus for v1 we get only v1 changes (see the sketch below).
Ensure the V1 changelog is changed only in the v1 branch. Ensure the V2 changelog is changed only in main.
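The tag-prefix filtering could look something like this (an illustrative Python sketch; the issue doesn't specify the actual generator tooling):

import subprocess

def tags_with_prefix(prefix):
    # List only the tags of one major version, e.g. "v1." for the
    # V1 changelog, so each changelog sees only its own releases.
    result = subprocess.run(
        ["git", "tag", "--list", prefix + "*"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.split()

print(tags_with_prefix("v1."))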
fixed in #2443
|
gharchive/issue
| 2023-02-10T12:15:16 |
2025-04-01T04:35:57.284927
|
{
"authors": [
"ivansenic",
"jeffreyscarpenter"
],
"repo": "stargate/stargate",
"url": "https://github.com/stargate/stargate/issues/2435",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2129027586
|
Update what_s_starknet.js.md
Typographical errors were corrected and ambiguities clarified in the description.
Motivation and Resolution
The primary goal of this pull request is to enhance readability and ensure the accuracy of the documentation. It addresses various typographical mistakes and clarifies sections where the description was ambiguous, potentially leading to confusion amongst users and developers alike.
RPC version (if applicable)
Not applicable to this pull request as the changes are focused on documentation.
Usage related changes
This PR does not directly affect the usage of the project but improves the documentation to help users and developers understand the project's features and functionalities more clearly.
Corrected typographical errors in the main description.
Clarified ambiguous descriptions to ensure they are understood correctly.
Development related changes
While the core of this PR is centered around documentation, it indirectly benefits developers by providing clearer guidelines and reducing the potential for misinterpretation.
No direct development-related changes, but improved documentation can aid in development processes.
Checklist:
[x] Performed a self-review of the code
[x] Rebased to the last commit of the target branch (or merged it into my branch)
[x] Linked the issues which this PR resolves
[x] Documented the changes in code (API docs will be generated automatically)
[x] Updated the tests, ensuring they reflect the clarified descriptions where applicable
[x] All tests are passing
I will quote this from another contribution you made :D
|
gharchive/pull-request
| 2024-02-11T15:11:08 |
2025-04-01T04:35:57.295382
|
{
"authors": [
"ivpavici",
"mettete"
],
"repo": "starknet-io/starknet.js",
"url": "https://github.com/starknet-io/starknet.js/pull/958",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1510379613
|
Made sierra contracts be felt-vec based.
Stack:
#1485
#1484
#1482 ⬅
⚠️ Part of a stack created by spr. Do not merge manually using the UI - doing so may have unexpected results.
This change is
crates/starknet/src/contract_class_test.rs line 19 at r1 (raw file):
Previously, orizi wrote…
in here specifically - this would not make sense - as this is only the debug info, which will be used for debug only, so it cannot be here within the serialized program.
I see, but I find it strange that a Sierra program can be serialized to felts only inside a StarkNet contract.
|
gharchive/pull-request
| 2022-12-25T17:43:35 |
2025-04-01T04:35:57.299612
|
{
"authors": [
"ilyalesokhin-starkware",
"orizi"
],
"repo": "starkware-libs/cairo",
"url": "https://github.com/starkware-libs/cairo/pull/1482",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1925274762
|
feat: Storage{Base}Address impls
Implements the Into trait to convert a Storage{Base}Address into a felt252
Implements the Into trait to convert a Storage{Base}Address into a u256
Implements PartialEq trait to allow comparison of StorageBaseAddress and StorageAddress
Resolves #4195
This change is
I was thinking that adding them gradually to the core library would be great. It's a bit frustrating to always have to double-check whether a type conversion is in the corelib or not, only to end up writing it ourselves.
Perhaps I can make the process more generic and write XIntoY where X can be converted to felt252 and felt252 can be converted to Y; and the same for TryInto.
That would actually solve most of these cases (see the generic sketch below).
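The idea of going through felt252 as an intermediate type can be sketched generically (plain Python for illustration only; the project itself is Cairo, and all names here are hypothetical):

def make_into(x_to_felt, felt_to_y):
    # Compose two known conversions through the intermediate felt252-like
    # type, mirroring the proposed generic "X into Y via felt252" impl.
    def into(x):
        return felt_to_y(x_to_felt(x))
    return into

# e.g. StorageBaseAddress -> felt252 -> u256, with stand-in conversions
storage_base_address_to_u256 = make_into(int, lambda f: f % (2 ** 256))
print(storage_base_address_to_u256(42))  # 42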
If you don't want these kinds of implementations in the core library, then I can add them to Alexandria instead.
I understand that it might considerably increase the size of the corelib if we start implementing all traits for corelib types.
However, it's very convenient, DevX-wise, to be able to natively perform all of these operations without having to think about where all my impls are defined, and explicitly importing them.
|
gharchive/pull-request
| 2023-10-04T02:20:27 |
2025-04-01T04:35:57.303622
|
{
"authors": [
"enitrat"
],
"repo": "starkware-libs/cairo",
"url": "https://github.com/starkware-libs/cairo/pull/4194",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1583817332
|
Feature question
MySQL and Redis management don't show up after deployment.
Please describe the problem in more detail.
See the MySQL/Redis integration documentation:
https://github.com/starsliao/ConsulManager/tree/main/docs
|
gharchive/issue
| 2023-02-14T09:36:09 |
2025-04-01T04:35:57.308156
|
{
"authors": [
"aiopser",
"starsliao"
],
"repo": "starsliao/ConsulManager",
"url": "https://github.com/starsliao/ConsulManager/issues/53",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
1419689961
|
How exactly do I backfill the issues already clicked through the speed-run link?
> It still doesn't work for me
Just tried it; it shows Error: raise Exception('Looks like it shows as completed on your side, but actually isn't? Better check yourself (especially if you have a League branch secretary account that can see the status), then you can give feedback in issue #31'). There is no record in the League branch secretary backend.
I just tried it here too, and it still doesn't show up. I wonder if it's because I clicked that speed-run link before, many times, leaving quite a few blank completion records. But the speed-run link is dead now, so even though the backend shows no completion records, it is still detected as completed here, and therefore there is no speed-run? Do I have to wait until the blank records are all filled before this program can take effect?
New issues should be fine; the old ones are probably in this situation. If you don't want to fix it manually, you can replace the main body of the file with a loop to backfill the previous ones.
Originally posted by @startkkkkkk in https://github.com/startkkkkkk/Beijing_Daxuexi_Simple/issues/31#issuecomment-1286636353
Newbie question: how exactly do I do the 'backfill the previous ones' you mentioned here? I tried modifying study.py directly so it no longer checks whether an issue was already done (just commented out lines 80-82; QWQ, I was only experimenting and don't really know how to change it), and then got the following error:
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
Alternatively, try using the browser console every week. Log in at https://m.bjyouth.net/site/login, then enter the following in the console:
fetch('https://m.bjyouth.net/dxx/check', {'method':'POST','headers':{'Content-Type':'application/json'},'body':JSON.stringify({id:"93",org_id:4028763})})
Here the id in the last dict is the current issue number, 93; next week it is 94, and so on. (A Python equivalent is sketched below.)
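For reference, the same request expressed in Python (a hedged sketch: the requests library is assumed, and the real endpoint also needs the logged-in session's cookies):

import requests

# Python equivalent of the browser fetch above; the id/org_id values are
# the ones from the comment and must match your own issue number and org.
resp = requests.post(
    "https://m.bjyouth.net/dxx/check",
    json={"id": "93", "org_id": 4028763},
    # A logged-in session's cookies are required for this to succeed.
)
print(resp.status_code, resp.text)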
Thanks for sharing! However, I found today that after their update, the blank study records left by the speed-run link were deleted, haha, so this action now runs normally.
|
gharchive/issue
| 2022-10-23T07:41:29 |
2025-04-01T04:35:57.314705
|
{
"authors": [
"SakuraLaurel",
"z-rrr"
],
"repo": "startkkkkkk/Beijing_Daxuexi_Simple",
"url": "https://github.com/startkkkkkk/Beijing_Daxuexi_Simple/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
665333598
|
Sonoff SNZB-02: I have this error, can you give me an idea how to resolve it?
Hello, I have a few of these sensors in my house and I can't mount them because I have this error:
2020-07-24 19:07:00.001 (zigbee2mqtt) MQTT message: zigbee2mqtt/0x00124b001b7832e7 {'temperature': 27.84, 'linkquality': 92, 'humidity': 55.28, 'battery': 100, 'voltage': 3200}
2020-07-24 19:07:00.001 (zigbee2mqtt) This plugin does not support zigbee device with model "SNZB-02" yet
2020-07-24 19:07:00.001 (zigbee2mqtt) If you would like plugin to support this device, please create ticket by this link: https://github.com/stas-demydiuk/domoticz-zigbee2mqtt-plugin/issues/new?labels=new+device&template=new-device-support.md
Regards,
Manuel
hi
https://github.com/stas-demydiuk/domoticz-zigbee2mqtt-plugin/issues/367
Closed as duplicate
|
gharchive/issue
| 2020-07-24T18:11:20 |
2025-04-01T04:35:57.318045
|
{
"authors": [
"kan0bi",
"stas-demydiuk",
"tomdudu38"
],
"repo": "stas-demydiuk/domoticz-zigbee2mqtt-plugin",
"url": "https://github.com/stas-demydiuk/domoticz-zigbee2mqtt-plugin/issues/369",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1040503154
|
EventFrom marks optional properties as required
Description
When extracting event types from a model blueprint with EventFrom, optional properties of the event will be treated just like required properties.
Expected Result
The type of optional properties should be preserved
Actual Result
Optional properties are marked as required
Reproduction
See PR with failing test or codesandbox here
I have only briefly looked into this, but so far it doesn't look like this was introduced by #2743.
Weirdly enough, this seems to work when I'm testing it right inside types.ts
@CodingDive there is a difference between an optional property and a property with an undefined value. Since we are using the return type to create final event types, this is somewhat expected (although inconvenient). See a slightly modified variant of your code to see how it can behave differently:
TS playground
I'm not saying that this is how you should write the code - I'm merely showcasing this so you can better understand how this works. If we take a closer look, you have created a factory function that takes an optional parameter, but the returned object always has that property on it.
I doubt there is anything we can do about this - we could take into account the optionality of the parameters and try to "transfer" that to the created event types. However, this would make things way more complicated and would also make the other use case not possible to express (required prop with optional value).
@janovekj I believe that you were experimenting without strictNullChecks: true.
@Andarist interesting! Thank you so much for sharing.
The syntax you described actually seems to fix the issue I've been having, as the event type now rightly shows the property can be optional/undefined too.
EVENT_WITH_OPTIONAL_PROP: (optionalArg?: string | undefined) => {
  if (optionalArg) {
    return { optionalArg }
  }
  return {}
}
Feel free to close the issue if this is the syntax we should be using going forward!
|
gharchive/issue
| 2021-10-31T15:28:25 |
2025-04-01T04:35:57.370725
|
{
"authors": [
"Andarist",
"CodingDive",
"janovekj"
],
"repo": "statelyai/xstate",
"url": "https://github.com/statelyai/xstate/issues/2777",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2509542768
|
Update monitor-an-experiment.md
add new health checks
🚀 Deployed on https://66da9218c84f8dfc8b9adbce--cozy-fox-0defec.netlify.app
|
gharchive/pull-request
| 2024-09-06T05:22:40 |
2025-04-01T04:35:57.391218
|
{
"authors": [
"jasonwzm",
"lin-statsig"
],
"repo": "statsig-io/docs",
"url": "https://github.com/statsig-io/docs/pull/1977",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
149546809
|
[MIPAS] coda.time_double_to_parts_utc can't manage the conversion of the date in the right way
coda.time_double_to_parts(0.0)
[2000, 1, 1, 0, 0, 0, 0]
coda.time_double_to_parts_utc(0.0)
[1999, 12, 31, 23, 59, 28, 0]
Since the reference time of CODA output is 2000-01-01, these two functions should return the same time for 0.0 s.
At most 2 seconds should likely be taken into account for leap seconds: during the MIPAS mission the insertion of a leap second occurred only 2 times (2005-12-31 and 2008-12-31).
Interestingly, we recently added a better clarification on this topic in the documentation. Please have a look at https://github.com/stcorp/coda/blob/master/libcoda/coda-time.c
The important part is: "In CODA, the choice was made to base the offset for the TAI floating point values on the convention that 2000-01-01T00:00:00 UTC has an offset of 0 (in UTC). Since at that time TAI was 32 seconds ahead of UTC, coda_utcdatetime_to_double() (which converts UTC to TAI) will thus return the value 32 for 2000-01-01T00:00:00."
In my view, in this way CODA is changing the reference time. If you get a time (for example with coda.fetch), this time (without inserted leap seconds) is written as seconds since 2000-01-01 UTC. In the MIPAS file this time is stored as 3 integers (# of days, # of seconds from the beginning of the day, # of microseconds), and these values are relative to 2000-01-01 UTC.
Did you add 32 seconds to the value obtained from the MIPAS file when you get the time?
If not, time_double_to_parts should return the time corresponding to this amount of seconds without the leap seconds, and time_double_to_parts_utc should return the right UTC time corresponding to this amount of seconds.
coda.time_double_to_parts(0.0)
[2000, 1, 1, 0, 0, 0, 0]
coda.time_double_to_parts_utc(0.0)
[2000, 1, 1, 0, 0, 0, 0]
coda.time_double_to_parts(189388800.0)
[2006, 1, 1, 0, 0, 0, 0]
coda.time_double_to_parts_utc(189388800.0)
[2005, 12, 31, 23, 59, 60, 0]
coda.time_double_to_parts(189388801.0)
[2006, 1, 1, 0, 0, 1, 0]
coda.time_double_to_parts_utc(189388801.0)
[2006, 1, 1, 0, 0, 0, 0]
It is a bit more complicated. The algorithm most people use is one where each day has 86400 seconds. If you don't do this, and you are close to midnight, then the leap seconds can introduce a change of date. Instead of requiring people to always properly deal with leap seconds when converting a CODA time value back to a date, we went for simplicity. See also the comment for 'time' in http://stcorp.nl/coda/doc/codadef/codadef-format.html.
Instead, if a user really wants to be leap second precise, we do provide some functions to allow this, but you will have to do more work yourself. For instance, you can disable the automatic conversion to time by setting coda.set_option_perform_conversions(0). In that case you will get the MIPAS time value as a record of 3 integers and you can perform any time conversion you want yourself.
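As an illustration of such a manual conversion, here is a minimal Python sketch that turns the 3-integer record into a calendar date using 86400-second days (field order taken from the description above; the names are hypothetical):

from datetime import datetime, timedelta

# Convert a MIPAS time record (days, seconds, microseconds relative to
# 2000-01-01 UTC, leap seconds not counted) into a calendar date.
def mipas_to_datetime(days, seconds, microseconds):
    epoch = datetime(2000, 1, 1)
    return epoch + timedelta(days=days, seconds=seconds,
                             microseconds=microseconds)

print(mipas_to_datetime(2192, 0, 0))  # 2006-01-01 00:00:00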
However, be aware that ENVISAT time values are not fully leap second accurate either. You have to take care of https://earth.esa.int/handbooks/ra2-mwr/CNTR2-8-2.html as well if you want to be precise.
|
gharchive/issue
| 2016-04-19T18:39:57 |
2025-04-01T04:35:57.634045
|
{
"authors": [
"flaviobar",
"svniemeijer"
],
"repo": "stcorp/coda",
"url": "https://github.com/stcorp/coda/issues/11",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2673365060
|
feat: add at method to array/fixed-endian-factory
Resolves #3135 .
Description
What is the purpose of this pull request?
This pull request:
adds at method to array/fixed-endian-factory
adds the respective benchmarks and tests
updates README
Related Issues
Does this pull request have any related issues?
This pull request:
resolves #3135
Questions
Any questions for reviewers of this pull request?
No.
Other
Any other information relevant to this pull request? This may include screenshots, references, and/or implementation notes.
The at method in @stdlib/array/bool had its benchmarks written in a single file, so I followed that convention. Kindly let me know if we need any changes.
Checklist
Please ensure the following tasks are completed before submitting this pull request.
[x] Read, understood, and followed the contributing guidelines.
@stdlib-js/reviewers
Kindly give this a review, @kgryte!
benchmarks written in a single file, so I followed that convention.
Yes, for at that makes sense, as element access should not be length-dependent.
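For context, at follows the usual Array.prototype.at semantics: a single integer index, with negative values counting from the end. A minimal Python sketch of those semantics (illustrative only; the PR itself is JavaScript):

def at(arr, index):
    # Negative indices count from the end, like Array.prototype.at.
    if index < 0:
        index += len(arr)
    if index < 0 or index >= len(arr):
        return None  # out of range
    return arr[index]

print(at([1, 2, 3], -1))  # 3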
I've moved the stuff in main.js to where you requested and I've placed it alphabetically among the non-static methods in README.md. I hope that's okay.
|
gharchive/pull-request
| 2024-11-19T19:52:45 |
2025-04-01T04:35:57.640950
|
{
"authors": [
"aayush0325",
"kgryte"
],
"repo": "stdlib-js/stdlib",
"url": "https://github.com/stdlib-js/stdlib/pull/3184",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1509953810
|
Add C implementation for @stdlib/math/base/special/expit
Resolves #767.
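For context, expit is the logistic sigmoid, expit(x) = 1 / (1 + e^(-x)). A minimal Python reference of the function being ported (not the C implementation added by this PR):

import math

def expit(x):
    # Logistic sigmoid: maps any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(expit(0.0))  # 0.5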
Checklist
[x] update readme.md
[x] include.gypi
[x] binding.gyp
[x] include/stdlib/math/base/special/
[x] src
[x] manifest.json
[x] lib
[x] examples
[x] benchmark
[x] test
@kgryte
All tests and benchmarks pass; this PR is ready for review.
|
gharchive/pull-request
| 2022-12-24T05:53:49 |
2025-04-01T04:35:57.644192
|
{
"authors": [
"Pranavchiku"
],
"repo": "stdlib-js/stdlib",
"url": "https://github.com/stdlib-js/stdlib/pull/770",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
230745167
|
findAsync with an Array parameter
Hello!
According to the documentation, one can pass an Array as the second argument to the findAsync method. What is the expected outcome of this?
I was expecting something like:
Given this array, if any of the entries match a file, return the path to this file
And indeed it looks like this in the code https://github.com/steelbrain/atom-linter/blob/master/src/index.js#L55.
But when checking out the tests for this, there's no example passing an Array, and I'm getting weird behavior when using it: if I pass ["a", "b", "c"], I get "a" if it exists, but if it doesn't, I get null, even if "b" and/or "c" exist.
Am I making a wrong assumption about how it should work? If I want to look for 2 different files to find default configurations, should I do 2 calls to the findAsync method?
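The expected semantics, sketched in Python for illustration (a hypothetical helper, not the library's actual code): return the first candidate that exists rather than only ever checking the first name.

import os

def find_first(directory, names):
    # Try each candidate name in turn; return the first existing file,
    # and None only if none of the candidates exist.
    for name in names:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None

print(find_first("/etc", ["a", "b", "hosts"]))  # -> "/etc/hosts" if it exists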
Thanks!
Will check
|
gharchive/issue
| 2017-05-23T15:17:05 |
2025-04-01T04:35:57.655410
|
{
"authors": [
"gpiress",
"steelbrain"
],
"repo": "steelbrain/atom-linter",
"url": "https://github.com/steelbrain/atom-linter/issues/169",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1783190281
|
Snscrape Get Items Error
Everything was working fine, then suddenly an unknown snscrape error occurred.
Traceback (most recent call last):
File "f:\TwitterGiveawayBot-main\get_tweet.py", line 258, in search_giveaway
for i,tweet in enumerate(sntwitter.TwitterSearchScraper(text).get_items()):
File "C:\Program Files\Python311\Lib\site-packages\snscrape\modules\twitter.py", line 1763, in get_items
for obj in self._iter_api_data('https://twitter.com/i/api/graphql/7jT5GT59P8IFjgxwqnEdQw/SearchTimeline', _TwitterAPIType.GRAPHQL, params, paginationParams, cursor = self._cursor, instructionsPath = ['data', 'search_by_raw_query', 'search_timeline', 'timeline', 'instructions']):
File "C:\Program Files\Python311\Lib\site-packages\snscrape\modules\twitter.py", line 915, in _iter_api_data
obj = self._get_api_data(endpoint, apiType, reqParams, instructionsPath = instructionsPath)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\snscrape\modules\twitter.py", line 886, in _get_api_data
r = self._get(endpoint, params = params, headers = self._apiHeaders, responseOkCallback = functools.partial(self._check_api_response, apiType
= apiType, instructionsPath = instructionsPath))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\snscrape\base.py", line 275, in _get
return self._request('GET', *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\snscrape\base.py", line 271, in _request
raise ScraperException(msg)
snscrape.base.ScraperException: 4 requests to https://twitter.com/i/api/graphql/7jT5GT59P8IFjgxwqnEdQw/SearchTimeline?variables={"rawQuery"%3A"1x nft giveaway rt lang%3Aen"%2C"count"%3A20%2C"product"%3A"Latest"%2C"withDownvotePerspective"%3Afalse%2C"withReactionsMetadata"%3Afalse%2C"withReactionsPerspective"%3Afalse}&features={"rweb_lists_timeline_redesign_enabled"%3Afalse%2C"blue_business_profile_image_shape_enabled"%3Afalse%2C"responsive_web_graphql_exclude_directive_enabled"%3Atrue%2C"verified_phone_label_enabled"%3Afalse%2C"creator_subscriptions_tweet_preview_api_enabled"%3Afalse%2C"responsive_web_graphql_timeline_navigation_enabled"%3Atrue%2C"responsive_web_graphql_skip_user_profile_image_extensions_enabled"%3Afalse%2C"tweetypie_unmention_optimization_enabled"%3Atrue%2C"vibe_api_enabled"%3Atrue%2C"responsive_web_edit_tweet_api_enabled"%3Atrue%2C"graphql_is_translatable_rweb_tweet_is_translatable_enabled"%3Atrue%2C"view_counts_everywhere_api_enabled"%3Atrue%2C"longform_notetweets_consumption_enabled"%3Atrue%2C"tweet_awards_web_tipping_enabled"%3Afalse%2C"freedom_of_speech_not_reach_fetch_enabled"%3Afalse%2C"standardized_nudges_misinfo"%3Atrue%2C"tweet_with_visibility_results_prefer_gql_limited_actions_policy_enabled"%3Afalse%2C"interactive_text_enabled"%3Atrue%2C"responsive_web_text_conversations_enabled"%3Afalse%2C"longform_notetweets_rich_text_read_enabled"%3Afalse%2C"longform_notetweets_inline_media_enabled"%3Afalse%2C"responsive_web_enhance_cards_enabled"%3Afalse%2C"responsive_web_twitter_blue_verified_badge_is_enabled"%3Atrue} failed, giving up.
Hello, I've already had this error myself; it's from snscrape and happens sometimes (but if it keeps happening for days, tell me), so you can just wait.
Or find giveaways yourself, add them to the recent_url.txt file, and set crash_or_true to true.
As Elon stopped data scraping, is there no way to make this work??
Some people are trying other methods. Do those work??
Hello, for the moment snscrape is "down", but I hope it will work again. For now the only solution is to find the giveaways yourself and add them to the recent_url.txt file, and it will work fine.
|
gharchive/issue
| 2023-06-30T21:36:44 |
2025-04-01T04:35:57.696418
|
{
"authors": [
"Rana-0003",
"steevenakintilo"
],
"repo": "steevenakintilo/TwitterGiveawayBot",
"url": "https://github.com/steevenakintilo/TwitterGiveawayBot/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1552268968
|
Runnable track crossing eventlisteners (iOS)
If you try to slide the runnable track on iOS-Safari (iPhone 13) the eventListener input and change are fired at the same time.
Fixed by setting a onclick on the HTML head if the touchpoints is > 0
|
gharchive/issue
| 2023-01-22T20:19:25 |
2025-04-01T04:35:57.748089
|
{
"authors": [
"stefanradouane"
],
"repo": "stefanradouane/vidplayer",
"url": "https://github.com/stefanradouane/vidplayer/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1144992364
|
Ethernet connection for wbec
Good day,
has anyone implemented a connection via Ethernet instead of Wi-Fi? Is that supported?
Thanks
taschaue
Hello taschaue,
as far as I know this is not possible with the microcontroller used.
A possible alternative, if a network cable already runs to the wallbox: use the network cable for the Modbus and place wbec at the other end (which is usually next to the Wi-Fi router).
That is then almost like Ethernet, and it has the advantage that routers (e.g. a Fritzbox) often also have a USB port for power, i.e. you can do without the USB power adapter.
Thanks for the info.
Has an ESP32 ever been tested? Because there is, for example, the ESP32-PoE from Olimex.
Not so far. As far as I know, switching from the ESP8266 to the ESP32 would require changing/replacing quite a few libraries (e.g. SPIFFS/LittleFS, etc.). I haven't tackled that yet.
Thanks for the info. By the way, this is a really great project!
|
gharchive/issue
| 2022-02-20T11:26:33 |
2025-04-01T04:35:57.756086
|
{
"authors": [
"steff393",
"taschaue"
],
"repo": "steff393/wbec",
"url": "https://github.com/steff393/wbec/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
85262926
|
Refactor login component
DocumentTitle should always be set only on the topmost (page) components. In the case of the login form, it just seems to be defined twice.
THAT WAS MY ISSUE :-1:
|
gharchive/issue
| 2015-06-04T20:28:08 |
2025-04-01T04:35:57.776726
|
{
"authors": [
"grabbou"
],
"repo": "steida/este",
"url": "https://github.com/steida/este/issues/259",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2438509458
|
services/horizon/internal/ingest: reap lookup tables without blocking ingestion
PR Checklist
PR Structure
[ ] This PR has reasonably narrow scope (if not, break it down into smaller PRs).
[ ] This PR avoids mixing refactoring changes with feature changes (split into two PRs
otherwise).
[ ] This PR's title starts with name of package that is most changed in the PR, ex.
services/friendbot, or all or doc if the changes are broad or impact many
packages.
Thoroughness
[ ] This PR adds tests for the most critical parts of the new functionality or fixes.
[ ] I've updated any docs (developer docs, .md
files, etc... affected by this change). Take a look in the docs folder for a given service,
like this one.
Release planning
[ ] I've updated the relevant CHANGELOG (here for Horizon) if
needed with deprecations, added features, breaking changes, and DB schema changes.
[ ] I've decided if this PR requires a new major/minor version according to
semver, or if it's mainly a patch change. The PR is targeted at the next
release branch if it's not a patch change.
What
Close https://github.com/stellar/go/issues/4870
This PR improves reaping of history lookup tables (e.g. history_accounts, history_claimable_balances) so that it can run safely in parallel with ingestion. Currently, reaping of history lookup tables is a blocking operation for ingestion so if the queries to reap history lookup tables take too long that can result in ingestion lag. With this PR, reaping of history lookup tables will be able to run concurrently to ingestion with minimal contention. Also, it is important to note that this PR does not add any performance degradation for either reingestion or live ingestion.
When reviewing this PR it would be helpful to read this design doc:
https://docs.google.com/document/d/1CGfBCS99MTEZDP4mMhV1o6Z5NE_Tlg7ENCcWTwzhlio/edit
Known limitations
After running a full vacuum on history_accounts, the reaping query sped up dramatically. Previously, the duration of reaping the history_accounts table peaked at ~1.9 seconds:
https://grafana.stellar-ops.com/d/x8xDSQQIk/stellar-horizon?orgId=1&from=1722295775773&to=1722400061302&var-environment=stg&var-cluster=pubnet&var-network=All&var-route=All&viewPanel=2531
After the vacuum, the average duration for reaping history_accounts is ~20 ms and the peak duration was ~400 ms:
https://grafana.stellar-ops.com/d/x8xDSQQIk/stellar-horizon?orgId=1&from=1724782666959&to=1724869066959&var-environment=stg&var-cluster=pubnet&var-network=All&var-route=All&viewPanel=2531
This means that the risk that reaping of history lookup tables taking so long that it introduces ingestion lag is a lot less of a concern.
Update:
After running reaping of history lookup tables on staging for 24 hours I have observed that the peak duration actually reaches 600 ms.
https://grafana.stellar-ops.com/d/x8xDSQQIk/stellar-horizon?orgId=1&from=1724866821793&to=1724953221793&var-environment=stg&var-cluster=pubnet&var-network=All&var-route=All&viewPanel=2531
One edge case I wanted to check on: if a user reingests an older range which goes further back than the retention-period cutoff, and reaping for data and lookup tables has already completed for that retention period, will the next iteration of the lookup reaper sense those and delete the qualified (orphaned) lookup ids in that case? I ask because of the offsets for the reapers that are stored in key-value storage; it seems like once those advance, the reaper won't inspect that older id range anymore?
@sreuland I believe I have addressed your feedback. PTAL, thanks!
|
gharchive/pull-request
| 2024-07-30T19:15:31 |
2025-04-01T04:35:57.791847
|
{
"authors": [
"sreuland",
"tamirms"
],
"repo": "stellar/go",
"url": "https://github.com/stellar/go/pull/5405",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1360406624
|
Instruct users to checkout v0.0.4 tag for examples
What
Instruct users to checkout v0.0.4 tag for examples.
Why
We tried having a dev branch but it became a more significant overhead than I think we should maintain. @MonsieurNicolas also pointed out these repos don't have snapshots and so we're adding tags to these repos anyway. To lower our overhead we should just lean on the tags. With these changes to the docs it is not so bad an ask for users and arguably not the most complex thing contract developers will come up against.
Preview is available here: http://soroban-docs-pr107.previews.kube001.services.stellar-ops.com/
|
gharchive/pull-request
| 2022-09-02T16:35:01 |
2025-04-01T04:35:57.795253
|
{
"authors": [
"leighmcculloch",
"stellar-jenkins"
],
"repo": "stellar/soroban-docs",
"url": "https://github.com/stellar/soroban-docs/pull/107",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2375852795
|
[SDP-1245] Invalidate Circle Distribution Account Status upon receiving a 401/403
What
[TODO: Short statement about what is changing.]
Why
[TODO: Why this change is being made. Include any context required to understand the why.]
Known limitations
[TODO or N/A]
Checklist
PR Structure
[ ] This PR has a reasonably narrow scope (if not, break it down into smaller PRs).
[ ] This PR title and description are clear enough for anyone to review it.
[ ] This PR does not mix refactoring changes with feature changes (split into two PRs otherwise).
Thoroughness
[ ] This PR adds tests for the new functionality or fixes.
[ ] This PR contains the link to the Jira ticket it addresses.
Configs and Secrets
[ ] No new CONFIG variables are required -OR- the new required ones were added to the helmchart's values.yaml file.
[ ] No new CONFIG variables are required -OR- the new required ones were added to the deployments (pr-preview, dev, demo, prd).
[ ] No new SECRETS variables are required -OR- the new required ones were mentioned in the helmchart's values.yaml file.
[ ] No new SECRETS variables are required -OR- the new required ones were added to the deployments (pr-preview secrets, dev secrets, demo secrets, prd secrets).
Release
[ ] This is not a breaking change.
[ ] This is ready for production.. If your PR is not ready for production, please consider opening additional complementary PRs using this one as the base. Only merge this into develop or main after it's ready for production!
Deployment
[ ] Does the deployment work after merging?
Something went wrong with PR preview build please check
stellar-disbursement-platform-backend-preview is available here:
SDP: https://sdp-backend-pr338.previews.kube001.services.stellar-ops.com/health
AP: https://sdp-ap-pr338.previews.kube001.services.stellar-ops.com/health
Frontend: https://sdp-backend-dashboard-pr338.previews.kube001.services.stellar-ops.com
|
gharchive/pull-request
| 2024-06-26T16:59:54 |
2025-04-01T04:35:57.829986
|
{
"authors": [
"stellar-jenkins",
"ziyliu"
],
"repo": "stellar/stellar-disbursement-platform-backend",
"url": "https://github.com/stellar/stellar-disbursement-platform-backend/pull/338",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1888359544
|
Small tweaks
-Add Discord to top menu
-Move MGI tutorial to application section
-Add Soroban to main nav
-Add "Basic" to Tutorials
Preview is available here: http://developers-pr230.previews.kube001.services.stellar-ops.com
|
gharchive/pull-request
| 2023-09-08T21:06:01 |
2025-04-01T04:35:57.831909
|
{
"authors": [
"briwylde08",
"stellar-jenkins"
],
"repo": "stellar/stellar-docs",
"url": "https://github.com/stellar/stellar-docs/pull/230",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1127642580
|
Add send-insights to Harden-Runner
@varunsh-coder is the file agent.service required in the dist/pre folder? When I ran the npm build it did not generate it, so I was wondering if it is required.
It is required
@varunsh-coder made the changes
|
gharchive/pull-request
| 2022-02-08T19:00:06 |
2025-04-01T04:35:57.848893
|
{
"authors": [
"arjundashrath",
"varunsh-coder"
],
"repo": "step-security/harden-runner",
"url": "https://github.com/step-security/harden-runner/pull/92",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
96375532
|
debug-toolbar break modeltranslation
Django==1.8.3
django-debug-toolbar==1.3.2
django-modeltranslation==0.10
Mezzanine==4.0.0
with
OPTIONAL_APPS = (
    "debug_toolbar", ...
I get this error:
Traceback (most recent call last):
File "./manage.py", line 28, in <module>
execute_from_command_line(sys.argv)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/core/management/__init__.py", line 312, in execute
django.setup()
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/apps/registry.py", line 115, in populate
app_config.ready()
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/debug_toolbar/apps.py", line 15, in ready
dt_settings.patch_all()
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/debug_toolbar/settings.py", line 232, in patch_all
patch_root_urlconf()
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/debug_toolbar/settings.py", line 220, in patch_root_urlconf
reverse('djdt:render_panel')
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/core/urlresolvers.py", line 550, in reverse
app_list = resolver.app_dict[ns]
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/core/urlresolvers.py", line 352, in app_dict
self._populate()
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/core/urlresolvers.py", line 285, in _populate
for pattern in reversed(self.url_patterns):
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/core/urlresolvers.py", line 402, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/core/urlresolvers.py", line 396, in urlconf_module
self._urlconf_module = import_module(self.urlconf_name)
File "/usr/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/mnt/data2/work/modco-mbm/project/urls.py", line 11, in <module>
admin.autodiscover()
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/mezzanine/boot/__init__.py", line 77, in autodiscover
django_autodiscover(*args, **kwargs)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/contrib/admin/__init__.py", line 24, in autodiscover
autodiscover_modules('admin', register_to=site)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/utils/module_loading.py", line 74, in autodiscover_modules
import_module('%s.%s' % (app_config.name, module_to_search))
File "/usr/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/mezzanine/pages/admin.py", line 245, in <module>
admin.site.register(Page, PageAdmin)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/mezzanine/boot/lazy_admin.py", line 28, in register
super(LazyAdminSite, self).register(*args, **kwargs)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/django/contrib/admin/sites.py", line 108, in register
self._registry[model] = admin_class(model, self)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/mezzanine/pages/admin.py", line 55, in __init__
super(PageAdmin, self).__init__(*args, **kwargs)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/mezzanine/core/admin.py", line 92, in __init__
super(DisplayableAdmin, self).__init__(*args, **kwargs)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/modeltranslation/admin.py", line 245, in __init__
super(TranslationAdmin, self).__init__(*args, **kwargs)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/modeltranslation/admin.py", line 33, in __init__
self.trans_opts = translator.get_options_for_model(self.model)
File "/home/slav0nic/work/modco-mbm/lib/python3.4/site-packages/modeltranslation/translator.py", line 546, in get_options_for_model
'translation' % model.__name__)
modeltranslation.translator.NotRegistered: The model "Page" is not registered for translation
Does changing this line: https://github.com/stephenmcd/mezzanine/blob/master/mezzanine/utils/conf.py#L126
into:
prepend("INSTALLED_APPS", app)
solve the issue?
It's a duplicate of https://github.com/deschler/django-modeltranslation/issues/246
We can solve it in Mezzanine by using the explicit setup for debug toolbar http://django-debug-toolbar.readthedocs.org/en/1.2/installation.html#explicit-setup
I'll push a fix.
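The explicit setup referenced above amounts to something like the following (a sketch based on the linked django-debug-toolbar 1.2 docs; adapt the names to your project):

# settings.py: turn off the automatic patching that trips over Mezzanine
DEBUG_TOOLBAR_PATCH_SETTINGS = False

# urls.py: mount the toolbar's URLs manually
from django.conf import settings
from django.conf.urls import include, url

if settings.DEBUG:
    import debug_toolbar
    urlpatterns += [
        url(r'^__debug__/', include(debug_toolbar.urls)),
    ]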
yes, this helps, tnx
|
gharchive/issue
| 2015-07-21T17:46:46 |
2025-04-01T04:35:57.863735
|
{
"authors": [
"Kniyl",
"slav0nic",
"stephenmcd"
],
"repo": "stephenmcd/mezzanine",
"url": "https://github.com/stephenmcd/mezzanine/issues/1358",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
349715240
|
Migration to Bootstrap 4?
Do you plan to migrate to Bootstrap 4 in the near future?
Can you post this to the mailing list? Thanks
|
gharchive/issue
| 2018-08-11T08:00:32 |
2025-04-01T04:35:57.864827
|
{
"authors": [
"matousc89",
"stephenmcd"
],
"repo": "stephenmcd/mezzanine",
"url": "https://github.com/stephenmcd/mezzanine/issues/1870",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2172531421
|
Gallery display is not showing
Website after fruition https://chaugary.com/
Original notion page https://chaugary.notion.site/Chau-Gary-Software-Engineer-4fe059ef795d40c588d23f1d6bd267f6
My blog section is missing.
I am also having this same issue with all database views. Even a database included on a page. I've tried performing the two edits suggested on the fruition site, but to no avail.
Duplicate of #241.
|
gharchive/issue
| 2024-03-06T22:10:21 |
2025-04-01T04:35:57.866962
|
{
"authors": [
"LudwigStumpp",
"jachane99",
"sunnythedev"
],
"repo": "stephenou/fruitionsite",
"url": "https://github.com/stephenou/fruitionsite/issues/278",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
726981220
|
Can it support tailing log files, too?
Note: this issue was imported from https://github.com/wercker/stern/issues/54, but it was originally created by ryuheechul...
Just tried stern and it was pretty awesome, thank you for your work!
In our case though, sometimes some logs are not being printed as stdout/stderr. Some logs are being generated as just files.
So I thought it would be great if stern can support tailing some files in containers, too!
Note: this comment was imported, but it was originally made by majewsky...
FYI:
When your service only supports logging to a file, you can try configuring that log file to be the pseudo-file /dev/stdout (aka /dev/fd/1 aka /proc/1/fd/1).
When your service only supports logging to syslog and the file is where you direct your syslog to, you can try https://github.com/sapcc/syslog-stdout instead of a normal syslogd.
|
gharchive/issue
| 2020-10-22T01:59:38 |
2025-04-01T04:35:57.886130
|
{
"authors": [
"superbrothers"
],
"repo": "stern/stern",
"url": "https://github.com/stern/stern/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
217058384
|
Derive Eq for VersionReq
Seems to make sense to me. Version already implements it.
That said, maybe Hash would make sense, too. At least my current investigation of https://github.com/rust-lang/cargo/issues/1982 indicates that this would make it easier. Thoughts?
Thoughts?
Mostly, "I haven't thought about it enough", especially Eq. Are these actually total? I'm not sure.
Why shouldn't it be reflexive? Maybe I'm missing something, but I think a == a is trivially the case here.
I'm not saying it's not, I'm saying "this is a guarantee and I have given literally zero thought as to how it's true, especially when you bring in stuff like the prerelease rules, etc."
Closing, this ain't going nowhere
|
gharchive/pull-request
| 2017-03-26T15:09:42 |
2025-04-01T04:35:57.894951
|
{
"authors": [
"jonas-schievink",
"steveklabnik"
],
"repo": "steveklabnik/semver",
"url": "https://github.com/steveklabnik/semver/pull/109",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
384819312
|
Enabled selection when grouping rows
Hi! I'm a customer of ZippyTech, with an owned license =)
We haven't yet switched to the new DataGrid, and are still using this one, and found that row selection wasn't enabled when using groupBy.
We have forked the project and done a quick change to solve it. Feel free to incorporate it to the project if you find it appropriate!
We have tried to run the tests, but it's been kinda messy =P (JSDOM versions not compatible, and lots of troubleshooting), but we have done some extensive usage during the last 3 days and it is working OK.
NOTE: Only thing is, since we're updating the counter to give each row an individual index, the odd-even iterator works cross-group, which is a side effect.
Thanks for the PR! One quick question: could you update the src/*jsx files instead of lib? That way it will auto-compile to ES5 for generic browser support.
Cheers!
Sure thing, done!
published to 3.2.1
|
gharchive/pull-request
| 2018-11-27T14:25:33 |
2025-04-01T04:35:57.897786
|
{
"authors": [
"stevelacy",
"thoriphes"
],
"repo": "stevelacy/react-datagrid2",
"url": "https://github.com/stevelacy/react-datagrid2/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
650352904
|
Uncaught Error: Error from Instagram: This endpoint has been retired
Suddenly stopped working since yesterday.
Hey @boseakash7! Instagram has retired the API in favor of the new Basic Display API:
From the Instagram API page
The remaining Instagram Legacy API permission ("Basic Permission") was disabled on June 29, 2020. As of June 29, third-party apps no longer have access to the Legacy API. To avoid disruption of service to your app and business, developers previously using the Legacy API should instead rely on Instagram Basic Display API and Instagram Graph API . Please request approval for required permissions through the App Review process.
You’ll need to upgrade to the v2 version of Instafeed.js to use the new Basic Display API
|
gharchive/issue
| 2020-07-03T05:29:18 |
2025-04-01T04:35:57.916129
|
{
"authors": [
"boseakash7",
"stevenschobert"
],
"repo": "stevenschobert/instafeed.js",
"url": "https://github.com/stevenschobert/instafeed.js/issues/678",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
45600369
|
API auth
Hello!
First of all, good job on track-o-bot! That's also great to open source it.
I began working on a little companion Android App that will use the API.
At first, I'll show a "login" screen so the user enters his username and token. I'd like to have a way to ask the API if these credentials are valid.
As of now, I could try to get the history or any other JSON, and if it returns an error I would take it for granted that the credentials are not correct.
But I think that would be a nice addition to have a simple "auth".
Thanks!
@Mathbl @stevschmid if you're still interested in this, I can have a look?
I don't know if it would be useful for others. As for myself, I do not need it anymore. I'll close this. Thanks!
Thanks @Mathbl :smile:
|
gharchive/issue
| 2014-10-13T01:32:03 |
2025-04-01T04:35:57.937534
|
{
"authors": [
"Mathbl",
"emilesilvis"
],
"repo": "stevschmid/trackobot.com",
"url": "https://github.com/stevschmid/trackobot.com/issues/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
665616613
|
crash on launch hrev54460
crash report: http://fatelk.com/jim/WonderBrush-555-debug-25-07-2020-15-52-37.report.zip
Fixed in https://git.haiku-os.org/haiku/commit/?id=d548fb2b3e493004761739c3e3fcc008ec497860
Indeed, it now works.
This issue can be closed.
|
gharchive/issue
| 2020-07-25T16:42:35 |
2025-04-01T04:35:57.964414
|
{
"authors": [
"bbjimmy",
"diversys",
"waddlesplash"
],
"repo": "stippi/WonderBrush-v2",
"url": "https://github.com/stippi/WonderBrush-v2/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
992850952
|
Fix to show SafariView in nested sheets
See #8
Hi @twodayslate,
Thank you for your contribution! I have checked that your PR fixes the issue and is working well :)
However, I'm sorry that it will not be merged because v2.4.0 (which migrates SafariViewPresenter to UIViewRepresentable in 6e10ed37f83d8ebe62dd952a7bce81942c077038) also contains a bug fix for the issue you have reported.
I hope you understand, and I really appreciate your contribution!
|
gharchive/pull-request
| 2021-09-10T03:21:59 |
2025-04-01T04:35:57.970219
|
{
"authors": [
"stleamist",
"twodayslate"
],
"repo": "stleamist/BetterSafariView",
"url": "https://github.com/stleamist/BetterSafariView/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
150733282
|
Added unit tests for escaped quotes.
Per the comments in the associated pull request, adding unit tests for escaped quotes.
I'm more familiar with TestNG, but I see that some of the existing test have things like this:
@Test(expected=NullPointerException.class)
Is it possible to use the same notation for your tests with JUnit? In TestNG I know you can't specify the message expected, but maybe JUnit lets you? It would make the test cases simpler if you could.
I'm not as familiar with JUnit either, so I'll follow whatever convention you prefer (I just tried to copy what I already saw in the CDL tests).
Aside from the test cases that are expecting to generate a NullPointerException, I only saw "@Test" used. For that reason, I only prefaced each test case with "@Test". What convention do you have in mind as an alternative?
Looks good, thanks for taking the time to do the unit tests. Either style for catching exceptions is acceptable. Will merge after the JSON-Java change is accepted.
|
gharchive/pull-request
| 2016-04-25T03:07:36 |
2025-04-01T04:35:57.972738
|
{
"authors": [
"captainIowa",
"johnjaylward",
"stleary"
],
"repo": "stleary/JSON-Java-unit-test",
"url": "https://github.com/stleary/JSON-Java-unit-test/pull/45",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1256988981
|
SPI Outputs not initializied with Reset
Hi Stephan,
when reset is applied to the neorv32, the outputs of the SPI stay uninitialized. The SPI also uses only a clock. To avoid unpredictable behaviour, at least the CSn outputs need a reset-defined output value. This avoids accidental activation of the driven SPI component. It would be better to also assign CLK and MOSI. In my use case, 180 us pass before a valid value is assigned.
If you like, I can submit a patch. What do you think?
Greetings,
Andreas
Hey Andreas,
the SPI chip-select lines are initialized, but it takes quite some time since they are configured by a software reset (crt0 writing zero to all IO devices' registers) rather than by an actual hardware reset.
I thought this would not cause any troubles with SPI devices. All outgoing SPI signals might be undefined, but for sure there are NO clock pulses generated on the SPI clock line, so any SPI peripheral should not perform any internal operations. We also have the same situation during power-up of the FPGA, where all FPGA pins might be in tri-state mode until the FPGA is configured.
However, I am open for a discussion. Maybe it is about time to add a real hardware reset to all IO/peripheral devices. 🤔
The SPI also looks good now; the chip select is de-asserted with the reset:
Thanks for providing the fix!
Awesome! The MOSI signal seems to clear correctly now, too
Same question here: you used the modified version from #334, didn't you?
yes, I used #334
|
gharchive/issue
| 2022-06-01T20:39:55 |
2025-04-01T04:35:57.983156
|
{
"authors": [
"akaeba",
"andkae",
"stnolting"
],
"repo": "stnolting/neorv32",
"url": "https://github.com/stnolting/neorv32/issues/330",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1994764347
|
Implementation with nextpnr-xilinx + prjxray doesn't work.
Hello everyone!
Context:
Until now, I synthesized the NEORV32 with Vivado and there wasn't any problem. Even I added a module via Stream Link and it worked successfully.
However, I'm trying to synthesize the NEORV32 CPU by default using only open tools and I'm having problems.
The board that I'm using is the ARTY A7 35T.
And the open tools are: GHDL + yosys + GHDL yosys plugin + nextpnr-xilinx + prjxray + openFPGALoader
I follow this script to implement the design:
set -ex
cd $(dirname "$0")
cd ..
git clone --recursive https://github.com/stnolting/neorv32-setups
cd synth
mkdir -p build
echo "Analyze NEORV32 CPU"
ghdl -i --workdir=build --work=neorv32 ../neorv32-setups/neorv32/rtl/core/*.vhd
ghdl -i --workdir=build --work=neorv32 ../neorv32-setups/neorv32/rtl/core/mem/neorv32_dmem.default.vhd
ghdl -i --workdir=build --work=neorv32 ../neorv32-setups/neorv32/rtl/core/mem/neorv32_imem.default.vhd
ghdl -i --workdir=build --work=neorv32 ../neorv32-setups/neorv32/rtl/test_setups/neorv32_test_setup_bootloader.vhd
ghdl -m --workdir=build --work=neorv32 neorv32_test_setup_bootloader
echo "Synthesis with yosys and ghdl as module"
yosys -m ghdl -p 'ghdl --workdir=build --work=neorv32 neorv32_test_setup_bootloader; synth_xilinx -flatten -abc9 -arch xc7 -top neorv32_test_setup_bootloader; write_json neorv32_test_setup_bootloader.json'
echo "Place and route"
nextpnr-xilinx --chipdb /usr/local/share/nextpnr/xilinx-chipdb/xc7a35t.bin --xdc arty.xdc --json neorv32_test_setup_bootloader.json --write neorv32_test_setup_bootloader_routed.json --fasm neorv32_test_setup_bootloader.fasm
echo "Generate bitstream"
/home/usainz/Descargas/prjxray/utils/fasm2frames.py --part xc7a35tcsg324-1 --db-root /usr/local/share/nextpnr/prjxray-db/artix7 neorv32_test_setup_bootloader.fasm > neorv32_test_setup_bootloader.frames
/home/usainz/Descargas/prjxray/build/tools/xc7frames2bit --part_file /usr/local/share/nextpnr/prjxray-db/artix7/xc7a35tcsg324-1/part.yaml --part_name xc7a35tcsg324-1 --frm_file neorv32_test_setup_bootloader.frames --output_file neorv32_test_setup_bootloader.bit
echo "Load bitstream in FPGA"
#To send to SRAM:
openFPGALoader --board arty neorv32_test_setup_bootloader.bit
There are no errors in the script process, but after synthesis the bootloader doesn't work. The LEDs don't light up and nothing is displayed in the CuteCom terminal.
Obviously there isn't any problem in the design, so one of these tools isn't working right.
Questions:
Have any of you implemented NEORV32 in Arty A7 using only open tools?
What tools did you use? (Maybe verilog to routing.)
Can any of you implement NEORV32 using these tools? (To confirm this issue.)
It is important to note that:
I followed this tutorial to install nextpnr-xilinx + prjxray and the LED example worked.
/cc @gatecat @f4pga @umarcor
Hey @Unike267!
Unfortunately, I have never tested the open-source toolchain for AMD FPGAs. However, @AWenzel83 did a lot of work to add a test setup for these platforms in https://github.com/stnolting/neorv32-setups/pull/10.
You could try to feed the auto-generated Verilog version of the processor (neorv32-verilog) into your flow - just to double-check that the yosys-ghdl plugin runs fine.
Hello @stnolting!
You could try to feed the auto-generated Verilog version of the processor (neorv32-verilog) into your flow - just to double-check that the yosys-ghdl plugin runs fine
To check whether the ghdl-yosys plugin runs fine, I synthesized the design on an ICE40 (Alhambra II board).
(After adapting neorv32_test_setup_bootloader.vhd: changing the frequency to 12 MHz, reducing the IMEM to 4x1024 and the DMEM to 2x1024, and setting an appropriate .pcf)
Following this script:
set -ex
cd $(dirname "$0")
cd ..
git clone --recursive https://github.com/stnolting/neorv32-setups
echo "Copy the bootloader for ICE40"
cp synth/neorv32_test_setup_bootloader.vhd neorv32-setups/neorv32/rtl/test_setups
cd synth
mkdir -p build
echo "Analyze NEORV32 CPU"
ghdl -i --workdir=build --work=neorv32 ../neorv32-setups/neorv32/rtl/core/*.vhd
ghdl -i --workdir=build --work=neorv32 ../neorv32-setups/neorv32/rtl/core/mem/neorv32_dmem.default.vhd
ghdl -i --workdir=build --work=neorv32 ../neorv32-setups/neorv32/rtl/core/mem/neorv32_imem.default.vhd
ghdl -i --workdir=build --work=neorv32 ../neorv32-setups/neorv32/rtl/test_setups/neorv32_test_setup_bootloader.vhd
ghdl -m --workdir=build --work=neorv32 neorv32_test_setup_bootloader
echo "Synthesis with yosys and ghdl as module"
yosys -m ghdl -p 'ghdl --workdir=build --work=neorv32 neorv32_test_setup_bootloader; synth_ice40 -json neorv32_test_setup_bootloader.json'
echo "Place and route"
nextpnr-ice40 --hx8k --package tq144:4k --pcf lib.pcf --asc neorv32_test_setup_bootloader.asc --json neorv32_test_setup_bootloader.json
echo "Generate bitstream"
icepack neorv32_test_setup_bootloader.asc neorv32_test_setup_bootloader.bin
echo "Load bitstream in FPGA (make with sudo)"
sudo iceprog neorv32_test_setup_bootloader.bin
And it works.
So we can discard that the problem comes from ghdl-yosys plugin.
Anyway, tomorrow I can try the auto-generated Verilog version on the Arty (at home I only have the ICE40).
Thanks for the feedback!!
:smiley: :+1:
That looks good! :wink:
Any progress on the Arty approach?
Hello @stnolting!
Yes, I fixed the problem!
I solved it by adding the -nodsp and -nolutram arguments to yosys, as shown in the following command:
yosys -m ghdl -p 'ghdl --std=08 --workdir=build --work=neorv32 neorv32_test_setup_bootloader; synth_xilinx -nodsp -nolutram -flatten -abc9 -arch xc7 -top neorv32_test_setup_bootloader; write_json neorv32_test_setup_bootloader.json'
The rest of commands don't change.
I have checked it in Arty A7-35T and in Arty A7-100T and it works fine.
Here the proof:
I think that the optimization for dsps and for lutrams is not well implemented for xilinx fpgas.
Anyway thanks for your interest! :smiley:
This is great! Thank you very much for sharing your findings! :+1:
I think that the optimization for dsps and for lutrams is not well implemented for xilinx fpgas.
I've heard that before. But anyway, it is a start 😉
|
gharchive/issue
| 2023-11-15T13:18:27 |
2025-04-01T04:35:57.996262
|
{
"authors": [
"Unike267",
"stnolting"
],
"repo": "stnolting/neorv32",
"url": "https://github.com/stnolting/neorv32/issues/726",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1431033120
|
Support to show hub informations
ref: https://github.com/stolostron/backlog/issues/25831
Signed-off-by: clyang82 chuyang@redhat.com
/assign @KevinFCormier Please take a look.
I will continue to increase the code coverage. Thanks.
|
gharchive/pull-request
| 2022-11-01T08:01:15 |
2025-04-01T04:35:58.004519
|
{
"authors": [
"clyang82"
],
"repo": "stolostron/console",
"url": "https://github.com/stolostron/console/pull/2224",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1864500287
|
[release-2.8] do not delete addon if cluster is not found
issue: https://issues.redhat.com/browse/ACM-6997
/lgtm
|
gharchive/pull-request
| 2023-08-24T06:35:58 |
2025-04-01T04:35:58.005596
|
{
"authors": [
"elgnay",
"zhiweiyin318"
],
"repo": "stolostron/klusterlet-addon-controller",
"url": "https://github.com/stolostron/klusterlet-addon-controller/pull/212",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1715016690
|
Disable the logs for event by default
Disable the logs for event-watch by default, because it is too noisy.
/LGTM
|
gharchive/pull-request
| 2023-05-18T05:39:47 |
2025-04-01T04:35:58.006362
|
{
"authors": [
"clyang82",
"yanmxa"
],
"repo": "stolostron/multicluster-global-hub",
"url": "https://github.com/stolostron/multicluster-global-hub/pull/442",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2360297700
|
increase Search operator mem limit
Description
Increase memory limit of manager container in search-operator pod
Related Issue
If applicable, please reference the issue(s) that this pull request addresses.
https://issues.redhat.com/browse/ACM-11936
Changes Made
Provide a clear and concise overview of the changes made in this pull request.
Screenshots (if applicable)
Add screenshots or GIFs that demonstrate the changes visually, if relevant.
Checklist
[ ] I have tested the changes locally and they are functioning as expected.
[ ] I have updated the documentation (if necessary) to reflect the changes.
[ ] I have added/updated relevant unit tests (if applicable).
[ ] I have ensured that my code follows the project's coding standards.
[ ] I have checked for any potential security issues and addressed them.
[ ] I have added necessary comments to the code, especially in complex or unclear sections.
[ ] I have rebased my branch on top of the latest main/master branch.
Additional Notes
Add any additional notes, context, or information that might be helpful for reviewers.
Reviewers
Tag the appropriate reviewers who should review this pull request. To add reviewers, please add the following line: /cc @reviewer1 @reviewer2
Definition of Done
[ ] Code is reviewed.
[ ] Code is tested.
[ ] Documentation is updated.
[ ] All checks and tests pass.
[ ] Approved by at least one reviewer.
[ ] Merged into the main/master branch.
/lgtm
/hold
Covered in https://github.com/stolostron/multiclusterhub-operator/pull/1573
|
gharchive/pull-request
| 2024-06-18T16:50:55 |
2025-04-01T04:35:58.013141
|
{
"authors": [
"SherinV",
"bjoydeep",
"dislbenn"
],
"repo": "stolostron/multiclusterhub-operator",
"url": "https://github.com/stolostron/multiclusterhub-operator/pull/1567",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
212876460
|
Return feedback
The visual feedback that the return succeeded is very subtle (increase the toast duration or create an alert with a confirmation button).
Also, the pinpad keeps showing "TRANSAC APROVADA" forever if the transaction was made through this app linking.
I believe that, before returning from the mPos, it could go back to the default text, STONE PAGAMENTOS, which is what stays on screen if you go directly through the mPos, without app linking.
@guilhermebruzzi, we are pushing app version 2.1.2 with this fix to alpha
@jgabrielfreitas Awesome! :D
|
gharchive/issue
| 2017-03-08T22:28:16 |
2025-04-01T04:35:58.015084
|
{
"authors": [
"guilhermebruzzi",
"jgabrielfreitas"
],
"repo": "stone-pagamentos/Android-Uri-demo",
"url": "https://github.com/stone-pagamentos/Android-Uri-demo/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2262714104
|
🚀 Feature: Default share expiration configurable in administration
🔖 Feature description
Default share expiration configurable in administration
🎤 Pitch
I'm likely to use pingvin for a group of friends in a private Discord, but we have no need for things to be automatically deleted. Currently the default behavior is for a share to expire in 1 day, but it would be nice if this default could be configured from never to whenever.
Hello,
This feature would be extremely useful, even for professional use.
Thank you very much for your work.
Indeed, it's about the default settings. I am absolutely fine with our guests lowering this to a single day if they so desire. However, right now most clients don't think about it and thus forget to set it longer than a single day.
So my wish would be that the server default expiry can be changed. Whatever people set afterwards is up to them.
|
gharchive/issue
| 2024-04-25T05:41:21 |
2025-04-01T04:35:58.018813
|
{
"authors": [
"alexdaums",
"jimz011",
"pcmike"
],
"repo": "stonith404/pingvin-share",
"url": "https://github.com/stonith404/pingvin-share/issues/454",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1980365485
|
feat: preload next article for infinite scroll
https://storipress-media.atlassian.net/browse/SPMVP-6295
Add a "preload" option for useLoadMore and InfiniteScroll.
Ds error?
Ds error?
seems like a false positive
|
gharchive/pull-request
| 2023-11-07T01:27:43 |
2025-04-01T04:35:58.022305
|
{
"authors": [
"DanSnow",
"SidStraw",
"ches4117"
],
"repo": "storipress/karbon",
"url": "https://github.com/storipress/karbon/pull/293",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1748611776
|
feature-request: Use storyblok functions from this package directly on API routes
Description
Can we import @storyblok/nuxt directly in a Nuxt API route instead of using the client? The setup is already done in the nuxt.config. Since the functions are only available in the composable context, it would be nice to use them on API routes as well, to get data and combine it with some logic.
Suggested solution or improvement
/* Use the @storyblok/nuxt package in a Nuxt API route like this: */
import { useStoryblokApi } from '@storyblok/nuxt' // and async functions

export default defineEventHandler(async (event) => {
  const story = await useStoryblokApi().getStory('someStory')
  return story.data.story
})
Additional context
I want to get data from storyblok, cache them with Nitro, and use the CMS data for our own (independent) endpoint to have consistent data and a clean setup.
Validations
[X] Follow our Code of Conduct
Hi @chstappert thanks for reaching out.
So Vue composables are meant to work under a setup context, which means, inside components. Since server routes in Nuxt work with Nitro, I'm afraid composables won't work here.
An option would be to create a server utility for Nitro.
I'd second that!
If you're trying to satisfy the requirement from nuxt's simple-sitemap to provide a list of dynamic urls - obviously based on what we have in storyblok, this little addition would be superb.
Otherwise we're doing the whole initialization again, potentially duplicating our config
I'm currently having to set up a separate instance of storyblok-js-client to make calls to the content delivery API from my server/api routes 😕 would be really nice to just use the Nuxt library exclusively
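A minimal sketch of that workaround, assuming storyblok-js-client v5+ and a Nuxt 3 Nitro server route (the runtime-config key below is made up):
// server/api/story.ts — hypothetical route using storyblok-js-client directly
import StoryblokClient from 'storyblok-js-client'

export default defineEventHandler(async () => {
  const client = new StoryblokClient({
    accessToken: useRuntimeConfig().storyblokToken, // hypothetical config key
  })
  const { data } = await client.get('cdn/stories/home', { version: 'published' })
  return data.story
})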
|
gharchive/issue
| 2023-06-08T20:46:37 |
2025-04-01T04:35:58.052072
|
{
"authors": [
"alvarosabu",
"chstappert",
"steffkes",
"zacwebb"
],
"repo": "storyblok/storyblok-nuxt",
"url": "https://github.com/storyblok/storyblok-nuxt/issues/440",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
497565250
|
Reworked with new template, TS support, dependencies updated
Use current versions of storyblok modules (especially storyblok-js-client).
TypeScript now works correctly, extended Vue interface for use of $storyapi & $storybridge
'storyblok-nuxt' needs to be added to the types in tsconfig.json of the nuxt application for autocompletion
@onefriendaday Please have a look into it. Tested it with the current nuxt version and it worked fine. :) Thank you!
Currently Nuxt still uses core-js 2.x. To avoid conflicts I would use storyblok-js-client v1, as v2 installs core-js 3.x.
Currently Nuxt still uses core-js 2.x. To avoid conflicts I would use storyblok-js-client v1, as v2 installs core-js 3.x.
Whoops I will try to fix it soon. :) Thank you!
@onefriendaday Downgraded storyblok-js-client to 1.0.34 and added a separate branch with 2.0.2 named core-js@3 for future. :)
@onefriendaday Did some more work into it and updated the structure, TypeScript support, README and package.json. Would be nice if you can have a look into it. :)
@onefriendaday It would be great to actually publish a storyblok-nuxt@next version in prior to use core-js v3, what do you think?
As I use Storyblok a lot and plan to use it a lot more later, I would like to help with the plugin. Can you publish a @next branch and some roadmap for contributing?
Kinda inactive :/
Will check with Alex. Currently many new core features are getting reviewed by him. 👍
The PR has merge conflicts, please sync with master to resolve the conflicts.
@MarvinRudolph any chance to resolve these conflicts? TS support would be great!
Will be reviewed this weekend :)
Hello @MarvinRudolph! How are you? So far, I have some change requests before merging this PR:
When you resolve the conflicts:
In the package.json, you can keep storyblok-js-client at v2, because core-js 3 is required.
In the package.json, could you change the version field to '0-0-development'?
Could you add a section in the README, explaining how to set up the types for autocomplete?
Could you update the dependencies of the project?
We use semantic-release for releasing new versions. So, for these changes, could you add a commit of feat type?
Any doubts I'm available. Thanks!
Hi folks! Is there any way how I can help to finish this PR?
CC: @emanuelgsouza @MarvinRudolph
@Lindar90 I think it might be better to drop the pull request and refactor it. It's nearly a year old and a lot has changed since then (in Nuxt at least AFAIK). And I haven't had much time to check that, sorry! What do you think?
This doesn't mean I don't want to work on it, would be really cool to bring types for the modules out! It's on my list but not planned in the next weeks.
Hi, @m4rvr sorry that this draft wasn't updated in a while. The good news is that we are working on the TypeScript support atm here https://github.com/storyblok/storyblok-nuxt/pull/296 . I will close this PR
|
gharchive/pull-request
| 2019-09-24T09:22:01 |
2025-04-01T04:35:58.062128
|
{
"authors": [
"DominikAngerer",
"Gomah",
"Lindar90",
"MarvinRudolph",
"alvarosabu",
"emanuelgsouza",
"f3ltron",
"m4rvr",
"onefriendaday",
"tillmon"
],
"repo": "storyblok/storyblok-nuxt",
"url": "https://github.com/storyblok/storyblok-nuxt/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1139419680
|
fix syntax of code example in docs: hierarchy-separator
Issue: n/a
What Changed
add a closing quote on string of code example in docs
run yarn update-all which added a trailing comma on the same code example
Checklist
Check the ones applicable to your change:
[X] Ran yarn update-all
[ ] Tests are updated
[X] Documentation is updated
Change Type
Indicate the type of change your pull request is:
[ ] maintenance
[X] documentation
[ ] patch
[ ] minor
[ ] major
Great catch @aurmer, thank you so much for your contribution!
|
gharchive/pull-request
| 2022-02-16T01:36:23 |
2025-04-01T04:35:58.066137
|
{
"authors": [
"aurmer",
"yannbf"
],
"repo": "storybookjs/eslint-plugin-storybook",
"url": "https://github.com/storybookjs/eslint-plugin-storybook/pull/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
235251007
|
Feature: Support Vue
Talking with @ndelangen about storybook also supporting rendering Vue components; seems like it is a viable solution instead forking or building a separate tool. I believe there has been interest in supporting it from others and the team but no one actively working on this?
https://github.com/storybooks/storybook/blob/master/ROADMAP.md#supporting-other-frameworks-and-libraries
Some notes:
Rendering is done in: https://github.com/storybooks/storybook/blob/master/app/react/src/client/preview/render.js
There are a few UI elements in storybook itself written with react but maybe we can just leave those in?
Might be easy to support the JSX way of writing a Vue component, but most of Vue is in .vue files, so we need to figure out how to load those files? Or maybe just use https://github.com/vuejs/babel-plugin-transform-vue-jsx to write the stories themselves, but it would support .vue if that's easier (maybe weird to do that)?
For more in-depth info/discussion we can also discuss in https://storybooks-slackin.herokuapp.com
I think optimally it would be best to show both React and Vue components together, especially for companies working with multiple stacks.
The vue-template-compiler has a great API that could be leveraged to take a .vue file and get back render functions.
https://github.com/vuejs/vue/blob/dev/packages/vue-template-compiler/README.md#compilercompiletofunctionstemplate
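For the curious, a minimal sketch of that API on its own (not Storybook code; the template string is arbitrary):
// compile a template string into render functions with vue-template-compiler
const { compileToFunctions } = require('vue-template-compiler');

const { render, staticRenderFns } = compileToFunctions('<div>{{ msg }}</div>');
// `render` and `staticRenderFns` can then be handed to a Vue instance:
// new Vue({ data: { msg: 'hello' }, render, staticRenderFns })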
I'm interested in Vue component support in storybook.
As @TheLarkInn mentioned, Vue.js publishes the vue-template-compiler, which has an API we can use to get render functions from a template. However, before we can process that, we need to extract (parse) the Vue component from .vue files.
There's something I'm worried about: how to render the Vue and React components together. 🤔
I really like the idea of supporting other frameworks. I'd likely approach this more generically though by providing an API that frameworks could hook into. That way framework specific approaches could be isolated out, tested separately, and it gives the community a platform to build off of for other frameworks.
I got webpack to build successfully with a vue file with the vue loader.
https://github.com/storybooks/storybook/pull/1267
I need help getting it to actually render
Hi @ndelangen
I want to help with Vue support as a collaborator of storybook.
I have started implementing it now.
https://github.com/kazupon/storybook/commits/add-app-vue
An alpha was just released, guys!
getstorybook command => @storybook/cli 3.2.0-alpha.0
runtime => @storybook/vue 3.0.0-alpha.3
addon notes => @storybook/addon-notes 3.2.0-alpha.0
addon knobs => @storybook/addon-knobs 3.2.0-alpha.0
Bug reports and feedback are very much welcome!!
I think this is a great idea as the name doesn't allude to it being exclusive to React so support for other frameworks would be great.
I've released an updated alpha of @alexandrebodin / @kazupon / @ndelangen 's wonderful @storybook/vue along with some other pre-release features such as story hierarchies. Please try it out and give feedback to help us get this ready for a broader release:
https://gist.github.com/shilman/947a3d1d4cfdf5c3a8bb06d3d4eb84cf
cc @hzoo @gustojs @nblackburn @thelarkinn
@ndelangen @alexandrebodin @kazupon Currently there's a problem with getstorybook for Vue. See these instructions for details:
https://gist.github.com/shilman/947a3d1d4cfdf5c3a8bb06d3d4eb84cf#vue-support-1262
Any ideas?
Wow! This looks so awesome!
@shilman
I just published 3.2.0-alpha.6 which works with getstorybook now.
We will be making an announcement tomorrow 🥁
|
gharchive/issue
| 2017-06-12T14:28:02 |
2025-04-01T04:35:58.189662
|
{
"authors": [
"TheLarkInn",
"alexandrebodin",
"andreiglingeanu",
"gustojs",
"hzoo",
"kazupon",
"nblackburn",
"ndelangen",
"shilman",
"zephraph"
],
"repo": "storybooks/storybook",
"url": "https://github.com/storybooks/storybook/issues/1262",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
226362435
|
Allow passing options to storiesOf().add() as hook to customizing story behaviour in plugins
Having the generic ability to pass options alongside each story to the storiesOf and add method would open the door to some easier per-story customization in decorators or other add-ons.
I see few possible solutions:
Option 1: add an options argument
Make options an argument to storiesOf and add. This would look like this:
storiesOf('kind', kindOptions)
.add('title', render, storyOptions);
Option 2: Make storiesOf and add accept a single options hash arg
Change the api for storiesOf and add so that they accept a single options arg. Something along the lines of:
storiesOf({title: 'kind title'})
.add({
title: 'story title',
render,
myOtherOption
});
Option 3: use a React component based API.
I think this is the most natural solution: Abandon the storiesOf and add functions in place of a react component-oriented design.
For a story about a component called MyComponent the old story would look like:
storiesOf('My Component')
.add(
'story title',
() => <MyComponentToTest />,
{ myOtherOption: {foo: bar, ... } }
);
I would propose that could be expressed like so:
<StoriesOf title="My Component">
<Story title="story title" myOtherOption={{foo: bar, ... }} >
<MyComponentToTest />
</Story>
</StoriesOf>
... I would maybe refine the language at the same time and call the <StoriesOf /> component a <Chapter /> instead. That would look like this:
<Chapter title="My Component">
<Story title="story title" myOtherOption={{foo: bar, ... }} >
<MyComponentToTest />
</Story>
</Chapter>
Some related issues that could be solved with a react-component-based api:
#881 Allow passing options to react-test-renderer .create() method
<Chapter title="Chapter">
<Story title="Story" createNodeMock={node => { ... }} > ... </Story>
</Chapter>
#151 Substories/Hierarchy
<Chapter title="Grandparent category">
<Chapter title="Parent category">
<Chapter title="Child category">
<Story title="Story" createNodeMock={node => { ... }} > ... </Story>
</Chapter>
</Chapter>
</Chapter>
#58 Toggle global settings
<Chapter title="Chapter">
<Story title="Story" createNodeMock={node => { ... }} >
<ThemeToggle> ... </ThemeToggle>
</Story>
</Chapter>
And probably a whole host of other problems
Thank you @theinterned for opening this discussion
One more variant (actually something between Options 1 and 2) was offered by @nutgaard in #705
storiesOf('Kind without metadata')
.add('no-metadata', () => (<p>No metadata here</p>))
.add('metadata', { story: () => (<p>No metadata here</p>), additionData: 'metadata' })
the big advantage of this is that we can pass data through addons like addon-info to ensure compatibility with them
Some thoughts:
The idea of passing arbitrary options w/ a story seems good.
I like the name "chapter" myself, especially to help with the nested problem, although I wonder if we should just adopt a more generic describe/it terminology too.
I don't really understand why the JSX syntax helps? Also doesn't the story part need to be a function: how would that work in JSX?
@tmeasday in answer to your question three, I guess the big advantages taht a JSX approach gives are the same familiar advantages that all react development gives:
Familiarity: Storybook is a react tool so devs are familiar with the component style. There are a lot of use-cases that are solvable with simple and familiar patterns.
Composition: This to me is the big one — components give you great flexibility to compose with sibling and child relationships.
Replaceability: Any component could replace any other component with the same interface (like in the Liskov sense). Imagine being able to customize how how a Story works by just wrapping it or requiring an alternate Story from a different package.
Consistent handling of arbitrary props: PropTypes, default props, props spreading ... all the features that react has for managing the complexity of passing around arbitrary props arguments.
There are probably a lot of other reasons. These are the top three for me ...
@tmeasday as to your question "doesn't the story part need to be a function: how would that work in JSX?" Can you clarify that a bit? Aren't components functions (and sometimes classes)? or do you mean something else?
On the function part: I mean that the current interface expects a function that returns a React element (I guess equivalent to a stateless functional component). Your version above passes an element as a child to the Story component. (in the OO sense, it passes an "instance" rather than a "class").
On the general question of a "react" interface, let's unpack this a little more. I suspect you can get a lot of the advantages you've stated with a basic OO class-based interface. I'm not necessarily advocating for that, but I want to use it for comparison. So something like:
new Chapter({
  title: 'Chapter',
  stories: [
    new Story({
      title: 'Story',
      render() { return <MyComponent ... />; },
    }),
  ],
});
To your points:
I think people are similarly used to a functional or OO style.
I don't really understand the composition point; can you give an example of what you mean?
Classes are certainly easily extendable and replaceable.
Consistent handling of props: What does React do here that helps? Isn't JSX just a thin wrapper around basic JS? Can you help me with an example?
I'm not necessarily against it but I am trying to tease it out so bear with me. I guess I just wonder if the "rendering" interface is restrictive for little benefit. For instance, it no longer becomes easy to take an instance of a chapter and get its stories (compare the two cases: a class instance with a stories property to an Chapter element with a set of children, some of which are Story elements), or easily change the way a chapter/story is rendered.
Is there a reason why restricting the way we think about stories to "rendering" is a good thing? I'm not sure there is.
@tmeasday I agree that you could absolutely describe any interface that is currently described in JSX in pure JS: that's one of the fundamentally great things about JSX: it transpiles to just pure JS, but is so much nicer to look at than a deeply nested tree of objects and arrays.
So yes, people are similarly used to OO style and, say, the functional style of most other testing packages (which I would sort of prefer to see somehow than the fluent API of storybook as it stands now). But there's a certain expressiveness to the JSX syntax.
I think the value of composition is well expressed by @UsulPro in slack: it makes it easy to solve problems like this: https://github.com/storybooks/storybook/issues/766 because the add-on could be a component rather than a plugin.
I created a separate issue about the JSX syntax: https://github.com/storybooks/storybook/issues/1006
we can continue the discussion there.
I feel that the JSX syntax may be a distraction from the conversation about a generic way to pass props to storiesOf and add, which would be a very useful feature on its own without a major overhaul of the storybook API.
I'm fully on board with this idea!
Very solid proposal for #1209 api-v2
I'm on board with Option 1. Any breaking change in the API had better be motivated by enabling something we can't do with the current API, IMHO.
I am working on this actually #2679!
This is done actually, just waiting to be merged in #2679.
Released as 4.0.0-alpha.0
|
gharchive/issue
| 2017-05-04T18:03:05 |
2025-04-01T04:35:58.208361
|
{
"authors": [
"Hypnosphi",
"UsulPro",
"ndelangen",
"shilman",
"theinterned",
"tmeasday"
],
"repo": "storybooks/storybook",
"url": "https://github.com/storybooks/storybook/issues/993",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2620510553
|
Expose occupied() in unordered map
Useful when iterating over the whole hash and copying valid entries.
Thanks for the feature request. The same change has also been proposed in #427 to address #423. As already mentioned there, I believe that publicly exposing the internal occupied() function is not the right design. Instead, device_range() should be used to iterate over the (compressed) set of values, which is more efficient, especially at low load factors.
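For illustration, a rough sketch of that approach, assuming the current stdgpu/thrust APIs (key/value types are arbitrary):
// copy the valid entries out via device_range() instead of probing occupied()
#include <stdgpu/unordered_map.cuh>
#include <thrust/copy.h>
#include <thrust/device_vector.h>

void extract_entries(stdgpu::unordered_map<int, float>& map,
                     thrust::device_vector<thrust::pair<int, float>>& out)
{
    auto range = map.device_range(); // compressed view over valid entries only
    out.resize(map.size());
    thrust::copy(range.begin(), range.end(), out.begin());
}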
The problems concerning incomplete stream support have been fixed in #450. Exposing the internal occupancy status should thus not be necessary in general. Feel free to open an issue if this did not address your use case.
|
gharchive/pull-request
| 2024-10-29T08:29:32 |
2025-04-01T04:35:58.210888
|
{
"authors": [
"david-tingdahl-nvidia",
"stotko"
],
"repo": "stotko/stdgpu",
"url": "https://github.com/stotko/stdgpu/pull/436",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
580439537
|
Version with prerelease tag "x.x.x-x" is working for android but not for ios
Hello,
I'm trying to use react-native-version to propagate the version 0.3.0-0 to both Android and iOS. It works fine for Android, but for iOS it sets version 0.3.0, which is unwanted.
yarn version --preminor changes package.json to the following :
npx react-native-version changes the build.gradle to the following :
... and info.plist to the following :
Expected behavior :
info.plist should be updated to have version 0.3.0-0
Thank you !
Hi @ThomasCarca, unfortunately this is on purpose, please read #41. Apple won't allow a TestFlight/App Store release with a x.x.x-x format, only x.x.x. Sorry for this, but for now it seems to be out of our hands.
I will add some info to the README about this.
|
gharchive/issue
| 2020-03-13T08:12:51 |
2025-04-01T04:35:58.215096
|
{
"authors": [
"ThomasCarca",
"stovmascript"
],
"repo": "stovmascript/react-native-version",
"url": "https://github.com/stovmascript/react-native-version/issues/178",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
509481553
|
Add support for custom scalars
Description
This adds support for custom GraphQL scalars. The usage is a bit different from what was proposed in #114: the serialize/parse functions are passed as arguments to the decorator instead of being defined on the type itself. This allows using an existing type, without any changes to the type itself, as a custom scalar.
Usage examples can be found in the docstring of the strawberry.scalar decorator and the accompanying tests.
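For a quick idea of the shape of the API, a minimal sketch (keyword names follow this PR's description; the Base64 example is made up):
import base64
from typing import NewType

import strawberry

Base64 = strawberry.scalar(
    NewType("Base64", bytes),  # wrap an existing type without touching it
    serialize=lambda v: base64.b64encode(v).decode("utf-8"),
    parse_value=lambda v: base64.b64decode(v),
)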
Types of Changes
[ ] Core
[ ] Bugfix
[x] New feature
[ ] Enhancement/optimization
[x] Documentation
Issues Fixed or Closed by This PR
Closes #114.
Checklist
[x] My code follows the code style of this project. (as far as I can tell)
[ ] My change requires a change to the documentation.
[x] I have updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
[x] I have added tests to cover my changes.
[x] I have tested the changes and verified that they work and don't break anything (as well as I can manage).
Thanks for adding the RELEASE.md file!
Here's a preview of the changelog:
Add support for custom GraphQL scalars.
Added custom scalars for Date, DateTime, and Time. They can't go into scalars.py because that leads to a cyclic import (because strawberry.scalar needs to be imported there, which imports type_converter.py which again imports scalars.py).
Seems like aniso8601 isn't included in the dependencies? Should it be added, or should we rely on the Python fromisoformat methods (which apparently cannot parse arbitrary ISO 8601 strings)?
I updated the PR.
I added aniso8601 as an optional dependency. This dependency is not installed on the CI system (because it is optional). I need to figure out how to fix that.
Also, the pytest-mypy-plugins 1.1 version is breaking the tests for me, so I fixed the version to ~1.0 (I guess fixing the issue would be preferrable, but I'd consider that out of the scope of this PR).
The question regarding parse_literal is still open.
@jgosmann turns out that it is used when parsing arguments passed directly, instead of using variables, see:
I'll take a look at parse_literal later this week. Thanks for figuring that out!
|
gharchive/pull-request
| 2019-10-19T17:10:46 |
2025-04-01T04:35:58.311297
|
{
"authors": [
"botberry",
"jgosmann",
"patrick91"
],
"repo": "strawberry-graphql/strawberry",
"url": "https://github.com/strawberry-graphql/strawberry/pull/191",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1841051727
|
[strawberry.ext.mypy_plugin] Fix import error for users who don't use pydantic
This should fix the import error in strawberry.ext.mypy_plugin, introduced in the latest version, for users who don't use pydantic. Hopefully the new location of the import is fine.
Error importing plugin "strawberry.ext.mypy_plugin": No module named 'pydantic'
Hi, thanks for contributing to Strawberry 🍓!
We noticed that this PR is missing a RELEASE.md file. We use that to automatically do releases here on GitHub and, most importantly, to PyPI!
So as soon as this PR is merged, a release will be made 🚀.
Here's an example of RELEASE.md:
Release type: patch
Description of the changes, ideally with some examples, if adding a new feature.
Release type can be one of patch, minor or major. We use semver, so make sure to pick the appropriate type. If in doubt feel free to ask :)
Here's the tweet text:
🆕 Release (next) is out! Thanks to David Němec for the PR 👏
Get it here 👉 https://github.com/strawberry-graphql/strawberry/releases/tag/(next)
|
gharchive/pull-request
| 2023-08-08T10:41:02 |
2025-04-01T04:35:58.314640
|
{
"authors": [
"botberry",
"davidnemec"
],
"repo": "strawberry-graphql/strawberry",
"url": "https://github.com/strawberry-graphql/strawberry/pull/3018",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
637041269
|
Add call method on result of strawberry.union
This fixes an issue when using named unions with generic types.
Error that was fixed:
Thanks for adding the RELEASE.md file!
Here's a preview of the changelog:
This release fixes an issue when using named union types in generic types,
for example using an optional union. This is now properly supported:
from typing import Optional

import strawberry


@strawberry.type
class A:
    a: int


@strawberry.type
class B:
    b: int


Result = strawberry.union("Result", (A, B))


@strawberry.type
class Query:
    ab: Optional[Result] = None
|
gharchive/pull-request
| 2020-06-11T14:00:45 |
2025-04-01T04:35:58.317210
|
{
"authors": [
"botberry",
"patrick91"
],
"repo": "strawberry-graphql/strawberry",
"url": "https://github.com/strawberry-graphql/strawberry/pull/350",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
842387490
|
Add support for specifying private fields through a strawberry.type variable
Description
strawberry.Private is great for specifying private fields; however, it does not play well with IDEs: https://github.com/strawberry-graphql/strawberry/issues/702
This PR adds support for specifying which fields on a type should be made private, through a private_fields variable on the strawberry.type decorator
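A minimal sketch of the proposed usage (private_fields is the argument introduced by this PR, not part of released strawberry; the User type is made up):
import strawberry


@strawberry.type(private_fields=["password"])
class User:
    name: str
    password: str  # kept out of the GraphQL schema under this proposal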
Types of Changes
[ ] Core
[ ] Bugfix
[ ] New feature
[x] Enhancement/optimization
[x] Documentation
Issues Fixed or Closed by This PR
https://github.com/strawberry-graphql/strawberry/issues/702
Checklist
[x] My code follows the code style of this project.
[x] My change requires a change to the documentation.
[x] I have updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
[x] I have added tests to cover my changes.
[x] I have tested the changes and verified that they work and don't break anything (as well as I can manage).
Hi, thanks for contributing to Strawberry 🍓!
We noticed that this PR is missing a RELEASE.md file. We use that to automatically do releases here on GitHub and, most importantly, to PyPI!
So as soon as this PR is merged, a release will be made 🚀.
Here's an example of RELEASE.md:
Release type: patch
Description of the changes, ideally with some examples, if adding a new feature.
Release type can be one of patch, minor or major. We use semver, so make sure to pick the appropriate type. If in doubt feel free to ask :)
I'm always a bit hesitant to keep tacking on more arguments to the function signatures for our types. I do feel like there's gotta be a cleaner way to get this done.
I'll let @patrick91 give his opinions here, though
I agree
It's not urgent on my end to get a solution for this in either, we can sleep on this for a while
I'm always a bit hesitant to keep tacking on more arguments to the function signatures for our types. I do feel like there's gotta be a cleaner way to get this done.
I'll let @patrick91 give his opinions here, though
I agree
It's not urgent to get a solution for this on my end, we can sleep on this
@lijok I think we should work on a plugin for PyCharm instead of changing our code, I don't have bandwidth for that at this time, but maybe in future :)
What do you think? Happy to close this PR for now?
@lijok I think we should work on a plugin for PyCharm instead of changing our code, I don't have bandwidth for that at this time, but maybe in future :)
What do you think? Happy to close this PR for now?
Yeah, that makes sense
Happy to close this
|
gharchive/pull-request
| 2021-03-27T01:42:23 |
2025-04-01T04:35:58.326656
|
{
"authors": [
"botberry",
"eimantas-gecas-axomic",
"lijok",
"patrick91"
],
"repo": "strawberry-graphql/strawberry",
"url": "https://github.com/strawberry-graphql/strawberry/pull/792",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
268517555
|
paddr_base
Why is paddr_base is ignored? I have a 32bit ARM device that passes on custom memtester app (which opens /dev/mem and maps test mem from a predefined physical address) but fails on stressapptest - it shows several miscompares
paddr_base was used for a Google-specific mmap interface at one point. You can add a new implementation of OsLayer::AllocateTestMem if you have some platform specific code you'd like to use.
memtester and stressapptest have differing test patterns and usage patterns of memory, so it's not uncommon for one to find errors where the other didn't.
Thanks Nick.
I had already modified AllocateTestMem to honor paddr_base, such that it opens /dev/mem and allocates around 300M starting from paddr_base. However, paddr_base in my case is a carved-out portion of memory from Linux that is guaranteed to be untouched by any process. What puzzles me is that stressapptest with --paddr_base succeeds, but without --paddr_base it shows several miscompares. Any hints on that?
I'd guess that the location of the failed memory cell is outside that 300M region. Do you have the actual failure message handy?
Yes, the address is outside those 300M:
1970/01/02-04:40:02(UTC) Log: CrcCheckPage Falling through to slow compare, CRC mismatch 000000ffbdffff01 000000ffffffff01 0000fe4957ff0380 0000fe7fffff0380 != 000000ffffffff01 000000ffffffff01 0000fe7fffff0380 0000fe7fffff0380
1970/01/02-04:40:02(UTC) Report Error: miscompare : DIMM Unknown : 1 : 103012s
1970/01/02-04:40:02(UTC) Hardware Error: miscompare on CPU 0(0x3) at 0xb2e6f960(0x0:DIMM Unknown): read:0xffffffffbdffffff, reread:0xffffffffbdffffff expected:0xffffffffffffffff
I have paddr_base at 0x1CA00000 and the physical address shown in miscompare is 0x0.
Does this happen on multiple devices or just one?
2 devices
Same bits were bad in the failed word?
I ran into an interesting issue once where a driver had a bad pointer for interrupt status, and would overwrite data in userspace. This showed up as memory corruption, but was actually a sw error. Basically the ISR would update a bit in an uninitialized hwaddr which sometimes would end up in stressapptest's memory.
There are several others in the same log:
1970/01/01-16:51:37(UTC) Log: CrcCheckPage Falling through to slow compare, CRC mismatch 000001004201ff01 000000ffffffff01 000102ae6966ff80 0001027ffffeff80 != 000000ffffffff01 000000ffffffff01 0001027ffffeff80 0001027ffffeff80
1970/01/01-16:51:37(UTC) Report Error: miscompare : DIMM Unknown : 1 : 60507s
1970/01/01-16:51:37(UTC) Hardware Error: miscompare on CPU 0(0x3) at 0xb0629a60(0x0:DIMM Unknown): read:0x0000000042020000, reread:0x0000000042020000 expected:0x0000000000000000
1970/01/01-17:51:06(UTC) Log: CrcCheckPage Falling through to slow compare, CRC mismatch 0000010041ffff01 000000ffffffff01 000100f277ff0180 0001007fffff0180 != 000000ffffffff01 000000ffffffff01 0001007fffff0180 0001007fffff0180
1970/01/01-17:51:06(UTC) Report Error: miscompare : DIMM Unknown : 1 : 64076s
1970/01/01-17:51:06(UTC) Hardware Error: miscompare on CPU 1(0x3) at 0xb5548220(0x0:DIMM Unknown): read:0xa55a5aa5e75a5aa5, reread:0xa55a5aa5e75a5aa5 expected:0xa55a5aa5a55a5aa5
1970/01/01-19:29:36(UTC) Log: CrcWarmCopyPage Falling through to slow compare, CRC mismatch 000000fffdffff01 000000ffffffff01 0000fe7c17ff0380 0000fe7fffff0380 != 000000ffffffff01 000000ffffffff01 0000fe7fffff0380 0000fe7fffff0380
1970/01/01-19:29:36(UTC) Report Error: miscompare : DIMM Unknown : 1 : 69986s
1970/01/01-19:29:36(UTC) Hardware Error: miscompare on CPU 0(0x1) at 0xa455c060(0x0:DIMM Unknown): read:0xfffffffffdffffff, reread:0xfffffffffdffffff expected:0xffffffffffffffff
1970/01/01-22:43:09(UTC) Log: CrcCheckPage Falling through to slow compare, CRC mismatch 000001003fffff01 000000ffffffff01 000102f0fffeff80 0001027ffffeff80 != 000000ffffffff01 000000ffffffff01 0001027ffffeff80 0001027ffffeff80
1970/01/01-22:43:09(UTC) Report Error: miscompare : DIMM Unknown : 1 : 81599s
1970/01/01-22:43:09(UTC) Hardware Error: miscompare on CPU 1(0x3) at 0xa57011e0(0x0:DIMM Unknown): read:0x0000000040000000, reread:0x0000000040000000 expected:0x0000000000000000
1970/01/02-04:40:02(UTC) Log: CrcCheckPage Falling through to slow compare, CRC mismatch 000000ffbdffff01 000000ffffffff01 0000fe4957ff0380 0000fe7fffff0380 != 000000ffffffff01 000000ffffffff01 0000fe7fffff0380 0000fe7fffff0380
1970/01/02-04:40:02(UTC) Report Error: miscompare : DIMM Unknown : 1 : 103012s
1970/01/02-04:40:02(UTC) Hardware Error: miscompare on CPU 0(0x3) at 0xb2e6f960(0x0:DIMM Unknown): read:0xffffffffbdffffff, reread:0xffffffffbdffffff expected:0xffffffffffffffff
1970/01/02-06:49:06(UTC) Log: CrcCheckPage Falling through to slow compare, CRC mismatch 0000000c03fffff1 000001f3fffffe11 00000c18cffff1e8 0001f4edfffe1118 != 0000000bfffffff1 000001f3fffffe11 00000c11fffff1e8 0001f4edfffe1118
1970/01/02-06:49:06(UTC) Report Error: miscompare : DIMM Unknown : 1 : 110756s
1970/01/02-06:49:06(UTC) Hardware Error: miscompare on CPU 0(0x3) at 0xb5d54260(0x0:DIMM Unknown): read:0x0100000005000000, reread:0x0100000005000000 expected:0x0100000001000000
1970/01/02-08:09:11(UTC) Log: CrcCheckPage Falling through to slow compare, CRC mismatch 0000010001ffff01 000000ffffffff01 0001028297feff80 0001027ffffeff80 != 000000ffffffff01 000000ffffffff01 0001027ffffeff80 0001027ffffeff80
1970/01/02-08:09:11(UTC) Report Error: miscompare : DIMM Unknown : 1 : 115561s
1970/01/02-08:09:11(UTC) Hardware Error: miscompare on CPU 0(0x3) at 0xa59045a0(0x0:DIMM Unknown): read:0x0000000002000000, reread:0x0000000002000000 expected:0x0000000000000000
1970/01/02-08:28:19(UTC) Log: CrcCheckPage Falling through to slow compare, CRC mismatch 0000010001ffff01 000000ffffffff01 0001028387feff80 0001027ffffeff80 != 000000ffffffff01 000000ffffffff01 0001027ffffeff80 0001027ffffeff80
1970/01/02-08:28:19(UTC) Report Error: miscompare : DIMM Unknown : 1 : 116709s
1970/01/02-08:28:19(UTC) Hardware Error: miscompare on CPU 1(0x3) at 0xad63c1e0(0x0:DIMM Unknown): read:0x0000000002000000, reread:0x0000000002000000 expected:0x0000000000000000
1970/01/02-11:54:14(UTC) Log: CrcCheckPage Falling through to slow compare, CRC mismatch 000001004601ff01 000000ffffffff01 0001027ef8a6ffa0 00010267fffeffa0 != 000000ffffffff01 000000ffffffff01 00010267fffeffa0 00010267fffeffa0
1970/01/02-11:54:14(UTC) Report Error: miscompare : DIMM Unknown : 1 : 129064s
1970/01/02-11:54:14(UTC) Hardware Error: miscompare on CPU 0(0x3) at 0xa9c69d60(0x0:DIMM Unknown): read:0x0000020046020200, reread:0x0000020046020200 expected:0x0000020000000200
1970/01/02-15:12:01(UTC) Log: CrcCheckPage Falling through to slow compare, CRC mismatch 0000010001ffff01 000000ffffffff01 0001028337feff80 0001027ffffeff80 != 000000ffffffff01 000000ffffffff01 0001027ffffeff80 0001027ffffeff80
1970/01/02-15:12:01(UTC) Report Error: miscompare : DIMM Unknown : 1 : 140931s
1970/01/02-15:12:01(UTC) Hardware Error: miscompare on CPU 0(0x3) at 0xa5160320(0x0:DIMM Unknown): read:0x0000000002000000, reread:0x0000000002000000 expected:0x0000000000000000
Not sure. Could be a driver problem corrupting memory, or could be a design problem with your board. Stressapptest really is just writing the data and reading it back again, and noticing that some of the bits are no longer the same. So there's not really much likelihood that it's a bug in stressapptest.
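(As a toy illustration of that write/read-back idea — not the actual stressapptest code; the pattern and buffer size below are arbitrary:)
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t words = 1u << 20;
    const uint64_t pattern = 0xA55A5AA5A55A5AA5ULL; /* one of the patterns in the logs above */
    uint64_t *buf = malloc(words * sizeof *buf);
    if (buf == NULL) return 1;
    for (size_t i = 0; i < words; i++) buf[i] = pattern;   /* write */
    for (size_t i = 0; i < words; i++) {                   /* read back and compare */
        if (buf[i] != pattern)
            printf("miscompare at %p: read 0x%016llx expected 0x%016llx\n",
                   (void *)&buf[i], (unsigned long long)buf[i],
                   (unsigned long long)pattern);
    }
    free(buf);
    return 0;
}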
Maybe try lowering the memory clock and see if the errors go away?
1970/01/02-08:28:19(UTC) Hardware Error: miscompare on CPU 1(0x3) at
0xad63c1e0(0x0:DIMM Unknown): read:0x0000000002000000,
reread:0x0000000002000000 expected:0x0000000000000000
1970/01/02-11:54:14(UTC) Log: CrcCheckPage Falling through to slow
compare, CRC mismatch 000001004601ff01 000000ffffffff01 0001027ef8a6ffa0
00010267fffeffa0 != 000000ffffffff01 000000ffffffff01 00010267fffeffa0
00010267fffeffa0
1970/01/02-11:54:14(UTC) Report Error: miscompare : DIMM Unknown : 1 :
129064s
1970/01/02-11:54:14(UTC) Hardware Error: miscompare on CPU 0(0x3) at
0xa9c69d60(0x0:DIMM Unknown): read:0x0000020046020200,
reread:0x0000020046020200 expected:0x0000020000000200
1970/01/02-15:12:01(UTC) Log: CrcCheckPage Falling through to slow
compare, CRC mismatch 0000010001ffff01 000000ffffffff01 0001028337feff80
0001027ffffeff80 != 000000ffffffff01 000000ffffffff01 0001027ffffeff80
0001027ffffeff80
1970/01/02-15:12:01(UTC) Report Error: miscompare : DIMM Unknown : 1 :
140931s
1970/01/02-15:12:01(UTC) Hardware Error: miscompare on CPU 0(0x3) at
0xa5160320(0x0:DIMM Unknown): read:0x0000000002000000,
reread:0x0000000002000000 expected:0x0000000000000000
1970/01/01-16:51:37(UTC) Log: CrcCheckPage Falling through to slow
compare, CRC mismatch 000001004201ff01 000000ffffffff01 000102ae6966ff80
0001027ffffeff80 != 000000ffffffff01 000000ffffffff01 0001027ffffeff80
0001027ffffeff80
1970/01/01-16:51:37(UTC) Report Error: miscompare : DIMM Unknown : 1 :
60507s
1970/01/01-16:51:37(UTC) Hardware Error: miscompare on CPU 0(0x3) at
0xb0629a60(0x0:DIMM Unknown): read:0x0000000042020000,
reread:0x0000000042020000 expected:0x0000000000000000
Hi Nick,
Your comment about the bad pointer in the driver raises a question: at a low level, is there a pause (some other process getting scheduled to run) between the write, the read, and the reread at a particular address? Or at the DDR PHY level, is it a read-after-write command (consecutive / non-preemptive)?
stressapptest writes the data, then comes back to read it after some time.
So at any given time something like 60% of memory is written and waiting to
be read back. This means any bad kernel pointer dereferences are likely to
cause stressapptest errors.
When a miscompare is detected, stressapptest will flush the cache and
reread the same DRAM location, and print out "expected X, read Y, reread Z"
with the intention that if reread is the same as read, likely the data was
written incorrectly or corrupted in DRAM. If reread is the same as
expected, the DDR read transaction had some problem but the data in DRAM is
actually not corrupted.
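Stated as code, the decision rule Nick describes is (names and messages here are illustrative, not from the stressapptest source):
#include <cstdint>
#include <cstdio>

void ClassifyMiscompare(uint64_t expected, uint64_t read, uint64_t reread) {
  if (read == reread) {
    // Same wrong value twice: written incorrectly or corrupted in DRAM itself.
    std::printf("data bad in DRAM (write path or cell corruption)\n");
  } else if (reread == expected) {
    // DRAM holds the right value: the first DDR read transaction failed.
    std::printf("transient read error; DRAM contents intact\n");
  } else {
    std::printf("unstable: reads disagree and neither matches expected\n");
  }
}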
Closing
|
gharchive/issue
| 2017-10-25T19:21:02 |
2025-04-01T04:35:58.458886
|
{
"authors": [
"HarshBokil",
"manojngb",
"nickjsanders"
],
"repo": "stressapptest/stressapptest",
"url": "https://github.com/stressapptest/stressapptest/issues/51",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
96943190
|
Add Maintainer
@tylerb Can I be added to this as a maintainer? It seems some PRs are stale.
Certainly. Thanks for volunteering!
@tylerb Thank u sir, writing a Gitlab Provider
|
gharchive/issue
| 2015-07-24T01:57:01 |
2025-04-01T04:35:58.460834
|
{
"authors": [
"bscott",
"tylerb"
],
"repo": "stretchr/gomniauth",
"url": "https://github.com/stretchr/gomniauth/issues/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
452757796
|
(styled or unstyled) displays red line separators between exp/cvc/zip even when valid
Summary
<CardElement> displays red (invalid?) line separators between expiration / CVC / zip code when viewing in responsive design mode on firefox quantum. happens on all mobile screen widths from ipad (vertical) and below
Other information
unstyled (raw <CardElement>)
styled with inline style tag
this is occurring in firefox quantum on OSX - 67.0.1 (64-bit) mobile responsiveness view
I just tested this out locally on my phone and it does not appear, so this must be a bug with Firefox. On widths greater than iPad vertical the lines do not appear.
Not sure if this should be a separate issue:
when testing on my phone I noticed that clicking in the CardElement causes a zoom effect on focus. This does not return to the normal zoom level after unfocusing / switching to another input. Is there a way to correct this behavior? It looks broken during usage. Occurs with and without inline styling.
For the second issue: this is a problem with Safari and inputs that have a font-size < 16px.
I added a meta tag rule with content="user-scalable=0, ..." and it prevents this behavior. Despite the name, it still allows users to manually pinch-to-zoom.
Full tag if anyone needs it:
<meta
name="viewport"
content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0, shrink-to-fit=no"
/>
solution from here
Hi @the-vampiire - we've looked into this and have discovered the root issue. We'll be working on a fix and will let you know when it's ready.
We finally narrowed down the real cause of this problem. It only occurs in Firefox Responsive Design mode, not in actual Firefox (or other browsers) running on native devices. It is caused by the browser-provided base stylesheet for responsive design mode having a few rules that don't match the actual base stylesheet of a native mobile browser. Specifically, there's a rule to add a red outline to invalid text inputs with a very specific selector. This rule is not present in any other situation. We believe this is a bug/oddity of responsive design mode itself.
|
gharchive/issue
| 2019-06-05T23:14:41 |
2025-04-01T04:35:58.506982
|
{
"authors": [
"asolove-stripe",
"mbarrett-stripe",
"the-vampiire"
],
"repo": "stripe/react-stripe-elements",
"url": "https://github.com/stripe/react-stripe-elements/issues/360",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
714132653
|
Issue-1861: Upgraded commons-io version to 2.8.0
This pull request fixes strongbox/strongbox#1861.
Incremented commons-io version from 2.6 to 2.8.0. Successfully ran the integration tests and confirmed the dependency tree does not contain any version other than 2.8.0.
Hi!
Thank you for your contribution!
Would you mind signing the ICLA, as described in the Contributing page?
Also, please, feel free to join our chat channel, if you'd like to learn more about the project and/or like to find out what else you could help with.
Kind regards,
Martin
Also, as per our article on [how to upgrade dependencies and plugins], have you created an associated pull request for this in the strongbox project?
|
gharchive/pull-request
| 2020-10-03T17:05:32 |
2025-04-01T04:35:58.533952
|
{
"authors": [
"TheNytangel",
"carlspring"
],
"repo": "strongbox/strongbox-parent",
"url": "https://github.com/strongbox/strongbox-parent/pull/88",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
268232143
|
Figure out artifact extensions using Apache Tika
We need to be able to figure out the extensions of artifact files. It seems that Apache Tika is one way to do this. Doing a simple path.substring(path.lastIndexOf('.') + 1) is not good enough, because there are certain extensions that are more complex, like for example tar.gz, tar.bz2 and the like.
This will probably have to be implemented in the LayoutProvider implementations.
Assuming I create a method findExtension(Argument: argument) -
What should be the return type - String name of the extension - ex - "xlsx" or "tar.gz"?
What would be the input argument, a file or a list of files (and will it return a list or a HashMap<fileName, extension>)?
I think it should be a method in 'RepositoryPath' with no parameters and an extension string as the result.
Assuming I create a method findExtension(Argument: argument) -
What should be the return type - String name of the extension - ex - "xlsx" or "tar.gz"?
Yes.
Examples:
foo-1.2.3.tar.gz --> tar.gz
foo-1.2.3.gz --> gz
foo-1.2.3.tar.bz2 --> tar.bz2
foo-1.2.3.jar --> jar
foo-1.2.3.zip --> zip
What would be the input argument, a file or a list of files (and will it return a list or a HashMap<fileName, extension>)?
As far as I'm aware, Apache Tika works with InputStreams. What I would like is this:
We currently have ArtifactInputStream (this is what is used for the artifacts to be read; it also calculates the artifact's checksums in realtime)
It would be great if we could somehow use Tika from here while the artifact is being read (when it's being deployed to strongbox), instead of having to re-read it once it's written to the underlying file system.
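For illustration only, a minimal compound-extension sketch (the known-extensions list, class name and placement are assumptions, not the agreed design); content-based detection via Tika would then only be needed as a fallback for streams with no usable file name:

import java.util.Arrays;
import java.util.List;

public class ExtensionResolver
{

    // Longest-match-first list of multi-part extensions that must stay intact.
    private static final List<String> COMPOUND_EXTENSIONS = Arrays.asList("tar.gz", "tar.bz2", "tar.xz");

    public static String resolveExtension(String fileName)
    {
        String lowerCase = fileName.toLowerCase();
        for (String extension : COMPOUND_EXTENSIONS)
        {
            if (lowerCase.endsWith("." + extension))
            {
                return extension;
            }
        }

        int dotIndex = lowerCase.lastIndexOf('.');

        return dotIndex >= 0 ? lowerCase.substring(dotIndex + 1) : "";
    }

}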
Got it. I can take this up and start working on it.
|
gharchive/issue
| 2017-10-25T00:46:26 |
2025-04-01T04:35:58.540531
|
{
"authors": [
"carlspring",
"dinesh19aug",
"sbespalov"
],
"repo": "strongbox/strongbox",
"url": "https://github.com/strongbox/strongbox/issues/370",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
158826292
|
Madcow - syke
Changes Unknown when pulling 25a281117fef668b4830c7e25fb274004974379d on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling cb9987e9ae932f55925ee971b41eb68c4dd68335 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 9c79f5eff10107386f90d509f32dcd9e9aa89107 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 673863758fbdb89822a2d49abca37c4842447720 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling b9daad61a334ed953544b7190d21f63116592d3f on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 20f8437faba37945365d53d1277cfe4da262de41 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling e9e5a902649ffa02b50b11f98500f3ef9e2d7245 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 1b381682c226ca7d82439c4d0a502a62aa7ac6e0 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 10fe7a1ccb91ca463d915643bc713bbc945c7c51 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 81bb93f76bca1ba84f5152853955a9e04be333e9 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 281f8347fb8e67c636d961a4bc7c398625f4a7af on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 4a6be5c03daf44cd1904ef092f2fe62802f16f31 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 886f2b85bcf0da119e530ee41ce7a35de8d65804 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling b2a7d7880e3d7998eeddb2baa1d1095afc42d794 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 71ad00fc25faa48c39fa69f6949850f6056a1a21 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 844500c1abb8f1820910885bf68fdd4828f80054 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 3c8fcb60cc7091686bee22fae89c509811a191a0 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling eb1db870c5740979a975c017c0ad1b3273d043f1 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling c5a68f7d4f9253a3181966955b9e2de3a77a94e2 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 1ee425a4ac5b70e4388a9e61e5569448eafd4b06 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 4f8c1f27ddb0a8576ec1cb30df7ed77e1b91da13 on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 7816f9d2769d05d4c9414f5bfd1af367e9a4122f on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling 87d3ded63ec266ca426eae8e9692fce5686fabbb on mokhan:madcow into ** on stronglifters:master**.
Changes Unknown when pulling ff4a5a929d9f5d2e72cb9125226d9212f6d56824 on mokhan:madcow into ** on stronglifters:master**.
|
gharchive/pull-request
| 2016-06-07T03:13:51 |
2025-04-01T04:35:58.567628
|
{
"authors": [
"coveralls",
"mokhan"
],
"repo": "stronglifters/surface",
"url": "https://github.com/stronglifters/surface/pull/45",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
353690152
|
Warn when component is tried to attached to a node already having a component
Is your feature request related to a problem? Please describe.
Currently, when a developer creates a component whose selector conflicts with that of an already existing component, the new component is silently not attached. This can make it hard to understand why the component is not working.
Describe the solution you'd like
console.warn in development mode should be displayed.
Is this still a valid issue? I can see a console error being thrown on an already initialised node.
Are you sure?
https://github.com/strudeljs/strudel/blob/dev/src/core/linker.js#L48 - it looks like there is no message here.
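For reference, roughly the kind of guard being asked for (all names here are guesses, not the actual linker internals):

// Sketch: warn in development instead of silently skipping the node.
function link(node, Component) {
  if (node.__strudel_component__) {
    if (process.env.NODE_ENV !== 'production') {
      console.warn('[strudel] Component not attached: node is already bound to another component', node);
    }
    return;
  }
  node.__strudel_component__ = new Component(node);
}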
|
gharchive/issue
| 2018-08-24T08:22:58 |
2025-04-01T04:35:58.632363
|
{
"authors": [
"mateuszluczak",
"xolir"
],
"repo": "strudeljs/strudel",
"url": "https://github.com/strudeljs/strudel/issues/58",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
323216270
|
Make Yeoman generator more usable
The generator is a good way to quickly get up and running with Atlas, but its current behaviour is lacking, to say the least:
It generates JS code which requires compilation with Babel (ES modules, class properties)
It does not add a compilation pipeline to the generated code 😱
It cannot be told to generate vanillaJS™️ code to avoid using Babel in a project
This leads to a project with code that requires several additional steps before it can be used in any meaningful way.
We need to fix this.
[x] Have the generator spit out everything required to just get it running, including Babel config and a makefile
[x] Optionally include ESLint setup
[ ] Optionally include Mocha setup
[ ] Have the generator spit out vanillaJS code if the user provides some flag
Improved the generator to generate fully working project: 1bf749c52ebe5357b5e13c5a5f797b2f1125de0b
Improved the generator to generate ESLint setup: 45f2d56b3a803184137211ada54c03a4483d9219
|
gharchive/issue
| 2018-05-15T13:24:14 |
2025-04-01T04:35:58.635560
|
{
"authors": [
"robertrossmann"
],
"repo": "strvcom/atlas.js",
"url": "https://github.com/strvcom/atlas.js/issues/44",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
429812934
|
Support for mutation-testing-elements
Time for the next step: hosting mutation testing reports. We will use mutation-testing-elements for the JSON schema as well as the HTML report itself.
Suggestion for the new api:
HTTP POST https://dashboard.stryker-mutator.io/api/reports
{
"apiKey": "61213121e3wdasdASD",
"branch": "master",
"mutationScore": 80,
"repositorySlug": "github.com/stryker-mutator/stryker",
"report": { "...": "..." }
}
Apparently Label isn't supported yet by the stryker dashboard. Removing it from my example
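For what it's worth, a client-side sketch against the proposed endpoint (payload fields as in the suggestion above; the env var name is an assumption):

const fetch = require('node-fetch');

async function uploadReport(report) {
  const res = await fetch('https://dashboard.stryker-mutator.io/api/reports', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      apiKey: process.env.STRYKER_DASHBOARD_API_KEY,
      branch: 'master',
      mutationScore: 80,
      repositorySlug: 'github.com/stryker-mutator/stryker',
      report, // JSON following the mutation-testing-elements schema
    }),
  });
  if (!res.ok) throw new Error('Upload failed: ' + res.status);
}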
We decided to do this with blob storage.
Duplicate of #108
|
gharchive/issue
| 2019-04-05T15:27:25 |
2025-04-01T04:35:58.637983
|
{
"authors": [
"nicojs",
"simondel"
],
"repo": "stryker-mutator/stryker-dashboard",
"url": "https://github.com/stryker-mutator/stryker-dashboard/issues/105",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
900666723
|
Playertools broken
We seem to have a bug that only affects Edge where it returns a blank page whilst processing player.json.
@ussjohnjay I think this might be coming from your voyage tools based on the errors?
Hmmm. I haven't been able to reproduce this on Edge (version 90.0.818.66). Tested player tools with about a dozen player.json files I have on hand and everything seems to work as expected. @AlexCPU do you still have the player file that triggered these errors?
@joshurtree Looking at the renderCrew method in your voyagestats component, the CrewPopup component is expecting a crew object formatted from playerData, not voyageData, which is why imageUrlPortrait isn't initially there and may be why the individual skill numbers don't line up exactly. Passing playerData as a prop down from playertools to voyagestats (and slightly rewriting renderCrew to get the needed crew properties from the playerData crew array instead) might fix the underlying issue here, though honestly I don't see any reason why what you've done here doesn't always work.
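Hypothetically, that rewrite could look something like this (renderCrew, CrewPopup and the playerData property names are taken from the comment above; everything else is a guess):

// Sketch only: resolve a voyage crew reference to the richer playerData entry,
// so CrewPopup gets the imageUrlPortrait and skill numbers it expects.
const renderCrew = (voyageCrew, playerData) => {
  const crew = playerData.player.character.crew.find(
    (c) => c.symbol === voyageCrew.symbol
  ) ?? voyageCrew; // fall back if the crew member is missing from playerData
  return <CrewPopup crew={crew} />;
};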
@ussjohnjay there's an example player.json on #243 (which I've closed as a dupe of this) which seems to reliably cause the issue. Can you have a look at that?
This is different from #241 . Please reopen it.
This issue, which affected specific browsers and versions of browsers, should now be resolved.
@mururoa Can you confirm if your issue is still occurring or not?
|
gharchive/issue
| 2021-05-25T11:19:40 |
2025-04-01T04:35:58.652969
|
{
"authors": [
"AlexCPU",
"ineffyble",
"mururoa",
"ussjohnjay"
],
"repo": "stt-datacore/website",
"url": "https://github.com/stt-datacore/website/issues/233",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2295975752
|
🛑 Beefs and Beers is down
In 3240f74, Beefs and Beers (https://beefsandbeers.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Beefs and Beers is back up in f6ebe00 after 13 minutes.
|
gharchive/issue
| 2024-05-14T17:09:27 |
2025-04-01T04:35:58.684091
|
{
"authors": [
"studiovlijmscherp"
],
"repo": "studiovlijmscherp/uptime",
"url": "https://github.com/studiovlijmscherp/uptime/issues/1110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1350083780
|
🛑 Lightronics is down
In 9527314, Lightronics (https://www.lightronics.nl) was down:
HTTP code: 500
Response time: 2420 ms
Resolved: Lightronics is back up in c21de74.
|
gharchive/issue
| 2022-08-24T22:21:27 |
2025-04-01T04:35:58.686434
|
{
"authors": [
"studiovlijmscherp"
],
"repo": "studiovlijmscherp/uptime",
"url": "https://github.com/studiovlijmscherp/uptime/issues/225",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
464911838
|
Update README.md with my information
Update README.md with my information
Thanks for the pull request; I'm merging it now.
|
gharchive/pull-request
| 2019-07-07T03:49:13 |
2025-04-01T04:35:58.701995
|
{
"authors": [
"DJankauskas",
"mzhang00"
],
"repo": "stuyspec/stuyspec.github.io",
"url": "https://github.com/stuyspec/stuyspec.github.io/pull/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
231396099
|
Windoze plz
Hi, I tried to use this on Windows and came across a few issues trying to look at the styled-components section.
I followed the instructions on installation, but had to change the scripts as they were failing; I added the full file path:
"main": "src/index.js",
"scripts": {
"start": "C:/Users/spenc/gitrepos/comparison/examples/node_modules/.bin/webpack-dev-server --config C:/Users/spenc/gitrepos/comparison/examples/webpack.base.babel.js --content-base build"
},
"author": "Max Stoiber",
Then I get the following output (screenshot omitted):
Closing this. Feel free to open your own issue to resolve it; I'm no longer interested in it being resolved.
|
gharchive/issue
| 2017-05-25T16:53:22 |
2025-04-01T04:35:58.704184
|
{
"authors": [
"spences10"
],
"repo": "styled-components/comparison",
"url": "https://github.com/styled-components/comparison/issues/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
434277950
|
Migration path and breaking changes in v4
This is not really an issue itself, but a statement that I want to pass to the kind authors and contributors to styled-components. First of all, I want to thank you for all your hard work and changing the way we style our web projects ❤️
I was really excited when v4 was announced, not only because of its improvement in performance, but also because of the as attribute, which will let us improve accessibility, and a better global injection paradigm, at least in my opinion.
I was though surprised that there wasn't a graceful path of migration for the new global injection API in the package. As a library maintainer, this is a must when trying to drive adoption and allow people to move to a newer version of a project which I maintain and have put time on.
In my opinion, styled-components should have had a release which warned about the deprecated use of injectGlobal in favor of the new v4 form.
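For reference, the per-call-site change between the two APIs is small; the difficulty is purely organizational:

// v3
import { injectGlobal } from 'styled-components';
injectGlobal`
  body { margin: 0; }
`;

// v4
import { createGlobalStyle } from 'styled-components';
const GlobalStyle = createGlobalStyle`
  body { margin: 0; }
`;
// ...and render <GlobalStyle /> once, e.g. at the root of the app.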
Right now in my company we have internal dependencies, as most big companies with cross-functional teams do. This means that we have a design system, an app header and various product screens that are using styled-components. The problem is that the migration is quite impossible for us, or involves endless work hours and lots of production and cache checks, since the release of one of them using v4 will break the others.
How could this be avoided?
Supporting both APIs at some point in the releases. Why? Because then we could safely bump the version of our vendor bundle and dependencies in our services, then progressively migrate each part of our product, and when all were migrated we could actually bump styled-components to v4 without any worries.
Right now, it's really hard to explain to business departments the benefit of moving to a new version given the value it really brings to the product; no matter how much we think about it, it doesn't make sense to us, and we are thinking of staying on v3 or moving away to another CSS-in-JS solution entirely.
React is a good example to look at regarding developer experience, and it's never stopped us from moving to newer versions.
I hope the team can take the feedback in a constructive way for future changes.
Cheers,
Jeremias.
I'm immediately closing the issue to avoid noise in the repository.
I was though surprised that there wasn't a graceful path of migration for the new global injection API in the package. As a library maintainer, this is a must when trying to drive adoption and allow people to move to a newer version of a project which I maintain and have put time on.
Pardon? We posted a very easy to follow migration guide AND codemods for this.
https://www.styled-components.com/docs/faqs#what-do-i-need-to-do-to-migrate-to-v4
https://github.com/styled-components/styled-components-codemods
Hi @probablyup, thanks for joining the conversation. I'm not saying that migrating is hard, not at all actually and the documentation is on point 👍
I'm saying that not having a version of the package where both the old and new API exist together doesn't allow us with interoperational dependencies to migrate since updating one will break the other.
I hear what you are saying @jeremenichelli, but also we did make it as easy as possible (a single command to run codemods) to migrate to v4. It should be a two minute process, and publishing a version with all APIs would have 1. massively increased your bundle size and 2. not made migration any easier, so we did not do it.
Hi @mxstbr, I want to clarify this again in case it wasn't clear before: the team did a great job by first, releasing a better version of the library, then by documenting it and later to provide straightforward mechanisms to migrate a single project to the new version. That's settled up 👍
I was talking about having at some point both global style APIs to allow cross functional teams to gradually migrate to v4. Right now, we have an app shell, multiple screens, and a design system consuming styled components from a vendor bundle (because you don't want to bundle the same dependency multiple times). That said, coordinating this migration across all projects and teams is really hard, without ignoring that a user can have one of these bundles cached but download the new version of the vendor for example (because cache is complicated 🤷♂️) and our product will crash.
So, to give the team an update, we will need to bundle styled components on each bundle, until they are all on v4, and then switch back to rely on our vendor version of the package.
So, we will make the user download styled components three times in some parts of the product, which is still affecting bundle size and probably with a bigger impact.
Sorry for the long message, but I wanted to explain better the reasoning around this, and again it's coming from a good place 🙏 I am an open source, library and internal tooling maintainer too and I work around migrations and DX a lot. I'm happy to jump in a call to explain this even deeper or more clear if the message and my English are confusing.
Let me know if I can help you folks with anything on this, whether is with my experience, or with coding or whatever, the communication channel is always open here. Thanks!
|
gharchive/issue
| 2019-04-17T13:16:58 |
2025-04-01T04:35:58.712897
|
{
"authors": [
"jeremenichelli",
"mxstbr",
"probablyup"
],
"repo": "styled-components/styled-components",
"url": "https://github.com/styled-components/styled-components/issues/2506",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
563012146
|
Styled Components doesn't work after "npm run build" in a local build folder.
Hey!
I'm using create-react-app with styled-components. I'm running index.html from the build folder (using "homepage": "." in the package.json) and it runs with styled-components, but without any styles for the sc classes. I've found solutions with react-app-rewired and styled-components babel macros, but they didn't help me solve it. I don't want to eject my CRA, so please help me with a solution to fix this.
https://styled-components.com/docs/tooling#babel-macro
On Tue, Feb 11, 2020 at 2:01 AM EntaltsevSN notifications@github.com
wrote:
Hey!
I'm using the create-react-app with styled-components. I'm running
index.html from the build folder(using "homepage": "." in the package.json)
and it's running with styled-components but without any styles for sc
classes. I've found solutions with react-app-rewired,
styled-components-babel-macros and it's not help me to solve it. I don't
want to use the eject for my CRA, so help me please with the solution to
fix it.
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
https://github.com/styled-components/styled-components/issues/3014?email_source=notifications&email_token=AAELFVTC6PIJX52WMG4RATTRCJEMBA5CNFSM4KS3KW2KYY3PNVWWK3TUL52HS4DFUVEXG43VMWVGG33NNVSW45C7NFSM4IMO4IZA,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AAELFVWNBDYP5RRYBJHGRLTRCJEMBANCNFSM4KS3KW2A
.
@probablyup I've already tested it and it's not working for me. Can you please clarify, or provide me with an example of code that works after building?
@probablyup Also, I tried to deploy it to Netlify and it still has the same issue.
Sorry but first and foremost please post usage questions on StackOverflow or our Spectrum as listed in our issue template.
Also as you’ve found: Babel Macros work in CRA, so you can use our Babel Macro.
Now I do want to help and would love to resolve this quickly for you, but I have nothing to go on 😔 So you’ve said it isn’t working for you, what do you mean by that? You said you think you need to use our Babel plugin I assume? But you haven’t said why you think so.
I’m running Index.html from the build folder
So you’ve built your CRA app with the build script? Does it work in development when you just start the app though or what happens instead?
The issue template still gives you a good guide on how to write an issue that helps me help you 🙌 maybe check whether you can describe what you’re seeing, what you’ve done and tried, and what you expect?
The issue is fixed. It happened because the createGlobalStyle component was added before the app body. I've placed it at the end of the app component and it works now.
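For anyone hitting the same thing, the difference was roughly this (a sketch of the author's description, not a general rule):

import React from 'react';
import { createGlobalStyle } from 'styled-components';

const GlobalStyle = createGlobalStyle`body { margin: 0; }`;
const Body = () => <main>app content</main>;

// Reportedly broken: GlobalStyle rendered before the app body.
// const App = () => (<><GlobalStyle /><Body /></>);

// Working: GlobalStyle rendered at the end of the app component.
const App = () => (
  <>
    <Body />
    <GlobalStyle />
  </>
);

export default App;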
|
gharchive/issue
| 2020-02-11T07:01:18 |
2025-04-01T04:35:58.721123
|
{
"authors": [
"EntaltsevSN",
"kitten",
"probablyup"
],
"repo": "styled-components/styled-components",
"url": "https://github.com/styled-components/styled-components/issues/3014",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1838480110
|
Add missing types CSSProperties, CSSObject, CSSPseudos and CSSKeyframes
https://github.com/styled-components/styled-components/issues/4062
I found that styled-components has lost some types since v6. There is actually an issue.
I added some of them.
CSSProperties
CSSObject
CSSPseudos
CSSKeyframes
I tried to work on the other types, but they were too complicated, so I am dealing with what I can for now.
I think CSSObject is basically the same as StyledObject, maybe we just need to add what's missing to StyledObject and just alias it.
@probablyup
I certainly thought so, so I fixed it here. What do you think?
https://github.com/styled-components/styled-components/pull/4117/commits/dea5bfd2d60ad05359b992c51c1b13dd3b1487a1
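i.e. something along these lines inside the styled-components source tree (module path and member set are illustrative, not the exact commit):

// Add whatever CSSObject still needs to StyledObject, then keep a plain alias.
import type { StyledObject } from './types'; // assumed internal module

export type CSSObject = StyledObject;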
|
gharchive/pull-request
| 2023-08-07T00:26:34 |
2025-04-01T04:35:58.724442
|
{
"authors": [
"probablyup",
"takurinton"
],
"repo": "styled-components/styled-components",
"url": "https://github.com/styled-components/styled-components/pull/4117",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
490094458
|
Add CSS Columns module
I’d like an almost-clone of the Grid module in Styled System made for CSS Columns properties. column-gap to use the whitespace scale, column-rule (& its derivatives) to use the border scale, etc.
Is your feature request related to a problem? Please describe.
Styling CSS columns with styled-system is very inelegant right now—I want to use my whitespace scale for column-gap (like grid-gap), etc, but have to manually access the theme for those values.
@lachlanjc can you not do that using the props that are already a part of styled-system?
@The-Code-Monkey Styled System doesn’t currently use your theme spacing scale for columnGap, etc. You can use the raw CSS properties regardless, but it’d be great if Styled System applied the theme where it makes sense. Until then we have to manually fetch the spacing scale from the theme for using CSS Columns. (Note: this isn’t CSS Grid, it’s for text columns, if I didn’t make that clear.)
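Until something lands in core, styled-system's own system() helper can wire the scales up in userland; a sketch (the scale choices here are mine):

import { system } from 'styled-system';

// Map CSS Columns props onto theme scales: gaps from `space`,
// rules from `borders`, widths from `sizes`.
const columns = system({
  columnGap: { property: 'columnGap', scale: 'space' },
  columnRule: { property: 'columnRule', scale: 'borders' },
  columnWidth: { property: 'columnWidth', scale: 'sizes' },
  columnCount: true, // passed through untransformed
});

// usage: const Text = styled.div`${columns}`; then <Text columnCount={3} columnGap={4} />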
|
gharchive/issue
| 2019-09-06T02:21:04 |
2025-04-01T04:35:58.726964
|
{
"authors": [
"The-Code-Monkey",
"lachlanjc"
],
"repo": "styled-system/styled-system",
"url": "https://github.com/styled-system/styled-system/issues/763",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
301097637
|
Provide a way to skip some examples
Snapguidist should ideally provide a way to filter which examples are tested, or to disable testing for specific code blocks. For example, users may not want snapguidist to run tests on example blocks in general doc sections.
+100 @cef62 and I talked a lot about this.
PR anyone? :)
One way to go about it would be to give snapguidist's Preview a prop that would disable testing, and have Playground set that on Preview based on a setting passed into Playground, like notest (for consistency with noeditor).
That would require duplicating a bit more of styleguidist's Playground's code in snapguidist, but it shouldn't be too bad.
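Concretely, the wiring might look like this (skipTest and the class shapes are hypothetical; only notest mirrors styleguidist's noeditor convention):

import React, { Component } from 'react';

const runSnapshotTest = (code) => { /* existing snapguidist test logic */ };

// snapguidist's Preview bails out of snapshot testing when asked:
class Preview extends Component {
  componentDidMount() {
    if (this.props.skipTest) return;
    runSnapshotTest(this.props.code);
  }
  render() { return null; }
}

// Playground (duplicated from styleguidist) forwards the per-example modifier:
class Playground extends Component {
  render() {
    const { code, evalInContext, settings } = this.props;
    return <Preview code={code} evalInContext={evalInContext} skipTest={settings.notest} />;
  }
}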
|
gharchive/issue
| 2018-02-28T16:44:36 |
2025-04-01T04:35:58.744519
|
{
"authors": [
"MicheleBertoli",
"jason0x43"
],
"repo": "styleguidist/snapguidist",
"url": "https://github.com/styleguidist/snapguidist/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
225756423
|
Adds Faker::Witcher
Adds characters, locations, and such from the Witcher series, as Faker::Witcher
@llamicron I worked on my own fork on something for the Witcher saga. I consider mine a little bit more complete. I don't know, but if this pull request doesn't get approved I'll put up one of mine.
Either way if this gets accepted I'd be happy to extend the Witcher Faker :D
@JoaoCnh That's fine. I threw this together in a few hours, so my list isn't complete at all. I'm going to close this PR so you can open one.
|
gharchive/pull-request
| 2017-05-02T16:55:05 |
2025-04-01T04:35:58.762915
|
{
"authors": [
"JoaoCnh",
"llamicron"
],
"repo": "stympy/faker",
"url": "https://github.com/stympy/faker/pull/899",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
904216500
|
Auto generate collections index page
Automatically generate index page under the paths specified in routesConfig.ts.
There are a couple of observations:
For some reason, when viewing a post under a collection path, Nextjs will request favicon.png and Ghost will report that it's an invalid slug. This doesn't affect the post at all but it'd be nice to know why Nextjs does that.
A post will always be reachable at the root as well as under the collection path. Not sure why and if it matters at all.
This PR assumes that the Ghost installation does NOT have its own collections setup.
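For the index pages themselves, the usual Next.js static-generation pattern would apply; a sketch, with the routesConfig shape and data helper assumed:

// pages/[collection]/index.tsx (hypothetical file layout)
import { GetStaticPaths, GetStaticProps } from 'next';
import { collections } from '../../routesConfig'; // assumed export, e.g. ['fiction', 'essays']

declare function getPostsByCollection(collection: string): Promise<unknown[]>; // assumed helper

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: collections.map((collection: string) => ({ params: { collection } })),
  fallback: false, // would need revisiting for the isr=true mode
});

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const posts = await getPostsByCollection(params!.collection as string);
  return { props: { posts } };
};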
@chancharles: thanks so much for this implementation. I will have to do the code review within the next week. From an architectural view, you have to make sure the post is only reachable under one path, otherwise SEO is affected badly, so that should be sorted out.
We haven't talked about it yet, but this repo has 2 modes of operation (one with isr=false and one with isr=true). You need to make sure it works for both modes (start with isr=false first).
Don't know about the favicon, but am sure this can be sorted out - that's not so important at the moment.
Just tried with the ISR flag and my changes didn't work with ISR set to true. :(
Unfortunately, I have to switch back to Gatsby due to a more pressing need. If you get around to this issue, I would love to see what your solution was and learn from it. Thanks!
Hi Charles, no problem. Seems like there is some more work that needs to be put into this feature. If you are switching priorities, I will decline this PR and keep it for later reference. I might revisit it, once I get more time for it myself. Or someone else with the same need will take the lead, let's wait and see.
For future reference, I think the following requirements must be fulfilled
better separation between collection and regular endpoints
no duplicate content (articles showing up in a collection should not be reachable elsewhere)
support for ISR
@chancharles: thanks so much for this first implementation, I think it will serve as an inspiration.
|
gharchive/pull-request
| 2021-05-27T21:33:33 |
2025-04-01T04:35:58.767620
|
{
"authors": [
"chancharles",
"styxlab"
],
"repo": "styxlab/next-cms-ghost",
"url": "https://github.com/styxlab/next-cms-ghost/pull/92",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1149824449
|
Support PNG files
The world just doesn't seem to be ready for SVG files :-(...
Work plan:
[x] Make unpersonalized Square PNG files using ImageMagick -- README.md (see the sketch after this list)
[x] Make personalized Square PNG files using ImageMagick -- build-square.sh
[x] Update JSON metadata to reference the PNG files -- load-blockchain.mjs
[x] Deploy and update existing images
[ ] Set Will's Twitter profile picture to a Square
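The rasterization step itself is a one-liner; a hypothetical invocation (real sizes and filenames live in README.md / build-square.sh):

convert -background none square.svg -resize 600x600 square.png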
Progress: a2d22b305174fa5b8d795c024cc6a2b4bd18cf03
Huge progress on this...
|
gharchive/issue
| 2022-02-24T22:26:17 |
2025-04-01T04:35:58.770606
|
{
"authors": [
"fulldecent"
],
"repo": "su-squares/update-script",
"url": "https://github.com/su-squares/update-script/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1706811872
|
UnboundLocalError:
UnboundLocalError: cannot access local variable 'loader' where it is not associated with a value
Git missed the fix. Now pull and it should work.
see
I still have this same error-
Traceback (most recent call last):
File "/Users/yenielfeliciano/Downloads/CASALIOY/ingerir.py", line 42, in
main(sources_directory, cleandb)
File "/Users/yenielfeliciano/Downloads/CASALIOY/ingerir.py", line 30, in main
documents = loader.load()
^^^^^^
UnboundLocalError: cannot access local variable 'loader' where it is not associated with a value
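That traceback is the classic symptom of loader-selection code where no branch matched the file's extension; a sketch of the failing pattern and a guard (the loader classes are assumptions about the LangChain-based ingest script):

from pathlib import Path
from langchain.document_loaders import TextLoader, PDFMinerLoader  # assumed imports

def load_one(path: Path):
    # If no branch assigns `loader`, calling loader.load() afterwards raises
    # exactly the UnboundLocalError shown above.
    loader = None
    if path.suffix == ".txt":
        loader = TextLoader(str(path))
    elif path.suffix == ".pdf":
        loader = PDFMinerLoader(str(path))
    if loader is None:
        return []  # unsupported extension: skip the file instead of crashing
    return loader.load()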
Make sure you're running python3 ingest.py <your_directory>/ y
I tested it on several machines. Also make sure the requirements file has been installed.
@anonimo28 check out latest release.
|
gharchive/issue
| 2023-05-12T02:08:53 |
2025-04-01T04:35:58.778527
|
{
"authors": [
"anonimo28",
"su77ungr"
],
"repo": "su77ungr/CASALIOY",
"url": "https://github.com/su77ungr/CASALIOY/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1523010926
|
Notes with date and time.
A user can write notes using voice and it will be saved with date and time in a text file
Just like a Voice Diary.
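A minimal sketch of the diary write (the speech-to-text step is assumed to come from the assistant's existing recognizer):

from datetime import datetime

def save_note(text: str, path: str = "notes.txt") -> None:
    """Append a voice-dictated note with its date and time."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"[{stamp}] {text}\n")

# e.g. save_note(recognized_speech)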
Kindly assign me this feature for SWOC.
Go on!
|
gharchive/issue
| 2023-01-06T19:01:54 |
2025-04-01T04:35:58.803074
|
{
"authors": [
"programmingninjas",
"subhadip-saha-05"
],
"repo": "subhadip-saha-05/Python-Voice-Assistant",
"url": "https://github.com/subhadip-saha-05/Python-Voice-Assistant/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2254906231
|
🛑 Cubari (Dynasty Scans) is down
In 79eeb5e, Cubari (Dynasty Scans) (https://cubari.moe/read/dynasty/the_engagement_of_the_disgraced_witch_and_the_cross_dressing_princess/?cache_buster=$GITHUB_RUN_NUMBER) was down:
HTTP code: 500
Response time: 8206 ms
Resolved: Cubari (Dynasty Scans) is back up in a40ed03 after 13 minutes.
|
gharchive/issue
| 2024-04-21T05:12:30 |
2025-04-01T04:35:58.806055
|
{
"authors": [
"funkyhippo"
],
"repo": "subject-f/cubari-status-page",
"url": "https://github.com/subject-f/cubari-status-page/issues/2001",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2341987391
|
🛑 Cubari (MangaDex) is down
In 61396f4, Cubari (MangaDex) (https://cubari.moe/read/mangadex/d86cf65b-5f6c-437d-a0af-19a31f94ec55/?cache_buster=$GITHUB_RUN_NUMBER) was down:
HTTP code: 500
Response time: 599 ms
Resolved: Cubari (MangaDex) is back up in 68fea19 after 1 minute.
|
gharchive/issue
| 2024-06-09T02:45:46 |
2025-04-01T04:35:58.808804
|
{
"authors": [
"funkyhippo"
],
"repo": "subject-f/cubari-status-page",
"url": "https://github.com/subject-f/cubari-status-page/issues/2427",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1122656316
|
Should there be a way to adjust the style of inlay hints in the README?
The default style of inlay hints may not be suitable for everyone.
It may be convenient to add custom css styles in LSP-rust-analyzer.sublime-settings to solve this issue.
Inlay hints in default style:
Inlay hints after adjusting the style:
The css code I use:
body {{
padding: 0;
margin: 0 -1.4em -2.2em -1.2em;
border: 0;
font-size: 0.8em;
position: relative;
top: -0.82rem;
line-height: 0.8em;
}}
Related: https://github.com/sublimelsp/LSP-rust-analyzer/issues/19
Is it possible to actually toggle the inlay hints on or off? That's all I really want for now.
Look at the options it provides.
Sorry but can you be specific which option? I looked at LSP -> Settings and also lsp-rust-analyzer -> Settings and did not see anything about hints in either one.
Nevermind, I found the option rust-analyzer.inlayHints.enable
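For the record, the toggle goes under settings in LSP-rust-analyzer.sublime-settings (file structure assumed from other LSP-* packages; the key name is the one found above):

{
  "settings": {
    "rust-analyzer.inlayHints.enable": false
  }
}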
|
gharchive/issue
| 2022-02-03T04:59:34 |
2025-04-01T04:35:58.863881
|
{
"authors": [
"Rapptz",
"rchl",
"thep0y",
"velvia"
],
"repo": "sublimelsp/LSP-rust-analyzer",
"url": "https://github.com/sublimelsp/LSP-rust-analyzer/issues/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
731003053
|
Use Directive when creating bundle
When creating a bundle, a Directive should be generated using the metadata from each Runnable's DotHive, and the Directive file should be included in the bundle
If a version is not provided, one should be sussed out using git tags?
Basic version of this is done, will open new issue for generating a version number
|
gharchive/issue
| 2020-10-28T01:34:55 |
2025-04-01T04:35:58.867289
|
{
"authors": [
"cohix"
],
"repo": "suborbital/subo",
"url": "https://github.com/suborbital/subo/issues/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1283918721
|
allow php 8.x and elasticsearch/elasticsearch 6.x
I've needed elasticsearch 6.8 compatibility, so I've tested it with elasticsearch/elasticsearch:^6.0, and it's also compatible with PHP 8.x.
Thanks
|
gharchive/pull-request
| 2022-06-24T15:59:20 |
2025-04-01T04:35:58.876036
|
{
"authors": [
"Basster",
"subsan"
],
"repo": "subsan/codeception-module-elasticsearch",
"url": "https://github.com/subsan/codeception-module-elasticsearch/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
158197945
|
Browserify bundle contains absolute link to some development files
I'm trying to bundle my CLI app with browserify (13.0.1) to make it easy to distribute, but I'm running into some issues with some (4 in particular) paths within the created bundle which contain absolute links to npm packages on my development environment. If I run the bundle from another machine, it can't find those resources.
Here is what I'm running:
browserify ./src/js/cmd/interface.js > ./dist/test.js --node --dg
// test.js
Line 52285
}).call(this,"/Users/dominickpeluso/Desktop/Shawmut/VDPBuddy1/node_modules/pug/node_modules/pug-filters/node_modules/clean-css/node_modules/source-map/node_modules/amdefine/amdefine.js")
Line 75816
}).call(this,"/Users/dominickpeluso/Desktop/Shawmut/VDPBuddy1/node_modules/yargs/lib")
Line 84494
}).call(this,"/Users/dominickpeluso/Desktop/Shawmut/VDPBuddy1/node_modules/yargs")
Line 85470
}).call(this,"/Users/dominickpeluso/Desktop/Shawmut/VDPBuddy1/src/js")
Is there any way to prevent this?
Hi @dominickp. I'm going to take a guess that this may be related to using the --no-commondir option that is implied by --node / --bare. I suggest you try using the specific options that equate to --node, excluding --no-commondir, and see if that has the desired effect. Please provide a minimal reproduction to keep the issue open, or report back if that works and close the issue if it's resolved.
BTW when posting code like this, please mark it up as a block instead of inline code. See How to report issues with browserify (and also the example section) for more information. Thanks!
Thanks for the reply. I retried with the following but the same lines are causing errors, except now they are relative paths which do not resolve.
$ browserify ./src/js/cmd/interface.js > ./dist/test.js --no-builtins --insert-global-vars="__filename,__dirname" --no-browser-field --dg
$ node ./dist/test.js --version
fs.js:839
return binding.lstat(pathModule._makeLong(path));
^
Error: ENOENT: no such file or directory, lstat '/node_modules'
at Error (native)
at Object.fs.lstatSync (fs.js:839:18)
at Object.realpathSync (fs.js:1439:21)
at /Users/dominickpeluso/Desktop/Shawmut/VDPBuddy1/dist/test.js:29859:38
at Array.map (native)
at Object.<anonymous> (/Users/dominickpeluso/Desktop/Shawmut/VDPBuddy1/dist/test.js:29858:3)
at Object.__dirname.150.fs (/Users/dominickpeluso/Desktop/Shawmut/VDPBuddy1/dist/test.js:29989:4)
at s (/Users/dominickpeluso/Desktop/Shawmut/VDPBuddy1/dist/test.js:1:316)
at /Users/dominickpeluso/Desktop/Shawmut/VDPBuddy1/dist/test.js:1:367
at Object.__dirname.134../shared (/Users/dominickpeluso/Desktop/Shawmut/VDPBuddy1/dist/test.js:26835:16)
To reproduce, please clone this: https://github.com/shawmut/DataPreflight. Then run :
$ browserify ./src/js/cmd/interface.js > ./dist/test.js --no-builtins --insert-global-vars="__filename,__dirname" --no-browser-field --dg
$ node ./dist/test.js --version
This looks like a more serious issue that browserify may not be able to directly help you with. There are some dependencies in your bundle, like UglifyJS, which are using node filesystem methods to load and execute their own internal dependencies. (I don't claim to understand why it's doing this.) The presence of this code prevents having a single bundled file that can be run on its own. It's way outside the scope of browserify to detect and replace it with a suitable equivalent that works in a standalone environment, unfortunately.
If you open the bundled file and search for __filename or __dirname, you can find all the cases where your bundle is trying to use its location in the filesystem. These variables were what produced the absolute paths you were seeing before.
|
gharchive/issue
| 2016-06-02T17:38:09 |
2025-04-01T04:35:58.886949
|
{
"authors": [
"MellowMelon",
"dominickp",
"jmm"
],
"repo": "substack/node-browserify",
"url": "https://github.com/substack/node-browserify/issues/1571",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
466098731
|
Pass options verbatim
Any chance to get something like this:
quote(['echo', { op: '<' }, 'file']) to produce echo < file instead of echo \< file?
Basically a way to mark verbatim parts in the array and don't escape them.
Happy to open a PR!
I think this is a duplicate of #12—I think adding an unparse() method that has this behaviour would be nice, or adding an option to quote so you can do quote([], { escapeOperators: false }).
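A sketch of what the option-based behaviour could look like (the API shape is the proposal above, not the published library, and the escaping branch is simplified):

// Hypothetical: leave operator tokens unescaped when asked.
function quote(args, opts = {}) {
  return args
    .map((arg) => {
      if (arg && typeof arg === 'object' && 'op' in arg) {
        return opts.escapeOperators === false ? arg.op : '\\' + arg.op;
      }
      return /["\s]/.test(String(arg)) ? JSON.stringify(String(arg)) : String(arg);
    })
    .join(' ');
}

quote(['echo', { op: '<' }, 'file'], { escapeOperators: false }); // => 'echo < file'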
|
gharchive/issue
| 2019-07-10T04:53:26 |
2025-04-01T04:35:58.889142
|
{
"authors": [
"goto-bus-stop",
"joscha"
],
"repo": "substack/node-shell-quote",
"url": "https://github.com/substack/node-shell-quote/issues/35",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
53427397
|
1.0.1 - node.js complains about window.URL
Upgrading to the latest 1.0.1 causes the following issue when trying to run.
This is how it's being used, so pretty basic.
var Work = require('webworkify');
var ImageWorker = require('../workers/process-image');
.
.
.
var imageWorker = Work(ImageWorker);
Here's the stack output:
/Users/snypelife/project/node_modules/webworkify/index.js:6
var URL = window.URL || window.webkitURL || window.mozURL || window.msURL;
^
ReferenceError: window is not defined
at Object.<anonymous> (/Users/snypelife/project/node_modules/webworkify/index.js:6:11)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/Users/snypelife/project/app/shared/partials/invoice.jsx:19:12)
at Module._compile (module.js:456:26)
at Object.require.extensions.(anonymous function) [as .jsx] (/Users/snypelife/project/node_modules/node-jsx/index.js:26:12)
Looks like you might wanna bring that window.URL back into the module.exports.
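i.e. defer the lookup until the exported function is actually called in a browser; roughly (simplified: real webworkify takes a browserified module, not a source string):

module.exports = function work (src) {
    // Resolved lazily, so requiring this module in Node no longer touches `window`.
    var URL = window.URL || window.webkitURL || window.mozURL || window.msURL;
    var blob = new Blob([src], { type: 'text/javascript' });
    return new Worker(URL.createObjectURL(blob));
};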
There is no window on the server side; you need to change your design.
But if you have the same code running on both server and client?
Doesn't matter; you get this error from the part that is running on the server.
Patch merged, closing issue.
|
gharchive/issue
| 2015-01-05T18:35:19 |
2025-04-01T04:35:58.891831
|
{
"authors": [
"asvsfs",
"snypelife"
],
"repo": "substack/webworkify",
"url": "https://github.com/substack/webworkify/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
162433237
|
Add support for all simple annotation types
We've got most basic annotations covered. However, we could use some help with the remaining items.
I just activated command + tool for bold, so a bold toggle shows up in the toolbar. See #31
Anyone want to help activate the other simple annotation tools?
[ ] italic
[ ] superscript
[ ] subscript
[ ] monospace
These are just missing the commands + tools. However, there's a whole bunch of unsupported JATS elements that could be enabled, e.g. overline. If someone wants to help us out here, please ping me and I'll get you started.
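Following the bold commit, enabling the rest is presumably just a command + tool registration per annotation; an assumed sketch (class names and the configurator API in scientist may differ):

import { AnnotationCommand, AnnotationTool } from 'substance';

export default function configure(config) {
  ['italic', 'superscript', 'subscript', 'monospace'].forEach(function (type) {
    config.addCommand(type, AnnotationCommand, { nodeType: type });
    config.addTool(type, AnnotationTool);
  });
}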
|
gharchive/issue
| 2016-06-27T11:45:54 |
2025-04-01T04:35:58.894461
|
{
"authors": [
"michael"
],
"repo": "substance/scientist",
"url": "https://github.com/substance/scientist/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
406770563
|
bring back thumbnail of panels?
I would prefer if a thumbnail carousel of some sort would provide preview of what the panels are. In first demo, it was more intuitive. Maybe similar but horizontal (top?)
We tried this but realised there's a number of problems with the thumbnails on top approach:
does not scale (we would need to introduce a complex carousel navigation if there is like more than 3 sub-figures)
images could have different aspect ratio, which makes it hard to position properly
So that's why we went for a space-saving minimal solution, with the only problem that you don't have random access anymore. Is that a requirement?
To address random access we would propose to allow expand the figure, to see all panels at once during editing. We thought it might be good to have that as a default even especially for editing figure packages which don't have a lot text content anyway.
Yes, the scalability of thumbnails is an issue. Maybe it helps to separate 2 issues: 1) provide a visual hint and overview that the figure has multiple panels; 2) provide random access. I think your solution for random access is very good. But before the figure is expanded there is probably the need for a slightly more explicit hint that there are several panels, and how many.
Good suggestion!
@source-data how about this approach?
I'd then leave out the navigation controls completely and allow only collapsed and expanded modes.
Another option for SourceData, to generally have them always expanded?
I personally do not like the option above... I like the current implementation with the count and the arrows. Maybe this could be made a bit more explicit with some wording, like 'panel 1 of x'. And maybe the whole wording + arrows should be wrapped in something like a button or something that makes it more visible?
|
gharchive/issue
| 2019-02-05T12:57:45 |
2025-04-01T04:35:58.898489
|
{
"authors": [
"evabenitoembo",
"michael",
"oliver----",
"source-data"
],
"repo": "substance/texture",
"url": "https://github.com/substance/texture/issues/1139",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1536396295
|
Update monitor-node-metrics.md
fixed indentations
Thanks!
|
gharchive/pull-request
| 2023-01-17T13:38:21 |
2025-04-01T04:35:58.899670
|
{
"authors": [
"kalaninja",
"lisa-parity"
],
"repo": "substrate-developer-hub/substrate-docs",
"url": "https://github.com/substrate-developer-hub/substrate-docs/pull/1739",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
}
|
267543361
|
Left click the Tray on Windows
Currently, to open up the Tray on Windows you have to right-click it before it opens, but on other operating systems like macOS and Ubuntu you just left-click it and the Tray pops up. Suggestion:
Let's make a left click open the Tray on Windows too.
This issue has the Testing label. Where was this implemented? How can I test it?
|
gharchive/issue
| 2017-10-23T05:12:19 |
2025-04-01T04:35:58.922836
|
{
"authors": [
"akylbekesenaliev"
],
"repo": "subutai-io/tray",
"url": "https://github.com/subutai-io/tray/issues/316",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
296090109
|
Failed to load the tool-bar package
Error compiling Less stylesheet: /home/sergio/.atom/packages/tool-bar/styles/tool-bar.less
Line number: 37
variable @button-text-color-selected is undefined
Atom: 1.23.3 x64
Electron: 1.6.15
OS: Debian GNU/Linux
Thrown From: tool-bar package 1.1.5
Stack Trace
Failed to load the tool-bar package
At variable @button-text-color-selected is undefined in /home/sergio/.atom/packages/tool-bar/styles/tool-bar.less:37:13
LessError: variable @button-text-color-selected is undefined
at /packages/tool-bar/styles/tool-bar.less:37:13
Commands
Non-Core Packages
autocomplete-clang 0.11.4
build 0.70.0
busy 0.7.0
busy-signal 1.4.3
file-icons 2.1.16
git-plus 7.10.0
intentions 1.1.5
language-ini 1.19.0
linter 2.2.0
linter-gcc 0.7.1
linter-ui-default 1.6.10
minimap 4.29.7
platformio-ide 2.0.1
platformio-ide-debugger 1.2.5
platformio-ide-terminal 2.8.0
remote-ftp 2.1.4
remote-sync 4.1.8
tool-bar 1.1.5
ok sorry already reported
|
gharchive/issue
| 2018-02-10T10:14:35 |
2025-04-01T04:35:58.928182
|
{
"authors": [
"sergiocntr"
],
"repo": "suda/tool-bar",
"url": "https://github.com/suda/tool-bar/issues/222",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
350093808
|
Bulk Tracks and Spam Moderation
Right now uploaded tracks and comments both go through spam detection.
Both can be un-spammed, but neither can be done in bulk, so probably some sort of moderator view is necessary.
Spam comments currently show up "in bulk" (groups of 10) here:
http://alonetone.com/comments
But tracks are kind of harder to find; they are here:
https://alonetone.com/radio/private/40/1
Thoughts on refining:
[ ] It might be nice to see
[ ] Artists get pretty confused when they upload a track and it doesn't show up. What should we do? The UI needs to show the track to them, I think. But do we make it clear it's spam? Do we allow them to un-spam it if they pass some criteria?
[ ] The normal workflow for dealing with spam is to mark the track as spam (so Akismet gets a chance to learn from it), then review the user's other tracks, mark those as spam if they aren't already, and then delete the user. This workflow should probably be better encoded; a rough sketch follows below...
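A rough sketch of that encoded workflow, assuming a hypothetical moderation API; alonetone itself is a Rails app, so every type and function name below is an illustration rather than its real interface.

```typescript
interface Track {
  id: number;
  userId: number;
  spam: boolean;
}

interface ModerationApi {
  markSpam(trackId: number): Promise<void>; // lets the spam service learn from it
  tracksByUser(userId: number): Promise<Track[]>;
  deleteUser(userId: number): Promise<void>;
}

async function moderateSpamTrack(api: ModerationApi, track: Track): Promise<void> {
  // Flag the offending track first so the spam service can learn from it.
  await api.markSpam(track.id);

  // Review the user's other uploads and flag any that are not marked yet.
  const siblings = await api.tracksByUser(track.userId);
  for (const t of siblings) {
    if (t.id !== track.id && !t.spam) {
      await api.markSpam(t.id);
    }
  }

  // Finally, delete the user.
  await api.deleteUser(track.userId);
}
```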
We have an admin area now that addresses most of this.
|
gharchive/issue
| 2018-08-13T16:06:02 |
2025-04-01T04:35:58.931214
|
{
"authors": [
"sudara"
],
"repo": "sudara/alonetone",
"url": "https://github.com/sudara/alonetone/issues/167",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1444732604
|
Disable Deposit button when globus option is selected
UI flow: https://projects.invisionapp.com/share/YQ132HXFX2GR#/screens/470591811_Globus_Workflow
When the user selects the Globus option, they should not be able to deposit, as they still have to move their files into the Globus endpoint first.
There is also some additional instruction text to be shown above the Deposit and Save as draft buttons, as shown in the UI flow above.
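A hedged sketch of the client-side half in plain TypeScript; the element id, input name, and value below are assumptions about the form markup, not the app's real selectors (happy-heron is a Rails app).

```typescript
function wireGlobusGuard(doc: Document): void {
  const deposit = doc.querySelector<HTMLButtonElement>("#deposit-button");
  const radios = doc.querySelectorAll<HTMLInputElement>("input[name='upload_type']");
  if (!deposit) return;

  const update = (): void => {
    const globusChosen = Array.from(radios).some(
      (r) => r.checked && r.value === "globus"
    );
    // Files must be moved to the Globus endpoint before a deposit is allowed.
    deposit.disabled = globusChosen;
  };

  radios.forEach((r) => r.addEventListener("change", update));
  update(); // reflect whatever is selected on page load
}
```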
I can't follow the UI flows in InVision. @peetucket Can you extract the relevant parts and add them to this ticket?
I updated the description of this ticket, trying to break down the various pieces of the user globus workflow into separate tickets (of which this is just one). The others are linked to from the description and are under the globus epic
|
gharchive/issue
| 2022-11-11T00:11:03 |
2025-04-01T04:35:58.985716
|
{
"authors": [
"justinlittman",
"peetucket"
],
"repo": "sul-dlss/happy-heron",
"url": "https://github.com/sul-dlss/happy-heron/issues/2846",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
725985606
|
Feature flag
Why was this change made?
Fixes #103
This feature flag will allow deployment to production early without risk to existing SDR content. (Think: before the PO has signed off)
Flag defaults to NOT ALLOWING content to be updated - this makes for more work in tests, but avoids possibly messing up in prod ... I can reverse the default if desired.
If you try to create a collection or a work, you will be redirected to the dashboard with the alert message shown:
^^ NOTE that the close button for the alert text is on the LEFT. This is how the access team seems to do it, which is good if the window is wide and/or if the alert text is short. The last commit is the simple stylesheet change I made for this.
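For illustration only, a minimal sketch of the default-off guard described above; the flag name, path, and alert text are all hypothetical, and the real implementation is in Rails, not TypeScript.

```typescript
// The flag defaults to false, so content cannot be created or updated
// unless it is explicitly enabled (e.g. after the PO signs off).
const ALLOW_CONTENT_UPDATES: boolean = false;

function guardContentCreation(
  redirect: (path: string, alert: string) => void
): boolean {
  if (!ALLOW_CONTENT_UPDATES) {
    redirect("/dashboard", "Creating collections and works is not yet available.");
    return false; // the caller should abort the create action
  }
  return true;
}
```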
How was this change tested?
On my laptop; I also wrote some request specs for it.
Which documentation and/or configurations were updated?
@jcoyne: renamed method - thanks for spotting that.
|
gharchive/pull-request
| 2020-10-20T22:23:02 |
2025-04-01T04:35:58.988317
|
{
"authors": [
"ndushay"
],
"repo": "sul-dlss/happy-heron",
"url": "https://github.com/sul-dlss/happy-heron/pull/197",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
257450273
|
[Depends on #24] Build spark streaming
This is a partial solution for #21, assembling the Spark streaming app on the Spark cluster.
This PR depends on PR #24
This Capistrano task assembles the Spark app on the Spark nodes. The first time it runs, it has to install sbt, and the first time sbt runs, it has to download and cache all the dependencies for the build system and the project. Once that has been done, everything is cached on each node and all subsequent assemblies are very quick.
I've done the integration test on this, fixed all the bugs, and it works.
@darrenleeweber needs a rebase now that #24 is merged
I need to trash this PR and replace it with a different branch to do the same thing - the rebase on master did not go smoothly due to minor diffs.
|
gharchive/pull-request
| 2017-09-13T16:53:14 |
2025-04-01T04:35:58.990353
|
{
"authors": [
"atz",
"darrenleeweber"
],
"repo": "sul-dlss/ld4p-data-pipeline",
"url": "https://github.com/sul-dlss/ld4p-data-pipeline/pull/27",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|