id (string, 4-10 chars) | text (string, 4 chars to 2.14M chars) | source (stringclasses, 2 values) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
927437489
|
Would it be possible to make Postgres bigint map to Go int64 by default?
Currently, it maps to int, which may not actually be 64 bits.
A workaround is for me to pass in --go-type bigint=*int64, but this converts all the non-null bigint columns into *int64s as well instead of int64. If I pass in --go-type bigint=int64, then I lose nullability altogether.
EDIT: Is it intended behavior that trying to map a bigint to an int64 means that all bigint columns (nullable and not-null) become int64 instead of nullable ones becoming *int64 and non-null ones becoming int64?
Would it be possible to make Postgres bigint map to Go int64 by default?
That's admittedly more correct, but less ergonomic (assuming the majority of platforms are 64-bit). I happen to like int, so I'm loath to change it now.
Currently, it maps to int, which may not actually be 64 bits.
Gotcha, I'm used to beefy servers. On what platforms does int map to something other than 64 bits?
Is it intended behavior that trying to map a bigint to an int64 means that all bigint columns (nullable and not-null) become int64 instead of nullable ones becoming *int64 and non-null ones becoming int64?
Yes, there are a couple of problems with specifying nullability:
A minor concern is that there's not a great way to support specifying a null and a non-null type with a flag. I could use a comma-delimited flag, e.g. --go-type bigint=*int64,int64
It's not possible to infer whether an output column is nullable in anything other than simple cases. Consider SELECT foo FROM some_black_box_function(): pggen can get the type of foo, but Postgres doesn't report nullability, so pggen can't know whether foo is nullable or not. I did some incredibly basic null checking in pginfer/nullability, but it's hard to cover any advanced use cases. I think I'd need a real SQL parser and a control-flow graph, which sounds hard.
The nullability concerns make sense to me --- thanks for the explanation.
On what platforms does int map to something other than 64 bits?
This would occur on a 32-bit system, for instance.
This ship has sailed; it's too late to switch from int to int64. A workaround is --go-type bigint=*int64. This would be improved by allowing the user to specify the nullable and non-nullable types.
|
gharchive/issue
| 2021-06-22T16:42:19 |
2025-04-01T04:34:43.144260
|
{
"authors": [
"djsavvy",
"jschaf"
],
"repo": "jschaf/pggen",
"url": "https://github.com/jschaf/pggen/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1717634091
|
🛑 API is down
In d346bf4, API ($SERVER_BASE/api/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: API is back up in 51e5a61.
|
gharchive/issue
| 2023-05-19T18:12:56 |
2025-04-01T04:34:43.158613
|
{
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/1527",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2254041461
|
🛑 API is down
In 671a11e, API ($SERVER_BASE/api/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: API is back up in d022757 after 10 minutes.
|
gharchive/issue
| 2024-04-19T21:37:20 |
2025-04-01T04:34:43.160912
|
{
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/6187",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1533687003
|
🛑 Linus-Wordpress is down
In 28dbe44, Linus-Wordpress ($SERVER_BASE/wordpress/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Linus-Wordpress is back up in 25e3e15.
|
gharchive/issue
| 2023-01-15T06:54:45 |
2025-04-01T04:34:43.162977
|
{
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/688",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
410339
|
[1.9.2] saikuro/usage.rb:4: rdoc/usage (LoadError)
It seems that this is gone in 1.9.2...
http://redmine.ruby-lang.org/issues/show/2713
$ rake metrics:all --trace
(in /usr/src/fog)
** Invoke metrics:all (first_time)
** Execute metrics:all
saikuro --output_directory tmp/saikuro --input_directory spec/lib/fog spec/aws lib --cyclo --filter_cyclo 0 --warn_cyclo 5 --error_cyclo 7 --formater text --input_directory spec/lib/fog --input_directory spec/aws --input_directory lib
/home/hedge/.rvm/gems/ruby-1.9.2-p0@fog/gems/devver-Saikuro-1.2.0/lib/saikuro/usage.rb:4:in `require': no such file to load -- rdoc/usage (LoadError)
I have the same problem. How can I solve it?
@fbrunoneves, FWIW, rcov doesn't work properly with Ruby 1.9. The same may be true of the other metrics gems. The best library available for Ruby 1.9 (even linked from the Rcov GitHub) is SimpleCov.
|
gharchive/issue
| 2010-11-10T07:13:53 |
2025-04-01T04:34:43.175873
|
{
"authors": [
"fbrunoneves",
"hedgehog",
"sdhull"
],
"repo": "jscruggs/metric_fu",
"url": "https://github.com/jscruggs/metric_fu/issues/36",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
172604082
|
Beautiful Soup 4 no longer supports Python versions < 2.7
Current releases of Beautiful Soup 4 have dropped support for Python 2.6 and earlier. Since this is a major dependency, suggest changing ofxparse to match.
There is also at least one test ("testThatParseTransactionWithCommaAsDecimalPointAndDotAsSeparator") that uses python 2.7 specific features.
I'm working on getting a continuous build going, should I drop support for Python 2.6?
Yes, let's drop support for Python 2.6
This was addressed in PR #121
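For anyone wondering how such a version floor can be declared, here is a minimal, illustrative setup.py sketch. This is only an assumption for illustration, not necessarily what PR #121 actually changed (the project may use setup.cfg or different pins):
# Illustrative sketch only -- the version and pins below are placeholders.
from setuptools import setup, find_packages

setup(
    name="ofxparse",
    version="0.0.0",              # placeholder
    packages=find_packages(),
    python_requires=">=2.7",      # drop Python 2.6 and earlier
    install_requires=[
        "beautifulsoup4",         # the dependency that prompted the change
        # other dependencies omitted
    ],
)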
|
gharchive/issue
| 2016-08-23T03:24:05 |
2025-04-01T04:34:43.188301
|
{
"authors": [
"jseutter",
"nathangrigg",
"rdsteed"
],
"repo": "jseutter/ofxparse",
"url": "https://github.com/jseutter/ofxparse/issues/111",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1931535792
|
🛑 PlayYourPride is down
In c649426, PlayYourPride (https://playyourpride.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: PlayYourPride is back up in d1e9273 after 16 minutes.
|
gharchive/issue
| 2023-10-07T22:51:18 |
2025-04-01T04:34:43.197936
|
{
"authors": [
"jshwlkr"
],
"repo": "jshwlkr/upptime",
"url": "https://github.com/jshwlkr/upptime/issues/978",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1966757868
|
Feat -> Jobs Landing Page -> Filter component
Mobile Design
Desktop Design
Hey @ekas, can I work on this?
Hey @Neha, just to confirm, the requirements are:
Display the options in each filter select dropdown.
Display the jobs in the container below, based on the option selected in the dropdown.
Hey @ekas, can I work on it?
@Stroller15, thank you. This issue has already been picked up by Sushmita and a PR has already been raised.
|
gharchive/issue
| 2023-10-28T22:26:36 |
2025-04-01T04:34:43.204399
|
{
"authors": [
"Neha",
"Stroller15",
"ekas",
"sushmita2109"
],
"repo": "jslovers/jslovers-official-website",
"url": "https://github.com/jslovers/jslovers-official-website/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
64645055
|
Pagination of to-many relationships is underspecified
There's no clarification or example for this line:
A link object that represents a to-many relationship MAY also contain pagination links, as described below.
I think that means something like this:
// ...
{
"type": "articles",
"id": "1",
"title": "Rails is Omakase",
"links": {
"self": "http://example.com/articles/1",
"comments": {
"self": "http://example.com/articles/1/links/comments",
"resource": "http://example.com/articles/1/comments",
"linkage": [{"type": "comments", "id": "5"}, {"type": "comments", "id": "12"}],
"next": "http://example.com/articles/1/comments?page[cursor]=XYZCAFE"
}
}
}
// ...
but I'm not completely sure that's correct. Is that next link what the spec intended?
Also: should that next link be more like a self or a resource link for the relationship? I'm actually not completely sure I even know the difference between the two kinds of links in the case of a To-Many relationship :-/
Yes, this needs further clarification.
The pagination links must be for the relationship itself, rather than the related resources (and I have a foggy notion of mis-stating this recently in another issue, so apologies if I did).
Therefore, in your example, the next link might point to "http://example.com/articles/1/links/comments?page[cursor]=XYZCAFE" and return the next set of linkage data.
The related resources MUST be returned now in a related link (not resource). See the definition of "related resource URL" here: http://jsonapi.org/format/#document-structure-resource-relationships
Related resources can be fetched by following the related resource URL to get to the first "page", and then subsequent pagination links can be retrieved from the top-level links object in the response.
So to clarify your example would be:
// ...
{
"type": "articles",
"id": "1",
"title": "Rails is Omakase",
"links": {
"self": "http://example.com/articles/1",
"comments": {
"self": "http://example.com/articles/1/links/comments",
"related": "http://example.com/articles/1/comments",
"linkage": [{"type": "comments", "id": "5"}, {"type": "comments", "id": "12"}],
"next": "http://example.com/articles/1/links/comments?page[cursor]=XYZCAFE"
}
}
}
// ...
Sorry about the resource/related confusion in my initial question, I had recently been referring to RC2.
What I'm hoping to do is provide a link where a client can actually download the next page of comments in one step - is it possible to do that with this pagination scheme? I think I can if I am allowed to put the actual comment data in included on the GET response for a relationship url, but from the spec I can't tell if I'm allowed to return anything other than linkages
If the original request specifies include=comments then the server could append that query param to the next link: "http://example.com/articles/1/links/comments?page[cursor]=XYZCAFE&include=comments"
This should return the next page of linkage data as primary data together with the included comments in a compound document.
Thanks, this is enough to set me in the right direction.
@dgeb Wait, what? An /articles/1/links/* style request can have an include parameter? Even though the primary data returned isn't a resource object(s) and doesn't have a links bag? Now I'm beyond confused about include's semantics. We really need to figure this and #497 out before 1.0.
We really need to figure this and #497 out before 1.0.
I agree that the spec should be explicit about the usage of include with relationship data, so I'm fine milestoning it. I'm not sure it's a blocker, but we can certainly discuss and clarify it.
Does this statement:
An endpoint MAY return resources related to the primary data by default
also apply to relationship data requests?
@dgeb Cool. But the confusion isn't just for relationship data...in #497, there seems to be confusion (or, at least I'm confused) about includes fundamental semantics in the typical compound document case. Please do weigh in when you have a sec!
Removing milestone now that #641 clarifies the usage of include with relationship endpoints.
Closing. @jes5199 If you feel this warrants further discussion, please feel free to continue the discussion.
Reopening this. It got marked as closed after we resolved an issue with ?include, even though we never added @dgeb's clarification about the target of the pagination links to the base spec, which is still very vague on this point. I'll try to open a PR soon, unless someone else wants to tackle it.
Stumbled upon this issue when trying to implement nested pagination for django-json-api. Also noticed that the spec is very vague regarding the links.
For example, when I paginate the nested relationship by 3 resources at a time:
"relationships": {
"comments": {
"meta": {
"count": 21,
"pagination": {
"page": 1,
"pages": 1,
"count": 0
}
},
"data": [
{
"type": "Comment",
"id": "874"
},
{
"type": "Comment",
"id": "877"
},
{
"type": "Comment",
"id": "2273"
}
],
"links": {
"self": "http://127.0.0.1:9000/drf/users/1/relationships/comments/",
"related": "http://127.0.0.1:9000/drf/users/1/comments/",
"first": "http://127.0.0.1:9000/drf/users/1/comments/?page=1",
"last": "http://127.0.0.1:9000/drf/users/1/comments/?page=1",
"next": null,
"prev": null
}
},
It is very confusing what the next, prev, etc. links should be, and what the pagination meta should represent: the pagination of the target http://127.0.0.1:9000/drf/users/1/comments/ (which can be paginated differently, btw, i.e. by 20 at a time), or the pagination of the nested structure (3 at a time)?
Is the meta supposed to be there or not for the pagination?
Not paginating the nested structure is not an option, as I can have hundreds of relations.
@dgeb @ethanresnick are there any updates/thoughts/guidance regarding nested pagination of relationships? Because to me this feels like the primary blocker to trying this API format on a live system.
@dgeb I notice this issue is still open and doesn't appear to have been solved. This appears to be a blocking issue in Ember Data that implements JSON API. Just wondering if there is any update to this, been over a year since the last comment. Thanks
another year later ;)
Hmmm .... sorry, didn't realize this was still open.
The above comment remains correct. In reviewing, @ethanresnick is correct that this needs to be clarified further in the spec. Probably in http://jsonapi.org/format/#fetching-pagination or http://jsonapi.org/format/#document-resource-object-relationships (or both).
The key point is that pagination links always paginate the collection referenced by the sibling self link. Thus, any pagination links in a relationship are for paginating the relationship data (NOT the related resources). Any pagination links for a primary data collection are for the primary data.
@oligriffiths what is the blocking issue in Ember Data? Is it still open?
@oligriffiths Thanks for clarifying. I believe that the spec does provide complete support for relationship pagination, and am hopeful that #1251 clarifies this. To sum up:
To paginate relationship data, pagination links (next, prev, etc.) can be included in links in the relationship object.
To paginate related resources only, follow the related link from links in the relationship object. Then pagination links (next, prev, etc.) can be included in the top-level links object of the response.
Please let me know if you think anything is missing.
Thanks @dgeb
Having read over the docs, I don't think it's clear as there are no examples under the pagination section for either the main resource or relationships.
http://jsonapi.org/format/#fetching-pagination
http://jsonapi.org/format/#document-compound-documents
I think it'd be prudent to add some examples so it's clear what this looks like.
It's been 3 years since the last update was made and I still can't find a way to paginate related resources. Do you know of some way to do something like this?
|
gharchive/issue
| 2015-03-26T22:11:16 |
2025-04-01T04:34:43.229663
|
{
"authors": [
"Exelord",
"LukasTsunami",
"dgeb",
"ethanresnick",
"jes5199",
"maryokhin",
"oligriffiths",
"tkellen"
],
"repo": "json-api/json-api",
"url": "https://github.com/json-api/json-api/issues/509",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
780777284
|
Fails to built with numpy 1.20.0, py 3.10 or 3.9.
https://bugzilla.redhat.com/show_bug.cgi?id=1913439
python-jsonpickle fails to build with Python 3.10.0a4 and Python 3.9.1. This seems related to the updated NumPy.
______________________ NumpyTestCase.test_dtype_roundtrip ______________________
self = <numpy_test.NumpyTestCase testMethod=test_dtype_roundtrip>
def test_dtype_roundtrip(self):
if self.should_skip:
return self.skip('numpy is not importable')
dtypes = [
np.int,
np.float,
np.complex,
np.int32,
np.str,
np.object,
np.unicode,
np.dtype('f4,i4,f2,i1'),
np.dtype(('f4', 'i4'), ('f2', 'i1')),
np.dtype('1i4', align=True),
np.dtype('M8[7D]'),
np.dtype(
{
'names': ['f0', 'f1', 'f2'],
'formats': ['<u4', '<u2', '<u2'],
'offsets': [0, 0, 2],
},
align=True,
),
]
if not PY2:
dtypes.extend(
[
np.dtype([('f0', 'i4'), ('f2', 'i1')]),
np.dtype(
[
(
'top',
[
('tiles', ('>f4', (64, 64)), (1,)),
('rtile', '>f4', (64, 36)),
],
(3,),
),
(
'bottom',
[
('bleft', ('>f4', (8, 64)), (1,)),
('bright', '>f4', (8, 36)),
],
),
]
),
]
)
for dtype in dtypes:
self.assertEqual(self.roundtrip(dtype), dtype)
tests/numpy_test.py:95:
/usr/lib64/python3.10/site-packages/numpy/core/_internal.py:61: in _usefields
names, formats, offsets, titles = _makenames_list(adict, align)
adict = {'dtype': "[('f0', '<f4'), ('f1', '<i4'), ('f2', '<f2'), ('f3', 'i1')]", 'py/object': 'numpy.dtype[void]'}
align = 0
def _makenames_list(adict, align):
allfields = []
for fname, obj in adict.items():
n = len(obj)
if not isinstance(obj, tuple) or n not in (2, 3):
raise ValueError("entry not a 2- or 3- tuple")
E ValueError: entry not a 2- or 3- tuple
/usr/lib64/python3.10/site-packages/numpy/core/_internal.py:31: ValueError
It installs fine for me on Ubuntu 20.04 running python 3.9.1 on pyenv with numpy 1.20rc2 and jsonpickle 1.5.0. I can't find a published numpy 1.20.0, only the second release candidate for that.
In summary, cannot reproduce issue on 3.9.1, testing now on 3.10a4
Update: On 3.10a4, I get an error trying to install numpy 1.20rc2, but jsonpickle installs fine.
The failure above is during tests, do those complete for you on 3.9 and 3.10?
Oh I thought you meant the failure happened when installing/building, I'll run the tests now.
Sorry, I started feeling sick before I finished the tests (covid test came back negative luckily). Just started feeling better a few days ago, I'll try and finish the tests today.
No worries! Health always comes first. Glad you're feeling better and your test was negative. :)
Thanks :)
Anyway, the tests on 3.10 pass fine for me when I run make tox multi=1. This tests every version of Python I have with pyenv, so 2.7.18, 3.6.x, 3.7.x, 3.8.6, 3.9.1, and 3.10a4. All of them pass on every version.
Odd. And they pass with numpy 1.20.0?
They pass with numpy 1.20rc2. I can't pip install numpy 1.20.0 apparently, it says the latest release is 1.20rc2.
How should I be running the tests? Maybe that's an issue. Previously we did py.test-3.9, and I've tried make test and tox. tox won't work in koji due to a lack of internet.
Gwyn, https://src.fedoraproject.org/rpms/pyproject-rpm-macros can run tox tests in Koji.
Fascinating, thanks! I've added that in and it now fails thusly:
tox.exception.Error: Error: break infinite loop provisioning within /home/gwyn/fedora/git/python-jsonpickle/jsonpickle-1.5.0/.tox/.tox/bin/python missing ['tox-pip-version>=0.0.6', 'tox-venv']
Do you have tox-pip >= 0.0.6 and tox-venv installed?
So, technically, either plugin won't work in Koji and pyproject-rpm-macros don't support this:
https://github.com/jsonpickle/jsonpickle/blob/f5e8a617b8be7484597716e1d65c0e07f7525936/tox.ini#L7-L9
Latest pip from Fedora will always be used and we use https://github.com/fedora-python/tox-current-env -- it might fight with tox-venv.
As a mitigation, try patching/sedding the lines out, but I don't know if it will work.
Progress. I patched that part out, and am back to the original failure! :)
=================================== FAILURES ===================================
______________________ NumpyTestCase.test_dtype_roundtrip ______________________
self = <numpy_test.NumpyTestCase testMethod=test_dtype_roundtrip>
def test_dtype_roundtrip(self):
if self.should_skip:
return self.skip('numpy is not importable')
dtypes = [
np.int,
np.float,
np.complex,
np.int32,
np.str,
np.object,
np.unicode,
np.dtype('f4,i4,f2,i1'),
np.dtype(('f4', 'i4'), ('f2', 'i1')),
np.dtype('1i4', align=True),
np.dtype('M8[7D]'),
np.dtype(
{
'names': ['f0', 'f1', 'f2'],
'formats': ['<u4', '<u2', '<u2'],
'offsets': [0, 0, 2],
},
align=True,
),
]
if not PY2:
dtypes.extend(
[
np.dtype([('f0', 'i4'), ('f2', 'i1')]),
np.dtype(
[
(
'top',
[
('tiles', ('>f4', (64, 64)), (1,)),
('rtile', '>f4', (64, 36)),
],
(3,),
),
(
'bottom',
[
('bleft', ('>f4', (8, 64)), (1,)),
('bright', '>f4', (8, 36)),
],
),
]
),
]
)
for dtype in dtypes:
self.assertEqual(self.roundtrip(dtype), dtype)
tests/numpy_test.py:95:
/usr/lib64/python3.9/site-packages/numpy/core/_internal.py:61: in _usefields
names, formats, offsets, titles = _makenames_list(adict, align)
adict = {'dtype': "[('f0', '<f4'), ('f1', '<i4'), ('f2', '<f2'), ('f3', 'i1')]", 'py/object': 'numpy.dtype[void]'}
align = 0
def _makenames_list(adict, align):
allfields = []
for fname, obj in adict.items():
n = len(obj)
if not isinstance(obj, tuple) or n not in (2, 3):
raise ValueError("entry not a 2- or 3- tuple")
E ValueError: entry not a 2- or 3- tuple
/usr/lib64/python3.9/site-packages/numpy/core/_internal.py:31: ValueError
pyproject-rpm-macros don't support this
I've noted it in https://bugzilla.redhat.com/show_bug.cgi?id=1922495
NOTE: our setup.cfg has been locked down to numpy 1.19 until this is resolved.
I'm able to reproduce this on python3 / numpy 1.20 using these instructions.
vx is https://github.com/davvid/vx -- a single-file shell script that you can drop in your $PATH. You can activate the virtualenv manually as well and get the result.
Setup
# Create the virtualenv
python3 -m venv env3
# Install dev requirements using vx
vx env3 pip install -r requirements-dev.txt
# Or install requirements by digging into the virtualenv
./env3/bin/pip install -r requirements-dev.txt
Test recipe
# Activate env3 using vx and run the tests
vx env3 make test flags=-x
# Or activate the virtualenv by pointing to the virtualenv's python
make PYTHON=$PWD/env3/bin/python test flags=-x
The solution ended up being pretty simple. The numpy release notes mention https://numpy.org/devdocs/release/1.20.0-notes.html#compatibility-notes and that seems related. It's surprising -- we do register our dtype handler so that it also handles derived classes, but for some reason we now have to enumerate some of the custom dtypes when registering. 😹
Works, thank you so much!
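For context, here is a minimal sketch of the dtype round trip that the failing test exercises, using jsonpickle's bundled numpy extension. This only illustrates the behaviour under discussion; it is not the actual fix that was applied:
import numpy as np
import jsonpickle
import jsonpickle.ext.numpy as jsonpickle_numpy

# Register the numpy handlers shipped with jsonpickle (covers ndarray and dtype).
jsonpickle_numpy.register_handlers()

# A structured dtype like the one in the traceback above.
dtype = np.dtype([('f0', '<f4'), ('f1', '<i4'), ('f2', '<f2'), ('f3', 'i1')])

encoded = jsonpickle.encode(dtype)
decoded = jsonpickle.decode(encoded)
assert decoded == dtype  # with a fixed jsonpickle and numpy >= 1.20, this holds again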
|
gharchive/issue
| 2021-01-06T19:00:39 |
2025-04-01T04:34:43.299204
|
{
"authors": [
"Theelgirl",
"davvid",
"hroncok",
"limburgher"
],
"repo": "jsonpickle/jsonpickle",
"url": "https://github.com/jsonpickle/jsonpickle/issues/336",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
92715445
|
How do I publish themes?
I've created a customized theme for my json resume. How do I publish it so that everyone can use it?
Publish your package on npm using
npm publish
Make sure the package name is of the format jsonresume-theme-yourthemename
Examples: https://github.com/jsonresume/jsonresume-theme-modern, https://github.com/mudassir0909/jsonresume-theme-elegant
|
gharchive/issue
| 2015-07-02T19:27:14 |
2025-04-01T04:34:43.301962
|
{
"authors": [
"aadesh",
"mudassir0909"
],
"repo": "jsonresume/theme-manager",
"url": "https://github.com/jsonresume/theme-manager/issues/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
247078346
|
Remove close event when clicking on overlay
Hello,
Is there a way I could remove the event that makes lity close when clicking on the overlay?
The only way is to set the template option with the data-lity-close attribute removed from the wrap element.
You can do this globally for example via
lity.options('template', '<div class="lity" role="dialog" aria-label="Dialog Window (Press escape to close)" tabindex="-1"><div class="lity-wrap" role="document"><div class="lity-loader" aria-hidden="true">Loading...</div><div class="lity-container"><div class="lity-content"></div><button class="lity-close" type="button" aria-label="Close (Press escape to close)" data-lity-close>×</button></div></div></div>');
Yeah, work #1 !
|
gharchive/issue
| 2017-08-01T13:58:03 |
2025-04-01T04:34:43.304157
|
{
"authors": [
"jsor",
"mikoelsuperbeasto"
],
"repo": "jsor/lity",
"url": "https://github.com/jsor/lity/issues/132",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2344593779
|
Re-justify or remove the discussion of FluentIterable in "the ImmutableList.Builder case"
https://jspecify.dev/docs/spec/#substitution explains why our substitution rules have a special case for null-exclusive type variables. That overall point is still true. But we go on to discuss why we use MINUS_NULL instead of NO_CHANGE, and we justify it by appealing to FluentIterable, whose annotations have subsequently changed with the introduction of @NonNull. We should see if the justification still holds up (perhaps with a FluentIterable-like type that is null-unmarked) or not, and we should update the spec accordingly.
Maybe also explain that "the ImmutableList.Builder case" has an effect not just for unspecified-nullness type arguments but for nullable type arguments?
In one way, the latter case sounds less important: It can arise only in the case of code that would already have other nullness errors if it were all being checked for nullness.
In another way, it sounds more important: We're generally more comfortable with the idea of leaving types unspecified and letting tools choose to refine them than we are with the idea of leaving types nullable and letting tools choose to refine them. (OK, we actually expect plenty of refining of nullable types, too. But we expect it in the case of "I know that this nullable expression is non-null in this case," not "I know that this nullable parameter is non-null in this case.")
Hmm. By "nullable type arguments" you mean cases like this?
@NullMarked
interface Foo<T> {
void foo(T t);
}
@NullMarked
// test:not-nullness-subtype:String?:String!:Foo#T
void test(Foo<@Nullable String> foo) {
foo(null); // not-nullness-subtype or no?
}
I continue to think that a smart tool should know that any expression using a symbol whose declaration had a type argument that wasn't a nullness subtype of its type parameter's bound is suspect, certainly if the expression involves that type variable, without the spec making any extra effort to point that out.
Right, a case like that.
I suspect we agree that the spec and conformance tests should either say that there's a not-nullness-subtype finding there or say that there's not? But I'm not sure what you have in mind when you refer to "making any extra effort." I think we've talked about removing the special case entirely (and a special case is clearly "extra effort" of a sort). Another possibility is to use NO_CHANGE instead of MINUS_NULL (which wouldn't change the effect on the example you just gave but which might feel simpler), which is more what this issue is about.
Maybe we can add this to the list of things to talk more about in a meeting :)
If this issue is just about changing the non-normative discussion in the spec, then we don't have to debate it here; let's talk about the actual change you want to make (like, in a PR).
If this issue is about changing the normative part of the spec that says that when substituting a type argument for a type parameter, then use NO_CHANGE for all the type variables (before any projections) regardless of the operator of the type argument, then the issue title is wrong. 😁
I'll update my last comment to say that if this issue is about changing only the non-normative parts of the spec, then perhaps it shouldn't block the 1.0 release of the spec.
OK, I have convinced myself that it does still make sense for the rule to use MINUS_NULL (as opposed to NO_CHANGE). So there is no normative spec change to be made, only an improvement to the non-normative part. Thus:
This issue doesn't block spec 1.0, as noted above. I'm removing it from the milestone.
I've mailed https://github.com/jspecify/jspecify/pull/657 to update the example. (But now it looks like I'm going to remove the example instead :))
|
gharchive/issue
| 2024-06-10T18:40:31 |
2025-04-01T04:34:43.313076
|
{
"authors": [
"cpovirk",
"netdpb"
],
"repo": "jspecify/jspecify",
"url": "https://github.com/jspecify/jspecify/issues/521",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1409773420
|
Selecting (clicking) individual cells in quick succession selects multiple cells
Clicking on individual cells in quick succession often selects multiple cells (usually 3, but sometimes 2 or 4 cells), usually in a column above or below the cell that was actually clicked. The far left "row number" column also indicates that cells in multiple rows have been selected.
If you edit (start typing new text) when those multiple cells are selected, it appears that the "proper" cell (the last one you clicked) is the one that's edited, which, although correct, is very confusing when multiple cells are selected.
Sometimes, after those multiple cells have been selected, clicking on a cell to the right or left of the selected cells will move the selection to that other column, but keep the same number of cells vertically selected as a block. In these circumstances, it may take several clicks in and out of the cell you actually want to edit before the multi-cell column selection goes away and only that one cell is selected.
We were using version 4.3.2, so I upgraded to 4.6.0, however the issue is still there.
It is very hard to give feedback from your description, and I don't think this is a version problem.
|
gharchive/issue
| 2022-10-14T19:37:52 |
2025-04-01T04:34:43.343455
|
{
"authors": [
"hodeware",
"it-admins-3degreesinc"
],
"repo": "jspreadsheet/ce",
"url": "https://github.com/jspreadsheet/ce/issues/1566",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
462479079
|
Safari Issues
Something with Safari (Mojave) doesn't work. See here.
I lack access to a device running Safari and therefore cannot replicate the issue.
This problem may be related, or it may be the user not understanding the application. I'm leaning towards the latter, but worth investigating if possible.
I took a look on the iPad Pro Safari and OSX Safari (both in developer beta) -- seems to work.
I don’t get anything other than a page with three text areas on iOS’s Safari.
I think the user expected something else or something more when they clicked the link.
Thanks for taking a look! I'm going to close the issue for now.
|
gharchive/issue
| 2019-07-01T02:12:26 |
2025-04-01T04:34:43.379831
|
{
"authors": [
"jstrieb",
"kidGodzilla"
],
"repo": "jstrieb/urlpages",
"url": "https://github.com/jstrieb/urlpages/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
490600414
|
Added build for macOS
Added the build for macOS. It should be able to run by double-clicking neural-mmo-mac.app located in the UnityClient folder.
Sorry I submitted this to the wrong branch.
Nevermind this is the right branch haha :)
Sorry I missed this -- thank you so much for the build!
Great job. Eddie Siman
|
gharchive/pull-request
| 2019-09-07T06:47:44 |
2025-04-01T04:34:43.384215
|
{
"authors": [
"cehinson",
"jsuarez5341",
"monksealseal"
],
"repo": "jsuarez5341/neural-mmo-client",
"url": "https://github.com/jsuarez5341/neural-mmo-client/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1415258089
|
Update calendar.js
Update extractDateFromString with better detection of Excel date formats
Thanks
|
gharchive/pull-request
| 2022-10-19T16:16:57 |
2025-04-01T04:34:43.386262
|
{
"authors": [
"GBonnaire",
"hodeware"
],
"repo": "jsuites/jsuites",
"url": "https://github.com/jsuites/jsuites/pull/126",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
134734386
|
Release checklist setup.py clean
Add the step python setup.py clean --all. Maybe there's more.
Why? Write better bug reports!
|
gharchive/issue
| 2016-02-19T00:03:37 |
2025-04-01T04:34:43.415414
|
{
"authors": [
"jtpereyda"
],
"repo": "jtpereyda/ezoutlet",
"url": "https://github.com/jtpereyda/ezoutlet/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2614257090
|
FormFutura EasyFil ePLA (Cardboard Spool)
Printing temp: 200-220C
Bed temp: 0-60C
Empty spool weight: 165g
Advertised filament weight: 1000g - 1kg
Outside Diameter: 200mm
Height: 52mm
Hole Diameter: 56mm
The empty spool is 10 g heavier than the FormFutura ReForm - rPLA (Cardboard Spool).
Thanks, I will add it. It seems the weight of the cardboard spools generally varies a lot
But actually -- we will need a picture of the empty spool placed on the scale
I didn't take any photo and I trashed the spool afterwards. I will also not buy this filament again as it's pretty much trash.
The only reason I measured it is that I used the 155 g value from the website to calculate my print, but because that value was wrong the print didn't finish. I wanted to help others avoid this 👍🏻
|
gharchive/issue
| 2024-10-25T14:21:01 |
2025-04-01T04:34:43.418298
|
{
"authors": [
"jtrmal",
"rursache"
],
"repo": "jtrmal/spoolz",
"url": "https://github.com/jtrmal/spoolz/issues/22",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
400860400
|
Consensus sequence after nanopolish eventalign for rRNA
Hello Jared,
I am trying to polish a direct RNA ribosomal sequence. I obtained 45S rRNA using MinION and was able to run nanopolish eventalign. How can I produce a consensus sequence from the summary data?
Hi,
As we discussed this over email, I'm going to close this issue. For anyone with a similar question who finds this issue: it's not possible to polish from direct RNA data at this time.
Jared
|
gharchive/issue
| 2019-01-18T19:34:44 |
2025-04-01T04:34:43.431751
|
{
"authors": [
"jts",
"maculatus"
],
"repo": "jts/nanopolish",
"url": "https://github.com/jts/nanopolish/issues/535",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1630005219
|
core: proxy_url support
Resolves https://github.com/mohammed90/caddy-ngrok-listener/issues/6
Merge after https://github.com/mohammed90/caddy-ngrok-listener/pull/22
Current dependencies on/for this PR:
master
PR #6
PR #5
PR #7
PR #8
PR #9
PR #10 👈
This comment was auto-generated by Graphite.
|
gharchive/pull-request
| 2023-03-17T22:56:40 |
2025-04-01T04:34:43.437861
|
{
"authors": [
"jtszalay"
],
"repo": "jtszalay/caddy-ngrok-listener",
"url": "https://github.com/jtszalay/caddy-ngrok-listener/pull/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
613059797
|
Add "Contact Us" Page
I created a page for users to be able to contact us. I added a reference to this page to the header.
The contact form requires the user to input his name, reason for contacting (with some options, just to give us more context), and the message.
I set it up so that an email is sent to matchmocker@gmail.com with all of the information, along with the user's email, so we can receive the information and be able to get back to him quickly and easily.
Upon submission of a contact message, an alert thanks the user and notifies him that we will be in touch soon. We then redirect the user to his home dashboard.
LGTM!
|
gharchive/pull-request
| 2020-05-06T05:27:56 |
2025-04-01T04:34:43.440827
|
{
"authors": [
"alexschles",
"juan-t0rres"
],
"repo": "juan-t0rres/mock-interview-app",
"url": "https://github.com/juan-t0rres/mock-interview-app/pull/59",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
891918986
|
Basic question
Hi,
I was looking for a replacement for https://docs.ethers.io/v5/concepts/security/#security--pbkdf because it is too slow for react-native. My knowledge of cryptography is limited. In your examples I can see how to encrypt, but do you have an example of how to decrypt and get the original values back?
Regards,
Mariano
Hey @elranu, how did you get it working in react native?
I get the error: Can't find variable: crypto.
Are you using expo or react-native init?
@elranu, a kdf is by design a deterministic one-way function, there is no "decrypt". The example you've found is encrypting with a key derived from a password. Decrypting would involve deriving the same key again and then decrypting with it.
@loekTheDreamer, I'm sorry I haven't tested it on react native and thus I can't help. Current implementation uses node's crypto in Node.js and subtle crypto in browsers. Not sure which one in react-native but you likely need a wrapper library for them to work. I'm currently not planning to add react-native support, although I may consider in the future (and I'm obviously open to contributions). In any case, consider creating a new issue for explicitly requesting react-native support.
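To illustrate the maintainer's point in a language-agnostic way -- this sketch uses Python's standard-library hashlib.scrypt rather than the JavaScript library discussed here, purely as an illustration -- a KDF is deterministic, so "decrypting" means re-deriving the same key and handing it to the cipher:
import hashlib, os

password = b"correct horse battery staple"
salt = os.urandom(16)  # store the salt next to the ciphertext at encryption time

# Same password + same salt + same cost parameters => same derived key.
key_for_encrypt = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
key_for_decrypt = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
assert key_for_encrypt == key_for_decrypt

# "Decrypting" therefore means: re-derive the key from the stored salt and the
# user's password, then pass that key to whatever symmetric cipher (e.g. AES-GCM)
# was used for encryption. The KDF itself has no decrypt operation.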
|
gharchive/issue
| 2021-05-14T13:22:12 |
2025-04-01T04:34:43.444624
|
{
"authors": [
"elranu",
"juanelas",
"loekTheDreamer"
],
"repo": "juanelas/scrypt-pbkdf",
"url": "https://github.com/juanelas/scrypt-pbkdf/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
189405483
|
Declaring 3 things inside stmtMain
@juanpale can you try this? I don't understand why it doesn't work for me.
{
function int fun1(int a) skip;
int a = 1;
function int fun1(int a) skip;
}
Syntax error at [0,0]!Couldn't repair and continue parse
java.lang.Exception: Can't recover from previous error(s)java.lang.Exception: Can't recover from previous error(s)
I suspect those rules are stepping on each other.
parser
| repeat$stmtMain:$1 stmtMain:$2
{: List<Stmt> $0;
$1.add($2); $0 = $1;
RESULT = $0; :}
| repeat$stmtMain:$1 NEW_LINE stmtMain:$2
{: List<Stmt> $0;
$1.add($2); $0 = $1;
RESULT = $0; :}
lexer
\n[ \t\r\f\v]*\n[ \t\r\n\f\v]*\n
{ CheckStateLinter.addError1(yyline, yycolumn); }
[ \t\r\f\v]*\n[ \t\r\f\v]*
{ return new Symbol(NEW_LINE, yyline, yycolumn, yytext()); }
ok
|
gharchive/issue
| 2016-11-15T14:26:17 |
2025-04-01T04:34:43.451226
|
{
"authors": [
"juanpale",
"nandotorterolo"
],
"repo": "juanpale/ObligatorioWhile",
"url": "https://github.com/juanpale/ObligatorioWhile/issues/26",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
424200258
|
[Do not merge] Merge all ACI commits from yangl900/acs-engine
What this PR does / why we need it:
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #
Special notes for your reviewer:
Release note:
All changes LGTM; please double-check there are no ACI commits missing from the rebase/merge.
|
gharchive/pull-request
| 2019-03-22T13:06:19 |
2025-04-01T04:34:43.458047
|
{
"authors": [
"juhacket",
"wenwu449"
],
"repo": "juhacket/acs-engine",
"url": "https://github.com/juhacket/acs-engine/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
388313613
|
stuck cdk deployment: kubernetes-master/0 Waiting for 5 kube-system pods to start
I came across the following issue while deploying this bundle on AWS:
http://paste.ubuntu.com/p/wx23SYdkHw/
The deployment gets stuck in this state:
http://paste.ubuntu.com/p/ZJJq6CFWkK/
$ kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default default-http-backend-nc8tn 0/1 Pending 0 22m
kube-system heapster-v1.6.0-beta.1-7b98d6d498-6kgbq 0/4 Pending 0 22m
kube-system kube-dns-596fbb8fbd-9d7lb 0/3 Pending 0 22m
kube-system kubernetes-dashboard-67d4c89764-qv9xh 0/1 Pending 0 22m
kube-system metrics-server-v0.3.1-67bb5c8d7-t6brw 0/2 Pending 0 22m
kube-system monitoring-influxdb-grafana-v4-65cc9bb8c8-hn86p 0/2 Pending 0 22m
$ kubectl get no --all-namespaces
No resources found.
results-2018-12-06-16-38-39.tar.gz
It looks like you're missing a relation between aws-integrator and kubernetes-worker. Can you try adding that?
From kubelet's logs, I can see that it successfully registered, but then was deleted:
I1206 15:32:00.598481 15620 kubelet_node_status.go:73] Successfully registered node ip-172-31-38-197
I1206 15:32:01.663914 15620 watch.go:113] kubelet config controller: Node was deleted
E1206 15:32:01.678540 15620 kubelet.go:2236] node "ip-172-31-38-197" not found
E1206 15:32:01.778811 15620 kubelet.go:2236] node "ip-172-31-38-197" not found
...
This happens when kube-controller-manager is configured with cloud-provider=aws, but kubelet is not. The controller manager doesn't recognize the node, so it deletes it. I'm pretty sure that's what happened here, and that the missing kubernetes-worker relation would lead to this.
After a reboot of all the VMs the issue was gone.
Huh, that's unexpected. Cool?!
And yet you are correct, a relation is missing between aws-integrator and kubernetes-worker
How did you deploy aws-integrator? Was this a conjure-up deployment, or did you add it manually? Just trying to figure out if there's anything we need to fix here.
juju deploy ./bundle.yaml
# http://paste.ubuntu.com/p/wx23SYdkHw/
juju deploy cs:~containers/aws-integrator
juju add-relation 'aws-integrator:aws' 'kubernetes-master:aws'
# Missed this one: juju add-relation 'aws-integrator:aws' 'kubernetes-worker:aws'
juju trust aws-integrator
I suppose at this point the question is how come it doesn't complain any more, given I still have not added the missing relation ;)
|
gharchive/issue
| 2018-12-06T17:16:17 |
2025-04-01T04:34:43.468870
|
{
"authors": [
"Cynerva",
"iatrou"
],
"repo": "juju-solutions/bundle-canonical-kubernetes",
"url": "https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/704",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2300313985
|
Change sign in market clearance
Found an example where the orientation of the market clearance conditions matters. Fixed by changing the sign.
The newly added test "test_orientation.jl" is failing with a segmentation error, but only on GitHub (i.e. it runs fine on my local machine). I've isolated the error to the JuMP.optimize!(jm) call in the solve!(mules_mpsge) statement.
|
gharchive/pull-request
| 2024-05-16T12:46:36 |
2025-04-01T04:34:43.488437
|
{
"authors": [
"mitchphillipson"
],
"repo": "julia-mpsge/MPSGE_MP.jl",
"url": "https://github.com/julia-mpsge/MPSGE_MP.jl/pull/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
210627684
|
How do you get the AlertId?
Hi, how do you get the alertId in order to pass it to the close method? The documentation says "sAlert methods will return the already created alertId.", but I'm not sure how to get this id and pass it to the close method.
Never mind. I guess it wouldn't make sense to have a close button that selects one alert and closes it other than the X button which is located on each alert.
If you need current alertId you can do something like:
(...)
const alertId = Alert.warning('Test message 1', {...optionsHere});
(...)
Alert.close(alertId);
Thanks, I tried that and it didn't work, but that was because I was doing it inside a return statement. I moved it to the onClick that calls the function and it worked.
Hi, I'll close the issue for now, but feel free to comment here if there is something to discuss.
|
gharchive/issue
| 2017-02-27T22:30:21 |
2025-04-01T04:34:43.537587
|
{
"authors": [
"juliancwirko",
"pdfabbro"
],
"repo": "juliancwirko/react-s-alert",
"url": "https://github.com/juliancwirko/react-s-alert/issues/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
782517130
|
Feature - replace hostname completion with fzf
[x] I have read through the manual page (man fzf)
[x] I have the latest version of fzf
[x] I have searched through the existing issues
Info
OS
[x] Linux
[ ] Mac OS X
[ ] Windows
[ ] Etc.
Shell
[x] bash
[ ] zsh
[ ] fish
Problem / Steps to reproduce
Proof of concept for integrating "fzf" into bash completions (known_hosts in this case)
The following script replaces the declared "_known_hosts_real()" function with one that uses "fzf", for all commands that do hostname completion (like "ssh").
It came about as I have my own source of "hostnames" of accounts I use that is more accurate than the existing method (extracting them from "~/.ssh/known_hosts" sources). But I did not want to modify the system-installed version, which gets updated from packages. My 'source' of hostnames is not included in the script below, but you can replace the call to the original function with anything you want.
Currently I source this file from my ".bashrc" to replace the "_known_hosts_real" function. Now when I type "ssh {some partial hostname}" and hit 'TAB' to do hostname completion (as normal), "fzf" will pop up the hostname completions, and it will abort if you press 'ESC'.
No need to type '**' or any other weird workaround, as "fzf" is properly integrated, at least for "known hosts".
#!/bin/echo "Bash source Script!"
#
# compgen_known_hosts_fzf.bash
#
# Replace "_known_hosts_real()" function with one that uses "fzf"
# that allows the user to interactively select the host they want to use.
#
# To activate source this script from your ".bashrc"
#
# Anthony Thysen <Anthony.Thyssen@gmail.com>, 9 January 2021
#
# Rename the current "_known_hosts_real" function to "_known_hosts_original".
# The original is declared in "/usr/share/bash-completion/bash_completion",
# and is used by many completions, such as ssh, rsych, etc.
if type -t _known_hosts_original >/dev/null; then
: # function has already been redefined
else
eval "$( echo "_known_hosts_original()";
declare -f _known_hosts_real | tail -n +2 )"
# for now, point the real name back at the renamed original
_known_hosts_real() { _known_hosts_original "$@"; }
fi
# Define a replacement known hosts function that calls "fzf"
_known_hosts_fzf() {
local suffix prefix
local OPTIND=1
local user
#printf >&2 "_known_hosts_anthony "; printf >&2 '%s ' "$@"; echo >&2
while getopts "ac46F:p:" flag "$@"; do
case $flag in
a) ;; # aliases from ssh config -- ignore
c) suffix=':' ;;
F) configfile=$OPTARG ;;
p) prefix=$OPTARG ;;
4) ;; # ip4 addresses - ignore
6) ;; # ip6 addresses - ignore
esac
done
[[ $# -lt $OPTIND ]] &&
echo "error: $FUNCNAME: missing mandatory argument CWORD"
cur=${!OPTIND}; let "OPTIND += 1"
[[ $# -ge $OPTIND ]] &&
echo "error: $FUNCNAME("$@"): unprocessed arguments:"\
$(while [[ $# -ge $OPTIND ]]; do printf '%s\n' ${!OPTIND}; shift; done)
# Extract user name from current word
[[ $cur == *@* ]] && user=${cur%@*}@ && cur=${cur#*@}
# Get the original collection of known hosts (no limitations)
# This can be replaced with whatever source of hostnames you like!
_known_hosts_original -- ''
# Add prefix, user and suffix
for (( i=0; i < ${#COMPREPLY[@]}; i++ )); do
COMPREPLY[i]="$prefix$user${COMPREPLY[i]}$suffix"
done
# Now use "fzf" to do the completion
# If user aborts (ESC) then print the current query string
COMPREPLY=( $(
printf '%s\n' "${COMPREPLY[@]}" |
FZF_DEFAULT_OPTS='--bind esc:print-query+abort' \
fzf -q "$cur" -m -i -1 -0
) )
printf '\e[5n'
}
# To redraw line after fzf closes (EG: handle the "printf '\e[5n'" above)
bind '"\e[0n": redraw-current-line'
# Now replace the function with our "fzf" version (if available)...
if type -t fzf >/dev/null; then
_known_hosts_real() { _known_hosts_fzf "$@"; }
fi
It came about as I have my own source of "hostnames" of accounts I use that is more accurate than the existing method (extracting them from "~/.ssh/known_hosts" sources).
This sounds cool.
Currently I source this file from my ".bashrc" to replace the "_known_hosts_real" function. Now when I type "ssh {some partial hostname}" and hit 'TAB' to do hostname completion (as normal), "fzf" will pop up the hostname completions, and it will abort if you press 'ESC'.
No need to type '**' or any other weird workaround, as "fzf" is properly integrated, at least for "known hosts".
You can probably accomplish this by sourcing this
_fzf_complete_host_notrigger() { FZF_COMPLETION_TRIGGER='' _fzf_host_completion; }
somewhere, as described in https://github.com/junegunn/fzf/wiki/Examples-(completion)#bash-custom-trigger-less-completion
Nice... I didn't know about that, though that does not allow you to use a different source of hostnames without replacing the original function, anyway!
Nice, yet I personally would rather have something like this be optional.
I mean, some people may want to have their _known_hosts_real() replaced, but others may rather want to keep it and have the fzf host completion separate from it (and use both depending on circumstances).
Rather than use the function _known_hosts_fzf() in the above script to replace the _known_hosts_real() function, you can use it to set the completion for specific commands. This is actually what I am currently doing now.
It could be better, as it currently does not handle completion of command options.
BUT it does what I need. Good enough.
#!/bin/echo "Bash source Script!"
#
# compgen_find_acct.bash
#
# Gather a list of personal hostnames I have accounts on, for completion
# of wrappered commands that make use of these accounts.
#
# This also uses "fzf" to allow the user to interactively select the host they
# want to use.
#
# Anthony Thysen <Anthony.Thyssen@gmail.com>, 9 January 2021
#
###
#
# FUTURE: more specific "command completion" including options.
#
_find_acct() {
local cur prev words cword
_init_completion -n : || return # do basic var/file completions
# Handle options as per ssh
if [[ $cur == -* || $prev == -* ]]; then
type -t _ssh >/dev/null ||
source /usr/share/bash-completion/completions/ssh
_ssh 'ssh' "$cur" 'ssh'
return
fi
local user= hosts # user prefix and hostlist
# Extract user name from current cur
[[ $cur == *@* ]] && user=${cur%@*}@ && cur=${cur#*@}
# My sources of hosts....
# Start with the host aliases I have defined
hosts="$( sed 's/#.*//;/^DOMAIN=/d;s/^[^ ]* *//;/^$/d;s/ */\n/g;' \
~/lib/host_aliases )"
# now add my account list, or otherwise the SSH known hosts
if [ -f ~/misc/dist_accts ]; then
# Use my distribution accounts list (preferred)
hosts="$hosts"$'\n'$(awk '/^\]/{print $3}' ~/misc/dist_accts)
elif [ -f ~/.ssh/known_hosts ]; then
# Use the ssh known_hosts from this machine (first 'short' name only)
hosts="$hosts"$'\n'$(
sed '/^#/d; s/[[, :].*//; s/\.$//; s/_.*//;' ~/.ssh/known_hosts*
)
fi
# selection using "fzf" against available hostnames (tab to select multiple)
# If user aborts (ESC) then abort and return to original query string
if type -t fzf >/dev/null; then
COMPREPLY=( $(
FZF_DEFAULT_OPTS='--bind esc:print-query+abort' \
fzf -q "$cur" -m -i -1 -0 <<<"$hosts"
) )
printf '\e[5n'
return
fi
# extract matching hosts using older, simpler methods
# Normal Method - match start only
#[ "X$COMPREPLY" = "X" ] &&
# COMPREPLY=( $( echo "$hosts" | grep -v '-' | egrep "^$cur" ) )
# try word start for 'cur' in hostnames ('-' separated)
[ "X$COMPREPLY" = "X" ] &&
COMPREPLY=( $( echo "$hosts" | egrep "\<$cur" ) )
# try any match
[ "X$COMPREPLY" = "X" ] &&
COMPREPLY=( $( echo "$hosts" | egrep "$cur" ) )
[ $# -ne 3 ] && echo "${COMPREPLY[@]}" # direct call output
# COMPREPLY=( $(compgen -f $cur) )
}
# set the commands that will use this completion.
complete -o nospace -F _find_acct r
complete -o nospace -F _find_acct v
complete -o nospace -F _find_acct cssh
complete -o nospace -F _find_acct do_dist
complete -o nospace -F _find_acct hostinfo
|
gharchive/issue
| 2021-01-09T05:25:40 |
2025-04-01T04:34:43.606306
|
{
"authors": [
"HsuanTingLu",
"antofthy",
"calestyo"
],
"repo": "junegunn/fzf",
"url": "https://github.com/junegunn/fzf/issues/2316",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
335071281
|
pluginstall not an editor command
I have done everything as in the description.
I have the call plug#begin('~/.vim/autoload/plug.vim') line.
I have plug.vim in autoload.
When I execute call plug#begin('~/.vim/autoload/plug.vim') on the gvim command line,
I get the error: git executable not found.
Fixed: installed git, and changed the command line to call plug#begin('~/.vim/autoload').
No, you should not install plugins inside the autoload directory. Please reread the instructions.
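For reference, a minimal working setup would look roughly like this (the plugin name is just an example): plug.vim itself lives in ~/.vim/autoload, while plug#begin() should point at a separate plugin directory:
" ~/.vimrc (sketch)
call plug#begin('~/.vim/plugged')   " plugins are installed here, NOT in autoload
Plug 'junegunn/fzf'                 " example plugin
call plug#end()
After restarting vim (or running :source ~/.vimrc), :PlugInstall is available as an editor command.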
|
gharchive/issue
| 2018-06-23T05:41:13 |
2025-04-01T04:34:43.609044
|
{
"authors": [
"junegunn",
"rihannarickeminem"
],
"repo": "junegunn/vim-plug",
"url": "https://github.com/junegunn/vim-plug/issues/771",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1653194200
|
🛑 kawaki is down
In e42f802, kawaki (https://www.pegasusknight.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: kawaki is back up in 7280043.
|
gharchive/issue
| 2023-04-04T05:22:12 |
2025-04-01T04:34:43.632691
|
{
"authors": [
"junjanjon"
],
"repo": "junjanjon/WatchSomeSites",
"url": "https://github.com/junjanjon/WatchSomeSites/issues/183",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
713666932
|
issue9-federated-sparql-queries
Added remote querying capability using federated queries and SERVICE
Added a notebook example that includes variations of remote service queries and an example widget
#9
I don't think we want .DS_Store in source control. Let's add it to the .gitignore.
For the federated query example, can we show an example of querying data across the local (in-memory) graph and a remote graph?
Something like below (even though it doesn't actually work). Trying to illustrate the concept.
from rdflib import namespace, Graph, Literal, URIRef
WD = namespace.Namespace("https://www.wikidata.org/wiki/")
graph = Graph()
graph.add((WD.Q28792126, WD.example, Literal("Example")))
query_str = """
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX p: <http://www.wikidata.org/prop/>
PREFIX ps: <http://www.wikidata.org/prop/statement/>
PREFIX pq: <http://www.wikidata.org/prop/qualifier/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX bd: <http://www.bigdata.com/rdf#>
SELECT ?p ?o
WHERE {
wd:Q28792126 ?p ?o .
service <https://query.wikidata.org/>
{
#Cats
SELECT ?p ?o
WHERE
{
BIND(wikibase:label AS ?p)
wd:Q28792126 wdt:P31 wd:Q146.
service wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
}
}
"""
res = graph.query(query_str)
list(res)
Let's remove the remote query widget completely. For now we will just require the users of the regular query widget to use nested select with service calls. In the future, I think we could add a remote query widget that wraps all queries, but we don't want to overcomplicate at the moment.
Can we add a little more narrative around the notebook code? It would be valuable to explain why these queries are important (e.g. for data integration and distributed knowledge).
Let's break up your examples/RemoteQuery_Example.ipynb notebook into two notebooks: RemoteQuery_Example and FederatedQuery_Example and add both to the index under Query Examples as Remote query example and Federated query example.
We will need some basic tests to ensure that the code you added is working as expected. I'm thinking at least:
test for the service wrapper
test for no service wrapper (expect fail)
test for remote query (check results)
test for federated query (check results)
These can be pretty simple for now.
Good work so far. Once you get done with these changes, please update your description with a more comprehensive discussion of the work performed and how to validate the PR (e.g. doit all and run xyz notebook).
Binder: https://mybinder.org/v2/gh/rhythmsyed/ipyradiant/remotequery?urlpath=lab/tree/index.ipynb
To run:
doit lab
run RemoteQuery_Example.ipynb
run FederatedQuery_Example.ipynb
Changes:
Added logging for RDFLib service patch, can be turned off from user input ('CRITICAL')
Added example of combining local graph + remote graph via federated service query
Added a broken example of nested service calls that is not supported in RDFLib
Added more narrative to RemoteQuery and FederatedQuery notebooks
Added pytests for service_patch and query examples
Added utils_helper in /query which fixed circular import issues.
service_patch can be called by ipyradiant.service_patch_rdflib, passing in a query string
logger level can be called by ipyradiant.set_logger_level, passing in a logger level such as 'DEBUG' or 'CRITICAL'
The FederatedQuery_Example.ipynb is crashing the kernel on the 12th cell:
res = graph.query(query_str)
list(res)
Updated November 20th:
To run:
doit lab / jupyter lab
run RemoteQuery_Example.ipynb
run FederatedQuery_Example.ipynb
To run unit tests:
pytest examples/tests
Changes:
Removed remote_query module
Added persistent tests for external APIs
Reformatted unit tests to use pytest
Added testing as a step in git CI
Reformatted utils_helper to utils
Removed set_logger_level function
In the remotequery notebook, added an example to show the query widget using service patch
Restructured notebooks to use working endpoints for sparql queries and added more narrative
In the remotequery notebook, removed redundant examples
In the remotequery notebook, fixed nested query example and warning from other examples
In the federated query notebook, fixed the in-memory and remote graph example
Added another federated example that shows interesting results
Added two remote endpoint querying example
Last couple of comments:
examples/RemoteQuery_Example.ipynb
[ ] Sort the package imports
[ ] This patch is automatically removed for release>5.0.0: it's not really removed.
[ ] Also, logger is not printing the info for me in the notebook?
[ ] Once rdflib is updated, the INFO message is removed by setting the logger_level to WARNING or higher. Let's remove this cell. I don't think this is the "right" way to correct the warning.
[ ] Your Widget Example should probably prepopulate a service statement
ipyradiant/query/utils.py
[ ] There is probably a better way to deal with deprecating the service patch with rdflib version, but for now, maybe we can be a bit more explicit.
def service_patch_rdflib(query_str):
# check for rdflib version, if <=5.0.0 throw warning
version = rdflib.__version__
major_version_num = int(version.split(".")[0])
# if version > 5, warn users ipyradiant needs to be updated
if major_version_num <= 5 and "SERVICE" in query_str:
query_str = query_str.replace("SERVICE", "service")
logger.info(
"SERVICE found in query. RDFlib currently only supports `service`. "
"To be fixed in the next release>5.0.0"
)
elif major_version_num > 5:
logger.info("Service patch for ipyradiant should be removed.")
return query_str
examples/FederatedQuery_Example.ipynb
[ ] sort imports
[ ] Federated queries allows the capability to gather knowledge from distributed sources into one aggregated knowledge graph. One small correction, they are not (necessarily) aggregated into a knowledge graph, but they are aggregated into a single query result.
[ ] Here we create a small triple as an in-memory graph -> Here we create a single triple that will be used as...
[ ] In this example, we are supplementing our local graph with data -> In this next example...
[ ] I think your Query Two Remote Endpoints should be Broken Example: Query Two.... to drive home the point that it doesn't work. Also I think the query should be:
query_str = """
SELECT ?s
WHERE {
{
service <https://query.wikidata.org/sparql>
{
SELECT DISTINCT ?s
WHERE {?s ?p ?o}
LIMIT 4
}
}
UNION
{
service <https://query.wikidata.org/sparql>
{
SELECT DISTINCT ?s
WHERE {?s ?p ?o}
LIMIT 4
OFFSET 4
}
}
}
"""
|
gharchive/pull-request
| 2020-10-02T14:31:00 |
2025-04-01T04:34:43.658009
|
{
"authors": [
"RhythmSyed",
"sanbales",
"zwelz3"
],
"repo": "jupyrdf/ipyradiant",
"url": "https://github.com/jupyrdf/ipyradiant/pull/48",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1364043174
|
How to use generated sample extension
Description
I ran through the setup process with cookiecutter to generate my sample extension but I'm not understanding how to test it within a running jupyter server. My (naive) understanding was that this extension just adds a /ping endpoint to the server that responds with a 200 and 'pong', but I'm either misunderstanding the code, or missing something.
The server extension does appear to load correctly judging from the logs below
Reproduce
run cookiecutter https://github.com/jupyter-server/extension-cookiecutter
complete prompts and navigate into the new directory
run pip install .
run jupyter server
Expected behavior
I expected to be able to curl http://localhost:8888/myextension/ping but that returned a 404 instead of "pong"
Context
Operating System and version:
Browser and version:
Jupyter Server version:
Troubleshoot Output
Paste the output from running `jupyter troubleshoot` from the command line here.
You may want to sanitize the paths in the output.
Command Line Output
Paste the output from your command line running `jupyter lab` here, use `--debug` if possible.
Browser Output
Paste the output from your browser Javascript console here, if applicable.
Hi @bloomsa, the endpoint is currently http://localhost:8888/ping, but I agree we should put it under http://localhost:8888/myextension/ping to promote best practices. Additionally, the endpoint is marked as authenticated, so a naive curl will give you a 403 error.
thanks @blink1073! curl-ing localhost:8888/ping with my jupyter server token as query param worked! I swear I tried that route before raising this question from a different terminal window but I must not have.. 🤦 in any case, I appreciate the quick response.
If you wanted to make an issue to resolve that, you could assign me and I'll give it a shot 🙂
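For reference, the authenticated request described above looks roughly like this (the token value is a placeholder):
# the token is printed in the server logs, or shown by `jupyter server list`
curl "http://localhost:8888/ping?token=<your-token>"
# expected response: pong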
|
gharchive/issue
| 2022-09-07T03:36:51 |
2025-04-01T04:34:43.677615
|
{
"authors": [
"blink1073",
"bloomsa"
],
"repo": "jupyter-server/extension-cookiecutter",
"url": "https://github.com/jupyter-server/extension-cookiecutter/issues/13",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1395412038
|
JobDetail and buildTableRow have duplicate OutputFile element
Description
JobDetail and buildTableRow have duplicate OutputFile element
Reproduce
Go to codebase
See OutputFile function in mainviews/job-detail.tsx and OutputFile function in components/job-row.tsx.
Note similarities
Expected behavior
We should be able to reuse this code instead of duplicating it
Context
Operating System and version: N/A
Browser and version: N/A
Jupyter Server version: N/A
P.S. No need to fill out the whole form given for these small implementation details.
Both functions are still present where mentioned in the description.
These functions have diverged and are now JobFiles in job-row.tsx and JobFile in job-detail.tsx:
https://github.com/jupyter-server/jupyter-scheduler/blob/5f0c3ca88a7c494881b96e2ddb2619580fe35c33/src/components/job-row.tsx#L48-L79
https://github.com/jupyter-server/jupyter-scheduler/blob/5f0c3ca88a7c494881b96e2ddb2619580fe35c33/src/mainviews/detail-view/job-detail.tsx#L153-L177
|
gharchive/issue
| 2022-10-03T22:32:36 |
2025-04-01T04:34:43.682144
|
{
"authors": [
"andrii-i",
"dlqqq",
"jweill-aws"
],
"repo": "jupyter-server/jupyter-scheduler",
"url": "https://github.com/jupyter-server/jupyter-scheduler/issues/75",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
521287823
|
Investigate why the circleci builds are failing
It seems like something in the Ruby builds is causing CircleCI to fail. This isn't a change to our code - re-running a build that used to work now no longer works.
The errors were because of a library that wasn't properly installed in circleci. I added this into the config for circle, and also added an install module that (somewhat) checks for this same error, as it may be common
|
gharchive/issue
| 2019-11-12T02:16:32 |
2025-04-01T04:34:43.708172
|
{
"authors": [
"choldgraf"
],
"repo": "jupyter/jupyter-book",
"url": "https://github.com/jupyter/jupyter-book/issues/435",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
129284444
|
Use models for file opening
@jasongrout, previously we were throwing away information we already had. I will make the required updates to jupyter-js-plugins.
This looks pretty good. Comments are inline, though. Maybe the widget should verify that the format and type is something that it can handle? Before it just registered to handle an extension. Perhaps now it should register for a type/format instead?
Updated, version bumped.
Updated.
I think this is the wrong way to open a terminal, unless we implement terminals backed by files, and open the "file" to have a terminal. Perhaps it's wrong to have the file browser be the place to open an arbitrary widget like a terminal.
Seems fair.
This is a breaking change, so version should be bumped to 0.7.0.
I already bumped to 0.6 in this PR.
ah, okay. Thanks.
|
gharchive/pull-request
| 2016-01-27T22:02:40 |
2025-04-01T04:34:43.711438
|
{
"authors": [
"blink1073",
"jasongrout"
],
"repo": "jupyter/jupyter-js-filebrowser",
"url": "https://github.com/jupyter/jupyter-js-filebrowser/pull/104",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
125994594
|
Unable to reference a Data File that's stored in a child directory
I'm able to access a data file that's stored in the Home Directory, like this:
import csv
file = open("unisex_names_table.csv")
csvreader = csv.reader(file)
airline_safety = list(csvreader)
print(airline_safety)
However, I'm unable to access the data file when it's stored in a child directory like: /data/fivethirtyeight_data/unisex-names/unisex_names_table.csv
I've tried multiple combinations in referencing the data file in child directories, but haven't been successful. Is there a solution to this?
You are referencing an absolute path starting from the root of the file system. This is probably not where your home directory is located. You want to reference relative to your home directory, which you can do easily with:
...
file = open("~/data/fivethirtyeight_data/unisex-names/unisex_names_table.csv")
...
The tilde character (~) is a shortcut for your home directory, which is usually located at /home/<your-username>.
@andreaslang, I see your point. However, my setup and hence my reference to Home is different. I run Jupyter from a Docker image as follows:
docker run -d -p 8888:8888 -v /home/chandra/projects/codesamples/notebooks:/home/ds/notebooks dataquestio/python3-starter
So, the Home directory for me is "/home/chandra/projects/codesamples/notebooks".
Having said that, your note did lead me to the solution:
import csv
file = open("/home/ds/notebooks/data/fivethirtyeight_data/unisex-names/unisex_names_table.csv")
csvreader = csv.reader(file)
airline_safety = list(csvreader)
print(airline_safety)
Ideally, I would like to avoid referencing "/home/ds/notebooks" in my path. But, I can live with it.
@andreaslang - Thanks for your input. Helped me solve my problem.
I edited in a second solution in my post which should help you avoid using /home/ds in your code. See above.
@andreaslang - No luck with your second recommendation:
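For what it's worth, a likely reason the second suggestion fails is that Python's open() does not expand the tilde itself; a minimal sketch of the usual workaround:
import csv
import os

# expanduser turns "~/..." into an absolute path under the current home directory
path = os.path.expanduser("~/data/fivethirtyeight_data/unisex-names/unisex_names_table.csv")
with open(path) as file:
    unisex_names = list(csv.reader(file))
print(unisex_names[:5])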
|
gharchive/issue
| 2016-01-11T17:23:06 |
2025-04-01T04:34:43.716724
|
{
"authors": [
"andreaslang",
"chandragaajula"
],
"repo": "jupyter/jupyterhub",
"url": "https://github.com/jupyter/jupyterhub/issues/376",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
364149347
|
RStudio Initialization Errors/ Unable to Connect to Service
I've been getting these bugs lately. Does anyone know how we would suggest debugging a build?
nbrsessionproxy launches rserver --www-port=some_unused_port then proxies that web service on /rstudio. If you want to debug you could try picking some unused port, drop to a jupyter terminal, and run rserver --www-port=your_port. If rserver starts correctly, open a new tab on your notebook server with proxy/your_port/ appended, e.g. https://hub.mybinder.org/user/somebinderuser/proxy/your_port/.
One binder-specific debug trick I use is to run cat /proc/1/fd/2 in a jupyter terminal so I can see notebook's stderr.
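Putting those steps together, the debug session looks roughly like this (the port number is just an example):
# in a Jupyter terminal on the notebook server
rserver --www-port=8787            # 8787 stands in for any unused port
# then browse to https://hub.mybinder.org/user/<your-binder-user>/proxy/8787/

# watch the notebook server's stderr from inside the container
cat /proc/1/fd/2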
nbrsessionproxy has been rewritten (and renamed to jupyter-rserver-proxy) several times, so I'm going to close this one for now. Please re-open if you run into this again!
|
gharchive/issue
| 2018-09-26T18:11:54 |
2025-04-01T04:34:43.727531
|
{
"authors": [
"jzf2101",
"ryanlovett",
"yuvipanda"
],
"repo": "jupyter/repo2docker",
"url": "https://github.com/jupyter/repo2docker/issues/417",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2162213414
|
Update occ API
Code changes:
Restructure occapi module.
Add makeShapeFromMesh and stlIO functions
This PR will help unblock #246
Is it just me, or did the CI pass without showing the green check?
|
gharchive/pull-request
| 2024-02-29T22:34:47 |
2025-04-01T04:34:43.729197
|
{
"authors": [
"martinRenou",
"trungleduc"
],
"repo": "jupytercad/jupytercad",
"url": "https://github.com/jupytercad/jupytercad/pull/330",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
296855315
|
Add the ref to "launch" events?
I was looking through the prometheus launch events, and wonder if it'd be relatively easy to add the label pod to each one:
e.g., couldn't we add something like ref=self.key (after adding key to self) here:
https://github.com/jupyterhub/binderhub/blob/9d91e12f4f9c1e0c53f0c305a3e634388d48db98/binderhub/builder.py#L388
(here's key: https://github.com/jupyterhub/binderhub/blob/9d91e12f4f9c1e0c53f0c305a3e634388d48db98/binderhub/builder.py#L168)
that way we could see if there are particular repos that are causing super long build times, and we could also quantify the number of launches for each repo...
Nope, because you'll end up blowing through our resources pretty quickly. Each combination of metric keys creates a new metric, and if you add ref that's pretty much an infinite number of metrics. So we can't do that without slowing prometheus to a crawl.
https://prometheus.io/docs/practices/naming/ has more info.
Specifically, from https://prometheus.io/docs/practices/naming/#labels:
CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values.
https://github.com/jupyterhub/mybinder.org-deploy/issues/97 or similar is the work needed for us to actually be able to gather this data.
ah ok :-(
I'm gonna remove all of that part of https://github.com/jupyterhub/binderhub/pull/453 then
|
gharchive/issue
| 2018-02-13T19:22:16 |
2025-04-01T04:34:43.743173
|
{
"authors": [
"choldgraf",
"yuvipanda"
],
"repo": "jupyterhub/binderhub",
"url": "https://github.com/jupyterhub/binderhub/issues/452",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
926867810
|
FaceExpressionNet tflite
Hi
I want to convert an emotion recognition model to tflite. But after conversion, the model works much worse. It almost always predicts "neutral", "angry" or "happy". The model from the face-api demo works much better.
I created a model architecture using tensorflow.keras (tf==2.3.1). Then I loaded the weights using decode_weights. Then saved as tf SavedModel. Then I converted to tflite
Code for creating architecture
from tensorflow.keras import layers, Model
def get_face_exp_net():
inputs = layers.Input((112,112,3))
out1 = layers.ReLU()(layers.Conv2D(32, 3, strides=[2,2], padding='same')(inputs))
out2 = layers.SeparableConv2D(32, 3, padding='same')(out1)
in3 = layers.ReLU()(layers.Add()([out1, out2]))
out3 = layers.SeparableConv2D(32, 3, padding='same')(in3)
in4 = layers.ReLU()(layers.Add()([out1, out2, out3]))
out4 = layers.SeparableConv2D(32, 3, padding='same')(in4)
end = layers.ReLU()(layers.Add()([out1, out2, out3, out4]))
out1 = layers.ReLU()(layers.SeparableConv2D(64, 3, strides=[2,2], padding='same')(end))
out2 = layers.SeparableConv2D(64, 3, padding='same')(out1)
in3 = layers.ReLU()(layers.Add()([out1, out2]))
out3 = layers.SeparableConv2D(64, 3, padding='same')(in3)
in4 = layers.ReLU()(layers.Add()([out1, out2, out3]))
out4 = layers.SeparableConv2D(64, 3, padding='same')(in4)
end = layers.ReLU()(layers.Add()([out1, out2, out3, out4]))
out1 = layers.ReLU()(layers.SeparableConv2D(128, 3, strides=[2,2], padding='same')(end))
out2 = layers.SeparableConv2D(128, 3, padding='same')(out1)
in3 = layers.ReLU()(layers.Add()([out1, out2]))
out3 = layers.SeparableConv2D(128, 3, padding='same')(in3)
in4 = layers.ReLU()(layers.Add()([out1, out2, out3]))
out4 = layers.SeparableConv2D(128, 3, padding='same')(in4)
end = layers.ReLU()(layers.Add()([out1, out2, out3, out4]))
out1 = layers.ReLU()(layers.SeparableConv2D(256, 3, strides=[2,2], padding='same')(end))
out2 = layers.SeparableConv2D(256, 3, padding='same')(out1)
in3 = layers.ReLU()(layers.Add()([out1, out2]))
out3 = layers.SeparableConv2D(256, 3, padding='same')(in3)
in4 = layers.ReLU()(layers.Add()([out1, out2, out3]))
out4 = layers.SeparableConv2D(256, 3, padding='same')(in4)
end = layers.ReLU()(layers.Add()([out1, out2, out3, out4]))
end = layers.AvgPool2D((7,7), strides=(2,2), padding='valid')(end)
end = layers.Flatten()(end)
# end = layers.Dropout(0.5)(end)
end = layers.Dense(7)(end)
end = layers.Softmax()(end)
return Model(inputs=inputs, outputs=end)
Code for loading of weights and converting to tflite
model_config = {'func': get_face_exp_net(),
'name': 'face expression net',
'manifest': 'models/tfjs_models/face_expression_model-weights_manifest.json',
'weights': 'models/tfjs_models/face_expression_model-shard1'
}
print('Converting of ', model_config['name'])
with open(model_config['manifest']) as json_file:
manifest = json.load(json_file)
weights_bytes = read_weights(model_config['weights'])
# [{'name':'layer_name1', 'data':np.array with weights},
# {'name':'layer_name2', 'data':np.array with weights},
# ...]
weights = tfjs.read_weights.decode_weights([manifest], [weights_bytes])[0]
model = model_config['func']
target_layers_ids = []
for i, l in enumerate(model.layers):
if 'conv' in l.name or 'dense' in l.name:
# print(i, l.name, [w.shape for w in l.get_weights()])
target_layers_ids.append(i)
# set weights to model
w_counter = 0
for layer_id in target_layers_ids:
num_w = len(model.layers[layer_id].get_weights())
w_list = [w['data'] for w in weights[w_counter:w_counter+num_w]]
model.layers[layer_id].set_weights(w_list)
w_counter+=num_w
model_name = "_".join(model_config['name'].split())
tf_model_path = 'models/tf_models/{}'.format(model_name)
model.save(tf_model_path)
print('TF model is saved to ', tf_model_path)
converter = tf.lite.TFLiteConverter.from_saved_model(tf_model_path)
tflite_model = converter.convert()
tflite_model_path = 'models/tflite_models/{}.tflite'.format(model_name)
open(tflite_model_path, "wb").write(tflite_model)
print('TFLite model is saved to ', tflite_model_path)
My code for inference
class FaceExpressionModel:
def __init__(self, tflite_path=os.getcwd()+'/tflite_face_api/models/tflite_models/face_expression_net.tflite',
):
self.model = tf.lite.Interpreter(model_path=tflite_path)
self.model.allocate_tensors()
self.input_details = self.model.get_input_details()
self.output_details = self.model.get_output_details()
self.image_size = self.input_details[0]['shape'][1:3] # [112,112]
def normalize(self, img):
img[..., 0] -= 122.782
img[..., 1] -= 117.001
img[..., 2] -= 104.298
return img / 255.0
def pad_to_square(self, img):
h, w, ch = img.shape
if h==w:
return img
max_dim = max(h, w)
dim_diff = int(np.abs(h - w) * 0.5)
pad_array = np.zeros((max_dim, max_dim, ch))
if h > w:
pad_array[:, dim_diff:dim_diff+w, :] = img
else:
pad_array[dim_diff:dim_diff+h, :, :] = img
return pad_array
def preprocessing(self, img):
img = cv2.resize(img, (self.image_size)).astype(np.float32)
img = self.pad_to_square(img)
img = self.normalize(img)
return np.expand_dims(img, axis = 0)
def predict(self, img):
"""
Does inference, preprocessing and returns probabilities for emotions
Arguments:
----------
img : 3d np.ndarray
Single rgb image
Return:
---------
1d np.ndarray
array of class probabilites, values from 0 to 1
"""
img = self.preprocessing(img)
self.model.set_tensor(self.input_details[0]['index'], img)
self.model.invoke()
preds = self.model.get_tensor(self.output_details[0]['index'])[0]
return preds
Assuming everything is correct, the likely issue is loss of resolution or clipping of values: tflite conversion quantizes models to uint8, which may not be enough for all interim processing results.
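One generic way to narrow this down (a debugging sketch, not from the thread) is to feed the same preprocessed input to both runtimes and compare the outputs; model and tflite_model_path refer to the objects from the conversion script above:
import numpy as np
import tensorflow as tf

# same preprocessed input for both runtimes
x = np.random.rand(1, 112, 112, 3).astype(np.float32)
keras_preds = model.predict(x)[0]

interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
tflite_preds = interpreter.get_tensor(out['index'])[0]

# a large difference here points at the conversion itself rather than preprocessing
print(np.max(np.abs(keras_preds - tflite_preds)))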
|
gharchive/issue
| 2021-06-22T06:11:16 |
2025-04-01T04:34:43.875566
|
{
"authors": [
"andBabaev",
"vladmandic"
],
"repo": "justadudewhohacks/face-api.js",
"url": "https://github.com/justadudewhohacks/face-api.js/issues/803",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
598049199
|
Catch Files Stream Uploading Through Proxy to online storage or email storage.
@honfika @justcoding121
I was trying to intercept files being uploaded into online storage or email. Right now I can able to intercept the upload request with some logic of filename and extension validation.
I need some suggestion on below items.
Is there any way to get the file stream(File uploading) from HTTPS Request?, So I can monitor what is the file which he tried to upload.
Is there any way to find in HTTPS Request files being uploading. So I can intercept this request easily. As i said early file extension validation is restricted to only some file types, because we cannot scale all of the file format which is available.
So the problem is that the file is too big, and you want a stream, it is not possible to load the whole file in memory?
I don't care about the memory, as long as I can able to detect the file upload into online.
please check the sample project:
https://github.com/justcoding121/Titanium-Web-Proxy/blob/02541efdad00962e868776efff41c49a4a8590f5/examples/Titanium.Web.Proxy.Examples.Basic/ProxyTestController.cs#L320-L363
var bodyBytes = await e.GetResponseBody();
Sorry, I linked wrong method, this is the onResponse, but it is the dame for the request.. you can read the body in a same way.
Thanks @honfika for your comments. It will helps me a lot.
Is there any way can we get name of the File or the Directory where it choose from.
I only know how to get the Process ID from the request.
The filename should be there, but i think the directory name is not in the HTTP traffic..
|
gharchive/issue
| 2020-04-10T19:13:04 |
2025-04-01T04:34:43.879821
|
{
"authors": [
"honfika",
"surendharb"
],
"repo": "justcoding121/Titanium-Web-Proxy",
"url": "https://github.com/justcoding121/Titanium-Web-Proxy/issues/752",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1843267242
|
Remove Moq
Remove Moq and replace with an alternative mocking library, such as NSubstitute.
I had a quick look into this myself, but due to the way the test code is structured, Moq is everywhere.
I'll take a look at this one over the weekend
I already made a start on Wednesday and have got around to finishing it: #370
|
gharchive/issue
| 2023-08-09T13:39:51 |
2025-04-01T04:34:43.888766
|
{
"authors": [
"hwoodiwiss",
"martincostello",
"slang25"
],
"repo": "justeattakeaway/AwsWatchman",
"url": "https://github.com/justeattakeaway/AwsWatchman/issues/367",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1972561228
|
ci(pie-monorepo): DSW-000 generate QR code for preview deployments.
Describe your changes (can list changeset entries if preferable)
Author Checklist (complete before requesting a review)
[ ] I have performed a self-review of my code
[ ] If it is a core feature, I have added thorough tests
[ ] If it is a PIE Docs change, I have reviewed the Docs site preview
[ ] If it is a component change, I have reviewed the Storybook preview
[ ] If there are visual test updates, I have reviewed them properly before approving
Reviewer checklists (complete before approving)
Reviewer 1
[ ] If it is a PIE Docs change, I have reviewed the PR preview
[ ] If there are visual test updates, I have reviewed them
Reviewer 2
[ ] If it is a PIE Docs change, I have reviewed the PR preview
[ ] If there are visual test updates, I have reviewed them
Closing this as the github actions available aren't great. Will write my own instead :)
|
gharchive/pull-request
| 2023-11-01T15:07:27 |
2025-04-01T04:34:43.892445
|
{
"authors": [
"siggerzz"
],
"repo": "justeattakeaway/pie",
"url": "https://github.com/justeattakeaway/pie/pull/954",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2411069864
|
🛑 Jellyfin is down
In 4bfea0b, Jellyfin (https://video.mkacg.com) was down:
HTTP code: 502
Response time: 382 ms
Resolved: Jellyfin is back up in 3a4ed6e after 28 minutes.
|
gharchive/issue
| 2024-07-16T12:54:44 |
2025-04-01T04:34:43.894852
|
{
"authors": [
"justforlxz"
],
"repo": "justforlxz/status.mkacg.com",
"url": "https://github.com/justforlxz/status.mkacg.com/issues/2605",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2446041186
|
🛑 Jellyfin is down
In 469cd49, Jellyfin (https://video.mkacg.com) was down:
HTTP code: 530
Response time: 67 ms
Resolved: Jellyfin is back up in a42b072 after 19 minutes.
|
gharchive/issue
| 2024-08-03T03:52:07 |
2025-04-01T04:34:43.897173
|
{
"authors": [
"justforlxz"
],
"repo": "justforlxz/status.mkacg.com",
"url": "https://github.com/justforlxz/status.mkacg.com/issues/2726",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2291551918
|
Create a commit when bumping
This is not currently supported directly using gix.
In order to perform the commit, a newly constructed tree must be formed for the commit to point to:
New files can be created as blobs on the Repository.
From there, the tree needs to be stitched together by changing entries to point to their new blobs.
Implemented in #18
|
gharchive/pull-request
| 2024-05-12T23:34:54 |
2025-04-01T04:34:43.911186
|
{
"authors": [
"justinrubek"
],
"repo": "justinrubek/bomper",
"url": "https://github.com/justinrubek/bomper/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2060193517
|
Merged Cell Styles
Setting merged cell styles doesn't appear to work.
import 'dart:io';
import 'package:excel/excel.dart';
Future main(List<String> args) async {
var excel = Excel.createExcel();
var sheet = excel[excel.sheets.keys.first];
var aptosDisplayStyle = CellStyle(fontFamily: 'Aptos Display', bold: true);
sheet.merge(CellIndex.indexByString("A1"), CellIndex.indexByString("B2"),
customValue: const TextCellValue("90"));
sheet.setMergedCellStyle(CellIndex.indexByString("A1"),
aptosDisplayStyle.copyWith(rotationVal: 90));
await File("C:\\Users\\johnc\\Documents\\output_merge_test.xlsx")
.writeAsBytes(excel.encode()!);
}
Setting the style on the first cell does work.
|
gharchive/issue
| 2023-12-29T15:26:35 |
2025-04-01T04:34:43.913633
|
{
"authors": [
"johncharris"
],
"repo": "justkawal/excel",
"url": "https://github.com/justkawal/excel/issues/304",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
949848235
|
Cyber FM
Before submission deletes this line:
THIS IS NOT A TOKEN LISTING REQUEST FORM. IF YOU DO NOT FOLLOW THE FORMAT OR MAKE A GENERIC TOKEN REQUEST YOUR ISSUE WILL BE DELETED WITHOUT COMMENT
YOUR JUSTLIST MUST FOLLOW THE JSON SPECIFICATION
https://github.com/justswaporg/justlists/example.justlists.ts
Checklist
[x] I understand that this is not the place to request a token listing.
[x] I have tested that my JustList is compatible by pasting the URL into the add a list UI at justswap.org.
[x] I understand that filing an issue or adding liquidity does not guarantee addition to the justlists website.
Please provide the following information for your token.
JustList URL must be HTTPS.
JustList URL:
JustList Name:
Link to the official homepage of the JustList manager:
Sorry, your issue will be closed as you did not submit your information in the correct format.
|
gharchive/issue
| 2021-07-21T15:34:29 |
2025-04-01T04:34:43.919423
|
{
"authors": [
"Aakash7474",
"jeffwami"
],
"repo": "justswaporg/justlists",
"url": "https://github.com/justswaporg/justlists/issues/1624",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1291886768
|
🛑 Journals is down
In 70353ec, Journals (https://tecnoscientifica.com/journal/tasp) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Journals is back up in ccbd395.
|
gharchive/issue
| 2022-07-01T22:38:44 |
2025-04-01T04:34:43.922003
|
{
"authors": [
"justudin"
],
"repo": "justudin/tsp-status",
"url": "https://github.com/justudin/tsp-status/issues/363",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
304661054
|
Fish shell completions don't work when --clip is used
Fish autocompletes password entries just fine for gopass …, but when I use gopass --clip …, the completions don't work at all.
$ gopass --version
gopass 1.6.11 (0cb08ac43239f18d047f6cadf4dd2900eb48313e 2018-02-20 09:16:25) go1.10 darwin amd64
$ fish --version
fish, version 2.7.1
Thank you very much for reporting this issue. Unfortunately the fish completion is known to be incomplete and buggy. Any help would be very much appreciated.
Here is a slightly more useful completion script for fish that I use.
gopass.fish.txt
I've pushed some changes based on @monofon s snippet in #748. I don't think this will fix this issue, but maybe it improves the fish autocompletion UX.
|
gharchive/issue
| 2018-03-13T07:59:32 |
2025-04-01T04:34:43.924176
|
{
"authors": [
"Strayer",
"dominikschulz",
"monofon"
],
"repo": "justwatchcom/gopass",
"url": "https://github.com/justwatchcom/gopass/issues/709",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2485106170
|
🛑 TRAPANNI PIZZA is down
In 80df629, TRAPANNI PIZZA (https://trapanipizzaestofada.com) was down:
HTTP code: 500
Response time: 872 ms
Resolved: TRAPANNI PIZZA is back up in 1e95edd after 4 days, 6 hours, 46 minutes.
|
gharchive/issue
| 2024-08-25T07:37:44 |
2025-04-01T04:34:43.928876
|
{
"authors": [
"jveyes"
],
"repo": "jveyes/upptime",
"url": "https://github.com/jveyes/upptime/issues/1360",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1792953414
|
🛑 GROWING CLUB is down
In bef142b, GROWING CLUB (https://growingclub.co) was down:
HTTP code: 0
Response time: 0 ms
Resolved: GROWING CLUB is back up in fde64a1.
|
gharchive/issue
| 2023-07-07T07:18:35 |
2025-04-01T04:34:43.931126
|
{
"authors": [
"jveyes"
],
"repo": "jveyes/upptime",
"url": "https://github.com/jveyes/upptime/issues/932",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1849093787
|
java.lang.InstantiationError: androidx.compose.foundation.pager.PagerState After upgrading navigation compose version to 2.7.0
java.lang.InstantiationError: androidx.compose.foundation.pager.PagerState
at com.origeek.imageViewer.gallery.ImagePagerState.<init>(ImagePager.kt:39)
at com.origeek.imageViewer.gallery.ImageGalleryState.<init>(ImageGallery.kt:67)
at com.origeek.imageViewer.previewer.PreviewerPagerState.<init>(PreviewerPagerState.kt:24)
at com.origeek.imageViewer.previewer.PreviewerPagerState.<init>(PreviewerPagerState.kt:17)
at com.origeek.imageViewer.previewer.PreviewerTransformState.<init>(PreviewerTransformState.kt:36)
at com.origeek.imageViewer.previewer.PreviewerVerticalDragState.<init>(PreviewerVerticalDragState.kt:39)
at com.origeek.imageViewer.previewer.PreviewerVerticalDragState.<init>(PreviewerVerticalDragState.kt:30)
at com.origeek.imageViewer.previewer.ImagePreviewerState.<init>(ImagePreviewer.kt:61)
at com.origeek.imageViewer.previewer.ImagePreviewerState.<init>(ImagePreviewer.kt:56)
at com.origeek.imageViewer.previewer.ImagePreviewerKt$rememberPreviewerState$imagePreviewerState$1.invoke(ImagePreviewer.kt:101)
at com.origeek.imageViewer.previewer.ImagePreviewerKt$rememberPreviewerState$imagePreviewerState$1.invoke(ImagePreviewer.kt:100)
at androidx.compose.runtime.saveable.RememberSaveableKt.rememberSaveable(RememberSaveable.kt:88)
at com.origeek.imageViewer.previewer.ImagePreviewerKt.rememberPreviewerState(ImagePreviewer.kt:100)
at com.frank.u.ui.MainScreenKt.MainScreen(MainScreen.kt:53)
at com.frank.u.MainActivity$onCreate$1$1.invoke(MainActivity.kt:22)
at com.frank.u.MainActivity$onCreate$1$1.invoke(MainActivity.kt:21)
Please update to the following version:
implementation 'com.github.jvziyaoyao:ImageViewer:1.0.2-alpha.5'
Great, thank you very much.
|
gharchive/issue
| 2023-08-14T06:31:26 |
2025-04-01T04:34:43.942331
|
{
"authors": [
"jvziyaoyao",
"shangmingchao"
],
"repo": "jvziyaoyao/ImageViewer",
"url": "https://github.com/jvziyaoyao/ImageViewer/issues/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
288215793
|
Parselog for tmux doesn't work as expected
Running tmux | target/debug/examples/parselog always outputs simply the following:
[print] '['
[print] 'e'
[print] 'x'
[print] 'i'
[print] 't'
[print] 'e'
[print] 'd'
[print] ']'
[execute] 0a
All actions inside tmux are ignored. Maybe this is actually expected behavior and I just don't understand how tmux interacts with the terminal.
:sweat_smile:
|
gharchive/issue
| 2018-01-12T19:03:09 |
2025-04-01T04:34:43.946201
|
{
"authors": [
"nixpulvis"
],
"repo": "jwilm/vte",
"url": "https://github.com/jwilm/vte/issues/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2583586947
|
Progress in app + sys mem fallback fix
This adds a progress bar in the app, as well as a fix for flooding into system memory.
Previously, cache "stepped up" as the process continued, which de-allocated when it reached the max amount. When sys mem fallback is enabled, the ceiling is raised so it doesn't de-allocate and instead enters system memory.
Step up effect is shown here:
This fix collects cache after each step, stopping this effect:
Also, added a progress bar and pipeline callback:
Awesome work
Thank you for the pull request. Just one small concern: does it slow down the generation significantly by calling gc.collect() and torch.cuda.empty_cache() at every step?
It does slow generation down, though I'm not sure by how much. Testing now.
Without this PR: 441.83 sec
With this PR: 449.57 sec
About 8 second at 16 steps
Without this PR: 441.83 sec With this PR: 449.57 sec
About 8 second at 16 steps
This seems acceptable, I will merge it.
Mine is still flooding into system memory and its slowing the generation down to nearly a halt.
I opened this issue about it: https://github.com/jy0205/Pyramid-Flow/issues/86
|
gharchive/pull-request
| 2024-10-13T02:58:26 |
2025-04-01T04:34:43.969168
|
{
"authors": [
"Ednaordinary",
"FurkanGozukara",
"dillfrescott",
"feifeiobama"
],
"repo": "jy0205/Pyramid-Flow",
"url": "https://github.com/jy0205/Pyramid-Flow/pull/76",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
719388
|
IE8 reports error with validate 1.8
IE8 reports an error every time query.validate-1.8.js is called. The error says this:
Webpage error details
User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6.6; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; InfoPath.1; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)
Timestamp: Wed, 30 Mar 2011 22:41:43 UTC
Message: Object doesn't support this property or method
Line: 303
Char: 5
Code: 0
URI: http://listserv-d.web.abbott.com/auth/ahd3/js/jquery.validate-1.8.js
Here is a solution to this issue which proved to work properly:
https://gist.github.com/703657
|
gharchive/issue
| 2011-03-30T22:42:00 |
2025-04-01T04:34:43.988762
|
{
"authors": [
"ciznx",
"effadj"
],
"repo": "jzaefferer/jquery-validation",
"url": "https://github.com/jzaefferer/jquery-validation/issues/67",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
199137249
|
Extensions
~dialect of each prefecture~
events
ex) Trick or Treat, Merry Christmas
Currently, only 3 prefectures are supported: Kyoto, Osaka, and Okinawa. We plan to increase this in the future.
|
gharchive/issue
| 2017-01-06T07:02:26 |
2025-04-01T04:34:43.990126
|
{
"authors": [
"k-kuwahara"
],
"repo": "k-kuwahara/ja-greetings",
"url": "https://github.com/k-kuwahara/ja-greetings/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
528118910
|
merge from arlac77/npm-package-template
README.md
docs(README): update from template
Coverage remained the same at 36.367% when pulling 54411368f03593cf2216e792d01bf8b4c423edc4 on npm-template-sync/1 into fda5c543f40518cf9ba70824752c40568b855240 on master.
|
gharchive/pull-request
| 2019-11-25T14:20:49 |
2025-04-01T04:34:44.004604
|
{
"authors": [
"arlac77",
"coveralls"
],
"repo": "k0nsti/konsum",
"url": "https://github.com/k0nsti/konsum/pull/685",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
982413747
|
unable to figure out how to configure credentials for dockerhub
I am trying to configure credentials for Docker Hub in the registry configuration file so that I will not hit the rate limit.
I have tried endless configurations without success.
What am I doing wrong?
mirrors:
docker.io:
endpoint:
- https://registry-1.docker.io
configs:
docker.io:
auth:
username: ***
password: ***
The configs section is keyed off the endpoint, not the registry. Have you tried putting the username and password in an entry for registry-1.docker.io?
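Based on that suggestion, the registries.yaml would look roughly like this (a sketch, not verified here; the *** values are placeholders for the credentials):
mirrors:
  docker.io:
    endpoint:
      - "https://registry-1.docker.io"
configs:
  registry-1.docker.io:
    auth:
      username: "***"
      password: "***"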
|
gharchive/issue
| 2021-08-30T05:47:47 |
2025-04-01T04:34:44.101554
|
{
"authors": [
"brandond",
"itai-codefresh"
],
"repo": "k3s-io/k3s",
"url": "https://github.com/k3s-io/k3s/issues/3937",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1595000013
|
Generation of certificates and keys for etcd gated if etcd is disabled.
Problem:
When support for etcd was added in 3957142, the generation of certificates and keys for etcd was not gated behind the use of managed etcd. Keys are generated and distributed across servers even if managed etcd is not enabled.
Solution:
Allow generation of certificates and keys only if managed etcd is enabled. Check the config.DisableETCD flag.
Proposed Changes
Gate generating ETCD Certificates by checking config.DisableETCD flag.
Types of Changes
Bugfix
Verification
When running k3s server, provide the --disable-etcd flag.
Testing
There are no unit tests present.
Linked Issues
Related issue
User-Facing Change
No user-facing changes in that PR.
Further Comments
Trivial change.
Thanks for the PR!
Have you checked to see what happens if you later enable etcd by restarting K3s with --cluster-init added to the args, and then attempt to join a second server to the cluster? Does it handle that properly?
Thanks for the suggestion, I haven't checked that scenario but I am on my way...
TEST SCENARIO: (tested on Ubuntu 20.04 focal inside docker containers using the ubuntu:latest image)
I've created a docker network and run the k3s servers in two separate containers:
Build k3s for development:
SKIP_VALIDATE=true make
Network:
λ ~/git/k8s/k3s/ 3404/shouldd_not_perform_etcd_setup_if_etcd_is_not_enabled docker network inspect k3s-network
[
{
"Name": "k3s-network",
"Id": "14e9b03b2323fbbb40db6c9e5b75c16a7f8a739dd4a1c7011e59c9f57c3a6c7d",
"Created": "2023-02-26T09:21:39.110735057+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"b18418c2a74f143d0996e5a158ebc42afbce2a5c7c19dc88848e41491d8a56e8": {
"Name": "wonderful_golick",
"EndpointID": "39bb4fdccb6bfb5af7c5e1ee5c29471dcac63c69e5c91b3c857db32183c15baf",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"c8523bf03e01e6e93bc1cb80601c2fd8e5985cc5abc641bf6f8bad9f05fa3533": {
"Name": "exciting_lichterman",
"EndpointID": "fe420aeef760f5bc10731b8d9c102bafedee26c4c0d97fd97ac2644f9bc27458",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Running the first container:
docker run -it --volume ${PWD}:/k3s --network k3s-network --publish 6443:6443 ubuntu:latest /bin/bash
Run the second container:
docker run -it --volume ${PWD}:/k3s --network k3s-network --publish 6446:6443 ubuntu:latest /bin/bash
SCENARIO 1: run k3s in the first container with etcd only and run k3s in the second container with --disable-etcd connecting to the first container etcd on port 6443
The First container:
./bin/k3s server --cluster-init --disable-apiserver --disable-controller-manager --disable-scheduler
The Second container:
./bin/k3s server --cluster-init --disable-etcd --server https://172.18.0.2:6443 -t K1062ecd389f0149f7c55d19a23253e6811f77badf341a61046153ab68ddb7071::server:6be8b4b37e6e76a8f3803e701fd36af8
Connection succeeded.
SCENARIO 2: restart k3s in the second container with --cluster-init.
./bin/k3s server --cluster-init
Restarts and generates servers etcd certs. Creating k3s server succeeded.
I'm not sure that I followed exactly what you meant.
first server: ./bin/k3s server without --cluster-init, but I cannot run it with --disable-etcd as I need to point it to some db server
first server: ./bin/k3s server --cluster-init --disable-apiserver --disable-controller-manager --disable-scheduler - only etcd
second server: ./bin/k3s server --cluster-init --disable-etcd --server https://172.25.0.2:6443 -t
INFO[0000] Starting k3s v1.26.1+k3s-232cc1e5-dirty (232cc1e5)
INFO[0000] Managed etcd cluster not yet initialized
INFO[0000] Reconciling bootstrap data between datastore and disk
INFO[0000] certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=kube-apiserver signed by CN=k3s-server-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=etcd-client signed by CN=etcd-server-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=etcd-peer signed by CN=etcd-peer-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
INFO[0000] certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1677747240: notBefore=2023-03-02 08:54:00 +0000 UTC notAfter=2024-03-01 09:01:34 +0000 UTC
WARN[0000] dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request
INFO[0000] Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.25.0.3:172.25.0.3 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-b017174339f9:b017174339f9 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=AFD207A83BF28EECF1B1CB433D99C030BCEE3371]
INFO[0000] Running load balancer k3s-etcd-server-load-balancer 127.0.0.1:2379 -> [172.25.0.2:2379]
WARN[0001] Failed to remove this node from etcd members
INFO[0001] Tunnel server egress proxy mode: agent
INFO[0001] Tunnel server egress proxy waiting for runtime core to become available
@dereknola thanks for the correction. I did a test according to your suggestion and the behaviour is the same with and without these PR changes.
The second server connects and the etcd is started.
If there are any other tests you would like me to perform or something I should add as confirmation, please let me know.
|
gharchive/pull-request
| 2023-02-22T12:13:15 |
2025-04-01T04:34:44.113630
|
{
"authors": [
"bartossh",
"brandond"
],
"repo": "k3s-io/k3s",
"url": "https://github.com/k3s-io/k3s/pull/6998",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
210128956
|
Add note that ratings are from IMDB
Note that ratings are from IMDB in the lede. Fixes issue #28.
Hi there @blech. Your contribution is much appreciated; however, your suggestion has already been mentioned. Thanks!
|
gharchive/pull-request
| 2017-02-24T18:55:03 |
2025-04-01T04:34:44.115061
|
{
"authors": [
"blech",
"k4m4"
],
"repo": "k4m4/movies-for-hackers",
"url": "https://github.com/k4m4/movies-for-hackers/pull/45",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1237973457
|
Mac hotkeys
The hotkeys for Mac don't work, as they bring up a save-webpage dialog instead of running the script.
Hiii
I know how to do it
here
if u submit that pull request i can merge it
|
gharchive/issue
| 2022-05-17T02:38:11 |
2025-04-01T04:34:44.119420
|
{
"authors": [
"Enterausernameeeeee",
"k4yp",
"whyyouwannaknowmyusername"
],
"repo": "k4yp/epbot",
"url": "https://github.com/k4yp/epbot/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
925998671
|
Fix local playground setup
Don't create a service and don't point the publish service to it. Instead, use
node IPs as the Ingress address.
Signed-off-by: Dinar Valeev dinar.valeev@absa.africa
fixes https://github.com/k8gb-io/k8gb/issues/528
@ytsarev, @k0da agreed
@k0da sorry if I missed it, was the issue for terratest extension created before the merge?
@ytsarev just opened one
@k0da thank you 👍
|
gharchive/pull-request
| 2021-06-21T09:06:20 |
2025-04-01T04:34:44.123269
|
{
"authors": [
"k0da",
"kuritka",
"ytsarev"
],
"repo": "k8gb-io/k8gb",
"url": "https://github.com/k8gb-io/k8gb/pull/532",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1285976189
|
operator hive-operator (1.2.3725-6ac23a8)
Update Hive community operator channel(s) [['alpha']] to 1.2.3725-6ac23a8
/hold
/unhold
/merge possible
/merge possible
/merge possible
/merge possible
|
gharchive/pull-request
| 2022-06-27T15:28:49 |
2025-04-01T04:34:44.125412
|
{
"authors": [
"abutcher",
"framework-automation"
],
"repo": "k8s-operatorhub/community-operators",
"url": "https://github.com/k8s-operatorhub/community-operators/pull/1391",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1687101725
|
operator pulp-operator (1.0.0-alpha.6)
Thanks submitting your Operator. Please check below list before you create your Pull Request.
New Submissions
[ ] Are you familiar with our contribution guidelines?
[ ] Have you packaged and deployed your Operator for Operator Framework?
[ ] Have you tested your Operator with all Custom Resource Definitions?
[ ] Have you tested your Operator in all supported installation modes?
[ ] Have you considered whether you want use semantic versioning order?
[ ] Is your submission signed?
[ ] Is operator icon set?
Updates to existing Operators
[ ] Did you create a ci.yaml file according to the update instructions?
[ ] Is your new CSV pointing to the previous version with the replaces property if you chose replaces-mode via the updateGraph property in ci.yaml?
[ ] Is your new CSV referenced in the appropriate channel defined in the package.yaml or annotations.yaml ?
[ ] Have you tested an update to your Operator when deployed via OLM?
[ ] Is your submission signed?
Your submission should not
[ ] Modify more than one operator
[ ] Modify an Operator you don't own
[ ] Rename an operator - please remove and add with a different name instead
[ ] Modify any files outside the above mentioned folders
[ ] Contain more than one commit. Please squash your commits.
Operator Description must contain (in order)
[ ] Description about the managed Application and where to find more information
[ ] Features and capabilities of your Operator and how to use it
[ ] Any manual steps about potential pre-requisites for using your Operator
Operator Metadata should contain
[ ] Human readable name and 1-liner description about your Operator
[ ] Valid category name1
[ ] One of the pre-defined capability levels2
[ ] Links to the maintainer, source code and documentation
[ ] Example templates for all Custom Resource Definitions intended to be used
[ ] A quadratic logo
Remember that you can preview your CSV here.
--
1 If you feel your Operator does not fit any of the pre-defined categories, file an issue against this repo and explain your need
2 For more information see here
/merge possible
/merge possible
/merge possible
/merge possible
/merge possible
|
gharchive/pull-request
| 2023-04-27T15:35:38 |
2025-04-01T04:34:44.137803
|
{
"authors": [
"framework-automation",
"git-hyagi"
],
"repo": "k8s-operatorhub/community-operators",
"url": "https://github.com/k8s-operatorhub/community-operators/pull/2673",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1972598790
|
operator clickhouse (0.22.0)
Thanks submitting your Operator. Please check below list before you create your Pull Request.
Updates to existing Operators
[x] Did you create a ci.yaml file according to the update instructions?
[x] Is your new CSV pointing to the previous version with the replaces property if you chose replaces-mode via the updateGraph property in ci.yaml?
[x] Is your new CSV referenced in the appropriate channel defined in the package.yaml or annotations.yaml ?
[x] Have you tested an update to your Operator when deployed via OLM?
[x] Is your submission signed?
Your submission should not
[x] Modify more than one operator
[x] Modify an Operator you don't own
[x] Rename an operator - please remove and add with a different name instead
[x] Modify any files outside the above mentioned folders
[x] Contain more than one commit. Please squash your commits.
Operator Description must contain (in order)
[x] Description about the managed Application and where to find more information
[x] Features and capabilities of your Operator and how to use it
[x] Any manual steps about potential pre-requisites for using your Operator
Operator Metadata should contain
[x] Human readable name and 1-liner description about your Operator
[x] Valid category name1
[x] One of the pre-defined capability levels2
[x] Links to the maintainer, source code and documentation
[x] Example templates for all Custom Resource Definitions intended to be used
[x] A quadratic logo
Remember that you can preview your CSV here.
--
1 If you feel your Operator does not fit any of the pre-defined categories, file an issue against this repo and explain your need
2 For more information see here
/merge possible
/merge possible
/merge possible
|
gharchive/pull-request
| 2023-11-01T15:28:48 |
2025-04-01T04:34:44.147053
|
{
"authors": [
"framework-automation",
"sunsingerus"
],
"repo": "k8s-operatorhub/community-operators",
"url": "https://github.com/k8s-operatorhub/community-operators/pull/3488",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
935377838
|
JSONPath expressions in Application Resource Mapping and setting values
Application Resource Mapping specifies JSONPath expressions to indentify containers, envs, volumeMounts, and volumes.
The unstructured API of apimachinery package expects fields to set values for an object. For example, to set volumes:
unstructured.SetNestedSlice(application.Object, volumes, "spec", "template", "spec", "volumes")
I couldn't figure out a way to deterministically resolve the fields from a JSONPath expression. Is that possible? If not, we should reconsider JSONPath expression to specify fields.
@baijum have you looked at https://pkg.go.dev/k8s.io/client-go/util/jsonpath#JSONPath.FindResults
@baijum have you looked at https://pkg.go.dev/k8s.io/client-go/util/jsonpath#JSONPath.FindResults
That function returns a slice of results. Should the implementation set value for each of the results?
Should the implementation set value for each of the results?
JSON Path can return multiple matching nodes for a query, the containers, envs and volumeMounts keys all accept multiple json paths, so I'd lean towards treating each matching node as a target to be injected. This behavior is desirable because it allows us to bind into arrays that contain these structures (imagine something like containers that are not quite corev1.Container).
This raises an interesting question as what to do when there is an asymmetry between env and volumeMounts. The path of the volume mount needs to be based on the value of the SERVICE_BINDING_ROOT env var. We have this issue regardless of jsonpath returning one or multiple values.
IMHO, JSONPath is not the right tool for specifying locations in a data structure - it does a great job of filtering values, but it does not provide a link between a matched value and its location in the data structure. Hence, updating the value and reflecting that update on the original structure is not something JSONPath speaks about. It is left to an implementation to try to support it or not.
When there is no match, the result is an empty list - Again, this does not give us any clue where relevant values should be injected if the data structure does not hold them.
In the current spec, Application Resource Mapping defines .envs and .volumes - usually they are optional in application resources. Hence, the provided JSONPath is going to return an empty list, and we are not able to figure where volume/env should be injected.
I would propose that we replace JSONPaths with JSON Pointers - it is a well-known standard for locating data in JSON structures (among others, kustomize uses it).
to bind into arrays that contain these structures (imagine something like containers that are not quite corev1.Container).
As I understand, there are two primary use cases for Application Resource Mapping.
Project bindings into an intermediate resource. (For eg., runtime-component-operator)
Project bindings into partially PodSpec-able resource (For example cronjob's the template attribute name is jobTemplate)
In both these cases, I don't see a requirement to support an array. BTW, the second example given for cronjob in the spec is not very realistic, because the previous example has a better solution.
In the intermediate resource scenario, repeating the same data inside a nested data structure is not required.
I think dot-separated values of strings would be sufficient to handle the current use cases.
In order to support something like RuntimeComponentSpec, we should introduce a notion of duck-typed PodSpecable resource, something what @baijum pointed out in https://github.com/kubepreset/kubepreset/wiki/Application-Resource-Mapping:
type PodSpecable struct {
    InitContainers []corev1.Container `json:"initContainers,omitempty"`
    Containers []corev1.Container `json:"containers,omitempty"`
    Volumes []corev1.Volume `json:"volumes,omitempty"`
}
The structure can have any other fields, but if it has these, we know what to inject and where. The location to that structure can be specified via a new field:
apiVersion: service.binding/v1alpha2
kind: ClusterApplicationResourceMapping
metadata:
  name: # string
  generation: # int64, defined by the Kubernetes control plane
  ...
spec:
  versions: # []Version
  - version: # string
    podSpecable: # jsonpointer or dot separated path
The expression pointing to the location of the PodSpecable resource does not need to contain any wildcards, and with that we already cover a huge number of cases. IMHO, it is very unlikely to have containers and volumeMounts living in two very different parts of a resource, i.e. they have a common parent.
Hence, we can move away from the need to support JSONPath and use simpler standards, e.g. JSON Pointer.
@pedjak Can you please add a comment with the CR that you'd write for RuntimeComponentSpec? A quick review of the CRD seems to indicate that it's not PodSpecable by this definition.
@nebhale I see that the term podSpecable here is misleading - we are not looking for resources that are exactly of type `PodSpec`; we are looking for duck-typed structures that have some of the interesting fields:
type PodSpecable struct {
    InitContainers []corev1.Container `json:"initContainers,omitempty"`
    Containers []corev1.Container `json:"containers,omitempty"`
    Volumes []corev1.Volume `json:"volumes,omitempty"`
}
Everything is optional. The rest of the fields in the found structure are not of interest for the operator.
For a RuntimeComponent CR like:
apiVersion: app.stacks/v1beta1
kind: RuntimeComponent
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    type: ClusterIP
    port: 9080
  expose: true
  storage:
    size: 2Gi
    mountPath: "/logs"
CustomApplicationResourceMapping could look like:
apiVersion: service.binding/v1alpha2
kind: ClusterApplicationResourceMapping
metadata:
  name: runtimecomponents.app.stacks
  ...
spec:
  versions: # []Version
  - version: '*'
    podSpecable: .spec
And then it is easy to create volumes collection and an entry in it. If RuntimeComponent CR declares initContainers, then it is easy to inject a proper volumeMount in them.
This works well for CronJob mentioned in the spec, we could have following unique CustomApplicationResourceMapping:
apiVersion: service.binding/v1alpha2
kind: ClusterApplicationResourceMapping
metadata:
  name: cronjobs.batch
spec:
  versions:
  - version: "*"
    podSpecable: .spec.jobTemplate.spec.template.spec
For a RuntimeComponent CR like:
apiVersion: app.stacks/v1beta1
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
service:
type: ClusterIP
port: 9080
expose: true
storage:
size: 2Gi
mountPath: "/logs"
CustomApplicationResourceMapping could look like:
apiVersion: service.binding/v1alpha2
kind: ClusterApplicationResourceMapping
metadata:
name: runtimecomponents.app.stacks
...
spec:
versions: # []Version
- version: '*'
podSpecable: .spec
How do you bind a service to the container with the image quay.io/my-repo/my-app:1.0 ? Since it's not within []corev1.Container it would seem to be inaccessible.
How do you bind a service to the container with the image
So, it seems that .spec of RuntimeComponent looks like corev1.Container with a twist that it contains initContainer and sideContainer as well. We can add an arbitrary number of fields that could be interesting for the operator:
type PodSpecable struct {
    InitContainers []corev1.Container `json:"initContainers,omitempty"`
    Containers []corev1.Container `json:"containers,omitempty"`
    Volumes []corev1.Volume `json:"volumes,omitempty"`
    VolumeMounts [][]corev1.VolumeMount
    Env []corev1.EnvVar
    .
    .
}
so in order to bind, the operator locates the structure that should be modified, start looking what fields are available and based on that adds, in this case add entry under volumes, and volumeMounts.
I agree that the current spec'd behavior is underdefined and has issues, but I'm not convinced we need to take steps as dramatic as discussed here. Since the ClusterApplicationResourceMapping is a power-user feature that is focused on adding support for resource shapes we don't know about, I think we can put more emphasis on flexibility over ease of use. Restricting support to resources that have a PodSpec is easy to implement but, ultimately, not as useful.
The essential capabilities we need to bind a service into an application workload are:
a single []corev1.Volume
for each container:
a single []corev1.EnvVar
a single []corev1.VolumeMapping
If we have access to a corev1.PodTemplateSpec we can infer everything else and have a richer experience. But if we require a PodTemplateSpec, then flexibility is lost.
We need the flexibility to find container-esque shapes wherever they may be in the spec; JSONPath is a good fit. We also need the reliability of a firm reference to the env and volumeMounts within a found container-esque object (JSON Pointer). While I'm not crazy about requiring two distinct grammars, we can explore a limited subset of JSONPath that gives us a JSON Pointer-like experience.
apiVersion: service.binding/v1alpha2
kind: ClusterApplicationResourceMapping
metadata: # metav1.ObjectMeta
  ...
spec:
  versions:
  - version: # string
    containers:
    - path: # string(JSONPath)
      name: # string(JSON Pointer), optional
      env: # string(JSON Pointer), defaults to "/env"
      volumeMounts: # string(JSON Pointer), defaults to "/volumeMounts"
    volumes: # string(JSON Pointer)
For the RuntimeComponent resource, it would look like:
apiVersion: service.binding/v1alpha2
kind: ClusterApplicationResourceMapping
metadata:
  name: runtimecomponents.app.stacks
spec:
  versions:
  - version: '*'
    containers:
    - path: .spec
    - path: .spec.initContainers[*]
      name: /name
    - path: .spec.sidecarContainers[*]
      name: /name
    volumes: /spec/volumes
What's interesting about RuntimeComponent is there are two obvious arrays of containers, but there's also a single implicit container in the root. This approach is flexible enough to capture all three.
A traditional PodSpecable resource like Deployment would look like:
apiVersion: service.binding/v1alpha2
kind: ClusterApplicationResourceMapping
metadata:
  name: deployments.apps
spec:
  versions:
  - version: '*'
    containers:
    - path: .spec.template.spec.containers[*]
      name: /name
    - path: .spec.template.spec.initContainers[*]
      name: /name
    volumes: /spec/template/spec/volumes
A non-PodSpecable resource like CronJob would look like:
apiVersion: service.binding/v1alpha2
kind: ClusterApplicationResourceMapping
metadata:
  name: cronjobs.batch
spec:
  versions:
  - version: '*'
    containers:
    - path: .spec.jobTemplate.spec.template.spec.containers[*]
      name: /name
    - path: .spec.jobTemplate.spec.template.spec.initContainers[*]
      name: /name
    volumes: /spec/template/spec/volumes
Restricting support to resource that have a PodSpec is easy to implement, but ultimately, not as useful.
This is not a real PodSpec, it is just a duck-typed structure that can have some fields of PodSpec - the fields that we know how to handle. IMHO, containers, volumes, and the others have the same parent, i.e. they belong to the same structure - they are not scattered across the application CR.
The essential capabilities we need to bind a service into an application workload are:
a single []corev1.Volume
for each container:
a single []corev1.EnvVar
a single []corev1.VolumeMapping
With that we assume that every application CR has the same semantics as PodSpec, just allowing that the location and names of these fields might differ. What if an application CR does not follow those semantics? What if the application CR holds just a reference to the binding secret and we want to inject only that? The current abstraction does not allow us to do that.
We need the flexibility to find container-esque shapes wherever they may be in the spec; JSONPath is a good fit.
JSONPath is for querying data, not for specifying a location. IMHO, containers/pods are all to be found in the same part of the resource, so there is no need to use the wildcard operator, and thus we really do not need to use JSONPath. For example, .spec.containers returns the list of all containers. The same can be achieved with the JSON Pointer /spec/containers.
@pedjak What concerns me about
We can add an arbitrary number of fields that could be interesting for the operator:
is that it requires the specification to change to support these use-cases. Especially if we expect the spec and implementations to be stable, we shouldn't require a change to them whenever a new unexpected CR comes into existence months or years from now.
is that it requires the specification to change to support these use-cases. Especially if we expect the spec and implementations to > be stable, we shouldn't require a change to them whenever a new unexpected CR comes into existence months or years from now.
Sure, I agree - what I meant is that we can come up with some sets of fields we know how to handle and put in the spec.
I pulled together a proof of concept for my proposal above to use JSONPath to discover containers and JSON Pointer to address pod and container level fields.
The implementation works by converting any runtime object into an unstructured form. A mapping definition specifies the location of key elements of the resource necessary for the binding. Using the mapping definition, the unstructured resource is converted into a normalized structured form. That normalized form can then be manipulated to apply the service bindings. Finally, the inverse conversion is applied using the same mapping definition to map the mutated state on to the original object.
For the moment, the only aspect of the service binding behavior I have implemented is the detection and defaulting of the SERVICE_BINDING_ROOT env var. Full spec compliant behavior can be written here.
There are tests: applying the binding, mapping an object to the meta-model, and mapping from the meta-model back to an object are tested on both a PodSpecable resource (Deployment) and a non-PodSpecable resource (CronJob). Knowledge of these types is limited to the test suite.
There are zero special dependencies. This behavior is implemented using k8s.io/api, k8s.io/apimachinery and k8s.io/client-go. github.com/google/go-cmp is only used by the test suite to diff the expected and actual objects.
Binding a Deployment with a default mapping:
https://github.com/scothis/unstructured-meta-binding/blob/d80f2f9541dab3b3641828c12862b38c1695ec0a/binding_test.go#L24-L114
Binding a CronJob with a custom mapping:
https://github.com/scothis/unstructured-meta-binding/blob/d80f2f9541dab3b3641828c12862b38c1695ec0a/binding_test.go#L115-L226
tl;dr it works
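For readers skimming this thread, here is a rough sketch of the two-step addressing idea (a JSONPath-style query to find container-esque objects, then a JSON Pointer relative to each match). It is written in TypeScript with hand-rolled helpers purely for illustration; it is not the Go proof of concept linked above, and the helper names (findNodes, pointerSet) are invented:
// Illustrative only: resolve a dotted path with [*] wildcards to matching nodes,
// then use a JSON Pointer relative to each match to inject binding data.
type JsonObject = Record<string, any>;
function findNodes(root: JsonObject, path: string): JsonObject[] {
  // supports ".spec.template.spec.containers[*]"-style paths only
  const parts = path.replace(/^\./, '').split('.');
  let nodes: any[] = [root];
  for (const part of parts) {
    const wildcard = part.endsWith('[*]');
    const key = wildcard ? part.slice(0, -3) : part;
    nodes = nodes
      .map((n) => n?.[key])
      .filter((n) => n !== undefined)
      .flatMap((n) => (wildcard ? n : [n]));
  }
  return nodes;
}
function pointerSet(node: JsonObject, pointer: string, value: any): void {
  // minimal JSON Pointer setter, e.g. "/env" or "/spec/volumes"
  const keys = pointer.split('/').filter(Boolean);
  let target = node;
  for (const key of keys.slice(0, -1)) {
    target[key] ??= {};
    target = target[key];
  }
  target[keys[keys.length - 1]] = value;
}
// e.g. for the Deployment mapping above (SERVICE_BINDING_ROOT value is a placeholder):
// findNodes(deployment, '.spec.template.spec.containers[*]').forEach((c) =>
//   pointerSet(c, '/env', [...(c.env ?? []), { name: 'SERVICE_BINDING_ROOT', value: '/bindings' }]),
// );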
|
gharchive/issue
| 2021-07-02T03:26:07 |
2025-04-01T04:34:44.177954
|
{
"authors": [
"baijum",
"nebhale",
"pedjak",
"scothis"
],
"repo": "k8s-service-bindings/spec",
"url": "https://github.com/k8s-service-bindings/spec/issues/177",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
245939950
|
getting-started-guides-cloudstack-pr-2017-07-27
@BruceAuyeung please help to review it, thanks
@angelacy @markthink sorry for the delay.
does this merged commit still need another review ?
@markthink, both @bruceauyeung and I are not sure whether this merged commit still needs another review, or whether it has already been merged to the master branch?
|
gharchive/pull-request
| 2017-07-27T06:39:08 |
2025-04-01T04:34:44.181052
|
{
"authors": [
"angelacy",
"bruceauyeung"
],
"repo": "k8smeetup/kubernetes.github.io",
"url": "https://github.com/k8smeetup/kubernetes.github.io/pull/14",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1322382264
|
K8SSAND-1710 ⁃ Ability to specify the execution order of multi-pod CassandraTasks
What is missing?
When a CassandraTask targets more than a single pod, it executes tasks in parallel on all pods. We need more fine-grained execution orders, e.g. if a task puts the nodes under pressure, then we should execute the task on all pods from one rack only, then move to the next rack, etc.
Why do we need it?
In order to avoid overloading the entire DC.
Environment
Cass Operator version:
Insert image tag or Git SHA here
Anything else we need to know?:
┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-1710
┆priority: Medium
I'm not entirely sure what this ticket was about, but this is what the process does right now. The pods are sorted into a defined order (per rack):
sort.Slice(dcPods, func(i, j int) bool {
    rackI := dcPods[i].Labels[cassapi.RackLabel]
    rackJ := dcPods[j].Labels[cassapi.RackLabel]
    if rackI != rackJ {
        return rackI < rackJ
    }
    return dcPods[i].Name < dcPods[j].Name
})
And then only one pod can be executing at the same time.
|
gharchive/issue
| 2022-07-29T15:22:18 |
2025-04-01T04:34:44.188015
|
{
"authors": [
"adutra",
"burmanm"
],
"repo": "k8ssandra/cass-operator",
"url": "https://github.com/k8ssandra/cass-operator/issues/389",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1288335276
|
K8SSAND-1607 ⁃ Materialized views
What is missing?
The functionality in stargate outlined in this issue from Stargate.
Why do we need it?
Toggle materialized views in K8ssandra-operated Cassandra clusters.
┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-1607
┆priority: Medium
This should already be possible today. It is already possible to supply a cassandra.yaml for Stargate via a ConfigMap. We do this in e2e tests. Note the cassandraConfigMapRef field here.
Feel free to reopen if this doesn't address your needs.
|
gharchive/issue
| 2022-06-29T08:10:03 |
2025-04-01T04:34:44.191154
|
{
"authors": [
"caniko",
"jsanda"
],
"repo": "k8ssandra/k8ssandra-operator",
"url": "https://github.com/k8ssandra/k8ssandra-operator/issues/590",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
941805
|
Intent open account and check mail
Please review it.
Thx for quick answer ...
Did you mean https://github.com/k9mail/k-9/blob/master/src/com/fsck/k9/remotecontrol/K9RemoteControl.java ? I found a comment: "not yet implemented".
Or is there just another way to trigger a sync?
Oh. That's disappointing :/
Oh, really it is. Is the next release already planned? By when would I have to commit a fix for it to be included?
|
gharchive/issue
| 2011-05-23T15:04:35 |
2025-04-01T04:34:44.200549
|
{
"authors": [
"jca02266",
"mad654",
"obra"
],
"repo": "k9mail/k-9",
"url": "https://github.com/k9mail/k-9/issues/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
567687919
|
Bug: Landing page is not picking up updated stack URL
Describe the bug
I replaced the stack hub URL with a custom stack hub, but the landing page still shows the old stack hub URLs for Appsody and Codewind.
Expected behavior
It should update the URLs to the new stack hub.
Actual behavior
Steps to reproduce the bug
Replace https://github.com/kabanero-io/collections/releases/download/0.6.0-rc.1/kabanero-index.yaml with https://github.com/mtamboli/kab-stack/releases/download/1.0/kab-stack-index.yaml by running oc edit kabanero -n kabanero
The expectation is that the landing page will update the URLs for Appsody and Codewind, but it does not.
Environment where the bug occurred
[ ] Firefox (Desktop)
[ ] Safari (Desktop)
[ ] Chrome (Desktop)
[ ] Internet Explorer (Desktop)
[ ] iOS (Mobile)
[ ] Android (Mobile)
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
@marikaj123 this is stop ship for kabanero 0.6.0
@alohr51 what landing page release this fix is in? Is that part of kabanero rc3?
|
gharchive/issue
| 2020-02-19T16:41:44 |
2025-04-01T04:34:44.210370
|
{
"authors": [
"mtamboli"
],
"repo": "kabanero-io/kabanero-landing",
"url": "https://github.com/kabanero-io/kabanero-landing/issues/158",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
524378723
|
Collection hub build pipeline prototype
We'd like to deliver an example collection hub build pipeline to simplify collection lifecycle management. Eventually, we'll teach Kabanero to react to GitHub events to drive this pipeline.
The same issue that @stephenkinder raised on the Collections board was closed by him with this comment: https://github.com/kabanero-io/collections/issues/171#issuecomment-560597448
Should this issue also be closed until it is decided what needs to be done? I don't want time and effort to be wasted doing this if it is deemed not needed at the moment.
|
gharchive/issue
| 2019-11-18T13:47:15 |
2025-04-01T04:34:44.212062
|
{
"authors": [
"groeges",
"stephenkinder"
],
"repo": "kabanero-io/kabanero-pipelines",
"url": "https://github.com/kabanero-io/kabanero-pipelines/issues/133",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
755938931
|
[Question] Querying executions immediately after placing a market order
Immediately after placing a market order for futures, I want to obtain the HoldID and plug it into the parameters of the closing (repayment) order.
Is there a good way to extract the HoldID from the results of the order/execution inquiry API?
Should I just pick the entry with the most recent RecvTime?
Also, the price at which the order was executed is not the Price field of this API, is it?
First, let me answer your questions.
To get the execution price, extract the detail data (Details) of the order/execution inquiry where RecType=8 (execution); the Price in that entry is the execution price.
If you want to specify a particular position when closing, you need to follow the documented procedure.
However, we are currently considering releasing filtering functionality for the order/execution inquiry and the position inquiry (planned for December).
Once that feature is implemented, you will also be able to extract data from a specified time onwards, which should make the action you want easier to achieve.
I hadn't noticed the detail data.
Thank you.
Is the HoldID used when specifying a position the ExecutionID found in this detail data?
Or is it the order number at the top of the execution inquiry response?
I'm not sure if this will help, but here is a comment.
I wrote some code that gets the ID after checking positions, so I'll share it.
The idea is to embed CloseOrders inside the positions call from the sample code.
import urllib.request
import urllib.parse
import json
import pprint
url = 'http://localhost:18080/kabusapi/positions'
params = { 'product': 0, }
req = urllib.request.Request('{}?{}'.format(url, urllib.parse.urlencode(params)), method='GET')
req.add_header('Content-Type', 'application/json')
req.add_header('X-API-KEY', xxxxx)
try:
    with urllib.request.urlopen(req) as res:
        print(res.status, res.reason)
        for header in res.getheaders():
            print(header)
        print()
        content = json.loads(res.read())
        x1 = 1
        x2 = 1
        x3 = 1
        x4 = 1
        # content is the JSON array returned by /positions
        for i in range(len(content)):
            if content[i]["Symbol"] == "xxxx":
                x1 = content[i]["LeavesQty"]
                x2 = content[i]["ExecutionID"]
            if content[i]["Symbol"] == "xxxx":
                x3 = content[i]["LeavesQty"]
                x4 = content[i]["ExecutionID"]
except urllib.error.HTTPError as e:
    print(e)
    pprint.pprint(json.loads(e.read()))
except Exception as e:
    print(e)
After this, if you write the closing-order command and specify the position with x1-x4, I think you can place the closing order.
However, this approach doesn't work when there are multiple ExecutionIDs, so I decided to extract only the remaining executed quantity and place the order in the ClosePositionOrder format from the sample code.
Referring to the order details mentioned in the comment above is more reliable.
However, since it isn't clear whether the order has been executed, I make sure to check the remaining executed balance.
I prototyped a function to extract today's execution data from the results of the execution inquiry.
Calling this function returns a list.
For your reference.
def syokai(tkey, jdate):
    url = 'http://localhost:18080/kabusapi/orders'
    skai = []
    params = { 'product': 3, }
    req = urllib.request.Request('{}?{}'.format(url, urllib.parse.urlencode(params)), method='GET')
    req.add_header('Content-Type', 'application/json')
    req.add_header('X-API-KEY', tkey)
    try:
        with urllib.request.urlopen(req) as res:
            #print(res.status, res.reason)
            for header in res.getheaders():
                pass
                #print(header)
            #print()
            content = json.loads(res.read())
            #pprint.pprint(content)
            for i in range(len(content)):
                sakiid = content[i]['ID']
                sakistate = content[i]['State']
                sakiordtype = content[i]['OrdType']
                sakiprice = content[i]['Price']
                sakiside = content[i]['Side']
                cm = content[i]['CashMargin']
                sakiq = content[i]['OrderQty']
                sakirtime = content[i]['RecvTime']
                detail = content[i]['Details']
                #print (detail)
                for k in range(len(detail)):
                    edate = detail[k]['ExecutionDay']
                    print("edate:", edate)
                    eqty = detail[k]['Qty']
                    eid = detail[k]['ExecutionID']
                    eprice = detail[k]['Price']
                    if (detail[k]['RecType'] == 8 and edate == jdate):
                        print(sakiid, sakistate, sakiordtype, sakiprice, sakiside, sakiq, sakirtime, cm, eid, edate, eprice, eqty)
                        skai.append([sakiid, sakistate, sakiordtype, sakiprice, sakiside, sakiq, sakirtime, cm, eid, edate, eprice, eqty])
            result = skai
            return result
    except urllib.error.HTTPError as e:
        print(e)
        content = json.loads(e.read())
        pprint.pprint(content)
    except Exception as e:
        print(e)
Is the closing position ID that I need to specify the detail[k]['ExecutionID'] from the execution inquiry?
Also, I'm running into a format error;
could you show me an example of the correct format?
The closing position ID is as you described.
As for the format error, what is the error return code?
For the format, you simply set the ID mentioned above as-is.
Has this been resolved?
Has this been resolved?
Thank you. It has been resolved.
On Thu, Jan 28, 2021 at 3:37, yasuyuki-nakazawa notifications@github.com wrote:
Has this been resolved?
|
gharchive/issue
| 2020-12-03T07:18:19 |
2025-04-01T04:34:44.231252
|
{
"authors": [
"higenobu",
"nobuodesu",
"yasuyuki-nakazawa"
],
"repo": "kabucom/kabusapi",
"url": "https://github.com/kabucom/kabusapi/issues/192",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1469751495
|
[@kadena/client] add a way to generate a apiHost based on a template and the tx contents
PactCommand.setApiHost(apiHostTemplate: string)
apiHostTemplate is a string that contains {apiVersion} {networkId} and {chainId} and will be used when .local() .send() and .poll() are called.
tx.setApiHost("https://api.testnet.chainweb.com/chainweb/{apiVersion}/{networkId}/chain/{chainId}/pact")
We probably want a Client class that we can pass to the Pact instance so all derived ICommandBuilder instances have the same client.
Pseudo implementation:
import { createClient, Pact } from '@kadena/client';
const testnetPact = new Pact(createClient("https://api.testnet.chainweb.com/chainweb/{apiVersion}/{networkId}/chain/{chainId}/pact"));
export const Pact = testnetPact;
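For illustration, here is a minimal sketch of how such a template could be expanded before .local(), .send() or .poll() issue their requests; expandApiHost and the example networkId/chainId values are placeholders for this proposal, not part of the published @kadena/client API:
// Hypothetical helper: fills the {apiVersion}/{networkId}/{chainId} placeholders.
interface ApiHostParams {
  apiVersion: string;
  networkId: string;
  chainId: string;
}
function expandApiHost(template: string, params: ApiHostParams): string {
  return template
    .replace('{apiVersion}', params.apiVersion)
    .replace('{networkId}', params.networkId)
    .replace('{chainId}', params.chainId);
}
// Example (placeholder values):
const url = expandApiHost(
  'https://api.testnet.chainweb.com/chainweb/{apiVersion}/{networkId}/chain/{chainId}/pact',
  { apiVersion: '0.0', networkId: 'testnet04', chainId: '1' },
);
// -> https://api.testnet.chainweb.com/chainweb/0.0/testnet04/chain/1/pact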
|
gharchive/issue
| 2022-11-30T14:42:18 |
2025-04-01T04:34:44.237042
|
{
"authors": [
"alber70g"
],
"repo": "kadena-community/kadena.js",
"url": "https://github.com/kadena-community/kadena.js/issues/109",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
154376617
|
Meteor.user() isn't ready in triggersEnter function when user clicks on bookmarked link
I want to turn off access to a bunch of admin routes if the user is not logged in or doesn't have sufficient privileges. I've got it working in most instances using a FlowRouter group and triggersEnter. However, I'm having a problem with the code below.
// Admin routes group
const adminRoutes = FlowRouter.group({
  prefix: '/admin',
  name: 'admin',
  triggersEnter: [(context, redirect) => {
    if (!Meteor.user() || Meteor.user().username !== 'admin') {
      redirect('/not-authorized');
    }
  }],
});
It works perfectly in all situations except when:
The client refreshes, or
The client goes to the route by clicking a saved link (i.e. a bookmark or an email link, etc.)
The problem seems to be that Meteor.user() isn't available yet when the trigger runs. I've experimented with different Tracker autorun options but they don't seem to work.
I've created a repro here: https://github.com/proehlen/admin-pages-route-group-redirect.
Relevant code is in /imports/startup/client/routing.js starting at line 28.
That's why Meteor.loggingIn() exists. By checking this you can decide whether it's time to check the username or not, so the page can be in a loading state while Meteor.loggingIn() is true.
It can be done like this; first, some helpers:
authInProcess: function() {
  return Meteor.loggingIn();
},
canShow: function() {
  return (Meteor.user() && Meteor.user().username === 'admin');
}
Then in templates (e.g. blaze) :
{{#if authInProcess}}
<p>Loading ...</p>
{{else}}
{{#if canShow}}
<p>Welcome admin :)</p>
{{else}}
<p>You are not authorized to view this page.</p>
{{/if}}
{{/if}}
You can also do the same thing by waiting for Meteor.loggingIn() and then initializing FlowRouter, which is not a good idea.
Ok thanks, but that leaves me having to repeat that helper code and template code in as many places as it's needed. I want to close out 5 or 10 or 20 admin routes with a few lines of code in the trigger as it's the most direct (and IMO, appropriate) place to do it.
I don't get it, why do you need to repeat that code? Template helpers can be defined globally:
Template.registerHelper
Or even better, you can define them for a parent template which contains the layout of the admin route; for the template itself, you just need to put those lines in a parent template and then include it everywhere you want. So what is your problem exactly?
Look, the code in the original issue works perfectly except under the circumstances I outlined: getting there from a link or by typing the URL in the browser.
I know how to use global helpers. I don't want to repeat a call to a helper in every admin template. I don't want to have to remember to tell the next junior dev who adds an admin route not to forget to add the call. I don't want to be worried about remembering to check whether he did or not.
The above method works whether there's 1 admin route or 5 today and whether there will be 5 or 50 next year without writing a single extra line of code. I'd like it to work under all circumstances. I know how to come up with alternatives, I'd appreciate it if you could confine your suggestions to the problem as outlined.
+1
+1.
I have the same problem too, except that my template is using React instead of Blaze which is not reactive with Meteor. It seems to me that Meteor.loggingIn() always returns true in triggersEnter. Is there a way I can get Meteor.user() in FlowRouter?
+1
There is a pretty simple solution to this which I'm using:
// Logged In Routes Group
loggedInRoutes = exposedRoutes.group({
prefix: '',
name: 'loggedIn',
triggersEnter: [function(context, redirect) {
if(!Meteor.loggingIn() && !Meteor.userId()){
let route = FlowRouter.current();
if(route.route.name !== "Login"){
Session.set("redirectAfterLogin",route.path);
redirect('/login');
}
}
}]
});
// Admin Routes Group (extends Logged In Routes)
adminRoutes = loggedInRoutes.group({
prefix: '/admin',
name: 'admin'
});
// Example Admin Route
adminRoutes.route('/dashboard', {
name: "AdminDashboard",
action: function () {
BlazeLayout.render('AdminLayout', {topbar: 'AdminTopbar',content: 'AdminDashboard'});
}
});
// AdminLayout is a layout template, but it is still a template so use the onCreated hook
Template.AdminLayout.onCreated(function() {
const instance = this;
instance.autorun(function () {
var subscription = instance.subscribe('userData');
if (subscription.ready()) {
/**
* Do your check here for access
* E.g. if(user doesn't have access){FlowRouter.go("Some route for non-admins");}
*/
}
});
});
// AdminLayout Template
<template name="AdminLayout">
{{#if Template.subscriptionsReady}}
{{>Template.dynamic template=topbar}}
{{>Template.dynamic template=content}}
{{else}}
{{> Loading}}
{{/if}}
</template>
// Publish userData
Meteor.publish("userData",function(){
if(this.userId){
return Meteor.users.find({_id : this.userId});
}else{
this.ready();
}
});
This removes subscriptions from the Router making it do what it's supposed to do and nothing more. Since all of your admin templates will use the same layout (or at least they probably should) the above code will work for any new admin routes you add to an app so long as the layout is 'AdminLayout'.
Because Admin Routes group extends LoggedIn Routes group the triggersEnter get used automatically. So there is no need to check for Meteor.loggingIn() or Meteor.userId() in the Admin Routes group.
Hope that helps :)
I've moved on since the issue was raised to another platform for the project that prompted it so closing this issue. Thanks @Allerion for posting your work around. I'm sure it will come in handy for others facing this problem.
|
gharchive/issue
| 2016-05-12T01:14:38 |
2025-04-01T04:34:44.246342
|
{
"authors": [
"Allerion",
"hectoroso",
"nilsi",
"proehlen",
"rezaxdi",
"yoonwaiyan"
],
"repo": "kadirahq/flow-router",
"url": "https://github.com/kadirahq/flow-router/issues/625",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
125371086
|
Add composeAll utility
With that we can do things like this:
export default composeAll(
composeWithTracker(composerFn1),
composeWithObservables(composerFn2),
)(Post);
Added with v1.1.0
|
gharchive/issue
| 2016-01-07T10:54:35 |
2025-04-01T04:34:44.248029
|
{
"authors": [
"arunoda"
],
"repo": "kadirahq/react-komposer",
"url": "https://github.com/kadirahq/react-komposer/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2707492444
|
Smaller most appreciated feed
Hi, I would like to include the appreciated rss feed in my reader but it seems to have too much content. Would it make sense to have a smaller feed with eg the N most appreciated posts of the day? Would you consider a PR with such a feature?
The appreciated feed usually has only 20-30 items. What do you mean by too much content?
My bad, I misinterpreted the amount after scrolling a few pages in my reader. Closing this.
|
gharchive/issue
| 2024-11-30T15:18:43 |
2025-04-01T04:34:44.249975
|
{
"authors": [
"facundoolano",
"vprelovac"
],
"repo": "kagisearch/smallweb",
"url": "https://github.com/kagisearch/smallweb/issues/320",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2432358329
|
[Bug] cr-bot posts too many messages
What is the problem?
When a PR has a large number of commits, two problems occurred.
cr-bot reviews every single commit.
It attaches too much meaningless text (e.g. "This looks fine overall!" and so on).
How do you reproduce it?
Open a PR with a large number of commits.
Describe the expected behavior.
As a first step, improve the prompt so that the review contains only the most important content.
When there are many commits, cr-bot should react only to the important code changes.
How quickly does this need to be resolved?
Really urgent (the next task cannot proceed until this is resolved)
There are several possible solutions.
Modify the cr-bot library code and define our own trigger so that reviews are only posted on the code we want. (But then we would have to host the review server ourselves - effectively Jenkins.)
Trigger cr-bot via a label on the review. (However, reviews could then no longer be split per commit within a single PR, which dilutes the point of the review.)
Remove the review bot.
|
gharchive/issue
| 2024-07-26T14:25:11 |
2025-04-01T04:34:44.299772
|
{
"authors": [
"JaeJunday"
],
"repo": "kakaotech-25/moheng",
"url": "https://github.com/kakaotech-25/moheng/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2341249687
|
Check for excess generic arguments
Attempting to address #149
Awesome, thanks for the addition! Tests are great and passing.
The final part is adding a position to the new error. You should be able to create a range for the excess type parameter diagnostic using Span::union in something like:
let first_excess_type_parameter = &call_site_type_arguments[expected_parameters_length];
let last_excess_type_parameter = call_site_type_arguments.last();
let error_position = first_excess_type_parameter.1.union(&last_excess_type_parameter.1);
Then in diagnostics you can change the TypeCheckError => Diagnostic to build a Diagnostic::Position
After that change it will be good to merge! Another two to the test count!
Do you think it would make sense to have a specific error type for when the function isn't generic in the first place? Excess type argument makes sense if you provide more than you should, but I feel like something like Cannot pass a type argument to a non-generic function might make more sense? Seems straightforward enough to account for that since I'm already returning this error in two spots in this function for each situation.
something like Cannot pass a type argument to a non-generic function might make more sense
I wondered about that. I checked and TSC doesn't actually have a specific error for that. I think that would be a good idea if it is not too hard to add!
Also in the other diagnostic could the count be printed (it is being calculated but not put in the diagnostic). So "n excess type arguments".
After that good to merge!
Would you prefer to match closer to what tsc does? e.g.
Expected {x} type arguments, but got {y}.
not sure 😅 I like "n excess ..." as it is cleaner. But maybe the expected count would be helpful? Up to you :)
not sure 😅 I like "n excess ..." as it is cleaner. But maybe the expected count would be helpful? Up to you :)
Haha it's your project, it's your call :) I personally feel like knowing how many types the function could accept would be helpful so I don't need to do that math in my head. I'm also inclined to say that using expected x, got y can be a good choice because it feels equally as readable whether the function accepts any parameters or not, and can also require just one error type if that's preferred.
BUT I do think it could be useful to have distinct errors for each, and the non-generic function case could still benefit from having a specific message (though the message could also technically be inferred from the count fields, to limit the match clauses required, since it is technically the same problem) 🤔
Okay I updated it to do both and conditionally use a more specific error message when the function is not generic. If you're not a fan, I'm happy to pull that back out and simplify it either direction.
rustfmt didn't like diagnostics.rs for some reason? Normally something to do with a macro like format. Have cleaned it up manually.
Have made a quick tweak to avoid (s), other than that good to go. Thanks for the additional two tests!
|
gharchive/pull-request
| 2024-06-07T22:36:09 |
2025-04-01T04:34:44.379648
|
{
"authors": [
"josh-degraw",
"kaleidawave"
],
"repo": "kaleidawave/ezno",
"url": "https://github.com/kaleidawave/ezno/pull/160",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
636141504
|
Setting up an Ubuntu workstation - Kali's Blog | Kali Blog
https://kalicn.github.io/2020/05/30/搭建Ubuntu工作环境/
Every failure is leading towards success.
Where is that friend of mine? (insert grin manually)
I'll post a picture in a bit~
|
gharchive/issue
| 2020-06-10T10:42:32 |
2025-04-01T04:34:44.384292
|
{
"authors": [
"kalicn"
],
"repo": "kalicn/kalicn.github.io",
"url": "https://github.com/kalicn/kalicn.github.io/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2530233881
|
colspan
show table.cell.where(y: 0): it => {
table.cell(colspan: 2)[#it]
}
it doesn't work
Could you elaborate and provide an example?
insert this code into your template:
table.header[#lorem(30)],
insert this code into your template:
table.header[#lorem(30)],
I think that's because the table.header doesn't support the colspan property. I'll try to fix it next week. In the meantime you can copy the template to your code and replace the table.header with table.cell yourself.
|
gharchive/issue
| 2024-09-17T07:02:05 |
2025-04-01T04:34:44.497096
|
{
"authors": [
"OlegRozman",
"kamack38"
],
"repo": "kamack38/cram-snap",
"url": "https://github.com/kamack38/cram-snap/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
252097223
|
Possibly incompatible with Google's Room persistence library for Android?
Room attempts to match a model's fields with valid SQLite data types. It will also disregard fields with an @Ignore annotation. Otherwise, a type converter must be supplied where type affinity cannot be made by the compiler.
My models subclass Resource and ResourceIdentifier in turn. I think without annotations or a type converter, those super class fields might trigger the following at compile time:
error: Cannot figure out how to save this field into database. You can consider adding a type converter for it.
I'm not certain of the issue so I continue trying to understand it but I wanted to share the issue.
This has nothing to do with the library. Implementing type converters for every data model is terrible, but it is the only solution.
I opened the issue to see if it would help.
At the time of our inquiry, I moved forward by separating DTO and DAO. Now, the ability to ignore superclass fields, released in Room v2.1.0-alpha01, looks a lot like what we requested.
Separate models was appropriate for my case but I wanted to update our issue for future reference.
|
gharchive/issue
| 2017-08-22T21:38:05 |
2025-04-01T04:34:44.533590
|
{
"authors": [
"es0329",
"kamikat"
],
"repo": "kamikat/moshi-jsonapi",
"url": "https://github.com/kamikat/moshi-jsonapi/issues/68",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
234816149
|
Fixed graphical issue with many items in batch
Fixed a graphical issue related to having a lot of items in a batch.
Thank you for your contribution :)
|
gharchive/pull-request
| 2017-06-09T12:47:03 |
2025-04-01T04:34:44.535948
|
{
"authors": [
"chemaxa",
"kamsar"
],
"repo": "kamsar/Unicorn",
"url": "https://github.com/kamsar/Unicorn/pull/232",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
714080230
|
Feature request: on existing wikidata item, add a property's value
Hi, following the existing README.md, I may suggest new feature as perceived to be practical cases.
Initial state
On wikidata (or any wikibase), and item exists. Known by name or Qid.
On this wikibase, a property exist. Known to us by Pid.
On our local script we have this property's value (but it is not on wikidata)
Action
We want to POST this value to the wikidata item, creating or updating the property.
:heart:
|
gharchive/issue
| 2020-10-03T12:31:04 |
2025-04-01T04:34:44.544697
|
{
"authors": [
"hugolpz"
],
"repo": "kanasimi/wikiapi",
"url": "https://github.com/kanasimi/wikiapi/issues/7",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1124764973
|
Create the report for #18
Creating this before I forget.
From next time on, it might be worth adding this to the template.
I forgot after all, so closing.
|
gharchive/issue
| 2022-02-05T01:26:55 |
2025-04-01T04:34:44.545572
|
{
"authors": [
"yu-kgr"
],
"repo": "kanazawa-js/kanazawa-js.github.io",
"url": "https://github.com/kanazawa-js/kanazawa-js.github.io/issues/78",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
135308731
|
Fix the gitlab icon path
In ./Template/project/integrations.php, replace line 1:
<h3><img src="<?= $this->url->dir() ?>plugins/GitlabWebhook/gitlab-icon.png"/> <?= t('Gitlab webhooks') ?></h3>
with:
<h3><img src="<?php $this->url->dir() ?>plugins/GitlabWebhook/gitlab-icon.png"/> <?= t('Gitlab webhooks') ?></h3>
Without this fix, the path to the image is:
<img src="//plugins/GitlabWebhook/gitlab-icon.png">
After the fix:
<img src="/plugins/GitlabWebhook/gitlab-icon.png">
PHP 5.6.17-0+deb8u1 (cli) (built: Jan 13 2016 09:10:12)
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
This issue have been already fixed, you can download the version 1.0.2 of this plugin.
|
gharchive/issue
| 2016-02-22T04:01:40 |
2025-04-01T04:34:44.570739
|
{
"authors": [
"fguillot",
"xila76"
],
"repo": "kanboard/plugin-gitlab-webhook",
"url": "https://github.com/kanboard/plugin-gitlab-webhook/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
858783423
|
Android white navbar below app navbar
There is a very big contrast between the app navbar and Android's own navbar on my phone. Not sure how we want to solve this, but one way could be to set the background of Android's navbar to black.
I'm not sure what abilities we have to color objects that are not part of the app's own UI. But I agree, the big contrast with the app's navbar is a bit much here. Perhaps another solution would be to use a different background color for the main part of the app when the device theme is "light".
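If the app happens to be Expo-managed (an assumption on my part), one possible sketch for darkening the system navigation bar would be something like the following; a bare React Native setup would need a different package or a styles.xml change instead:
// Sketch only: darken Android's system navigation bar so it no longer clashes
// with the app's own dark navbar. Assumes an Expo-managed app with the
// expo-navigation-bar package installed; not applicable on iOS.
import { Platform } from 'react-native';
import * as NavigationBar from 'expo-navigation-bar';
export async function matchSystemNavBar(): Promise<void> {
  if (Platform.OS !== 'android') {
    return;
  }
  await NavigationBar.setBackgroundColorAsync('#000000'); // black background
  await NavigationBar.setButtonStyleAsync('light');       // light icons on the dark bar
}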
|
gharchive/issue
| 2021-04-15T11:25:15 |
2025-04-01T04:34:44.572384
|
{
"authors": [
"Dolvur",
"Neathan"
],
"repo": "kandidatg11/behaviour-activation",
"url": "https://github.com/kandidatg11/behaviour-activation/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1369120279
|
Week 3 - Backend Development
Score 2.9/3
Graded by: Evan Yang
Points will be awarded on these weighted elements
[ ] Lesson on ifs and expressions
[ ] Deployment on AWS
[ ] Focus / Habits
Score 2.9/3
Graded by: Evan Yang
Karthik worked intensively on his lesson on if statements and blew me away with his extremely talented blocks of code.
Karthik achieved deployment and worked through struggles to meet the standard that had been set.
Karthik's focus and habits showed creative and distinct goals, and his notes on the Yale video were extremely detailed.
|
gharchive/issue
| 2022-09-12T00:43:00 |
2025-04-01T04:34:44.674695
|
{
"authors": [
"EvanYang24",
"kar722"
],
"repo": "kar722/fastpages",
"url": "https://github.com/kar722/fastpages/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1415804641
|
Accelerate scroll on multiple key presses
When the user repeats the same key before the scroll is done, https://github.com/psliwka/vim-smoothie scrolls the same number of lines but within the time of a single scroll instead of the time of two scrolls. A similar feature would be nice in neoscroll as well.
Scenario 1:
User hit <c-d>
Neoscroll scrolls 10 lines in 500 ms
User hit <c-d> once again after the scroll is completed
Total lines scrolled is 20 lines and it took 1000 ms
Scenario 2:
User hit <c-d>
User hit <c-d> once again when only 5 lines has scrolled within 250 ms (before scroll is completed)
Neoscroll scrolls the remaining 15 lines in 500 ms
Total lines scrolled is 20 lines and it took 750 ms
This might conflict with the "stop on key release" feature. However, I can see some value in this. I think it makes more sense to speed the animation on the 3rd <C-d> rather than on the 2nd as it could mess up scrolling animations with easing functions (after the 2nd <C-d> the scrolling is constant at max speed).
This is a non-trivial change so I'll need to find some time for it.
|
gharchive/issue
| 2022-10-20T01:56:38 |
2025-04-01T04:34:44.700018
|
{
"authors": [
"karb94",
"s1n7ax"
],
"repo": "karb94/neoscroll.nvim",
"url": "https://github.com/karb94/neoscroll.nvim/issues/70",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
269197124
|
New release
It's been over a year since the last release and master contains a number of important fixes and improvements. Is the current tip in a state to be released as 1.0.9?
Done.
Wow, that was fast. Thanks!
|
gharchive/issue
| 2017-10-27T18:33:27 |
2025-04-01T04:34:44.701416
|
{
"authors": [
"IngmarStein",
"kardianos"
],
"repo": "kardianos/govendor",
"url": "https://github.com/kardianos/govendor/issues/368",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
252535716
|
Add multi-layered cv
Such as:
http://www.wix.com/website-template/view/html/1676?originUrl=http%3A%2F%2Fwww.wix.com%2Fwebsite%2Ftemplates%2Fhtml%2Fportfolio-cv%2F1&bookName=create-master-new&galleryDocIndex=0&category=portfolio-cv&metaSiteId=
theme "layer", style "paged"
|
gharchive/issue
| 2017-08-24T09:09:32 |
2025-04-01T04:34:44.702703
|
{
"authors": [
"kariminf"
],
"repo": "kariminf/cv.json",
"url": "https://github.com/kariminf/cv.json/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
116870921
|
Is there a way to run a subset of tests?
I have two directories with tests--let's call them fast and slow. I want to run fast under some circumstances and slow under other circumstances. Is there any way to configure Karma to do this, without restarting the Karma server between test runs?
I'm not sure, but you might take a look at https://github.com/karma-runner/karma/issues/1507
The --grep=<pattern> trick will do, and it doesn't actually require a restart. Thanks!
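For reference, a minimal sketch of the config side of that trick, assuming the karma-mocha adapter (karma-jasmine has a similar mechanism); the pattern value is a placeholder:
// karma.conf.ts (sketch): restrict the run to tests whose names match a pattern.
// With a karma server already running, `karma run -- --grep=<pattern>` achieves
// the same without restarting the server.
module.exports = (config: any) => {
  config.set({
    frameworks: ['mocha'],
    files: ['test/fast/**/*.spec.js', 'test/slow/**/*.spec.js'],
    client: {
      mocha: {
        grep: process.env.TEST_GREP || '', // e.g. TEST_GREP=fast npm test
      },
    },
  });
};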
but still, is it possible to do this through the public API ?
|
gharchive/issue
| 2015-11-13T23:02:46 |
2025-04-01T04:34:44.717551
|
{
"authors": [
"GBarthos",
"jamesshore",
"jcrben"
],
"repo": "karma-runner/karma",
"url": "https://github.com/karma-runner/karma/issues/1709",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1457980115
|
User Data Uploads [Epic 5 (Discovery)]
Note that discovery tasks will be completed by order of priority in line with discovery task requirements. Discovery may include documentation and/ or implementation of feature design.
Expectation:
Allow users to upload their own data sources
Priority:
Medium
Dependencies:
None
Description:
Users should be able to provide data sources which can be used in the analysis and filtering operations, such as transmission lines etc.
Scope:
The scope of this process will depend on the results of the discovery process.
Out of scope:
This implementation is expected to be ephemeral and operate in a client side context, whereby no state is stored on the server or implementation of user accounts etc.
Requirements:
None
Success Criteria:
Users are able to effectively use their own data inputs for analysis
Hi @ClaraIV @gclapp1
I believe this ticket needs to be more clarified and split into different tickets according to your feedback.
I have done some thoughts about this one:
What about the ability to use an area's boundaries provided by the user as a polygon: for a start, the user can provide a GeoJSON containing the boundaries of one or multiple areas in a single JSON file, and the app will calculate the zone score and LCOE according to the filters and weights... We can afterwards add UI for drawing a polygon on the map to avoid the necessity of having a GeoJSON file (although it will still be useful to have the area selection as a JSON file).
I was thinking about where exactly the layers you want to use are hosted, since layers are often huge in size and uploading a layer somewhere wouldn't be practical in most cases. If you have specific layers you want to use with rezoning, let me know about them so I can assess how much work has to be done to support these layers.
This refers mainly to users having the option to upload their own transmission lines and run the analysis; therefore it's focused on light, vector files (perhaps we can limit it to just this one for now, to see how it goes) and the layer does not need to be stored, just used for that single run of the analysis to enable users to obtain more accurate results. We actually want to avoid saving/hosting any of the user data, as it is a complicated matter.
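As a concrete sketch of that ephemeral, client-side flavour (the names and the validation are illustrative, not a committed design):
// Sketch: read a user-supplied GeoJSON file entirely in the browser and keep it
// in memory for a single analysis run; nothing is uploaded or persisted.
interface FeatureCollection {
  type: 'FeatureCollection';
  features: unknown[];
}
interface UserLayer {
  name: string;
  geojson: FeatureCollection;
}
async function loadUserLayer(file: File): Promise<UserLayer> {
  const text = await file.text();
  const parsed = JSON.parse(text);
  if (parsed?.type !== 'FeatureCollection' || !Array.isArray(parsed.features)) {
    throw new Error('Expected a GeoJSON FeatureCollection');
  }
  return { name: file.name, geojson: parsed };
}
// e.g. wire this to <input type="file" accept=".geojson,application/geo+json">
// and pass layer.geojson to the filtering / zone-score step for this run only.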
|
gharchive/issue
| 2022-11-21T13:46:48 |
2025-04-01T04:34:44.756074
|
{
"authors": [
"ClaraIV",
"NEDJIMAbelgacem"
],
"repo": "kartoza/rezoning-2-project",
"url": "https://github.com/kartoza/rezoning-2-project/issues/11",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2317950195
|
Open PR to external-dns and add provider to list of webhooks
https://github.com/kubernetes-sigs/external-dns#new-providers
Done in https://github.com/kubernetes-sigs/external-dns/pull/4504
|
gharchive/issue
| 2024-05-26T21:57:33 |
2025-04-01T04:34:44.766932
|
{
"authors": [
"onedr0p"
],
"repo": "kashalls/external-dns-unifi-webhook",
"url": "https://github.com/kashalls/external-dns-unifi-webhook/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
412109581
|
Update firecracker in versions.yaml
Description of problem
According to the data grabbed by the version check tool, our firecracker is out of date - should we update that?
@mcastelino @sboeuf
Num: Item Current Ver Upstream Ver
PASSES:
0: assets.hypervisor.nemu-ovmf.uscan-url 0.6 0.6
1: externals.kubernetes.uscan-url 1.13.3-00 1.13.3
FAILURES:
0: assets.hypervisor.firecracker.uscan-url 0.12.0 0.14.0
1: assets.hypervisor.nemu.uscan-url 2018-11-29 2018-12-17
2: assets.hypervisor.qemu.uscan-url 2.11 3.1.0
3: assets.kernel.uscan-url 4.14.67 4.14.101
4: externals.gometalinter.uscan-url 2.0.5 3.0.0
5: externals.openshift.uscan-url 3.10.0 3.11.0
6: externals.runc.uscan-url 1.0.0-rc5 1.0.0-rc6
7: languages.golang.uscan-url 1.10.4 1.11.5
8: specs.oci.uscan-url 1.0.0-rc5 1.0.1
PASSES: 2
FAILS: 9
Dropping in deference to #1559
|
gharchive/issue
| 2019-02-19T20:27:35 |
2025-04-01T04:34:44.813360
|
{
"authors": [
"grahamwhaley"
],
"repo": "kata-containers/runtime",
"url": "https://github.com/kata-containers/runtime/issues/1252",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
387405114
|
virtcontainers: Return the appropriate container status
When our runtime is asked for the container status, we also handle
the scenario where the container is stopped if the shim process for
that container on the host has terminated.
In the current implementation, we retrieve the container status
before stopping the container, causing a wrong status to be returned.
The wait for the original go-routine's completion was done in a defer within the caller of statusContainers(), resulting in statusContainer() returning the pre-stop value.
This bug was first observed when updating to Docker v18.09/containerd v1.2.0. With the current implementation, containerd-shim receives the TaskExit event when it detects that kata-shim is terminating. When it then checks the container state, however, it does not get the expected "stopped" value.
The following commit resolves the described issue by simplifying the
locking used around the status container calls. Originally
StatusContainer would request a read lock. If we needed to update the
container status in statusContainer, we'd start a go-routine which
would request a read-write lock, waiting for the original read lock to
be released. Can't imagine a bug could linger in this logic. We now
just request a read-write lock in the caller (StatusContainer),
skipping the need for a separate go-routine and defer. This greatly
simplifies the logic, and removes the original bug.
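A simplified sketch of the resulting pattern is below. The type names, fields and the "stopped" state string are invented for illustration and are not the actual virtcontainers API; the point is only that the refresh and the read now happen under a single read-write lock taken in the caller.
package status

import "sync"

// ContainerStatus, Container and Sandbox are stand-ins for the real types.
type ContainerStatus struct {
    State string
}

type Container struct {
    status      ContainerStatus
    shimRunning bool
}

type Sandbox struct {
    sync.RWMutex
    containers map[string]*Container
}

// StatusContainer takes the read-write lock up front, so the status can be
// both refreshed and read under one lock, with no separate go-routine or
// deferred wait, and no window where a caller can observe the pre-stop state.
func (s *Sandbox) StatusContainer(id string) (ContainerStatus, bool) {
    s.Lock()
    defer s.Unlock()

    c, ok := s.containers[id]
    if !ok {
        return ContainerStatus{}, false
    }

    // If the shim for this container has terminated, mark the container
    // stopped before reporting, so callers handling TaskExit see the
    // expected state.
    if !c.shimRunning && c.status.State != "stopped" {
        c.status.State = "stopped"
    }

    return c.status, true
}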
Fixes #926
Signed-off-by: Sebastien Boeuf sebastien.boeuf@intel.com
/test
lgtm
/retest
lgtm
/retest
Ping @sboeuf any updates? thx
/retest
@sboeuf - this is still alive, I presume? I guess you are busy with other stuff - just FMI, is there any ETA on getting back to this? I'm loath to re-trigger the CIs if we know it will be reworked...
@grahamwhaley is it still relevant? Because the patch is already in master, but this PR is specific to backport it to stable 1.3.
Ah, I didn't spot the v1.3 branch @sboeuf - right, I'm pretty sure this will be stale then.
Speaking of which, let me check what our docs say about currently supported versions - I suspect we support v1.4 and v1.5, but I need to check the defined workflow and timelines etc.
/cc @egernst @gnawux
@jodh-intel @egernst should we close this PR, given it is a backport to stable-1.3? I'm asking because we currently backport to stable-1.4 and stable-1.5, but no longer to stable-1.3.
@sboeuf +1 to close it.
|
gharchive/pull-request
| 2018-12-04T17:46:42 |
2025-04-01T04:34:44.821394
|
{
"authors": [
"amshinde",
"caoruidong",
"grahamwhaley",
"jcvenegas",
"jodh-intel",
"raravena80",
"sboeuf"
],
"repo": "kata-containers/runtime",
"url": "https://github.com/kata-containers/runtime/pull/977",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
410776895
|
[WIP] Tables
Two issues:
I'm losing all attributes from elements in either serialization or deserialization.
How do we make table cells editable while not allowing them to be deleted?
Styling for tables: https://github.com/katalysteducation/adaptarr-front/pull/19
Could you rebase this on top of master? GitHub seems confused about what's part of this PR and what's not. And in the future, please don't base feature branches on other feature branches.
@aiwenar Done
I've also just realised that, rather than adding this temporarily to cnx-designer, we could keep it as a plugin in adaptarr-front. This would require some changes to CNXML de/serialization to allow custom rules to be defined in plugins, though.
I've opened a separate PR. It is now handled as a separate plugin on the front end - https://github.com/katalysteducation/cnx-designer/pull/19
|
gharchive/pull-request
| 2019-02-15T13:47:11 |
2025-04-01T04:34:44.825769
|
{
"authors": [
"PiotrKozlowski",
"aiwenar"
],
"repo": "katalysteducation/cnx-designer",
"url": "https://github.com/katalysteducation/cnx-designer/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
354771145
|
Misleading example on the official page
In the "Quick start" section on https://wavesurfer-js.org, the script is included as follows.
<script src="//cdnjs.cloudflare.com/ajax/libs/wavesurfer.js/2.0.6/wavesurfer.min.js"></script>
This does not work for HTML files opened locally, because the protocol-relative URL is resolved against the file: scheme.
To make this work, specify the https: scheme explicitly.
<script src="https://cdnjs.cloudflare.com/ajax/libs/wavesurfer.js/2.0.6/wavesurfer.min.js"></script>
The same snippet appears in the "Getting started" section on https://wavesurfer-js.org/docs. Both may confuse people trying wavesurfer.js for the first time in a local environment.
Example
Save the following lines to a local file and open it with web browser.
<html>
<meta charset="utf-8"/>
<body>
<script src="//cdnjs.cloudflare.com/ajax/libs/wavesurfer.js/2.0.6/wavesurfer.min.js"></script>
<script>
console.log(WaveSurfer);
</script>
</body>
</html>
Output from the developer console (Firefox 61.0.2): [screenshot]
@higuri thanks, can you make a pull request and edit the gh-pages branch?
@thijstriemstra Sure. I'll do it.
Thanks.
|
gharchive/issue
| 2018-08-28T14:58:24 |
2025-04-01T04:34:44.844495
|
{
"authors": [
"higuri",
"thijstriemstra"
],
"repo": "katspaugh/wavesurfer.js",
"url": "https://github.com/katspaugh/wavesurfer.js/issues/1447",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|