id (string, 4 to 10 chars) | text (string, 4 chars to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
1548954048
|
🛑 otakusan is down
In 6eff52b, otakusan (https://otakusan.net/LightNovel) was down:
HTTP code: 0
Response time: 0 ms
Resolved: otakusan is back up in b8fd159.
|
gharchive/issue
| 2023-01-19T11:12:59 |
2025-04-01T04:35:38.072782
|
{
"authors": [
"quanhieu"
],
"repo": "quanhieu/alive_up",
"url": "https://github.com/quanhieu/alive_up/issues/400",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1977019585
|
🛑 hako.re is down
In 92be6a3, hako.re (https://mangochan.site/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: hako.re is back up in 517a528 after 7 minutes.
|
gharchive/issue
| 2023-11-03T22:44:02 |
2025-04-01T04:35:38.075963
|
{
"authors": [
"quanhieu"
],
"repo": "quanhieu/alive_up",
"url": "https://github.com/quanhieu/alive_up/issues/8039",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
667669795
|
Please disambiguate unary and binary minus in QASM
A statement like "cu1(-pi) q[0], q[1];" cannot be parsed correctly, unless we insert a space like so: "cu1(- pi) q[0], q[1];". This problem can be reproduced on https://quantum-circuit.com/ as well.
The antlr grammar needs to be changed. A quick search found some discussions:
https://stackoverflow.com/questions/32166738/disambiguating-unary-and-binary-minus-in-antlr4-grammar
https://stackoverflow.com/questions/27478834/how-compiler-distinguishes-minus-and-negative-number-during-parser-process
@wh5a thank you for reporting. Will be fixed ASAP. In the meantime, you can use -1*pi instead of -pi.
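For illustration only (the real fix belongs in the ANTLR grammar, per the links above), here is a minimal recursive-descent sketch in Python of the standard disambiguation: minus is unary when it appears in prefix position and binary otherwise, so "-pi" parses without a space. All names here are hypothetical.
import math
import re

# Hypothetical sketch, not the project's ANTLR grammar.
TOKEN = re.compile(r"\s*(?:(\d+\.?\d*)|(pi)|(.))")

def tokenize(s):
    return [num or name or sym for num, name, sym in TOKEN.findall(s)]

def parse_expr(tokens):
    # expr := term (('+' | '-') term)*   <- binary minus lives here
    value = parse_term(tokens)
    while tokens and tokens[0] in ('+', '-'):
        op = tokens.pop(0)
        rhs = parse_term(tokens)
        value = value + rhs if op == '+' else value - rhs
    return value

def parse_term(tokens):
    # term := '-' term | atom            <- unary minus lives here
    if tokens and tokens[0] == '-':
        tokens.pop(0)
        return -parse_term(tokens)
    tok = tokens.pop(0)
    return math.pi if tok == 'pi' else float(tok)

print(parse_expr(tokenize('-pi')))      # -3.141592653589793, no space needed
print(parse_expr(tokenize('1 - -pi')))  # 4.141592653589793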
|
gharchive/issue
| 2020-07-29T08:53:06 |
2025-04-01T04:35:38.083535
|
{
"authors": [
"perak",
"wh5a"
],
"repo": "quantastica/quantum-circuit",
"url": "https://github.com/quantastica/quantum-circuit/issues/49",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1682047295
|
Separate PointSet (non-dist) type
"Dists can be denormalized" is a pretty big footgun in the current language and can cause subtle bugs. See also: #1400, #1399.
Here's what I think we should do to fix this:
add a separate PointSet value type that's not a distribution
return PointSet (not PointSetDist) from all operations that aren't safe (basically all pointwise operations, .+, .-, etc., and also mapY, and maybe some others); all unsafe operations take either PointSet or any Dist, but always return a PointSet
don't support PointSet objects in dist operations; make users manually normalize them with -> PointSet.toDist or something
remove isNormalized function on dists; dists are always normalized
rename PointSet.* functions somehow; split into PointSetDist and PointSet according to type?
Yep, I agree in this general direction. We definitely could use some more generic PointSet type, calling it PointSet seems reasonable.
PointSetDist might be too long. Also inconsistent with SampleSet.
Maybe PointDist (or plural, PointsDist)? But then we'd better rename SampleSet to SampleDist / SamplesDist too, for consistency.
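To make the proposed split concrete, an illustrative Python sketch (Squiggle itself is not implemented in Python; every name here is hypothetical): a bare PointSet value type distinct from an always-normalized PointSetDist, with pointwise operations returning the former and an explicit conversion producing the latter.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PointSet:
    # A bare xy shape; not guaranteed to integrate to 1.
    points: List[Tuple[float, float]]

    def to_dist(self) -> "PointSetDist":
        # The explicit, user-invoked normalization step (PointSet.toDist above).
        total = sum(y for _, y in self.points)  # discrete-mass simplification
        return PointSetDist([(x, y / total) for x, y in self.points])

@dataclass
class PointSetDist:
    # Invariant: always normalized, so no isNormalized check is needed.
    points: List[Tuple[float, float]]

def pointwise_add(a, b) -> PointSet:
    # An "unsafe" op: accepts PointSet or PointSetDist, always returns a bare PointSet.
    # Assumes both operands are sampled on the same x grid, for brevity.
    return PointSet([(x, ya + yb) for (x, ya), (_, yb) in zip(a.points, b.points)])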
|
gharchive/issue
| 2023-04-24T21:07:32 |
2025-04-01T04:35:38.090617
|
{
"authors": [
"OAGr",
"berekuk"
],
"repo": "quantified-uncertainty/squiggle",
"url": "https://github.com/quantified-uncertainty/squiggle/issues/1734",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
327370710
|
Docs for BcolzDailyBarWriter should indicate that data can't have gaps
Dear Zipline Maintainers,
Before I tell you about my issue, let me describe my environment:
Environment
Operating System: (Windows 7)
Python Version: 3.5.5 Anaconda
Python Bitness: 64
How did you install Zipline: Conda
Python packages:
_nb_ext_conf 0.4.0 py35_1
alembic 0.7.7 py35_0 quantopian
anaconda-client 1.6.14 py35_0
asn1crypto 0.24.0 py35_0
astroid 1.6.3 py35_0
backcall 0.1.0 py35_0
bcolz 0.12.1 np111py35_0 quantopian
bleach 2.1.3 py35_0
blosc 1.14.3 he51fdeb_0
bottleneck 1.2.1 py35h8a3671c_0
bzip2 1.0.6 hfa6e2cd_5
ca-certificates 2018.03.07 0
certifi 2018.4.16 py35_0
cffi 1.11.5 py35h945400d_0
chardet 3.0.4 py35h177e1b7_1
click 6.7 py35h10df73f_0
clyent 1.2.2 py35h3cd9751_1
colorama 0.3.9 py35h32a752f_0
contextlib2 0.5.5 py35h0a97e54_0
cryptography 2.2.2 py35hfa6e2cd_0
cyordereddict 0.2.2 py35_0 quantopian
cython 0.28.2 py35hfa6e2cd_0
decorator 4.3.0 py35_0
empyrical 0.4.2 py35_0 quantopian
entrypoints 0.2.3 py35hb91ced9_2
hdf5 1.10.2 hac2f561_1
html5lib 1.0.1 py35h047fa9f_0
icc_rt 2017.0.4 h97af966_0
idna 2.6 py35h8dcb9ae_1
intel-openmp 2018.0.0 8
intervaltree 2.1.0 py35_0 quantopian
ipykernel 4.8.2 py35_0
ipython 6.4.0 py35_0
ipython_genutils 0.2.0 py35ha709e79_0
ipywidgets 7.2.1 py35_0
isort 4.3.4 py35_0
jedi 0.12.0 py35_1
jinja2 2.10 py35hdf652bb_0
jsonschema 2.6.0 py35h27d56d3_0
jupyter_client 5.2.3 py35_0
jupyter_core 4.4.0 py35h629ba7f_0
lazy-object-proxy 1.3.1 py35he996729_0
libsodium 1.0.16 h9d3ae62_0
logbook 0.12.5 py35_0 quantopian
lru-dict 1.1.4 py35_0 quantopian
lzo 2.10 h6df0209_2
m2w64-gcc-libgfortran 5.3.0 6
m2w64-gcc-libs 5.3.0 7
m2w64-gcc-libs-core 5.3.0 7
m2w64-gmp 6.1.0 2
m2w64-libwinpthread-git 5.0.0.4634.697f757 2
mako 1.0.7 py35ha146b58_0
markupsafe 1.0 py35hc253e08_1
mccabe 0.6.1 py35hcf31250_1
mistune 0.8.3 py35hfa6e2cd_1
mkl 2018.0.2 1
msys2-conda-epoch 20160418 1
multipledispatch 0.5.0 py35_0
nb_anacondacloud 1.4.0 py35_0
nb_conda 2.2.0 py35_0
nb_conda_kernels 2.1.0 py35_0
nbconvert 5.3.1 py35h98d6c46_0
nbformat 4.4.0 py35h908c9d9_0
nbpresent 3.0.2 py35_0
networkx 1.11 py35h097edc8_0
notebook 5.5.0 py35_0
numexpr 2.6.5 py35hcd2f87e_0
numpy 1.11.3 py35h4a99626_4
openssl 1.0.2o h8ea7d77_0
pandas 0.18.1 np111py35_0
pandas-datareader 0.5.0 py35_0
pandoc 1.19.2.1 hb2460c7_1
pandocfilters 1.4.2 py35h978f723_1
parso 0.2.0 py35_0
patsy 0.5.0 py35_0
pickleshare 0.7.4 py35h2f9f535_0
pip 10.0.1 py35_0
prompt_toolkit 1.0.15 py35h89c7cb4_0
pycparser 2.18 py35h15a15da_1
pygments 2.2.0 py35h24c0941_0
pylint 1.8.4 py35_0
pyopenssl 18.0.0 py35_0
pysocks 1.6.8 py35_0
pytables 3.4.3 py35he6f6034_1
python 3.5.5 h0c2934d_2
python-dateutil 2.7.3 py35_0
pytz 2018.4 py35_0
pywinpty 0.5.1 py35_0
pyyaml 3.12 py35h4bf9689_1
pyzmq 17.0.0 py35hfa6e2cd_1
requests 2.18.4 py35h54a615f_1
requests-file 1.4.3 py35_0
requests-ftp 0.3.1 py35_0
scipy 1.1.0 py35h672f292_0
send2trash 1.5.0 py35_0
setuptools 39.1.0 py35_0
simplegeneric 0.8.1 py35_2
six 1.11.0 py35hc1da2df_1
snappy 1.1.7 h777316e_3
sortedcontainers 1.5.10 py35_0
sqlalchemy 1.2.7 py35ha85dd04_0
statsmodels 0.9.0 py35h452e1ab_0
terminado 0.8.1 py35_1
testpath 0.3.1 py35h06cf69e_0
toolz 0.9.0 py35_0
tornado 5.0.2 py35_0
tqdm 4.23.4 <pip>
traitlets 4.3.2 py35h09b975b_0
urllib3 1.22 py35h8cc84eb_0
vc 14 h0510ff6_3
vs2015_runtime 14.0.25123 3
wcwidth 0.1.7 py35h6e80d8a_0
webencodings 0.5.1 py35h5d527fb_1
wheel 0.31.1 py35_0
widgetsnbextension 3.2.1 py35_0
win_inet_pton 1.0.1 py35hbef1270_1
win_unicode_console 0.5 py35h56988b5_0
wincertstore 0.2 py35hfebbdb8_0
winpty 0.4.3 4
wrapt 1.10.11 py35h54666f7_0
yaml 0.1.7 hc54c509_2
zeromq 4.2.5 hc6251cf_0
zipline 1.2.0 np111py35_0 quantopian
zlib 1.2.11 h8395fce_2
Now that you know a little about me, let me tell you about the issue I am
having:
Description of Issue
What did you expect to happen?
I am building a bundle out of data I have on an in-house web service. This data (from the SPX) contains legitimate gaps for a number of stocks which weren't traded on a number of dates. I would expect Zipline to import the data and fill the gaps (either with NaNs or filling methods)
What happened instead?
When I run
zipline ingest -b mybundle I receive this error:
AssertionError: Got 1238 rows for daily bars table with first day=2012-05-01, last day=2017-04-03, expected 1239 rows.
Missing sessions: [Timestamp('2016-04-13 00:00:00+0000', tz='UTC')]
Extra sessions: []
and the process halts. The assertion is correct, but the data (missing that session) is also correct. I see it should also be handled properly as of this pull request: https://github.com/quantopian/zipline/pull/1778
Here is how you can reproduce this issue on your machine:
Reproduction Steps
This is the ingest function I built:
# "API" and "assets" are defined globally
def ingest(environ, asset_db_writer, minute_bar_writer, daily_bar_writer, adjustment_writer, calendar, start_session, end_session, cache, show_progress, output_dir):
symbols = sorted([a['ticker'] for a in assets])
dtype = [('start_date', 'datetime64[ns]'),
('end_date', 'datetime64[ns]'),
('auto_close_date', 'datetime64[ns]'),
('symbol', 'object')]
metadata = pd.DataFrame(np.empty(len(symbols), dtype=dtype))
def write_fn():
for idx, asset in enumerate(assets):
aid, tkr = asset['id'], asset['ticker']
print(tkr)
# Replace the following line with, say, a simple pd.read_csv() of a timeseries with legit session gaps, as API() connects to my service and can't be used to reproduce
ts = requests.get(API('asset/{:s}/prices/eod'.format(aid))).json()
df = pd.DataFrame(ts).set_index('time')[['open', 'high', 'low', 'close', 'volume']]
start_date = pd.to_datetime(df.index[0])
end_date = pd.to_datetime(df.index[-1])
metadata.iloc[idx] = start_date, end_date, end_date + pd.Timedelta(days=1), tkr
yield idx, df
daily_bar_writer.write(write_fn(), show_progress=True)
asset_db_writer.write(equities=metadata)
and this is the register() call:
register(
    'mybundle',
    ingest,
    calendar_name='NYSE'
)
What steps have you taken to resolve this already?
I tried looking into Zipline's source code and through the issues/pull requests to find out whether I made a mistake in my implementation but couldn't find anything. Thanks for your help, let me know if you need further information.
Sincerely,
Andrea Venuta
Hi @veeenu - apologies for the confusion. I looked into this, and the description in #1778 is misleading. Looks like we may have started with a different intention, but in the change we actually merged, the daily bar writer expects no gaps in the data (i.e. they won't be filled).
If you expect gaps, you probably just want to reindex against the expected trading sessions to fill with nans. You should be able to do something like:
from zipline.utils.calendars import get_calendar
# Ensure the df is indexed by UTC timestamps
df = df.set_index(df.index.to_datetime().tz_localize('UTC'))
# Get all expected trading sessions in this range and reindex.
sessions = get_calendar('NYSE').sessions_in_range(start_date, end_date)
df = df.reindex(sessions)
No problem! I will try reindexing the dataframe. I suggest adding this bit of information to the documentation as I believe time series with gaps are a frequent use case, at least in the context of custom bundles.
Thanks for your patience, and keep up the great work! :)
After fixing the above, I ran into another issue which I can't solve. It seems that the data is now correctly ingested, but I get an error when executing the algorithm. This is my new ingest function:
def ingest(environ, asset_db_writer, minute_bar_writer, daily_bar_writer,
           adjustment_writer, calendar, start_session, end_session, cache,
           show_progress, output_dir):
    differences = dict()
    symbols = sorted([a['ticker'] for a in assets])
    dtype = [('start_date', 'datetime64[ns]'),
             ('end_date', 'datetime64[ns]'),
             ('auto_close_date', 'datetime64[ns]'),
             ('symbol', 'object')]
    metadata = pd.DataFrame(np.empty(len(symbols), dtype=dtype))

    def write_fn():
        for idx, asset in enumerate(assets):
            aid, tkr = asset['id'], asset['ticker']
            ts = requests.get(API('asset/{:s}/prices/eod'.format(aid))).json()
            df = pd.DataFrame(ts).set_index('time')[['open', 'high', 'low', 'close', 'volume']]
            df.index = pd.to_datetime(df.index).tz_localize('UTC')
            start_date = df.index[0]
            end_date = df.index[-1]
            metadata.iloc[idx] = start_date.tz_convert(None), end_date.tz_convert(None), (end_date + pd.Timedelta(days=1)).tz_convert(None), tkr
            sess = calendar.sessions_in_range(start_date, end_date)
            dif = sess.difference(df.index)
            if len(dif) > 0:
                differences[tkr] = dif
            df = df.reindex(sess)
            yield idx, df

    daily_bar_writer.write(write_fn(), show_progress=True)
    asset_db_writer.write(equities=metadata)
    adjustment_writer.write(
        dividends=pd.DataFrame(columns=['sid', 'amount', 'ex_date', 'record_date', 'declared_date', 'pay_date']),
        splits=pd.DataFrame(columns=['sid', 'ratio', 'effective_date']))
    metadata['exchange'] = 'REINDEER'
    for k, v in differences.items():
        print(k, ' -> ', v)  # list gaps
I then wrote a dummy algorithm, which works as intended with Quandl data:
from zipline.api import order, record, symbol

def initialize(context):
    print(context)

def handle_data(context, data):
    print(data)
But, as soon as I switch to my bundle, I get:
$ zipline run -b spx-reindeer -f algo.py -s 2018-01-01 -e 2018-02-01
[2018-05-30 08:51:18.524235] WARNING: Loader: Refusing to download new benchmark data because a download succeeded at 2018-05-30 07:59:42.414161+00:00.
[2018-05-30 08:51:18.549750] WARNING: Loader: Refusing to download new treasury data because a download succeeded at 2018-05-30 07:59:47.815400+00:00.
Traceback (most recent call last):
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\Scripts\zipline-script.py", line 11, in <module>
load_entry_point('zipline==1.2.0', 'console_scripts', 'zipline')()
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\click\core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\click\core.py", line 697, in main
rv = self.invoke(ctx)
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\click\core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\click\core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\click\core.py", line 535, in invoke
return callback(*args, **kwargs)
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\zipline\__main__.py", line 98, in _
return f(*args, **kwargs)
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\click\decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\zipline\__main__.py", line 259, in run
environ=os.environ,
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\zipline\utils\run_algo.py", line 208, in _run
overwrite_sim_params=False,
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\zipline\algorithm.py", line 642, in run
self.trading_environment.asset_finder.sids
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\zipline\assets\assets.py", line 494, in retrieve_all
update_hits(self.retrieve_equities(type_to_assets.pop('equity', ())))
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\zipline\assets\assets.py", line 528, in retrieve_equities
return self._retrieve_assets(sids, self.equities, Equity)
File "C:\Users\avenuta\AppData\Local\Continuum\Anaconda3\envs\zipline\lib\site-packages\zipline\assets\assets.py", line 681, in _retrieve_assets
asset = asset_type(**filter_kwargs(row))
File "zipline\assets\_assets.pyx", line 59, in zipline.assets._assets.Asset.__init__ (zipline/assets\_assets.c:1857)
TypeError: __init__() takes at least 2 positional arguments (1 given)
I'm not sure how to debug this situation as it looks pretty deep in the code, and related to the way the bundle is constructed. I also tried to add empty adjustment dataframes but at this point I can find no significant difference in the calls between my ingest function and the csvdir bundle (which I used as a guideline for my function). Do you have any suggestions?
Thanks!
It looks like you're setting metadata['exchange'] after passing metadata into asset_db_writer.write(), so my guess is that the data being written is missing an exchange column. I'd try setting that beforehand.
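A minimal sketch of that fix (reusing the ingest body above; 'REINDEER' is the reporter's exchange label): assign the exchange column before the metadata frame is written.
# Inside ingest(): set the exchange *before* the metadata frame is written,
# so the equities table actually receives an 'exchange' column.
metadata['exchange'] = 'REINDEER'
asset_db_writer.write(equities=metadata)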
For the initial report, sounds like the only issue is some details missing in the documentation, so I'm going to update the title of this to reflect that. Feel free to open another issue if you run into anything else!
@yankees714 the clarification is quite helpful. However, in cases where a stock was listed very recently and we want to create a custom data bundle starting from a very early date, e.g. 2000-1-1, the first several thousand rows in this stock's dataframe are going to be empty. I wonder what is the recommended way to deal with such a huge gap? Thanks in advance.
How would you solve for extra sessions? I have a similar problem, but I have 2 errors: one shows a missing date, and the other shows extra sessions. How can I ingest data while ignoring these extra sessions, and ingest such that it takes all the available data?
|
gharchive/issue
| 2018-05-29T15:23:21 |
2025-04-01T04:35:38.106112
|
{
"authors": [
"cemal95",
"fricative",
"veeenu",
"yankees714"
],
"repo": "quantopian/zipline",
"url": "https://github.com/quantopian/zipline/issues/2195",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
335593306
|
zipline beginner tutorial should be optimized for jupyter notebook
The beginner tutorial http://www.zipline.io/beginner-tutorial.html#basics mixes cli commands with jupyter notebook commands which makes it difficult to know what syntax to use in a notebook.
The first section, 'My first algorithm', is premature. One needs to ingest a bundle first, which I have not been able to accomplish within a notebook.
I would think it is safe to assume that anyone following the 'beginner's guide' would prefer to work in a notebook rather than the cli. Perhaps, try following the instructions step by step and you will see how one could be confused.
Hi @jraviotta thanks for your feedback on docs. Do you have suggestions for a different order of steps in regards to the tutorial?
The way it's currently outlined is:
Explanation of what Zipline is and some functions you should care about
Peeking at an example algorithm
Here I saw the ?? syntax, which is an IPython magic, so I can see how that might be confusing
Ingesting data in order to run the algorithm
Run it from the CLI
View the performance packets
Run it from a Jupyter Notebook
Example of calling data.history()
Conclusion
Thanks for asking. After some reflection, I think the biggest problem may not necessarily be the order of instructions, rather that the tutorial is not tailored to any particular persona. The world is getting too complex to try to address every user type in a single tutorial, as appealing as that may be.
In my case, I came to the tutorial after building the docker container. That means that I don't have direct cli access in the container without doing docker run -it ..., so executing zipline commands in a notebook is preferable. Then I ran into trouble trying to run zipline ingest in a notebook and had to resolve it with some assistance translating that to Python. Now I'm stuck trying to get a custom ingest function working.
I can see all the effort put into zipline in the testing and deployment process. Perhaps it might be a good idea to schedule a UX sprint where the team develops personas, and on-boarding user stories. Doing so will identify barriers such as the ones I have run into and reveal opportunities for documentation clarity.
|
gharchive/issue
| 2018-06-25T22:39:42 |
2025-04-01T04:35:38.114234
|
{
"authors": [
"freddiev4",
"jraviotta"
],
"repo": "quantopian/zipline",
"url": "https://github.com/quantopian/zipline/issues/2221",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
482553389
|
check/pylint sometimes doesn't check imports
An unused import (Optional) passed CI here: https://github.com/quantumlib/Cirq/blame/master/cirq/ops/pauli_gates.py#L15
Now I'm getting an error for that on Travis: https://travis-ci.com/quantumlib/Cirq/jobs/225990740#L253
When I run pylint locally, I don't catch this or other import errors. This has happened during my last few PRs: https://travis-ci.com/quantumlib/Cirq/jobs/225990740#L251
I updated my local pylint to pylint==2.3.1, astroid==2.1.0 and still don't get the errors that Travis shows.
Can anyone else reproduce this?
Could it be that the errors have nothing to do with the unused import? What you linked has
cirq/optimizers/eject_phased_paulis_test.py:42:0: C0301: Line too long (83/80) (line-too-long)
cirq/optimizers/eject_phased_paulis_test.py:18:0: C0411: third party import "import numpy as np" should be placed before "import sympy" (wrong-import-order)
Alongside the unused import error.
The same thing happened in #1963 (last commit on Apr 15). I got an unused import error on Travis but no errors locally.
I just ran your cduck:eject-pauli branch locally on my machine and I received lots of unused imports.
vtomole@vtomole:~/Cirq$ check/pylint
************* Module cirq.neutral_atoms.neutral_atom_devices
cirq/neutral_atoms/neutral_atom_devices.py:18:0: W0611: Unused DefaultDict imported from typing (unused-import)
************* Module cirq.optimizers.eject_z
cirq/optimizers/eject_z.py:17:0: W0611: Unused Dict imported from typing (unused-import)
cirq/optimizers/eject_z.py:17:0: W0611: Unused List imported from typing (unused-import)
cirq/optimizers/eject_z.py:17:0: W0611: Unused Tuple imported from typing (unused-import)
************* Module cirq.work.pauli_sum_collector
cirq/work/pauli_sum_collector.py:16:0: W0611: Unused MutableMapping imported from typing (unused-import)
************* Module cirq.value.linear_dict
cirq/value/linear_dict.py:17:0: W0611: Unused Dict imported from typing (unused-import)
************* Module cirq.sim.simulator
cirq/sim/simulator.py:30:0: W0611: Unused Hashable imported from typing (unused-import)
cirq/sim/simulator.py:30:0: W0611: Unused Tuple imported from typing (unused-import)
************* Module cirq.sim.wave_function_simulator
cirq/sim/wave_function_simulator.py:19:0: W0611: Unused Hashable imported from typing (unused-import)
************* Module cirq.ops.linear_combinations
cirq/ops/linear_combinations.py:15:0: W0611: Unused DefaultDict imported from typing (unused-import)
************* Module cirq.ops.op_tree
cirq/ops/op_tree.py:20:0: W0611: Unused Dict imported from typing (unused-import)
************* Module cirq.ops.pauli_gates
cirq/ops/pauli_gates.py:15:0: W0611: Unused Optional imported from typing (unused-import)
************* Module cirq.experiments.qubit_characterizations
cirq/experiments/qubit_characterizations.py:8:0: W0611: Unused Axes3D imported from mpl_toolkits.mplot3d (unused-import)
************* Module cirq.google.serializable_gate_set
cirq/google/serializable_gate_set.py:18:0: W0611: Unused List imported from typing (unused-import)
cirq/google/serializable_gate_set.py:18:0: W0611: Unused Type imported from typing (unused-import)
************* Module cirq.google.engine.engine
cirq/google/engine/engine.py:33:0: W0611: Unused Tuple imported from typing (unused-import)
************* Module cirq.protocols.has_unitary
cirq/protocols/has_unitary.py:15:0: W0611: Unused Dict imported from typing (unused-import)
************* Module cirq.contrib.acquaintance.optimizers
cirq/contrib/acquaintance/optimizers.py:15:0: W0611: Unused FrozenSet imported from typing (unused-import)
cirq/contrib/acquaintance/optimizers.py:15:0: W0611: Unused List imported from typing (unused-import)
cirq/contrib/acquaintance/optimizers.py:15:0: W0611: Unused Set imported from typing (unused-import)
************* Module cirq.contrib.acquaintance.executor
cirq/contrib/acquaintance/executor.py:15:0: W0611: Unused List imported from typing (unused-import)
************* Module cirq.contrib.qasm_import._parser
cirq/contrib/qasm_import/_parser.py:16:0: W0611: Unused Dict imported from typing (unused-import)
cirq/contrib/qasm_import/_parser.py:16:0: W0611: Unused Optional imported from typing (unused-import)
************* Module cirq.contrib.paulistring.convert_to_pauli_string_phasors
cirq/contrib/paulistring/convert_to_pauli_string_phasors.py:15:0: W0611: Unused List imported from typing (unused-import)
************* Module cirq.vis.heatmap
cirq/vis/heatmap.py:29:0: W0611: Unused List imported from typing (unused-import)
I also have pylint==2.3.1, astroid==2.1.0. I always get these, but I ignore them and they pass on Travis.
At least some of those are imports only used in a type comment. The latest pylint version should handle that.
We are on the latest version of pylint; what we are not on the latest version of is astroid, because this bug has not been fixed: https://github.com/PyCQA/astroid/issues/650.
This is growing stale and we probably need someone to repro again to make progress.
I still notice pylint checks returning more results on my machine than in ci.
I noticed that the pylintrc has a weird configuration - it used [config] instead of [MASTER] section - not sure how it was working at all - I think we were running pylint with default settings everywhere.
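For illustration, a hedged sketch of the section naming at issue (the option values below are placeholders, not Cirq's actual configuration): pylint only reads sections it knows, such as [MASTER] and [MESSAGES CONTROL], so options under an unrecognized [config] header are silently ignored and defaults apply.
[MASTER]
ignore = .git

[MESSAGES CONTROL]
enable = unused-import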
https://github.com/quantumlib/Cirq/pull/3387 is fixing this + upgrades to the latest pylint.
I believe that #3387 should fix this issue.
|
gharchive/issue
| 2019-08-19T22:14:13 |
2025-04-01T04:35:38.128174
|
{
"authors": [
"Strilanc",
"balopat",
"cduck",
"dabacon",
"vtomole"
],
"repo": "quantumlib/Cirq",
"url": "https://github.com/quantumlib/Cirq/issues/1986",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
351749874
|
Unexpected rotation gate convention
I find the output of the following code extremely surprising. In every textbook the rotation around X gate is always defined as R_X(theta) = e^{-i theta X / 2}. But that's not true in Cirq. Not only does RotXGate differ from this by a global phase, but the function is 2pi periodic instead of 4pi periodic. The last part is extremely confusing for those of us that try to use these gates for quantum simulation (we are thinking about evolving for certain amounts of time, rather than the geometry of the Bloch sphere). I was trying to make a controlled gate that corresponded to controlled evolution under X and because of these strange global phase conventions my circuit wasn't working. This is such unexpected behavior that I suggest we consider it a bug.
import numpy
import scipy.linalg
import cirq

class MyRotXGate(cirq.KnownMatrix, cirq.Gate):
    def __init__(self, rads):
        self.rads = rads

    def matrix(self):
        X = numpy.array([[0, 1], [1, 0]])
        unitary = scipy.linalg.expm(-1.j * self.rads * X / 2.)
        return unitary

for angle in numpy.linspace(0, 3 * numpy.pi, 10):
    A = MyRotXGate(rads=angle)
    B = cirq.RotXGate(rads=angle)
    print('My gate')
    print(A.matrix())
    print('Cirq gate')
    print(B.matrix())
    print()
I think this is actually reasonable behavior. In cirq, RotXGate(rads=θ) corresponds to X**(θ/π). In some sense, the difference is between thinking of X as an observable vs. as a unitary. Maybe renaming RotXGate (and its siblings) to something like PauliXGate would make this clearer.
It's important to distinguish between the two different issues here: the global phase and the factor of 2.
The global phase issue is not just for the Pauli rotation gates; see #816. Personally, the decision to really embrace that global phase is physically irrelevant seems reasonable, especially given the availability of the helper function cirq.linalg.predicates.allclose_up_to_global_phase.
The factor of two is more inconvenient. One reasonable solution would be to have a second gate RX that includes the factor of 2; that would make it usable by people familiar with the physicists' convention without having to change any existing code.
To be clear, the factor of 2 difference in periodicity and the global phase are really the same thing; if you look at expectation values like np.abs(np.dot(A, [1, 0]))**2 vs np.abs(np.dot(B, [1, 0]))**2 for @babbush's A and B you'll see that the two gates "rotate" the state at the same rate. The periodicity difference is only in the global phase which is physically unobservable. Of course, if you consider this as a controlled operation, then that phase is not "global" so it starts to matter. But we don't have any standard way in cirq of taking a generic gate and turning it into a controlled operation; @babbush, how are you trying to do that?
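A quick numerical check of that claim, as a sketch in plain numpy rather than against any particular Cirq version (the e^{iθ/2} prefactor below reproduces the X**(θ/π) convention described in this thread):
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]])
theta = np.pi / 3

A = expm(-1j * theta * X / 2)    # textbook R_X(theta)
B = np.exp(1j * theta / 2) * A   # same matrix times e^{i theta/2}, matching X**(theta/pi)

# Identical single-gate measurement statistics: the phase is unobservable here.
ket0 = np.array([1, 0])
assert np.isclose(abs((A @ ket0)[0]) ** 2, abs((B @ ket0)[0]) ** 2)

# But only the phased convention is 2*pi periodic; the textbook one is 4*pi periodic.
A2 = expm(-1j * (theta + 2 * np.pi) * X / 2)
B2 = np.exp(1j * (theta + 2 * np.pi) / 2) * A2
assert np.allclose(B2, B)        # phased version repeats every 2*pi
assert not np.allclose(A2, A)    # textbook version picks up a -1 (4*pi periodic)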
@maffoo There is now ControlledGate, which is what @babbush is using. It constructs the matrix in the expected way (adding an identity block in the top left).
What @kevinsung said is correct. @bryano yes, I support renaming things. The definition that Cirq is using is inconsistent with the definition of a rotation gate in any textbook.
@bryano I prefer keeping the global phase as I've spent a lot of time trying to figure out what was going on with it when I started using Cirq. I'm sure one or two other users have too. I agree with @babbush that this is a bug. A second RX gate will make things more confusing than the "factor of 2" issue in my opinion.
I'm in favor of renaming RotXGate to PauliXGate and defining RotXGate with its textbook definition.
As @maffoo pointed out, there actually is no factor of 2 issue, and the difference does matter when controlling. That makes it seem like renaming the current rotation gates and adding new ones with the right phases is the right way to go. The biggest question is whether @Strilanc is okay with that.
In my experience, Controlled-X always refers to the gate
[1 0 0 0]
[0 1 0 0]
[0 0 0 1]
[0 0 1 0]
which is not Controlled-exp(-i pi/2 X); that would be
[1 0 0 0]
[0 1 0 0]
[0 0 0 -i]
[0 0 -i 0]
Similarly, Controlled-Z always refers to
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 -1]
and not Controlled-exp(-i pi/2 Z), which would be
[1 0 0 0]
[0 1 0 0]
[0 0 -i 0]
[0 0 0 i]
So I think Cirq's current convention leads to the correct, least surprising, controlled gates.
We aren't talking about a controlled X. We are talking about a controlled
rotation around the X axis, which is certainly not what Cirq is giving us.
I'm not sure what you mean. ControlledGate(RotXGate(rads=theta)) is certainly a controlled rotation around the X axis.
I'm just saying that by your logic, a controlled X rotation by pi radians would give the gate
[1 0 0 0]
[0 1 0 0]
[0 0 0 -i]
[0 0 -i 0]
which is in my opinion a more surprising result than
[1 0 0 0]
[0 1 0 0]
[0 0 0 1]
[0 0 1 0]
Of course I can only speak for myself.
@Strilanc, I also like your suggestions. A few comments:
Is there some way for us to make equivalence-up-to-global-phase more of a first class citizen in the library? E.g. should cirq.Z == cirq.iZ?
This seems like a bad idea to me. Having explicit helpers to check equality up to global phase (as we do now for testing) seems much better than overriding equality.
We could add a global phase property to RotXGate and friends. It would likely require dropping the concept of the gate being periodic.
This seems potentially interesting; for example, gates that have a matrix could know their SU(N) representation as well as a phase (I don't think we should call it a "global phase" because it is not global as soon as you put it inside a controlled gate, which is a confusion I think we should avoid). When you ask for the matrix you'd get the product of these two, but you could also ask for the decomposed form and this might be useful in some cases, e.g. when checking equality up to global phase. But it's not immediately clear to me that this would be worth the trouble.
If we drop the concept of gates being periodic, it fixes the fact that (U**3)**(1/3) is not equal to U in several cases (e.g. 240 degrees * 3 canonicalizes to 120 degrees, then 120 degrees / 3 = 60 degrees; instead we would always say 240deg * 3 = 720deg). Do we ever actually use this concept? How bad is it to have cirq.X**3 be matrix-equivalent to cirq.X, and yet they disagree about what their square root is?
We might want to drop "canonicalization" of arguments in constructors, and only do that when we really need to. For example, when compiling to hardware gates we actually care about applying the smallest "equivalent" rotation (e.g. a 90 degree rotation instead of a 450 = 360 + 90 degree rotation) and since these hardware rotations are never "controlled" the extra phase really is global to the entire state and can be ignored. The non-canonicalized gates are still manifestly periodic if you look at their matrices, so we don't lose much there. And if you need to add back in optimization rules that can simplify things based on known periodicities, then at least those have to be invoked explicitly when you do a circuit optimization, rather than during gate construction.
ControlledGate should probably have a global-phase-on-sub-gate property.
Related to my comments above about being careful with "global": the name of this property sounds to me like an oxymoron :-)
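To make the "SU(N) representation plus a phase" idea above concrete, a small numpy sketch (a hypothetical helper, not Cirq API) that factors a 2x2 unitary into a determinant phase and an SU(2) part:
import numpy as np

def split_phase_su2(u):
    # Factor u = exp(1j * phi) * u_su2 with det(u_su2) == 1.
    # For a 2x2 unitary, det(u) = exp(2j * phi), so phi = angle(det(u)) / 2.
    phi = np.angle(np.linalg.det(u)) / 2
    return phi, np.exp(-1j * phi) * u

# Example: Z and the textbook R_Z(pi) = diag(-i, i) differ only in this phase.
Z = np.diag([1, -1 + 0j])
phi, z_su2 = split_phase_su2(Z)
assert np.isclose(np.linalg.det(z_su2), 1)
assert np.allclose(z_su2, np.diag([-1j, 1j]))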
I agree we should change. We've gotten this feedback from lots of different folks now (not just silly Dave back when we were painting the shed!). I also hadn't realized that we had shifted the gates away from the gates as defined in the proto file; sorry, I should have yelled when we did that. I thought that at least our single qubit gates were in SU.
I like Craig's suggestions, but I worry that having Rx and RotX is very confusing and no one is going to remember which is which. In thinking about this, I think we should be guided by the fact that X**s is the non-standard notation in the quantum computing community. I do think it is useful, but I think we should err on the side of giving the very non-standard name to the one supported at a gate feature level.
Is there some way for us to make equivalence-up-to-global-phase more of a first class citizen in the library? E.g. should cirq.Z == cirq.iZ?
I agree with @maffoo that we should probably not do this equality.
Also +1 to @maffoo's suggestion that we not call it a global phase. What should it be called? Really the relevant object here is the determinant of the matrix.
Another idea I had was to have the pauli gates play double-duty as gates and as Hamiltonians. So cirq.X * 5 would be a valid thing, but no longer a gate since it's not unitary, and you could say stuff like cirq.exp(1j * cirq.X * np.pi/2) as long as the argument to cirq.exp was Hermitian.
The definition of SU(d) is that det X = 1. https://en.wikipedia.org/wiki/Special_unitary_group . Not sure what you mean about ambiguity for NxN matrices. Further, the determinant is the part that multiplies out separately: AB = det(A)det(B) (A/det(A)) (B/det(B)) = det(AB) (A/det(A)) (B/det(B)).
I think it is a very bad idea to mix objects that are downstairs in quantum computing (unitaries, i.e. elements of the Lie group) with those upstairs (operators that can be exponentiated, i.e. elements of the Lie algebra). "They are just matrices" is true, but it hides a lot of the differences in things you can do with these that make sense in quantum computing.
Same as @babbush, I have spent a lot of time figuring out that the definition of RotXGate is not consistent with what we are used to in quantum information textbooks. It would be nice if the rotation gates could be defined consistently based on the known definition. I agree with @dabacon on calling the current RotXGate PowXGate to clear up the confusion.
OK, renamed this bug as I think we've decided to make the breaking change of renaming Rot gates to Pow gates, and change Rot gates to the new convention. This is a breaking change and one that can cause considerable pain since it will change the existing Rot gates in a subtle way.
|
gharchive/issue
| 2018-08-17T21:53:09 |
2025-04-01T04:35:38.153980
|
{
"authors": [
"EhsanZ4t1qbit",
"Strilanc",
"babbush",
"bryano",
"dabacon",
"kevinsung",
"maffoo",
"vtomole"
],
"repo": "quantumlib/Cirq",
"url": "https://github.com/quantumlib/Cirq/issues/865",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1025190716
|
Possible small error in tutorial example code?
In the tutorial code,
https://github.com/quantumlib/OpenFermion/blob/master/docs/tutorials/intro_to_openfermion.ipynb, under the heading "InteractionOperator and InteractionRDM...", there is the following block of code,
for p in range(n_orbitals):
    for q in range(p + 1, n_orbitals):
        kappa[p, q] = random_angles[index]
        kappa[q, p] = -numpy.conjugate(random_angles[index])
        index += 1
    # Build the unitary rotation matrix.
    difference_matrix = kappa + kappa.transpose()
    rotation_matrix = scipy.linalg.expm(kappa)
    # Apply the unitary.
    molecular_hamiltonian.rotate_basis(rotation_matrix)
It seems to me that the last two blocks should not be indented, i.e. should be outside the p-loop. The way it is, the bases get rotated n_orbitals times, for which I don't see a reason, even though it essentially does no harm. Indeed, when I removed the indents (taking them out of the p-loop), the agreement on ground state energy gets slightly better, i.e. "more" iso-spectral. Presumably repeated basis rotations accumulated some error, albeit the error is still very small.
Anyway, I could be mistaken, but I thought I should ask in case no one has raised the issue.
You are right. It shouldn't be indented. The loops are for forming the generator matrix kappa. Indeed we are rotating with some partial kappa every time which isn't the intended function based on the text. Thanks for finding the issue. Feel free to open a PR to change it. If I don't hear back in a couple of days I'll get around to making the change over the weekend.
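For reference, a corrected sketch reflecting the agreement above: the same code with the last two blocks de-indented out of the p-loop, so the basis is rotated exactly once from the fully formed kappa.
for p in range(n_orbitals):
    for q in range(p + 1, n_orbitals):
        kappa[p, q] = random_angles[index]
        kappa[q, p] = -numpy.conjugate(random_angles[index])
        index += 1

# Build the unitary rotation matrix (now outside the p-loop).
difference_matrix = kappa + kappa.transpose()
rotation_matrix = scipy.linalg.expm(kappa)
# Apply the unitary (exactly once).
molecular_hamiltonian.rotate_basis(rotation_matrix)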
Glad to know I am not mistaken. As far as fixing it, I'm not hacker enough to be able to handle it (don't even know what "opening a PR" means) and will leave it to the pros. Thanks!
|
gharchive/issue
| 2021-10-13T12:31:27 |
2025-04-01T04:35:38.159345
|
{
"authors": [
"cb2014",
"ncrubin"
],
"repo": "quantumlib/OpenFermion",
"url": "https://github.com/quantumlib/OpenFermion/issues/745",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2184528341
|
Add symbolic support to Bloqs that expect control values as cvs: Tuple[int, ...]
Many bloqs (like MultiAnd, MultiControlPauli, etc.) currently expect a control values object as a Tuple[int, ...]. This has the consequence that we cannot instantiate these objects symbolically.
We should replace cvs: Tuple[int, ...] with a CtrlSpec object that can be instantiated symbolically as well.
Would we want some sort of CtrlSpec that takes a parameter n for the number of controls and doesn't fully specify what the control values actually are? Usually during costing, the difference between a positive and negative control is negligible.
Yes, that's the idea.
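A hypothetical Python sketch of that idea (illustrative names only, not Qualtran's actual CtrlSpec API): a spec that either pins down concrete control values or records just a, possibly symbolic, number of controls.
from dataclasses import dataclass
from typing import Optional, Tuple

import sympy

@dataclass(frozen=True)
class SymbolicCtrlSpec:
    # Either concrete control values...
    cvs: Optional[Tuple[int, ...]] = None
    # ...or just a (possibly symbolic) number of controls, values unspecified,
    # which is enough for costing where +/- controls are interchangeable.
    n: Optional[sympy.Expr] = None

    @property
    def num_ctrls(self):
        return len(self.cvs) if self.cvs is not None else self.n

concrete = SymbolicCtrlSpec(cvs=(1, 0, 1))
symbolic = SymbolicCtrlSpec(n=sympy.Symbol('n'))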
The newly-added Shaped symbolic utility #867 would be good here
Is there a list of high priority bloqs that need this?
|
gharchive/issue
| 2024-03-13T17:20:23 |
2025-04-01T04:35:38.162232
|
{
"authors": [
"mpharrigan",
"tanujkhattar"
],
"repo": "quantumlib/Qualtran",
"url": "https://github.com/quantumlib/Qualtran/issues/786",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1523718699
|
Support for Azure IoT Hub - Quarkus Dev Service
As a developer,
It would be great to be able to have a Dev Service in Quarkus that allows me to use Azure IoT Hub without requiring any internet connection for my local environment.
As you know, Azure IoT Hub supports two types of connection:
iot-service-client (register and send messages/events from cloud to device)
iot-device-client (receive and send messages/events from device to cloud)
I would like to be able to make two operations:
Register Devices
Send messages/events to a registered device that was created using the operation above or it was a pre-populated device via application.properties
When sending messages to a pre-populated device it would be great to define a few properties:
quarkus.azure.iothub.device."quarkus-device1".status=enable/disable
quarkus.azure.iothub.device."quarkus-device1".message.acknowledgment=true/false
quarkus.azure.iothub.device."quarkus-device1".message.acknowledgment.delay=5000 (ms)
Documentation:
https://learn.microsoft.com/en-us/java/api/overview/azure/iot?view=azure-java-stable
First, this would require Azure IoT clients to be supported on Quarkus through dedicated extensions.
Second, Quarkus dev services typically (but not necessarily) use Testcontainers for mocking the backend. Do you know whether there is any Testcontainer available for Azure IoT Hub? Azurite does not seem to support it.
@agoncal does Microsoft have any strategy for delivering Testcontainers for the individual Azure services?
|
gharchive/issue
| 2023-01-07T11:09:08 |
2025-04-01T04:35:38.168469
|
{
"authors": [
"JoaoBrandao",
"ppalaga"
],
"repo": "quarkiverse/quarkus-azure-services",
"url": "https://github.com/quarkiverse/quarkus-azure-services/issues/56",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1475854450
|
Can not running with dev mode
Check out https://github.com/zhfeng/shardingsphere-quarkus-example and run mvn quarkus:dev; it fails:
2022-12-05 15:45:46,027 WARN [io.agr.pool] (agroal-11) Datasource '<default>': Failed to create connection due to NullPointerException
2022-12-05 15:45:46,035 ERROR [io.qua.run.Application] (Quarkus Main Thread) Failed to start application (with profile dev): java.lang.NullPointerException: Can not find configuration file `application.properties`.
at java.base/java.util.Objects.requireNonNull(Objects.java:246)
at org.apache.shardingsphere.driver.jdbc.core.driver.ShardingSphereDriverURL.toConfigurationBytes(ShardingSphereDriverURL.java:61)
at org.apache.shardingsphere.driver.jdbc.core.driver.DriverDataSourceCache.createDataSource(DriverDataSourceCache.java:51)
at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705)
at org.apache.shardingsphere.driver.jdbc.core.driver.DriverDataSourceCache.get(DriverDataSourceCache.java:45)
at org.apache.shardingsphere.driver.ShardingSphereDriver.connect(ShardingSphereDriver.java:51)
at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:226)
at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:535)
at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:516)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at io.agroal.pool.util.PriorityScheduledExecutor.beforeExecute(PriorityScheduledExecutor.java:75)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1126)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
It should be fixed after upgrading shardingsphere to 5.3.1.
|
gharchive/issue
| 2022-12-05T07:09:17 |
2025-04-01T04:35:38.170574
|
{
"authors": [
"zhfeng"
],
"repo": "quarkiverse/quarkus-shardingsphere-jdbc",
"url": "https://github.com/quarkiverse/quarkus-shardingsphere-jdbc/issues/74",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1965036033
|
Update OperandType for new capstone version, split loader option in the cli, add new option to directly set the capstone ARCH and MODE for the decompilation (used by binexport backend)
I've corrected a couple of typos and removed some leftovers.
I also added the __len__ method to GenericGraph that was missing.
For the rest it LGTM.
|
gharchive/pull-request
| 2023-10-27T08:33:21 |
2025-04-01T04:35:38.172324
|
{
"authors": [
"Fenrisfulsur",
"patacca"
],
"repo": "quarkslab/qbindiff",
"url": "https://github.com/quarkslab/qbindiff/pull/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1299470129
|
YAML value ... must be a Quarto YAML front matter object.
Hi, I use HTML syntax to insert an image into a slide. It works fine, but Quarto sometimes flags it as a problem (although it renders just fine). I'm wondering how I can disable this warning?
Could you share a more complete example with a document that exhibits this warning reproducibly?
---
title: "Untitled"
format: revealjs
---
---
<img src="https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png" alt="Google Logo">
---
<img src="https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png" alt="Google Logo">
This is a minimal reproducible example: you will see that the first image shows the problem.
Thank you!
Best,
Albert
Thanks! I believe that we fixed this bug in quarto-cli a few days ago: https://github.com/quarto-dev/quarto-cli/commit/c8ec3f8f3dd494c8f177830e91019b9f5507b075
Try updating to the very latest Quarto (v1.0.5) and things should work as expected.
|
gharchive/issue
| 2022-07-08T21:53:11 |
2025-04-01T04:35:38.308342
|
{
"authors": [
"albert-ying",
"jjallaire"
],
"repo": "quarto-dev/quarto-vscode",
"url": "https://github.com/quarto-dev/quarto-vscode/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
773248817
|
Unresponsive Client
I tested my payload on multiple devices and it performed perfectly, except for the device I am targeting.
What are the possible causes of such a problem, given that I have disabled the antivirus and the firewall, and the .NET framework is installed on the client side (Windows 10)?
Is the client running at all? Download cports from https://www.nirsoft.net/utils/cports.html and confirm if it is trying to connect at all.
I just checked, it is listening!
So, the client is trying to connect and the server is waiting for a connection? If so, try this:
Reset Network TCP/IP stack
If your TCP/IP stack is corrupted it can be reseted with the following commands.
Windows XP:
Search for Command Prompt > Run As Administrator
netsh winsock reset
netsh int ip reset
ipconfig /flushdns
Restart computer
Windows Vista, 7, 8, 10:
Search for Command Prompt > Run As Administrator
netsh winsock reset catalog
netsh int ipv4 reset reset.log
netsh int ipv6 reset reset.log
Restart computer
Sorry, that didn't resolve the issue. I think the server requires elements that are not installed on the client device.
Is the device you're running the client on fully updated with latest .NET framework and Windows updates?
Oddly yes!
Just to confirm. What antivirus is installed, Windows Defender only? Remember that Defender will not stay off, it will turn back on automatically (part of recent security updates) after a short time. To be safe, make an exclusion for your client.
|
gharchive/issue
| 2020-12-22T20:57:05 |
2025-04-01T04:35:38.325054
|
{
"authors": [
"BurntDog",
"MaxXor",
"assadi49"
],
"repo": "quasar/Quasar",
"url": "https://github.com/quasar/Quasar/issues/897",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
644410090
|
'AttributeError' when creating model.
Hi! I got an error when I tried to create a model like model = smk.Unet():
AttributeError: Can't set the attribute "name", likely because it conflicts with an existing read-only @property of the object. Please choose a different name.
After I commented out line 82, model.name = 'u-{}'.format(backbone_name), in the file site-packages/segmentation_models/unet/model.py, everything works well.
I think it's possible to replace model.name with model._name, for example.
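For context, a small runnable sketch of the underlying tf.keras mechanism (illustrative, not segmentation_models code): name is a read-only property on recent Keras models, while the underscored backing attribute can still be assigned, which is why the model._name substitution works.
import tensorflow as tf

m = tf.keras.Sequential()
# m.name = "u-resnet34"   # AttributeError on recent tf.keras: 'name' is a read-only property
m._name = "u-resnet34"    # assigning the backing attribute still works
print(m.name)             # -> u-resnet34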
I am getting the same error, but commenting it out gives me some more errors.
|
gharchive/issue
| 2020-06-24T08:07:19 |
2025-04-01T04:35:38.362361
|
{
"authors": [
"KuldeepSangwan",
"jwegas"
],
"repo": "qubvel/segmentation_models",
"url": "https://github.com/qubvel/segmentation_models/issues/367",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2152509345
|
🛑 Search Service is down
In c0857cd, Search Service (https://quessttechnologies.com/search/healthcheck) was down:
HTTP code: 429
Response time: 5 ms
Resolved: Search Service is back up in a7d405a after 12 minutes.
|
gharchive/issue
| 2024-02-24T22:42:04 |
2025-04-01T04:35:38.390884
|
{
"authors": [
"QuesstTechnologies"
],
"repo": "quesst-technologies/qst-admin-status-all",
"url": "https://github.com/quesst-technologies/qst-admin-status-all/issues/418",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1082176267
|
Changing directory does not always work and sometimes breaks Quickgui
While moving around my system looking for somewhere to drop another test VM, I noticed that attempting to choose /home/<user> and click OK left the working directory where it was.
Also, trying to find some reason/pattern to this (in the past the presence of symlinks has had surprising impacts, and of course permissions/ACLs were a likely candidate), I found I could consistently break quickgui by selecting /home/<user>/snap/flutter (the window goes blank/white and loses the possibility to do anything if in "Manage Existing" mode; or, if in "New" mode, the bottom half goes white but the top-half logo is still there and you can "Close", and on return the previous directory will be current).
This may disappear with a reboot, as that directory is left over from removing the flutter snap, and is actually a directory from a zfs dataset bind-mounted in to try and cope with the demands of the flutter snap, so probably a somewhat spurious edge case. However, the error message suggests it might be worth "handling" this "it cannot happen" event more gracefully (i.e. the file picker allows selection of a directory that flutter believes does not exist):
[ERROR:flutter/lib/ui/ui_dart_state.cc(209)] Unhandled Exception: FileSystemException: Getting current working directory failed, path = '' (OS Error: No such file or directory, errno = 2)
#0 _Directory.current (dart:io/directory_impl.dart:42)
#1 Directory.current (dart:io/directory.dart:136)
#2 _ManagerState._getVms (package:quickgui/src/pages/manager.dart:133)
#3 _ManagerState.initState.<anonymous closure> (package:quickgui/src/pages/manager.dart:67)
#4 _rootRunUnary (dart:async/zone.dart:1436)
#5 _CustomZone.runUnary (dart:async/zone.dart:1335)
#6 _CustomZone.runUnaryGuarded (dart:async/zone.dart:1244)
#7 _CustomZone.bindUnaryCallbackGuarded.<anonymous closure> (dart:async/zone.dart:1281)
#8 _rootRunUnary (dart:async/zone.dart:1444)
#9 _CustomZone.runUnary (dart:async/zone.dart:1335)
#10 _CustomZone.bindUnaryCallback.<anonymous closure> (dart:async/zone.dart:1265)
#11 _Timer._runTimers (dart:isolate-patch/timer_impl.dart:395)
#12 _Timer._handleMessage (dart:isolate-patch/timer_impl.dart:426)
#13 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:192)
Further investigation suggests the directory that did not exist was in some kind of snap-remove-related limbo (because snapd tried to remove it but it was a bind mount, so unlink failed on resource busy):
drwxr-xr-x 4 phil phil 4.0K Dec 13 10:25 ubuntu-accomplishments
drwxr-xr-x 0 phil phil 2 Dec 13 10:31 flutter <---
drwxr-xr-x 5 phil phil 4.0K Dec 13 13:27 btop
drwxr-xr-x 5 phil phil 4.0K Dec 14 19:02 openscad
drwxr-xr-x 5 phil phil 4.0K Dec 15 10:33 cura-slicer
rmdir returned "device or resource busy" and fuser -v indicated a kernel mount ... so I unmounted it and then removed it. Problem 2 resolved (and probably not worth the time to try handling something so odd that should never happen). Should probably let the snapd folks know though.
And the other problem is between seat and keyboard: the slightly counter-intuitive (but consistent) behaviour of the file picker confuses an old person who prefers the command line.
Nothing is real.
Strawberry Fields Forever!
|
gharchive/issue
| 2021-12-16T13:07:19 |
2025-04-01T04:35:38.429074
|
{
"authors": [
"philclifford"
],
"repo": "quickemu-project/quickgui",
"url": "https://github.com/quickemu-project/quickgui/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1419202803
|
Fix detection and filtering of duplicate push events
Add concurrent test runs (max 5 for now)
First draft of test for hooks/handlePush
Added HeadCommit for handlePush test
Added testdata/courses/qf104-2022 to support test running
Load run script to be used from testdata
Fixed write back http error response on failure
Test*HandlePush focus on inputs and concurrency
Increase concurrent push events to "force" max 5 goroutines
Removed counting concurrent handlePush() goroutines
Added comment about rate limiting if too many push events
Filter duplicate push events with same commit ID
Fixes #868
Codecov Report
Merging #869 (9b8d5f0) into master (0f8ddea) will increase coverage by 1.21%.
The diff coverage is 60.97%.
@@ Coverage Diff @@
## master #869 +/- ##
==========================================
+ Coverage 28.20% 29.42% +1.21%
==========================================
Files 88 89 +1
Lines 9378 9506 +128
==========================================
+ Hits 2645 2797 +152
+ Misses 6438 6404 -34
- Partials 295 305 +10
Flag | Coverage Δ
---|---
unittests | 29.42% <60.97%> (+1.21%) :arrow_up:
Flags with carried forward coverage won't be shown.
Impacted Files | Coverage Δ
---|---
ci/run_tests.go | 7.50% <0.00%> (ø)
web/hooks/github.go | 19.23% <37.50%> (+12.56%) :arrow_up:
web/hooks/duplicate_map.go | 100.00% <100.00%> (ø)
scm/scm.go | 0.00% <0.00%> (ø)
scm/github.go | 0.00% <0.00%> (ø)
scm/helper.go | 75.00% <0.00%> (+9.09%) :arrow_up:
scm/mock.go | 78.96% <0.00%> (+12.54%) :arrow_up:
|
gharchive/pull-request
| 2022-10-22T09:18:22 |
2025-04-01T04:35:38.442000
|
{
"authors": [
"codecov-commenter",
"meling"
],
"repo": "quickfeed/quickfeed",
"url": "https://github.com/quickfeed/quickfeed/pull/869",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
3449811
|
QS auto start Lion, typing does nothing
When I have Quicksilver set to Open at Login it will start fine, and the hot key triggers fine, but then typing does nothing. This happens every single time; I've uninstalled and reinstalled and this changes nothing.
I think I may have figured this out. Can everyone affected please run this in a Terminal and post the output here?
defaults read com.blacktree.Quicksilver "Last Update Check"
Thanks.
2012-03-01 06:12:44 +0000
|
gharchive/issue
| 2012-03-01T06:13:01 |
2025-04-01T04:35:38.718410
|
{
"authors": [
"oxnard805",
"skurfer"
],
"repo": "quicksilver/Quicksilver",
"url": "https://github.com/quicksilver/Quicksilver/issues/741",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2656679861
|
bug: [Android] onValueChanged does not trigger or send wrong values sometimes
Hello,
I recently tried using your library, and it works great on iOS. However, I've encountered a couple of issues on Android. Specifically, the onValueChanged callback sometimes doesn’t trigger, or it returns an incorrect value. I’ve recorded a video demonstrating these issues and also created a simple Snack example for easy testing.
Issues demonstrated in the video:
Around the 3-second mark, I select the value 4, but the onValueChanged callback receives the value 3 instead.
Around the 7-second mark, I change the value from 1 to 2, but the callback is not triggered at all (you can confirm this by checking the logs—onValueChanged is not called).
The above issues continue to occur intermittently with different values, indicating that the problem is not limited to a specific value.
I'm using a OnePlus 7T.
I’d appreciate any guidance or fixes you could provide for this behavior.
Thank you in advance!
Snack : https://snack.expo.dev/@eeynard_allocab/wheelpicker-android-issue
https://github.com/user-attachments/assets/e318d089-dea4-4ccc-9000-0725aa94580d
@eeynard Thank you for your issue!
Please answer a few questions:
Is the new or old architecture being used?
What is the version of React Native?
Build Debug or Release?
I haven't been able to reproduce the problem, but I suspect it may be due to the JS thread being blocked.
I see that you create a new reference to data and onValueChanged on each render; try to memoize them and see whether it works better.
At the moment, I understand that the conditions for calling onValueChanged are not implemented in the best way and it needs to be improved.
In the near future, I will try to improve the onValueChanged call and remove the 300ms delay in calling the function.
To determine when scrolling stops, use the onScrollBeginDrag, onScrollEndDrag, and onMomentumScrollEnd events.
I accidentally forgot to save my code, but I did test it using a static dataset (you can see it in the updated snack). The issue still persists.
Is the new or old architecture being used?
old
What is the version of React Native?
Expo Go 52 => RN 0.76. And on my app Expo 51 => RN 0.74.5
Build Debug or Release?
I believe Expo Go runs in release mode, while my app was tested in debug mode.
Thanks for the feedback! I’ll look into finding a workaround in the meantime.
So, just to add more information, I'm stumbling into the same thing. My problem is that sometimes it doesn't change at all, or changes by itself.
This is what I'm getting from scrolling:
One down: most of the time it won't register the change
Two down: works fine all the time
Three down: Goes one up (turns to two down)
Four down: Sometimes turns to three down (goes one up)
Scrolling up: Always works
But, btw, great wheel, it was by far the best I've tested.
https://github.com/user-attachments/assets/2e28cd1b-48c7-4cdb-b196-031ba8f9d761
@thacio Thank you for your additional information!
@thacio @eeynard Can you check in version 1.3.1-beta.0 how onValueChanged works now?
Noticed a small change on one down; the rest were similar. I tested these:
One down: skips the odd indexes
Two down: works fine all the time
Three down: Goes one up (turns to two down)
Scrolling up: Always works
https://github.com/user-attachments/assets/5ff45587-0535-4f07-850b-f03e21980cff
@thacio Thank you very much for helping in this difficult task!)
I have released version 1.3.1-beta.1, please see if the problem has been fixed? In this version, I fixed the index definition and removed unnecessary calls when synchronizing scrolling. This should improve onValueChanged work.
@thacio @eeynard Could you please provide feedback on version 1.3.1-beta.1?
All right, congrats! Working flawlessly! Thanks, I tried reproducing in a simple example but it didn't work. Everything worked now in the same setup.
@thacio Thank you so much!)
Released 1.3.2
|
gharchive/issue
| 2024-11-13T20:28:37 |
2025-04-01T04:35:38.731283
|
{
"authors": [
"eeynard",
"rozhkovs",
"thacio"
],
"repo": "quidone/react-native-wheel-picker",
"url": "https://github.com/quidone/react-native-wheel-picker/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2036819672
|
feat: gsalib tables
Fixes #13
packageFolder.meta: {'options': {'region': 'us-east-1', 'time': '2023-12-11T23:01:23.456Z', 'type': 'ObjectCreated:Put', 'bucket': 'omics-quilt-omicsquiltckaoutput712023778557useast-1r7hdb9111n3i', 'key': 'outputs/8637245/out/bqsr_report/NA12878.hg38.recal_data.csv', 'package': 'outputs/8637245', 'debug': False}, 'context': {}}
[ERROR] S3NoValidClientError: S3 AccessDenied for S3Api.LIST_OBJECT_VERSIONS on bucket: omics-quilt-omicsquiltckaoutput712023778557useast-1r7hdb9111n3i
Traceback (most recent call last):
File "/var/task/packager/index.py", line 9, in handler
return handler.handleEvent(event)
File "/var/task/packager/handler.py", line 65, in handleEvent
meta = self.packageFolder(root, opts)
Apparently this is a Quilt-specific error.
|
gharchive/pull-request
| 2023-12-12T01:38:12 |
2025-04-01T04:35:38.742665
|
{
"authors": [
"drernie"
],
"repo": "quiltdata/omics-quilt-demo",
"url": "https://github.com/quiltdata/omics-quilt-demo/pull/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1105513475
|
Barracks
Barracks (units come out, which then stop the enemies and fight them)
Originally posted by @quiode in https://github.com/quiode/TroyTD/issues/39#issuecomment-1014247920
(units come out, which then stop the enemies and fight them)
done ig
|
gharchive/issue
| 2022-01-17T08:15:39 |
2025-04-01T04:35:38.747889
|
{
"authors": [
"quiode"
],
"repo": "quiode/TroyTD",
"url": "https://github.com/quiode/TroyTD/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
259056532
|
Add infotext option to stay permanently / go away with callback
Instead of info text timing out, it'd be nice to have some persist.
This could get used for the 6 fulms under, or when dps need to float for double stack, or to tell you what color you are during o4s color lasers?
Even better would be an update function, so you could do stuff like: "thunder 3: you are 10.2 yalms away from the boss DANGER"
popup-text.js:
add to f
let resolveText = (condition, callback) => {
if (condition(gPopupText.data)) {
callback();
}
};
replace timeout in alarmText, alertText and infoText with:
if ('resolve' in trigger)
window.setInterval(resolveText.bind(that, trigger.resolve, removeText.bind(that, holder, div), this), 300);
else
window.setTimeout(removeText.bind(that, holder, div), duration * 1000);
trigger.js
{
id: 'example',
regex: /hello world/,
resolve: function(data) {
if (data.role != tank)
return false;
return true;
},
infoText: 'hello world',
}
This actually works! But I couldn't figure out how to clear the interval after the condition was met.
Ok, I think this was an idea from the past which never really ended up being that useful of an idea.
|
gharchive/issue
| 2017-09-20T06:44:56 |
2025-04-01T04:35:38.758109
|
{
"authors": [
"DrLippe",
"quisquous"
],
"repo": "quisquous/cactbot",
"url": "https://github.com/quisquous/cactbot/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
419787200
|
with_venv.sh python setup.py install reports an error
Problem description
tools/with_venv.sh python setup.py install
Environment
CentOS: 7.5
Python: 3.6
pip: 19.0.3
Steps to reproduce
tools/with_venv.sh python setup.py install
Actual output
ERROR:root:Error parsing
Traceback (most recent call last):
File "/usr/local/open_dnsdb-0.2/venv3.6/lib/python3.6/site-packages/pbr/core.py", line 96, in pbr
attrs = util.cfg_to_args(path, dist.script_args)
File "/usr/local/open_dnsdb-0.2/venv3.6/lib/python3.6/site-packages/pbr/util.py", line 256, in cfg_to_args
pbr.hooks.setup_hook(config)
File "/usr/local/open_dnsdb-0.2/venv3.6/lib/python3.6/site-packages/pbr/hooks/init.py", line 25, in setup_hook
metadata_config.run()
File "/usr/local/open_dnsdb-0.2/venv3.6/lib/python3.6/site-packages/pbr/hooks/base.py", line 27, in run
self.hook()
File "/usr/local/open_dnsdb-0.2/venv3.6/lib/python3.6/site-packages/pbr/hooks/metadata.py", line 26, in hook
self.config['name'], self.config.get('version', None))
File "/usr/local/open_dnsdb-0.2/venv3.6/lib/python3.6/site-packages/pbr/packaging.py", line 849, in get_version
name=package_name))
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name dnsdb was given, but was not able to be found.
error in setup command: Error parsing /usr/local/open_dnsdb-0.2/setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name dnsdb was given, but was not able to be found.
Hi, could you help take a look at what's causing this error?
Testing of the Python 3 upgrade isn't finished yet, and the current code has been rolled back to Python 2. Please try it with Python 2.
I switched the virtualenv to 2.7, but I still get the same error. The error message is as follows:
(venv2.7) [root@localhost open_dnsdb-0.2]# tools/with_venv.sh python setup.py install
ERROR:root:Error parsing
Traceback (most recent call last):
File "/usr/local/open_dnsdb-0.2/venv2.7/lib/python2.7/site-packages/pbr/core.py", line 96, in pbr
attrs = util.cfg_to_args(path, dist.script_args)
File "/usr/local/open_dnsdb-0.2/venv2.7/lib/python2.7/site-packages/pbr/util.py", line 256, in cfg_to_args
pbr.hooks.setup_hook(config)
File "/usr/local/open_dnsdb-0.2/venv2.7/lib/python2.7/site-packages/pbr/hooks/init.py", line 25, in setup_hook
metadata_config.run()
File "/usr/local/open_dnsdb-0.2/venv2.7/lib/python2.7/site-packages/pbr/hooks/base.py", line 27, in run
self.hook()
File "/usr/local/open_dnsdb-0.2/venv2.7/lib/python2.7/site-packages/pbr/hooks/metadata.py", line 26, in hook
self.config['name'], self.config.get('version', None))
File "/usr/local/open_dnsdb-0.2/venv2.7/lib/python2.7/site-packages/pbr/packaging.py", line 849, in get_version
name=package_name))
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name dnsdb was given, but was not able to be found.
error in setup command: Error parsing /usr/local/open_dnsdb-0.2/setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name dnsdb was given, but was not able to be found.
@LostSymbol I took a look at pbr/packaging.py. Setting the PBR_VERSION environment variable is enough; problem solved, thanks!
@ghjacky Could you share the concrete steps you used to solve this?
@LostSymbol I took a look at pbr/packaging.py. Setting the PBR_VERSION environment variable is enough; problem solved, thanks!
May I ask how exactly you set it? I'm running into the same kind of problem and would like to try your approach.
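For anyone else hitting this: based on the comment above, the workaround amounts to setting PBR_VERSION before running setup.py. A minimal sketch (the version string is only an example; use the release you intend to install):
import os
import subprocess

# Tell pbr the version explicitly instead of letting it probe git/sdist metadata.
env = dict(os.environ, PBR_VERSION="0.2.0")
subprocess.check_call(["python", "setup.py", "install"], env=env)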
|
gharchive/issue
| 2019-03-12T03:39:50 |
2025-04-01T04:35:38.771699
|
{
"authors": [
"LostSymbol",
"ghjacky",
"kenkong2019",
"zengdd-pro"
],
"repo": "qunarcorp/open_dnsdb",
"url": "https://github.com/qunarcorp/open_dnsdb/issues/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2177188778
|
🛑 Adguard Home DoT is down
In c03a1c3, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in 5b69bca after 8 minutes.
|
gharchive/issue
| 2024-03-09T11:26:01 |
2025-04-01T04:35:38.804268
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/10560",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1868400414
|
🛑 Adguard Home DoT is down
In ef9fbc0, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in 40734b0 after 136 days, 3 hours, 42 minutes.
|
gharchive/issue
| 2023-08-27T07:56:57 |
2025-04-01T04:35:38.806988
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/4517",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1920141320
|
🛑 Adguard Home DoT is down
In e4b2735, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in e8112d0 after 23 minutes.
|
gharchive/issue
| 2023-09-30T06:27:20 |
2025-04-01T04:35:38.809067
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/5663",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1997840611
|
🛑 Adguard Home DoT is down
In 1bf6981, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in 36ed8d3 after 7 minutes.
|
gharchive/issue
| 2023-11-16T21:42:44 |
2025-04-01T04:35:38.811051
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/7275",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1695302907
|
🛑 Adguard Home DoT is down
In d04225f, Adguard Home DoT ($AG_DOT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Adguard Home DoT is back up in 7dbfb67.
|
gharchive/issue
| 2023-05-04T05:50:22 |
2025-04-01T04:35:38.813107
|
{
"authors": [
"quyleanh"
],
"repo": "quyleanh/upptime",
"url": "https://github.com/quyleanh/upptime/issues/800",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1842288374
|
Shortcut for adding tabs, plus fixing the flickering issue
This PR fixes the flickering issue for the headless tabs while reducing the amount of code necessary to manage the index and ARIA attributes.
We do it by using an inline component to manage the root tabs component
This allows us to create tabs by using an alternative shorter syntax:
<Tabs>
<TabPanel title="tab 1"> Panel 1 </TabPanel>
</Tabs>
Which will automatically be translated to:
<Tabs>
<TabList>
<Tab> Tab 1 </Tab>
</TabList>
<TabPanel > Panel 1 </TabPanel>
</Tabs>
I have read the CLA Document and I hereby sign the CLA
@all-contributors please add @wmertens for code, research, ideas, tests and docs
|
gharchive/pull-request
| 2023-08-09T00:53:55 |
2025-04-01T04:35:38.815172
|
{
"authors": [
"shairez",
"wmertens"
],
"repo": "qwikifiers/qwik-ui",
"url": "https://github.com/qwikifiers/qwik-ui/pull/385",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1630991240
|
Potential mistakes in the test data selection for perplexity evaluation
ptb_text_only uses the validation file instead of the test file. While it is still from the same dataset and should give similar results, this makes 1:1 comparisons difficult.
c4 only has validation, so that is fine.
wikitext-2 uses test https://github.com/qwopqwop200/GPTQ-for-LLaMa/blob/468c47c01b4fe370616747b6d69a2d3f48bab5e4/datautils.py#L13
ptb_text_only uses validation https://github.com/qwopqwop200/GPTQ-for-LLaMa/blob/468c47c01b4fe370616747b6d69a2d3f48bab5e4/datautils.py#L35
c4 uses validation https://github.com/qwopqwop200/GPTQ-for-LLaMa/blob/468c47c01b4fe370616747b6d69a2d3f48bab5e4/datautils.py#L59-L61
please correct me if this is intended. :)
I just follow the settings of GPTQ.
:eyes:
https://github.com/qwopqwop200/GPTQ-for-LLaMa/blob/841feedde876785bc8022ca48fd9c3ff626587e2/datautils.py#L107
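For clarity, switching the PTB evaluation to the test split would look something like this with HF datasets (an illustrative sketch, not the repo's actual code):
from datasets import load_dataset

# Load the *test* split of Penn Treebank so perplexity numbers are
# directly comparable with papers that evaluate on test data.
testdata = load_dataset("ptb_text_only", "penn_treebank", split="test")
print(testdata[0]["sentence"])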
|
gharchive/issue
| 2023-03-19T15:36:18 |
2025-04-01T04:35:38.819154
|
{
"authors": [
"Green-Sky",
"qwopqwop200"
],
"repo": "qwopqwop200/GPTQ-for-LLaMa",
"url": "https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/60",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1575448567
|
🛑 Jenkins is down
In c5826fb, Jenkins (https://apk.qwq2333.top) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Jenkins is back up in 3cd23f2.
|
gharchive/issue
| 2023-02-08T04:28:48 |
2025-04-01T04:35:38.821608
|
{
"authors": [
"qwq233"
],
"repo": "qwq233/upptime",
"url": "https://github.com/qwq233/upptime/issues/556",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1461213232
|
More robust logging configuration
Currently logging is only done through the basicConfig and everything is done at the debug level.
We should update this. We should also be logging to a file (whose location is user-controllable), especially for clean and smudge filters; we should have some messages at debug and some at info, and a user-configurable way to control the verbosity of logs.
Ideally there would also be a way to see our debug messages without getting the ones from GitPython as some of their debug logs look like errors (the message about CYGWIN for example) and like they come from git-theta as we are currently configuring the root logger.
Also, now that we are using Async code, we should make something like a custom formatter that injects an asyncio task id into the log. This will make it easy to grep out a single async call from the interleaved result you get otherwise.
This link has an example of how to get the current async task id which we could add to a custom formatter https://stackoverflow.com/a/53949138
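A minimal sketch of that idea (names here are illustrative, not the project's actual code):
import asyncio
import logging

class TaskIdFilter(logging.Filter):
    """Attach the current asyncio task name (if any) to each log record."""
    def filter(self, record):
        try:
            task = asyncio.current_task()
            record.taskid = task.get_name() if task else "main"
        except RuntimeError:  # no running event loop
            record.taskid = "main"
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("[%(taskid)s] %(levelname)s %(message)s"))
handler.addFilter(TaskIdFilter())
logging.getLogger("git_theta").addHandler(handler)
This would make it easy to grep a single async call out of the interleaved output.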
https://github.com/r-three/git-theta/pull/216 adds async task ids to the logs, we still should move off the root logger eventually
|
gharchive/issue
| 2022-11-23T07:41:58 |
2025-04-01T04:35:38.867387
|
{
"authors": [
"blester125"
],
"repo": "r-three/git-theta",
"url": "https://github.com/r-three/git-theta/issues/93",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2367914612
|
[20:05:37] [CRITICAL] target URL is not responding..
Ghauri is a good tool, unfortunately I'm encountering this problem on the database of a site I'm trying to dump.
The site is functional, but after a while Ghauri disconnects and leaves the message
[20:05:37] [CRITICAL] target URL is not responding...
Nothing is logged, of course. Is there any way to fix/bypass this?
How is this problem solved?
|
gharchive/issue
| 2024-06-22T17:11:48 |
2025-04-01T04:35:38.877096
|
{
"authors": [
"anchoret-x",
"ghost"
],
"repo": "r0oth3x49/ghauri",
"url": "https://github.com/r0oth3x49/ghauri/issues/158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
752621516
|
🛑 Chelsea is down
In 908803b, Chelsea (http://chelsea.kt.co.kr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chelsea is back up in 14a8e02.
|
gharchive/issue
| 2020-11-28T07:56:00 |
2025-04-01T04:35:38.882320
|
{
"authors": [
"r2fresh"
],
"repo": "r2fresh/chelsea",
"url": "https://github.com/r2fresh/chelsea/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1647746520
|
Update Fluid Benchmark
Hello,
I noticed a couple issues with the Fluid benchmark:
The Cottle benchmark was mapping the Product objects into a dictionary whereas the Fluid benchmark was accessing properties on the object
The Fluid benchmark was re-creating the FluidParser each time, although it is thread-safe and supposed to be shared
It wasn't re-creating the TemplateContext each time, which apparently you are supposed to do
After making those updates I see that Fluid seems to have consistently lower execution time and memory allocation than Cottle for both Create and Render.
Did I miss something in the benchmark? I'm just trying to do a fair evaluation of the two.
Hello @DanielStout5 and thanks for contributing! I'm definitely trying to get a comparison as fair as possible, and it's very likely I didn't manage to use the various libraries included in this benchmark in a way that maximizes their performance.
However I think some parts of your change actually introduce a bias in favor of Fluid ; please see inline comments for details 🙂
FYI I just updated benchmark code to reflect this discussion. I modified again code for Fluid after I realized reusing the same TemplateContext gave it a small boost over creating a new one for each render (it also makes the test more aligned with other libraries). Results are here ; thanks again for pushing this!
|
gharchive/pull-request
| 2023-03-30T14:28:22 |
2025-04-01T04:35:38.885850
|
{
"authors": [
"DanielStout5",
"r3c"
],
"repo": "r3c/cottle",
"url": "https://github.com/r3c/cottle/pull/188",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1444897154
|
Update tutorial-one-dotnet.md
Remove redundant symbols and empty string.
@kegor Please sign the Contributor License Agreement!
Click here to manually synchronize the status of this Pull Request.
See the FAQ for frequently asked questions.
@kegor Thank you for signing the Contributor License Agreement!
Good catch, thanks.
|
gharchive/pull-request
| 2022-11-11T04:07:10 |
2025-04-01T04:35:38.920773
|
{
"authors": [
"kegor",
"michaelklishin",
"pivotal-cla"
],
"repo": "rabbitmq/rabbitmq-website",
"url": "https://github.com/rabbitmq/rabbitmq-website/pull/1557",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
585254718
|
S01E03: How to contribute to RabbitMQ? Part 1
Proposed via rabbitmq/tgir#4
Hosted by @gerhardlazu
Published on: 2020-03-27
Just a quick note that I had to perform something along these lines (https://github.com/ltfschoen/tendermint-elixir/issues/1) to get a modern make on OS X.
Well done @johanrhodin, you found the first "Easter egg" 🐣
This is my way of checking how many follow along and are willing to solve the small challenges that would otherwise stop them. +1 Johan's comment if you found the same 🐣
Let me know how it goes 👍🏻
|
gharchive/pull-request
| 2020-03-20T18:30:05 |
2025-04-01T04:35:38.924071
|
{
"authors": [
"gerhard",
"johanrhodin"
],
"repo": "rabbitmq/tgir",
"url": "https://github.com/rabbitmq/tgir/pull/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
185622789
|
About using this with channel live broadcasts (I can send the actual XML and the path)
Various how-to blogs say this can't be used (or isn't used) with channel broadcasts,
so the investigation may conclude that it's impossible, but
I'd like to use OBS Studio with automatic detection of my own slot on channel broadcasts as well, so please consider it.
If there is any information you need to implement this, please let me know.
https://twitter.com/Tanosimi3500
v2.1.0-pre supports the new streaming (beta). It's possible that channel broadcasts happen to be supported along with it. If you're able to test, please give it a try.
I guess writing here counts as a reply?
With the latest non-pre version of the plugin, automatic detection of my own slot on a channel did not work.
The new streaming (beta) hasn't thrown a single error,
so it's probably because the part after rtmp:// differs between regular broadcasts, the new streaming, and channels.
The stream key is lv-something-or-other, and the notation looks the same for channels too.
I could copy-paste the "rtmp://...whatever" part,
or I'd like to send you the actual XML.
How does that sound?
Please paste the XML directly into a reply here. When pasting, use one from a broadcast that has already ended, or mask some of the characters.
When pasting, wrap it in ``` as shown below.
```
the XML content
```
First time trying the fencing, so let's see if it comes out right.
I've masked only the personal parts: my user ID, the broadcast number,
and the parts that are confirmed to have multiple possible values.
<?xml version="1.0" encoding="UTF-8"?>
<flashmedialiveencoder_profile>
<preset>
<name>Custom</name>
<description></description>
</preset>
<capture>
<video>
<device></device>
<crossbar_input>0</crossbar_input>
<frame_rate>24.00</frame_rate>
<size>
<width>640</width>
<height>480</height>
</size>
</video>
<audio>
<device></device>
<crossbar_input>0</crossbar_input>
<sample_rate>44100</sample_rate>
<channels>2</channels>
<input_volume>78</input_volume>
</audio>
</capture>
<process>
<video>
<preserve_aspect></preserve_aspect>
</video>
</process>
<encode>
<video>
<format>H.264</format>
<datarate>250;</datarate>
<outputsize>320x240;</outputsize>
<advanced>
<profile>Main</profile>
<level>3.0</level>
<keyframe_frequency>5 Seconds</keyframe_frequency>
</advanced>
<autoadjust>
<enable>false</enable>
<maxbuffersize>1</maxbuffersize>
<dropframes>
<enable>false</enable>
</dropframes>
<degradequality>
<enable>false</enable>
<minvideobitrate></minvideobitrate>
<preservepfq>false</preservepfq>
</degradequality>
</autoadjust>
</video>
<audio>
<format>MP3</format>
<datarate>96</datarate>
</audio>
</encode>
<restartinterval>
<days></days>
<hours></hours>
<minutes></minutes>
</restartinterval>
<reconnectinterval>
<attempts>1</attempts>
<interval>5</interval>
</reconnectinterval>
<output>
<rtmp>
<url>rtmp://chnl03(changes irregularly between 1 and 3 each time a slot is taken).ep.live.nicovideo.jp:1935/publicorigin/161113_01_1?(user ID that took the slot):lv(broadcast number):4:1478966464:0:1478966404:d758562b000f1ca4</url>
<backup_url></backup_url>
<stream>lv(same broadcast number as above)</stream>
</rtmp>
</output>
<metadata></metadata>
<preview>
<video>
<input>
<zoom>50%</zoom>
</input>
<output>
<zoom>50%</zoom>
</output>
</video>
<audio></audio>
</preview>
</flashmedialiveencoder_profile>
Is this XML something the Nico side emits? Hmm, it's completely different from a regular live broadcast, so supporting it right away would be difficult... Is the URL being fetched http://live.nicovideo.jp/api/getpublishstatus? I have a feeling it differs from that point on, and if so, it may be hard without branching on a flag at the very root.
This is the config file you download from the broadcast page to feed into FMLE.
Tanosimi-san, what the obs-rtmp plugin relies on is the information obtained from the URL
http://watch.live.nicovideo.jp/api/getpublishstatus?v=[broadcast number]
When you start a broadcast, replace the [broadcast number] part of that URL with
the broadcast number of the channel broadcast you started, open that URL in a browser,
and paste the browser's contents into this thread.
Entering the whole long URL above as-is gave me an error.
The URL looked duplicated,
so I removed one copy and used
http://watch.live.nicovideo.jp/api/getpublishstatus?v=lv(broadcast number)
That produced some text that looks meaningful, so I'm pasting it.
Sorry if I got anything wrong.
<getpublishstatus status="ok" time="1479198451">
<stream>
<id>lv(broadcast number)</id>
<token>713b01e9b0d8bc93052e8fdb2b77a2282b9c7ac8</token>
<exclude>1</exclude>
<provider_type>channel</provider_type>
<base_time>1479197746</base_time>
<open_time>1479197746</open_time>
<start_time>1479197746</start_time>
<end_time>1479199546</end_time>
<allow_vote>1</allow_vote>
<disable_adaptive_bitrate>1</disable_adaptive_bitrate>
<is_reserved>0</is_reserved>
<is_chtest>1</is_chtest>
<for_mobile>1</for_mobile>
<editstream_language>1</editstream_language>
<test_extend_enabled>1</test_extend_enabled>
<category>General (Other)</category>
</stream>
<user>
<nickname>(broadcaster name tied to the user ID, i.e. the handle)</nickname>
<is_premium>1</is_premium>
<user_id>(user ID that took the slot)</user_id>
<NLE>1</NLE>
</user>
<rtmp is_fms="1">
<url>
rtmp://chnl01.ep.live.nicovideo.jp:1935/publicorigin/161115_17_1
</url>
<stream>lv(broadcast number)</stream>
<ticket>
(user ID):lv(broadcast number):4:1479198451:0:1479197746:51811d32776328e0
</ticket>
<bitrate>1024</bitrate>
</rtmp>
</getpublishstatus>
What the plugin queries is
http://live.nicovideo.jp/api/getpublishstatus
In the current design there is no way to know the broadcast number in advance, so it is obtained from there. If it can't be obtained there, supporting this right away is basically impossible.
As far as I can see, the XML is the same as a regular user live broadcast, so I'm not quite sure why it doesn't work. If this XML could be fetched from the URL above, broadcasting should be possible.
Since the channel hasn't fully launched yet,
I can only use the test broadcast mode that nobody but me can watch, but with just
http://live.nicovideo.jp/api/getpublishstatus
nothing was shown (old and new user broadcasts do show up).
http://live.nicovideo.jp/api/getpublishstatus/lv(broadcast number)
did return the information.
I'll have someone whose channel is already live open it during a real broadcast and try,
but if that fails, this sounds difficult, I suppose.
If there's ever a chance to ask one of the Dwango people who work on live streaming,
are there any points you'd absolutely want asked for the sake of future development?
If NLE can broadcast fine, it may be that the broadcast slot information is only returned when getpublishstatus is accessed in multi-fetch mode (accept-multi:1). Since there's no way to choose among multiple results, it currently always does a single fetch.
Oh, I don't have the knowledge to follow this, but
does that mean there's already a chance it could work
if the currently simple lookup is made more elaborate?
By the way, with NLE you just take a live broadcast slot on the channel,
log in with the channel account, and press the start button,
and it fetches everything automatically and starts, just like the plugin Raccy-san made.
|
gharchive/issue
| 2016-10-27T09:53:10 |
2025-04-01T04:35:38.944969
|
{
"authors": [
"chajka",
"raccy",
"tanosimi"
],
"repo": "raccy/obs-rtmp-nicolive",
"url": "https://github.com/raccy/obs-rtmp-nicolive/issues/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
199412323
|
Refactorings for loading this plugin faster
I did some refactorings, mainly for loading this plugin faster (and fixing some global scope pollution).
Move implementation into autoload/ directory to defer loading plugin until omni completion is executed
Fix variable prefixes not to pollute global scope and to stop variables living longer unnecessarily
Improve file path separator for Windows
Fix indentation
I believe I fixed all the points. Could you review additional fixes? Diff is below:
https://github.com/rhysd/vim-racer/compare/d6f3d30803d193406fcbb8f5130eb993137749a9...2b029aab0e97fd32a18085473734ab3cc055f31c
Almost good.
Please fix the new comments.
Thank you for catching them. I fixed.
Nice. Merged.
Thank you for your review.
|
gharchive/pull-request
| 2017-01-08T09:24:40 |
2025-04-01T04:35:38.950231
|
{
"authors": [
"Shougo",
"rhysd"
],
"repo": "racer-rust/vim-racer",
"url": "https://github.com/racer-rust/vim-racer/pull/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
224463718
|
Don't recommend installing gem with sudo (From #87)
This fixes https://github.com/rack-test/rack-test/pull/87
@perlun can you review and merge this PR? thanks.
@perlun wait just a moment before merging this PR.
I would like to respect the other, original PR.
|
gharchive/pull-request
| 2017-04-26T13:15:07 |
2025-04-01T04:35:38.953357
|
{
"authors": [
"junaruga"
],
"repo": "rack-test/rack-test",
"url": "https://github.com/rack-test/rack-test/pull/162",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
54639093
|
I always have to specify a port when adding an endpoint.
It'd be super handy to be able to give deproxy a handler, let it pick the port, and let me query deproxy for the port it's chosen for the handler.
I'm thinking similar behavior like what embedding Jetty will do: If you don't give it a port, it will pick one, and then you can introspect on it and find the port it selected to listen on.
This is extremely important for parallelizing test execution.
Is it even possible to start up deproxy's handlers without having the software under test running?
I'm thinking the order of events to get automatic port selection working all the way down:
Start up deproxy's endpoints
Figure out the ports deproxy used for it's endpoints
Start up repose with magic option to make it auto-select ports, instead of using the defined ports
introspect repose via JMX to get the port those nodes are running on
Start up deproxy using that port the repose node is listening to
Execute test.
So the existing endpoint logic can do some port finder stuff, but that won't work. It has proven itself unreliable in the past.
So, a change to deproxy could allow it to bind to a random port, and return that port relatively easily.
ServerSocket, which is being used to attach, can be created unbound with a no arg constructor. Then you can tell it to bind(null) which will hook it up to localhost on an ephemeral port. Finally, you can say getLocalPort() and it will return the port to which it has attached.
This is probably substantially more reliable than the existing method of using the PortFinder singleton.
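For illustration, the same bind-to-port-zero idea sketched in Python (the project itself is Groovy/Java, so this is only a sketch of the technique, not proposed code):
import socket

# Bind to port 0 so the OS picks a free ephemeral port,
# then ask the socket which port it actually got.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()
print("endpoint listening on port", server.getsockname()[1])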
|
gharchive/issue
| 2015-01-16T23:38:47 |
2025-04-01T04:35:38.957522
|
{
"authors": [
"dkowis"
],
"repo": "rackerlabs/deproxy",
"url": "https://github.com/rackerlabs/deproxy/issues/63",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
134739192
|
Avoid generating warnings on success
Generating a warning event on every Chef run is superfluous. The slack_handler should not generate warning events when everything is working exactly as it should.
Thanks!
|
gharchive/pull-request
| 2016-02-19T00:35:15 |
2025-04-01T04:35:39.064513
|
{
"authors": [
"martinb3",
"nicwaller"
],
"repo": "rackspace-cookbooks/chef-slack_handler",
"url": "https://github.com/rackspace-cookbooks/chef-slack_handler/pull/9",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
66153226
|
Can't get transitionTo to use HTML5 History instead of Hash
For example, when I do .transitionTo('/') while I'm at /query/777, I go to /query/777#/ and not /. The routes work correctly though when I'm traveling through a Router.Link. So basically, how can I get my .transitionTo code to use HTML5 History like my Router.Link instead of a Hash?
// package.json
...
"dependencies": {
"jquery": "2.1.3",
"jquery-mousewheel": "^3.1.12",
"react": "^0.12.2",
"react-router": "^0.12.2",
"reactable": "~0.10.1",
"reflux": "~0.2.7"
}
...
// RouterContainer.js
...
var _router;
module.exports = {
get: function() {
return _router;
},
set: function(router) {
_router = router;
}
};
...
// Routes.js
...
var Routes = (
<Route name="app" path="/" handler={App}>
<DefaultRoute name="home" handler={Home} />
<Route name="query" path="/query/" handler={Query}>
<Route name="json" path="/query/:json" handler={JSON}/>
</Route>
</Route>
);
var RouterContainer = require('./RouterContainer');
RouterContainer.set(Router.create({
routes: Routes
}));
...
// store.js
...
updateURL: function() {
RouterContainer.get().transitionTo('/'); // doesn't work
// RouterContainer.get().transitionTo('home'); // doesn't work either
}
...
// main.js
...
Router.run(Routes, Router.HistoryLocation, function (Handler, state) {
React.render(<Handler />, document.body);
});
...
Found the solution!
Instead of:
RouterContainer.set(Router.create({
routes: Routes
}));
One should pass in a location field like so:
RouterContainer.set(Router.create({
routes: Routes,
location: Router.HistoryLocation
}));
|
gharchive/issue
| 2015-04-03T14:07:09 |
2025-04-01T04:35:39.079888
|
{
"authors": [
"01AutoMonkey"
],
"repo": "rackt/react-router",
"url": "https://github.com/rackt/react-router/issues/1044",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
107258255
|
Best Pattern for fetching async data on route begin
I think the one thing the current router lacks is fetching async data when a route is hit, and showing a loading spinner while that happens.
While I saw mention of react-router-async-props in places, it seems that project is no longer active.
Here is how I currently do this in my application. Wondering if someone can tell me if there is a better way to achieve this:
I am using redux along with react-router.
On hitting a url like /profile, in the React component's componentWillMount() method, I have something like:
componentWillMount()
{
dispatch(action("FETCH"));
$.get("/myprofile").then(
(data) => dispatch(action("UPDATE")(data)));
}
my reducer is like this:
function initialState() {
return { loading: true, data: null };
}
function reducer(state =initialState(), action) {
if (action.type === ("FETCH" ))
return initialState();
else if (action.type === ("UPDATE" ))
return { loading: false, data: action.payload };
return state;
};
In my component, I just check for the reducer's loading field to display a spinner or the content.
The downside of this approach is that the data fetched now resides in the store even after we navigate to a different route and the component is no longer present.
Wondering if there are better ways to accomplish the above, that also clear the fetched data on component destruction.
This is an architecture/data issue, not specific to the router.
Different people bootstrap data differently. Your components will always have extra knowledge to know if data is existent/stale/loading/etc.
I posted a comment here #2101 where you might find some clue.
As @blairanderson said, this goes beyond react-router and depends on architecture.
Yep, this goes beyond React Router, but there are a couple of good discussions going on; please look at similar issues. Also this one seems interesting: #2101
|
gharchive/issue
| 2015-09-18T18:59:25 |
2025-04-01T04:35:39.084122
|
{
"authors": [
"blairanderson",
"knowbody",
"pdeva",
"vojtatranta"
],
"repo": "rackt/react-router",
"url": "https://github.com/rackt/react-router/issues/2008",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
116423437
|
params on last version using hash and '?'
Hello
I have this:
<Route name="oauth" path="/oauth?token=:token" handler={ LoginOauth } />
it doesn't work on React 0.14.x and the latest version (1.0.0) of react-router. LoginOauth is never called.
How can I make it work? Can I use params after a '?' using hash history?
TIA
Thanks for your question!
We want to make sure that the GitHub issue tracker remains the best place to track bug reports and feature requests that affect the development of React Router.
Questions like yours deserve a purpose-built Q&A forum. Would you like to post this question to Stack Overflow with the tag #react-router? https://stackoverflow.com/questions/ask?tags=react-router.
We also have an active and helpful React Router community on Reactiflux, which is a great place to get fast help with React Router and with the rest of the React ecosystem. You can join at https://discord.gg/0ZcbPKXt5bYaNQ46.
|
gharchive/issue
| 2015-11-11T21:18:49 |
2025-04-01T04:35:39.087641
|
{
"authors": [
"UXDart",
"taion"
],
"repo": "rackt/react-router",
"url": "https://github.com/rackt/react-router/issues/2525",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
110258671
|
Using .set with an object can trigger inconsistent observers
Ractive latest & edge
When .setting a new object of properties, Ractive seems to process each of them one at a time. I would expect all of them to be set at the same time.
Consider this example: http://jsfiddle.net/bd29n0w6/ (result in the console)
The parent view contains two children that are displayed conditionally, like a navigation routing point. When I navigate to another view, I want to set the view name and the parameters on the parent router at the same time.
What happens is that ChildA receives ChildB's params before currentView is evaluated.
Am I doing anything wrong? Is this a bug?
Thank you for the support!
Even though I couldn't find any similar issue already open, if there is, please close this one, thanks!
I've also noticed the same thing when items implicitly depend on each other. Yet to find a way to set the correct resolution order other than having a crap ton of error checking and some try catches. :|
set with an object has always been a shortcut for calling set multiple times. So as it stands, it's not really a bug.
I suppose the question is: should set with an object defer dependent notification until after the model has been set for each key? It looks like that would be possible, but from a cursory glance, it wouldn't be pretty.
@evs-chris, yes, I actually thought it was an optimization until I ran into this as well. In the end, I just use multiple sets... though it would be nice to have the object set defer the notifications, definitely. We could simplify some code that way.
@evs-chris fair enough, thanks for the info. It would be a great enhancement to be able to set multiple data at the same time though, both in terms of performance (avoid double runloop / DOM update) and consistency I would say.
Any chance we have it on a future release?
set with an object has always been a shortcut for calling set multiple times
It would be a great enhancement to be able to set multiple data at the same time though, both in terms of performance (avoid double runloop / DOM update) and consistency I would say
Actually a hash object set is different from calling set multiple times. It only calls runloop.end() once. This means that while dependents are notified on each set, DOM updates are only processed once at the end, after all the sets have been made.
Given that observers are notified prior to DOM changes being made, it makes sense that the code example, as written, behaves as it does. What isn't as clearly defined is what should happen if the observer is passed { defer: true }, which means it should be called after the DOM is updated. Currently it behaves identically - you could argue it shouldn't.
In any case, swapping out sections like this does tend to be more difficult than it seems it should. And it can be more difficult if you have transitions defined.
How I would manage this case (and given that the details matter, it may be different for your "real" use-case) is to clear the data that drives the section, then set the new data. Without transitions that would mean (see http://jsfiddle.net/bd29n0w6/3/):
parent.set({ currentView: '', params: null });
parent.set({ currentView, params });
With transitions something more like (http://jsfiddle.net/bd29n0w6/4/):
parent.set({ currentView: '', params: null }).then( () => {
parent.set({ currentView, params });
});
@martypdx Interesting. Using an empty view seems to be a good workaround to change params without giving any intermediary to the views and I will use it for my navigation stuff. Thanks for details & advices!
If you use deferred observers, it looks like the issue is resolved: http://jsfiddle.net/bd29n0w6/5/
|
gharchive/issue
| 2015-10-07T15:58:44 |
2025-04-01T04:35:39.103053
|
{
"authors": [
"JonDum",
"evs-chris",
"heavyk",
"martypdx",
"ngasull"
],
"repo": "ractivejs/ractive",
"url": "https://github.com/ractivejs/ractive/issues/2200",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1691685359
|
Adding multiple classes in Java
It would be nice if I could add multiple classes in my Java code, something I'm unable to do because Riju only supports one "file" (which I like), but Java requires making each class a new file. Maybe there's a workaround in the JVM to allow for multiple class declarations in one file?
Thanks,
--reese
You can do this by defining the additional classes as package-private (i.e., removing the public modifier): https://stackoverflow.com/a/48839136
|
gharchive/issue
| 2023-05-02T02:35:01 |
2025-04-01T04:35:39.158615
|
{
"authors": [
"raxod502",
"reesericci"
],
"repo": "radian-software/riju",
"url": "https://github.com/radian-software/riju/issues/179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
351996412
|
Allows CommonMark (Markdown) in descriptions
Implements #162.
I need to make another commit to run the build. I'll do so after the last review and potential other commits to avoid unnecessary commits just for the purpose of triggering the build.
FYI You can make an empty commit
git commit -a --allow-empty
To trigger a build. You can also manually build in the CircleCI interface.
Triggered the build, can be merged now.
|
gharchive/pull-request
| 2018-08-20T06:05:50 |
2025-04-01T04:35:39.194389
|
{
"authors": [
"m-mohr",
"matthewhanson"
],
"repo": "radiantearth/stac-spec",
"url": "https://github.com/radiantearth/stac-spec/pull/185",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1751384700
|
🛑 GPS Receiver API is down
In 47342c1, GPS Receiver API (http://tracker.ace-energy.co.th/receiver/api/v1/health) was down:
HTTP code: 0
Response time: 0 ms
Resolved: GPS Receiver API is back up in f9c4b48.
|
gharchive/issue
| 2023-06-11T11:33:35 |
2025-04-01T04:35:39.251966
|
{
"authors": [
"chindanai"
],
"repo": "radiuszon/upptime",
"url": "https://github.com/radiuszon/upptime/issues/1199",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1798147554
|
🛑 GPS Receiver API is down
In f0284e9, GPS Receiver API (http://tracker.ace-energy.co.th/receiver/api/v1/health) was down:
HTTP code: 504
Response time: 15032 ms
Resolved: GPS Receiver API is back up in 252449e.
|
gharchive/issue
| 2023-07-11T05:43:15 |
2025-04-01T04:35:39.254325
|
{
"authors": [
"chindanai"
],
"repo": "radiuszon/upptime",
"url": "https://github.com/radiuszon/upptime/issues/1606",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2086161505
|
[Label] Make optional the prevention of text selection when double clicking label
Feature request
Overview
The Label primitive prevents text selection when double clicking the label.
This also prevents using the "up/down" arrows of an input element of type number placed inside the Label.
Adding an option to Label to disable the prevention of text selection would be great.
Who does this impact? Who is this for?
Advanced users. This request might be too specific, feel free to close this issue.
Here is the source code of Label:
const Label = React.forwardRef<LabelElement, LabelProps>((props, forwardedRef) => {
  return (
    <Primitive.label
      {...props}
      ref={forwardedRef}
      onMouseDown={(event) => {
        props.onMouseDown?.(event);
        // prevent text selection when double clicking label
        if (!event.defaultPrevented && event.detail > 1) event.preventDefault();
      }}
    />
  );
});
You can use the label component that is available from @radix-ui/react-primitive package since the only difference is onMouseDown
import { Primitive } from '@radix-ui/react-primitive';
Ah excellent, thanks for that! I use Shadcn UI, so I modified its Label component to add a disableSelectionPrevention option, as follows:
import * as LabelPrimitive from "@radix-ui/react-label";
import { Primitive } from "@radix-ui/react-primitive";
import { cva, type VariantProps } from "class-variance-authority";
import * as React from "react";
import { cn } from "@/lib/utils";
const labelVariants = cva(
"text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70",
);
const Label = React.forwardRef<
React.ElementRef<typeof LabelPrimitive.Root>,
React.ComponentPropsWithoutRef<typeof LabelPrimitive.Root> &
VariantProps<typeof labelVariants> & {
disableSelectionPrevention?: boolean;
}
>(({ className, disableSelectionPrevention, ...props }, ref) => {
const Component = disableSelectionPrevention
? Primitive.label
: LabelPrimitive.Root;
return (
<Component
ref={ref}
className={cn(labelVariants(), className)}
{...props}
/>
);
});
Label.displayName = LabelPrimitive.Root.displayName;
export { Label };
I'll let the people at WorkOS decide if they want to add this option to @radix-ui/react-label, I still think it could be useful. If not, feel free to close this issue :slightly_smiling_face:
I've addressed the issue in #2753.
Oh excellent! Thanks for doing that!
|
gharchive/issue
| 2024-01-17T13:09:34 |
2025-04-01T04:35:39.259166
|
{
"authors": [
"Zwyx",
"benoitgrelard",
"kal07"
],
"repo": "radix-ui/primitives",
"url": "https://github.com/radix-ui/primitives/issues/2656",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
664243487
|
Add debouncing for state events
Is your feature request related to a problem? Please describe.
Yes. Sometimes a state event is sent twice.
Describe the solution you'd like
Some implementation of debouncing to prevent identical state events from being handled twice.
Additional context
Here are some (custom) debug logs that show multiple state events being handled simultaneously:
Jul 23 01:46:46 wyzesense2mqtt python3[18592]: sensor_mac: 77800DFB
Jul 23 01:46:46 wyzesense2mqtt python3[18592]: State event data: [2020-07-23 01:46:45][77800DFB]StateEvent: sensor_type=motion, state=active, battery=96, signal=89
Jul 23 01:46:46 wyzesense2mqtt python3[18592]: [TRACE] sensor_state: active
Jul 23 01:46:46 wyzesense2mqtt python3[18592]: [TRACE] event.MAC: 77800DFB
Jul 23 01:46:46 wyzesense2mqtt python3[18592]: [TRACE] invert_state: False
Jul 23 01:46:46 wyzesense2mqtt python3[18592]: {'available': True, 'mac': '77800DFB', 'device_class': 'motion', 'last_seen': 1595486805.816, 'last_seen_iso': '2020-07-23T01:46:45.816000', 'signal_strength': -89, 'battery': 96, 'name': 'Garage Motion', 'state': 1}
Jul 23 01:47:01 wyzesense2mqtt python3[18592]: sensor_mac: 77836170
Jul 23 01:47:01 wyzesense2mqtt python3[18592]: State event data: [2020-07-23 01:47:00][77836170]StateEvent: sensor_type=switch, state=close, battery=97, signal=85
Jul 23 01:47:01 wyzesense2mqtt python3[18592]: [TRACE] sensor_state: close
Jul 23 01:47:01 wyzesense2mqtt python3[18592]: [TRACE] event.MAC: 77836170
Jul 23 01:47:01 wyzesense2mqtt python3[18592]: [TRACE] invert_state: True
Jul 23 01:47:01 wyzesense2mqtt python3[18592]: {'available': True, 'mac': '77836170', 'device_class': 'opening', 'last_seen': 1595486820.137, 'last_seen_iso': '2020-07-23T01:47:00.137000', 'signal_strength': -85, 'battery': 97, 'name': 'Doorbell', 'state': 0}
Jul 23 01:47:04 wyzesense2mqtt python3[18592]: sensor_mac: 77A5E27A
Jul 23 01:47:04 wyzesense2mqtt python3[18592]: State event data: [2020-07-23 01:47:03][77A5E27A]StateEvent: sensor_type=motion, state=inactive, battery=93, signal=54
Jul 23 01:47:04 wyzesense2mqtt python3[18592]: [TRACE] sensor_state: inactive
Jul 23 01:47:04 wyzesense2mqtt python3[18592]: [TRACE] event.MAC: 77A5E27A
Jul 23 01:47:04 wyzesense2mqtt python3[18592]: [TRACE] invert_state: False
Jul 23 01:47:04 wyzesense2mqtt python3[18592]: {'available': True, 'mac': '77A5E27A', 'device_class': 'motion', 'last_seen': 1595486823.058, 'last_seen_iso': '2020-07-23T01:47:03.058000', 'signal_strength': -54, 'battery': 93, 'name': 'Back Porch Motion', 'state': 0}
>>> This is the first event of the duplicated state events. <<<
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: sensor_mac: 77836170
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: State event data: [2020-07-23 01:47:09][77836170]StateEvent: sensor_type=switch, state=open, battery=99, signal=88
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: [TRACE] sensor_state: open
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: [TRACE] event.MAC: 77836170
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: [TRACE] invert_state: True
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: {'available': True, 'mac': '77836170', 'device_class': 'opening', 'last_seen': 1595486829.336, 'last_seen_iso': '2020-07-23T01:47:09.336000', 'signal_strength': -88, 'battery': 99, 'name': 'Doorbell', 'state': 0}
>>> This is the second event of the duplicated state events. <<<
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: sensor_mac: 77836170
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: State event data: [2020-07-23 01:47:09][77836170]StateEvent: sensor_type=switch, state=open, battery=99, signal=88
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: [TRACE] sensor_state: open
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: [TRACE] event.MAC: 77836170
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: [TRACE] invert_state: True
Jul 23 01:47:10 wyzesense2mqtt python3[18592]: {'available': True, 'mac': '77836170', 'device_class': 'opening', 'last_seen': 1595486829.336, 'last_seen_iso': '2020-07-23T01:47:09.336000', 'signal_strength': -88, 'battery': 99, 'name': 'Doorbell', 'state': 0}
Jul 23 01:47:26 wyzesense2mqtt python3[18592]: sensor_mac: 77800DFB
Jul 23 01:47:26 wyzesense2mqtt python3[18592]: State event data: [2020-07-23 01:47:23][77800DFB]StateEvent: sensor_type=motion, state=inactive, battery=96, signal=89
Jul 23 01:47:26 wyzesense2mqtt python3[18592]: [TRACE] sensor_state: inactive
Jul 23 01:47:26 wyzesense2mqtt python3[18592]: [TRACE] event.MAC: 77800DFB
Jul 23 01:47:26 wyzesense2mqtt python3[18592]: [TRACE] invert_state: False
Jul 23 01:47:26 wyzesense2mqtt python3[18592]: {'available': True, 'mac': '77800DFB', 'device_class': 'motion', 'last_seen': 1595486843.947, 'last_seen_iso': '2020-07-23T01:47:23.947000', 'signal_strength': -89, 'battery': 96, 'name': 'Garage Motion', 'state': 0}
I'm not sure why something would send twice, but I like the idea of preventing it. I wonder if this will be easy to implement after adding the birth/will support I just logged. Once that is done, we'd be storing the last state value, which would make for an easy comparison before sending again.
Did you have an idea here on how you wanted to implement this?
There have been quite a few things in this project that have left me scratching my head, but I refuse to give up on it. Like I said before, it is far more stable than the HA custom integration.
Apparently I am the "rare case" for every scenario that we've found yet, but that is a good thing because I am willing to speak up about my experience with the software.
I honestly have no clue where to begin to implement this. Might it be possible to keep a global variable of event payloads and skip publishing if the timestamp matches? I just don't know enough about the underlying asynchronous nature of the script to understand the best place to address this issue.
Here's what I propose on this one. Assuming you are able to pick up another bridge soon, let's see if this repeats with a second bridge. It's possible this is a side effect of whatever is causing the bridge/USB disconnect issues.
If it does stick around, let's start by adding a SENSOR_STATES dict (like what I'm talking about in the HA birth/will issue), and compare the major values in the event to that, in particular the state and timestamp values. We could just start with that living in memory and determine if we want to store it later; that would give us a relatively quick way to test for the duplicates without having to settle on a storage option for now.
Sound good?
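To make the comparison concrete, here is a minimal sketch of that SENSOR_STATES idea (just the proposal above, not committed code):
# Last (state, timestamp) seen per sensor MAC, kept in memory for now.
SENSOR_STATES = {}

def is_duplicate(mac, state, timestamp):
    # True when an event repeats the last state and timestamp for this MAC.
    if SENSOR_STATES.get(mac) == (state, timestamp):
        return True
    SENSOR_STATES[mac] = (state, timestamp)
    return False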
|
gharchive/issue
| 2020-07-23T06:53:49 |
2025-04-01T04:35:39.265722
|
{
"authors": [
"dale3h",
"raetha"
],
"repo": "raetha/wyzesense2mqtt",
"url": "https://github.com/raetha/wyzesense2mqtt/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2589493592
|
Update awesome_notifications.dart
I've modified the timezone identifiers: utc to 'UTC', and local to 'now().timeZoneName'.
I noticed that notifications sometimes appear +9 hours after the set time. Since +9 hours corresponds to KST, which is also my timezone, it seems that the set time is not accounting for the timezone.
Upon reviewing the code, I found that the variable names and values do not align.
While I have not analyzed the entire project code, in this segment, I suggest adjustments to better reflect the variable names.
Could you please review this PR and confirm if the original implementation was indeed correct?
Thank you for your project.
Hi @jujinkim, in fact, the localTimeZone variable must receive the local time zone and the utcTimeZone must receive the UTC timezone. The previous code is correct.
|
gharchive/pull-request
| 2024-10-15T17:58:28 |
2025-04-01T04:35:39.290802
|
{
"authors": [
"jujinkim",
"rafaelsetragni"
],
"repo": "rafaelsetragni/awesome_notifications",
"url": "https://github.com/rafaelsetragni/awesome_notifications/pull/982",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1055898327
|
inpoly -> inpolygon
I'm guessing this is just a leftover typo?
Still doesn't really work for me though:
julia> pts
24395-element Vector{Tuple{Float64, Float64}}:
(121.55573999999999, 14.113560000000001)
(121.55573999999999, 14.113560000000001)
⋮
(25.217, 0.6)
julia> a[1]
Polygon(137 Points)
julia> inpolygon(pts, a[1])
24395×2 BitMatrix:
0 0
0 0
⋮
0 0
I think the result is supposed to be a BitVector, right?
Ahhg yes this needs testing. The matrix is the output of PolygonInbounds.jl. I guess it should return the first column
|
gharchive/pull-request
| 2021-11-17T09:26:26 |
2025-04-01T04:35:39.297988
|
{
"authors": [
"mkborregaard",
"rafaqz"
],
"repo": "rafaqz/Rasters.jl",
"url": "https://github.com/rafaqz/Rasters.jl/pull/225",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
210986917
|
Uint 8 Requirement/Image Preprocessing
This seems to be a killer flaw in this wonderful tool. The demo saliency map code runs perfectly, but it assumes the dtype of the image array is uint8. Any other dtype causes an error when run.
How to get around this? What happens when your problem has images that have been preprocessed (like img_array/255 and img_array.dtype='float32')? The model may give bad predictions on image arrays that are not in the same preprocessed format used to train the model. Is there any way around this? Hoping for help from anyone...
That's a good point. So, the heatmap could have specific bounds and dtype then? I can certainly change the API to accept these params to deprocess the image.
Another option might be to infer these based on seed_img, but I don't think I can get the min/max bounds from that. I can certainly infer dtype.
The third option is to assume bounds to always be 0-1, float32 or 0-255, uint8. That would keep the API simpler with less params.
Thoughts?
I would just LOVE to use this API with any arbitrary model I build and preprocessed image. I think that is what will make this truly powerful. In general an API like 'cam_map(model, image, penultimate_conv_layer, model_class_prediction)', where one could then resize to the original image size and visualize the CAM mapping, would be amazing. Basically what you have now but with ultimate flexibility.
float32 0-1 range is the most common case, isn't it?
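For anyone hitting this in the meantime, a minimal workaround sketch, assuming your preprocessed images are float32 in the 0-1 range, is to rescale back to uint8 before handing them to the visualizer:
import numpy as np

def to_uint8(img):
    # Assumes a float image in [0, 1]; rescale to the 0-255 uint8
    # range the visualization utilities currently expect.
    return (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8)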
|
gharchive/issue
| 2017-03-01T05:55:41 |
2025-04-01T04:35:39.313145
|
{
"authors": [
"mm-manu",
"pGit1",
"raghakot"
],
"repo": "raghakot/keras-vis",
"url": "https://github.com/raghakot/keras-vis/issues/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
603496908
|
crash when open UI after deleting a station
Describe the bug
I deleted a station because it was requesting material although it was full.
After deleting the station I'm not able to open the LTN Manager UI.
kastorio 2.zip
0.1.9 fixed the issue
|
gharchive/issue
| 2020-04-20T19:59:09 |
2025-04-01T04:35:39.332871
|
{
"authors": [
"JanEggers"
],
"repo": "raiguard/Factorio-LtnManager",
"url": "https://github.com/raiguard/Factorio-LtnManager/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
626186347
|
LTNManager crashes on load
Describe the bug
Crash on load
Save file
Upload the save file where the bug occured. If the file is too large for GitHub, use a file service such as Google Drive or Dropbox. If these aren't available, PM me (Raiguard) on the Factorio forums, or on Discord (Raiguard#7402).
If you can consistently reproduce the error from a new save, please upload the original save anyways, just in case.
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Click on '....'
Scroll down to '....'
See error
This is actually a bug with LTN Combinator, not LTN Manager.
|
gharchive/issue
| 2020-05-28T03:32:45 |
2025-04-01T04:35:39.335681
|
{
"authors": [
"maxtimbo",
"raiguard"
],
"repo": "raiguard/Factorio-LtnManager",
"url": "https://github.com/raiguard/Factorio-LtnManager/issues/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1448041628
|
Use raw distributions from importlib_metadata
This PR resolves #135
Currently, testing in Python 3.8 is failing. I will address this issue at a later date.
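For context, a minimal sketch of iterating raw distributions (an illustration, not the exact code in this PR; on Pythons older than 3.8 the importlib_metadata backport offers the same API):
from importlib.metadata import distributions

for dist in distributions():
    name = dist.metadata["Name"]
    license_str = dist.metadata.get("License", "UNKNOWN")
    print(name, license_str)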
[2022-11-14T13:11:52.162Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
Related Links:
Upload Issues (Unable to locate build via Github Actions API) - Support - Codecov
Codecov Uploader
OK, I see that the token was used and the job was successful.
failure log
Codecov report uploader 0.3.2
[2022-11-14T13:13:47.319Z] ['info'] => Project root located at: /home/runner/work/pip-licenses/pip-licenses
[2022-11-14T13:13:47.321Z] ['info'] -> No token specified or token is empty
[2022-11-14T13:13:47.411Z] ['info'] Searching for coverage files...
[2022-11-14T13:13:47.473Z] ['info'] => Found 1 possible coverage files:
./coverage.xml
[2022-11-14T13:13:47.473Z] ['info'] Processing ./coverage.xml...
[2022-11-14T13:13:47.476Z] ['info'] Detected GitHub Actions as the CI provider.
[2022-11-14T13:13:47.477Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=github-action-3.1.1-uploader-0.3.2&token=*******&branch=fix-output-with-system-in-v4&build=3461764854&build_url=https%3A%2F%2Fgithub.com%2Fraimon49%2Fpip-licenses%2Factions%2Fruns%2F3461764854&commit=7631f76d66ccc84218c74ed5dd381190c820dd19&job=Python+package&pr=136&service=github-actions&slug=raimon49%2Fpip-licenses&name=codecov-umbrella&tag=&flags=&parent=
[2022-11-14T13:13:48.721Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
success log
Codecov report uploader 0.3.2
[2022-11-19T09:05:24.443Z] ['info'] => Project root located at: /home/runner/work/pip-licenses/pip-licenses
[2022-11-19T09:05:24.444Z] ['info'] -> Token found by environment variables
[2022-11-19T09:05:24.537Z] ['info'] Searching for coverage files...
[2022-11-19T09:05:24.591Z] ['info'] => Found 1 possible coverage files:
./coverage.xml
[2022-11-19T09:05:24.591Z] ['info'] Processing ./coverage.xml...
[2022-11-19T09:05:24.595Z] ['info'] Using manual override from args.
[2022-11-19T09:05:24.596Z] ['info'] Detected GitHub Actions as the CI provider.
[2022-11-19T09:05:24.597Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=github-action-3.1.1-uploader-0.3.2&token=*******&branch=fix-output-with-system-in-v4&build=3502871440&build_url=https%3A%2F%2Fgithub.com%2Fraimon49%2Fpip-licenses%2Factions%2Fruns%2F3502871440&commit=5cf467b52a84b5703d04bd09ff0badb2146fc797&job=Python+package&pr=136&service=github-actions&slug=raimon49%2Fpip-licenses&name=codecov-umbrella&tag=&flags=&parent=
[2022-11-19T09:05:25.085Z] ['info'] https://app.codecov.io/github/raimon49/pip-licenses/commit/5cf467b52a84b5703d04bd09ff0badb2146fc797
https://storage.googleapis.com/codecov/v4/raw/2022-11-19/E13974577919ABF094EC6B6E5BF61C93/5cf467b52a84b5703d04bd09ff0badb2146fc797/77dd22e1-1e5f-4d2a-9d05-df84926751a4.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=GOOG1EJOGFN2JQ4OCTGA2MU5AEIT7OT5Z7HTFOAN2SPG4NWSN2UJYOY5U6LZQ%2F20221119%2FUS%2Fs3%2Faws4_request&X-Amz-Date=20221119T090525Z&X-Amz-Expires=10&X-Amz-SignedHeaders=host&X-Amz-Signature=f72baacd21161e70a52aa620aafd85f5a6ec549333e0c06791b5ab5e8691d227
[2022-11-19T09:05:25.086Z] ['info'] Uploading...
[2022-11-19T09:05:25.298Z] ['info'] {"status":"success","resultURL":"https://app.codecov.io/github/raimon49/pip-licenses/commit/5cf467b52a84b5703d04bd09ff0badb2146fc797"}
|
gharchive/pull-request
| 2022-11-14T13:11:02 |
2025-04-01T04:35:39.476388
|
{
"authors": [
"raimon49"
],
"repo": "raimon49/pip-licenses",
"url": "https://github.com/raimon49/pip-licenses/pull/136",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
513313911
|
Syntax Error on creating pivot table
Hi,
I am trying to create a pivot table with the plugin for a $belongsToMany relation of a company id to a category id but getting following error when trying to assign primary key for company_id and category_id:
SQLSTATE[42000]: Syntax error or access violation: 1059 Identifier name 'namespaceauthor_directory_company_category_pivot_company_id_category_id_primary' is too long (SQL: alter table `namespaceauthor_directory_company_category_pivot` add primary key `namespaceauthor_directory_company_category_pivot_company_id_category_id_primary` (company_id, category_id))
public function up()
{
Schema::create('namespaceauthor_directory_company_category_pivot', function($table)
{
$table->engine = 'InnoDB';
$table->integer('company_id');
$table->integer('category_id');
$table->primary(['company_id','category_id']);
});
}
public function down()
{
Schema::dropIfExists('namespaceauthor_directory_company_category_pivot');
}
I am not quite sure.
@setianke You can define the index name as the second argument in the primary() method call, so you can call it something much shorter if you wish.
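For illustration, the migration above could declare the key like this (the shorter index name is arbitrary):
$table->primary(['company_id', 'category_id'], 'company_category_pk');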
|
gharchive/issue
| 2019-10-28T13:46:02 |
2025-04-01T04:35:39.487227
|
{
"authors": [
"bennothommo",
"setianke"
],
"repo": "rainlab/builder-plugin",
"url": "https://github.com/rainlab/builder-plugin/issues/307",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
120091228
|
Chunk persist blocking
addresses issue #65
can i have a look at this before you merge it?
@Dieterbe of course, that is why this PR is here.
so basically we block the nsq handler if the current chunk needs to be saved and a previous chunk is not being saved yet. and it blocks as long as we're not able to save the previous, or any of the earlier chunks. if somebody backfills a bunch of data from a script or for whatever reason sends a day's worth of data (which encompasses several chunks) then our nsq handlers will block for the time it takes to save the chunks, or when cassandra has a temporary downtime then our nsq handlers will block for the duration of that downtime (+ time needed to save those chunks). that backpressure is too aggressive and will affect consuming other customers' data. similar story in case of a data pipeline buffer-up. whether it's nsq or kafka. if we had to restart NMT for whatever reason, and kafka/nsq has a queue of data ready, we should be able to process it as fast as we can. if that queue spans across several chunks, we will block needlessly.
in the above case, as well as any case where cassandra is temporarily down and it's causing blocking, GC is blocked from cleaning up further metrics, other metrics may not need cassandra so it would be nicer if we could GC the other metrics, and not prevent cleaning up memory.
persist() should be lowercase. doesn't matter much here since everything is in main package but at least it's a bit clearer that it shouldn't be called willy-nilly by external callers. it requires that the AggMetric lock is held while it's being called, which should be documented in the function doc.
i also don't like that the time we're willing to wait for cassandra coming back up after it's been down is dependent on chunksize. chunksize is a tunable to tweak storage compression rate vs data safety ~ "commit to cassandra" interval. (short chunk -> low compression, but [attempted] save to cassandra earlier. long chunk -> better compression, but [attempted] save to cassandra later). when somebody picks a short chunkspan we shouldn't punish them by only tolerating a shorter timespan of cassandra downtime before blocking their data consumption. because especially with a short chunkspan, they probably still have plenty of RAM to take in new data, and should be able to withstand cassandra downtimes spanning multiple chunks.
I think we can solve all of the mentioned problems by using different logic that decides when "flushing to cassandra is going so badly that we should block incoming data".
maybe "cassandra-downtime-toleration-interval" (in seconds/minutes) as a tunable, or auto-set it based on numChunks. or express it as a flag that is a number of chunks the operator is willing to have dirty (max-dirty-chunks) (per metric, or global)
frankly, the process could also measure how much RAM the system has available (minus cached/buffered) and as long as we have a GB or more remaining it could always take in new data, and just start blocking ingest if RAM is getting critically high.
now that i think about, i like this idea, because it covers a wide range of causes (cassandra downtime), more ingest/metrics then expected and it guarantees a high level of safety, while it is as forgiving as it can possibly be to whatever issue that is causing RAM usage to be too high
also #65 is still an issue for metrics that have numChunks 1 (like for aggregated metrics i think that will be common)
So what if we just block if a chunk that is about to be cleared hasn't been saved?
@Dieterbe check out the lastest version. I think this is a big improvement.
New design uses a per series buffered channel as a write queue. A single goroutine "reader" per series reads from the channel and processes the writes in the order they are written to the channel. If there are no entries in the writeQueue then the goroutine exits. So under normal operation, the reader will write one chunk to Cassandra then terminate.
The reader only moves onto the next write once the first completes successfully. The user can set the the size of the writeQueue as a config option. The writeQueue can be any size including greater then the numChunks setting. Once the queue is full, calls to persist() will block, causing the NSQ handlers to block.
This is looking much better.
my earlier point about people filling in data that spans many chunks (>maxUnwrittenChunks) still holds true though. if they do this for several series, it may temporarily block all data from ingesting in memory-tank. I think this is something we can figure out later, though. maybe we can work around it for now by just having enough nsq handlers ?
as per earlier comment, I think GC is being too aggressively blocked, but this is also something we could optimize later. and maybe not a big deal.
what did you think of my "just look at the RAM usage" idea? One problem I see with it is that there's no clear upper bound on how out of data cassandra can be, which makes it harder to administer our queueing infrastructure and guarantee no dataloss (though this issue goes away given large enough kafka setup / streaming backups /...)
The writeQueue can be any size including greater then the numChunks setting.
I can see how maxUnwrittenChunks > numChunks can be useful, though this basically means a long backlog of chunks-to-be-persisted that are occupying space in RAM but that we're not benefiting from to serve data out of memory for (the chunks are out of the circular buffer so unusable for serving requests but still in RAM). anyway, something we may optimize down the road
it looks like persist() assumes (and requires) the lock to be held while called, to restrict only one persist() executing at the same time. this should be documented in the function doc.
the code looks tight but could be explained better, you probably have a pretty clear idea of how everything fits together but it's hard to figure that out right now. i'm gonna comment on some particular lines that i think can be better
I can see how maxUnwrittenChunks > numChunks can be useful, though this basically means a long backlog of chunks-to-be-persisted that are occupying space in RAM but that we're not benefiting from to serve data out of memory for.
100% agree with this. It would not make sense to set it larger, just documenting that you could.
I would expect that typically maxUnwrittenChunks would be set to numChunks, so we only block accepting new data because unsaved chunks in memory need to be released to make way for newer chunks.
Cassandra is already a fault tolerant clustered system, so it should never be down. Doing much more then what we have here already would be over-engineering the problem.
what did you think of my "just look at the RAM usage" idea?
i think this would be a good improvement for later, but not needed at this time.
@Dieterbe how do you think this looks now.
looks good! can be merged, but i'll leave the honor to you.
|
gharchive/pull-request
| 2015-12-03T04:16:30 |
2025-04-01T04:35:39.498901
|
{
"authors": [
"Dieterbe",
"woodsaj"
],
"repo": "raintank/raintank-metric",
"url": "https://github.com/raintank/raintank-metric/pull/67",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
218315907
|
unused ranks shall not be reported as skipped tests.
Looking at
https://travis-ci.org/rainwoodman/runtests/jobs/216916766#L393
If a test is skipped due to insufficient communicator size, it shall be marked as skipped, but if it is not run on a rank because the test case didn't request it, then it shall not be reported as skipped -- the test case still ran.
The fundamental unit of test is not per rank -- it is collective.
closed in d2ebc2d8a26832326997b7fca80536b0a4550fe8
|
gharchive/issue
| 2017-03-30T20:19:46 |
2025-04-01T04:35:39.504425
|
{
"authors": [
"rainwoodman"
],
"repo": "rainwoodman/runtests",
"url": "https://github.com/rainwoodman/runtests/issues/4",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1732656662
|
🛑 RiotSkin4CN-Download is down
In fc8b2b7, RiotSkin4CN-Download (https://s-cd-4307-general.oss.dogecdn.com/riotskin4cn/live.json) was down:
HTTP code: 0
Response time: 0 ms
Resolved: RiotSkin4CN-Download is back up in d140e70.
|
gharchive/issue
| 2023-05-30T17:09:33 |
2025-04-01T04:35:39.506899
|
{
"authors": [
"rainzee"
],
"repo": "rainzee/RiotSkin4CN-Status",
"url": "https://github.com/rainzee/RiotSkin4CN-Status/issues/1602",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1552032484
|
🛑 RiotSkin4CN-Download is down
In d1ef3f6, RiotSkin4CN-Download (https://s-cd-4307-general.oss.dogecdn.com/riotskin4cn/live.json) was down:
HTTP code: 0
Response time: 0 ms
Resolved: RiotSkin4CN-Download is back up in c4df578.
|
gharchive/issue
| 2023-01-22T07:40:31 |
2025-04-01T04:35:39.509288
|
{
"authors": [
"rainzee"
],
"repo": "rainzee/RiotSkin4CN-Status",
"url": "https://github.com/rainzee/RiotSkin4CN-Status/issues/295",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1582091348
|
🛑 RiotSkin4CN-Download is down
In 3b021e7, RiotSkin4CN-Download (https://s-cd-4307-general.oss.dogecdn.com/riotskin4cn/live.json) was down:
HTTP code: 0
Response time: 0 ms
Resolved: RiotSkin4CN-Download is back up in 69a86ff.
|
gharchive/issue
| 2023-02-13T10:36:36 |
2025-04-01T04:35:39.511622
|
{
"authors": [
"rainzee"
],
"repo": "rainzee/RiotSkin4CN-Status",
"url": "https://github.com/rainzee/RiotSkin4CN-Status/issues/426",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1649402965
|
🛑 RiotSkin4CN-Download is down
In 451f439, RiotSkin4CN-Download (https://s-cd-4307-general.oss.dogecdn.com/riotskin4cn/live.json) was down:
HTTP code: 0
Response time: 0 ms
Resolved: RiotSkin4CN-Download is back up in 6cf42a9.
|
gharchive/issue
| 2023-03-31T13:56:47 |
2025-04-01T04:35:39.514165
|
{
"authors": [
"rainzee"
],
"repo": "rainzee/RiotSkin4CN-Status",
"url": "https://github.com/rainzee/RiotSkin4CN-Status/issues/925",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
745982712
|
Replace logo initial T to Te
New logo should be Te – looks better 🎨
TeRe would also be interesting, although longer @romilrobtsenkov :D :octocat:
It could also be TeRe!
|
gharchive/issue
| 2020-11-18T20:32:12 |
2025-04-01T04:35:39.584887
|
{
"authors": [
"raimop",
"romilrobtsenkov"
],
"repo": "rakenduste-programmeerimine-2020s/teemaderegister",
"url": "https://github.com/rakenduste-programmeerimine-2020s/teemaderegister/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2192128458
|
Link to initial mention of Template Toolkit in README
... so that it's easier to navigate to the relevant background information when reading the README online.
After @lizmat's comment in #28, I decided to rework #27 to update the pod so that this change does get merged into the README at release time.
As always, please let me know if anything should be corrected here: I'm more than happy to rework and resubmit PRs.
Thanks!
|
gharchive/pull-request
| 2024-03-18T12:54:19 |
2025-04-01T04:35:39.587505
|
{
"authors": [
"lizmat",
"paultcochrane"
],
"repo": "raku-community-modules/Template6",
"url": "https://github.com/raku-community-modules/Template6/pull/29",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2739133389
|
New feature: Multilingual num2words
Thank you to louispan for some code in PR#135.
This adds multilingual number-to-words conversion using num2words.
The language is passed on automatically; a "--keep_numbers" flag was added to leave numbers untranscribed.
Still getting used to GitHub.
Good song to test it with is: https://www.youtube.com/watch?v=ZQHLwdE0riA
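For reference, this builds on the num2words API (illustrative):
from num2words import num2words

num2words(42)             # 'forty-two'
num2words(42, lang='de')  # 'zweiundvierzig'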
Haha, I haven't heard that song in ages 👮
Haha, I haven't heard that song in ages 👮
Perfect song for testing.
Anything I need to do?
Anything I need to do?
All fine! :)
|
gharchive/pull-request
| 2024-12-13T20:09:51 |
2025-04-01T04:35:39.593760
|
{
"authors": [
"agwosdz",
"rakuri255"
],
"repo": "rakuri255/UltraSinger",
"url": "https://github.com/rakuri255/UltraSinger/pull/188",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
76774593
|
Go Wiki
Why not add this as a page on the Go wiki?
https://github.com/golang/go/wiki
It might be a better fit for the Projects page, https://github.com/golang/go/wiki/Projects.
That would make sense.
https://github.com/golang/go/wiki/Projects#hardware
Hopefully a little easier for people to find (or stumble upon).
I added a link underneath the indexes subtitle at https://github.com/golang/go/wiki/Projects#indexes-and-search-engines.
P.S. Thanks for putting together this list.
Thanks 😇
Cool.
|
gharchive/issue
| 2015-05-15T16:11:11 |
2025-04-01T04:35:39.596936
|
{
"authors": [
"nathany",
"rakyll"
],
"repo": "rakyll/go-hardware",
"url": "https://github.com/rakyll/go-hardware/issues/1",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2470091411
|
Add Recently Added Section to Manufacturer Autocomplete #717
Description
Testing instructions
Add set-up instructions describing how the reviewer should test the code
[ ] Review code
[ ] Check Actions build
[ ] Review changes to test coverage
[ ] {more steps here}
Agile board tracking
connect to #717
closes #717
Haven't looked at the code yet, but noticed that clicking the add manufacturer button again after using it to create a new manufacturer doesn't clear the previous content, this doesn't seem to have been the case before this PR.
I've been able to recreate the bug on the develop branch, so I don't think it's an issue with this PR.
Haven't looked at the code yet, but noticed that clicking the add manufacturer button again after using it to create a new manufacturer doesn't clear the previous content, this doesn't seem to have been the case before this PR.
I've been able to recreate the bug on the develop branch, so I don't think it's an issue with this PR.
Yep you're right, Joshua fixed it in https://github.com/ral-facilities/inventory-management-system/pull/897.
|
gharchive/pull-request
| 2024-08-16T11:34:13 |
2025-04-01T04:35:39.600625
|
{
"authors": [
"asuresh-code",
"joelvdavies"
],
"repo": "ral-facilities/inventory-management-system",
"url": "https://github.com/ral-facilities/inventory-management-system/pull/892",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
718633926
|
Clarify URL used to access Home Assistant
Helps with #22
thanks, this is definitely more clear than what I had before
|
gharchive/pull-request
| 2020-10-10T14:39:34 |
2025-04-01T04:35:39.602372
|
{
"authors": [
"KTibow",
"raman325"
],
"repo": "raman325/ha-zoom-automation",
"url": "https://github.com/raman325/ha-zoom-automation/pull/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
167891718
|
Integrate application with iOS App Search engine
It would be cool if we integrated our app with the App Search engine. Here is a good article on the topic by @CognitiveDisson: https://habrahabr.ru/company/rambler-co/blog/268257/
Implemented indexing of the Event, Lecture, and Speaker models. I still need to implement switching to the corresponding screen on search result tap.
|
gharchive/issue
| 2016-07-27T16:01:55 |
2025-04-01T04:35:39.605145
|
{
"authors": [
"etolstoy"
],
"repo": "rambler-ios/RamblerConferences",
"url": "https://github.com/rambler-ios/RamblerConferences/issues/60",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
94655719
|
Rename isSet
isSet naming is too confusing, imo. My first guess would be that it is either has or is(Set).
allUniq
:+1:
Renaming the function to R.allUniq would be a huge improvement. The other option is to deprecate the function, since its functionality can (less efficiently) be achieved via R.uniq(xs).length === xs.length. I'm fine with keeping this function even though I've never used it, as it is well specified.
I like it too,
|
gharchive/issue
| 2015-07-13T07:17:48 |
2025-04-01T04:35:39.607273
|
{
"authors": [
"CrossEye",
"asaf-romano",
"buzzdecafe",
"davidchambers"
],
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/issues/1278",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
194684608
|
isNil doesn't account for undeclared variables
The source of isNil checks for x == null, but if x was never declared as a variable, an error is still thrown. Since isNil seems to imply that it checks for a variable being funky, it makes sense to account for that case and add something like && typeof(x) !== 'undefined'.
Otherwise, a new function could be added, like exists, if this should be handled separately.
What do you think? I'm happy to put up a PR for either case.
This is actually nothing to do with isNil. JavaScript is eagerly evaluated, so before isNil can be applied to its argument its argument must be evaluated. In this case, evaluating the argument results in a ReferenceError being thrown:
windoww;
// ! ReferenceError: windoww is not defined
One could use typeof windoww to guard against this at the call site.
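For instance (plain JavaScript semantics, nothing Ramda-specific):
typeof windoww === 'undefined'  // true, and no ReferenceError is thrown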
@davidchambers Huh I didn't know that JS did that. Thanks for the info, will close this!
|
gharchive/issue
| 2016-12-09T20:03:52 |
2025-04-01T04:35:39.609886
|
{
"authors": [
"davidchambers",
"greyvugrin"
],
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/issues/2005",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
198216128
|
ComposeK seems to be broken in 0.23.0
Functions such as this are breaking in the most recent version.
let plus5 = x => Task.of(x + 5)
let times2 = x => Task.of(x * 2)
let plusTimes = R.composeK(times2, plus5)
plusTimes(Task.of(21)).fork(R.identity, R.identity) // NaN
I am using Folktale's Task
Oh found the commit with the breaking changes: fcbb62b1a93b021414127787acb5c7d65bf5275d
Turns out the composition should now be called like this: plusTimes(21).fork(R.identity, R.identity)
It looks cleaner, though.
The new version seems more correct.
binary composeK would be
Chain M => (b -> M c) -> (a -> M b) -> a -> M c
So plusTimes(Task.of(21)) is passing Task(21) to plus5, which is expecting a number
|
gharchive/issue
| 2016-12-31T03:45:57 |
2025-04-01T04:35:39.612328
|
{
"authors": [
"diegovdc",
"jethrolarson"
],
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/issues/2034",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
527225042
|
Bug with quick swipe | Next-story swipe bug
Next story swipe
Quickly swiping to the next story jumps back to (continues) the previous story.
zuck.js
BEFORE:
var navigateItem = function navigateItem() {
if (lastTouchOffset.x > window.screen.width / 3 || !option('previousTap')) {
zuck.navigateItem('next', event);
} else {
zuck.navigateItem('previous', event);
}
};
MY SOLUTION:
var navigateItem = function navigateItem() {
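// only treat the touch as a tap when no swipe direction was detected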
if(!direction){
if (lastTouchOffset.x > window.screen.width / 3 || !option('previousTap')) {
zuck.navigateItem('next', event);
} else {
zuck.navigateItem('previous', event);
}
}
};
Thanks, I'll test this and update on the next version
|
gharchive/issue
| 2019-11-22T14:21:09 |
2025-04-01T04:35:39.639012
|
{
"authors": [
"limitlessisa",
"ramon82"
],
"repo": "ramon82/zuck.js",
"url": "https://github.com/ramon82/zuck.js/issues/64",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
204980555
|
[bug] Does not generate UUIDv5 with "name" argument equal to "0"
The command does not allow generating a UUIDv3 (and/or UUIDv5) with "name" argument equal to "0" (or an empty string).
I didn't find anything in the RFC4122 about empty names not being allowed. The closest was "Convert the name to a canonical sequence of octets (as defined by the standards or conventions of its name space); <...>", but a proprietary namespace may allow empty names, so why not accept them in the command, too.
Steps to reproduce
Call the command with parameters:
version: 3 (or 5)
namespace: any valid namespace (ex. "00000000-0000-0000-0000-000000000000")
name: "0"
uuid generate 5 00000000-0000-0000-0000-000000000000 "0"
Expected outcome
The command generates a UUID.
b6c54489-38a0-5f50-a60a-fd8d76219cae
Actual outcome
The command complains about name argument not being provided.
[Ramsey\Uuid\Console\Exception]
The name argument is required for version 3 or 5 UUIDs
That's for the report!
This is likely due to the empty() check on the argument, since empty('0') evaluates to true.
See here: https://github.com/ramsey/uuid-console/blob/de3abb276994e8f0422e25c6c552718ccc336c23/src/Command/GenerateCommand.php#L151-L153
I can change this to use strlen($name) === 0 or the like.
|
gharchive/issue
| 2017-02-02T20:02:47 |
2025-04-01T04:35:39.648834
|
{
"authors": [
"kewlar",
"ramsey"
],
"repo": "ramsey/uuid-console",
"url": "https://github.com/ramsey/uuid-console/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1917561976
|
Using my own obj files for training
I have been trying to use my own dataset with MeshCNN but I am not able to identify how I can make your code accept my dataset.
I created a new dataset in the dataset folder; it has the same directory structure as the shrec16 dataset, but when I run the code after changing the dataroot directory to datasets/anchor_obj in the train.sh file, I get the following error:
File "/home/sawaiz/Documents/Lab/Projects/MeshCNN/MeshCNN-master/data/init.py", line 12, in CreateDataset
dataset = ClassificationData(opt)
File "/home/sawaiz/Documents/Lab/Projects/MeshCNN/MeshCNN-master/data/classification_data.py", line 19, in init
self.get_mean_std()
File "/home/sawaiz/Documents/Lab/Projects/MeshCNN/MeshCNN-master/data/base_dataset.py", line 32, in get_mean_std
for i, data in enumerate(self):
File "/home/sawaiz/Documents/Lab/Projects/MeshCNN/MeshCNN-master/data/classification_data.py", line 34, in getitem
edge_features = pad(edge_features, self.opt.ninput_edges)
File "/home/sawaiz/Documents/Lab/Projects/MeshCNN/MeshCNN-master/util/util.py", line 22, in pad
return np.pad(input_arr, pad_width=npad, mode='constant', constant_values=val)
File "/home/sawaiz/.local/lib/python3.10/site-packages/numpy/lib/arraypad.py", line 748, in pad
pad_width = _as_pairs(pad_width, array.ndim, as_index=True)
File "/home/sawaiz/.local/lib/python3.10/site-packages/numpy/lib/arraypad.py", line 518, in _as_pairs
raise ValueError("index can't contain negative values")
ValueError: index can't contain negative values
I feel I have to process my data in a certain way but I am not sure. Can you please clarify how I can do that?
My dataset is a face dataset. I used the MediaPipe facial landmarker to get the facial landmarks of a face in 3D and then converted the point cloud into a mesh using the open3d library. I can access the obj files and see the objects. The number of vertices and faces varies across these files.
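For what it's worth, the negative pad width in the traceback usually means a mesh has more edges than opt.ninput_edges. A minimal sketch to check this, assuming faces is an (F, 3) integer array loaded from the .obj file:
import numpy as np

def count_edges(faces):
    e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    e = np.sort(e, axis=1)            # treat edges as undirected
    return len(np.unique(e, axis=0))  # unique edge count

Meshes whose edge count exceeds --ninput_edges need to be decimated (or the option raised) before training.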
Hi! I get the same error. And I processed my data, like simplifying to the same number of faces, but it still didn't work. Were you able to solve the issue?
Nope. I gave up
|
gharchive/issue
| 2023-09-28T13:30:45 |
2025-04-01T04:35:39.654789
|
{
"authors": [
"Sawaiz8",
"yy030425"
],
"repo": "ranahanocka/MeshCNN",
"url": "https://github.com/ranahanocka/MeshCNN/issues/153",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1674242375
|
pandas 2.0 upgrade
Do you plan to upgrade for pandas 2.0?
Thanks
SJ
Take a look in these PRs, both addressing the issue generated by Pandas-2.0
https://github.com/ranaroussi/quantstats/pull/260
https://github.com/ranaroussi/quantstats/pull/263
|
gharchive/issue
| 2023-04-19T06:19:09 |
2025-04-01T04:35:39.656537
|
{
"authors": [
"sabirjana",
"tibkiss"
],
"repo": "ranaroussi/quantstats",
"url": "https://github.com/ranaroussi/quantstats/issues/262",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
642344442
|
how to turn yf.download() into just Close prices for each ticker
yf.download(list_of_tickers) gets you columns such as Stock A's Open/High/Low/Close etc., and then Stock B's Open/High/Low/Close.
Any way to turn it into a dataframe where the column names are just the stocks' tickers, and the values are 'Close'?
I'm literally creating a new dataframe and iterating through each ticker to add to it.
d2 = pd.DataFrame()
for tk in tickers:
d2[tk] = download[tk]['Close']
wondering if there's a better way?
thanks
sorry figured it out. basically reshape the multi-index dataframe as follows
download = download.xs('Close', level=1, axis=1)
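For completeness: with the default group_by='column', yfinance puts the field at the top level of the columns, so no reshaping is needed (sketch):
import yfinance as yf

data = yf.download(list_of_tickers, group_by='column')  # default grouping
closes = data['Close']  # columns are the tickers, values are the Close prices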
|
gharchive/issue
| 2020-06-20T09:34:40 |
2025-04-01T04:35:39.658520
|
{
"authors": [
"mspacey4415"
],
"repo": "ranaroussi/yfinance",
"url": "https://github.com/ranaroussi/yfinance/issues/347",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2509130512
|
AKS region changes in Edit Mode
Setup
Rancher version: v2.9-0f2bd9c3cde85aa2b5270946d0cecd5c95b86fd2-head
Describe the bug
When creating an AKS cluster with the wrong configuration in a region different from the default displayed in the UI (East US), editing the cluster config later sets the Azure Region back to the default, undoing the region that the user previously selected.
To Reproduce
1. Via the UI, create an AKS cluster under a region other than East US with an incomplete or wrong config
2. After the cluster is created, try to edit the cluster config via the UI
3. Observe that the region switches back to East US
Result
The Azure Region is set back to the default, undoing the region that the user previously selected.
Expected Result
The region previously selected by the user must be preserved in the Edit form
Screenshots
https://github.com/user-attachments/assets/65213a34-eaab-4687-84b0-2c6e8b7471ed
/backport v2.10.1
|
gharchive/issue
| 2024-09-06T00:28:45 |
2025-04-01T04:35:39.662232
|
{
"authors": [
"IsaSih",
"gaktive"
],
"repo": "rancher/dashboard",
"url": "https://github.com/rancher/dashboard/issues/11841",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1141657841
|
Various issues when displaying registry information on cluster -> config view
Setup
Rancher version: v2.6.4-rc3
Browser type & version: chrome 98.0.4758.80
Describe the bug
When viewing registry information about a cluster (cluster management -> cluster -> config view) information displayed is sometimes incorrect.
To Reproduce
This can be done in a custom cluster without registering a node, for testing purposes.
deploy a cluster that has registry information defined, specifically for the Configure advanced containerd mirroring and registry authentication options option
include some registry authentication and mirror information as well
save your settings
view the cluster, then select the config view
navigate to the registry settings
if you see the correct values, edit the cluster's registry settings (say, enable the tls skip checkbox) and view the same config page upon saving the changes
Result
Registry information is not displaying correctly. Sometimes the authentication options do not match what is in the yaml (i.e. the checkbox won't be checked even though the yaml shows that it has a true value).
Other times, the Configure advanced containerd mirroring and registry authentication options setting is not saved, even though there is authentication information for the registry in the yaml, and the UI will default back to the Pull images for Rancher from a private registry radio box instead.
Expected Result
display options for config should match what is in the yaml (and what the user originally selected through the UI)
Screenshots
(for example in repro steps above)
This issue should be addressed in the same milestone as https://github.com/rancher/dashboard/issues/6919, which is updating and simplifying that part of the form.
I believe this issue is fixed in this PR https://github.com/rancher/dashboard/pull/7508
Ticket #5146 - Test Results
With Docker on a single-node instance:
Verified on rancher v2.7-8062f1e3ab7bff870dfab195ff699a308644168a-head:
Fresh install of rancher v2.7-head
Create a downstream custom cluster w/ cluster level registry and mirrors configured - [i used fake data for testing; i did not actually provision]
Once created, Edit Config and confirm data values persist
Verified - data persists
Edit YAML and confirm data values persist and are accurate
Verified - data persists and is accurate
Screenshots:
Step 4
Step 6
|
gharchive/issue
| 2022-02-17T17:46:11 |
2025-04-01T04:35:39.672959
|
{
"authors": [
"Josh-Diamond",
"catherineluse",
"slickwarren"
],
"repo": "rancher/dashboard",
"url": "https://github.com/rancher/dashboard/issues/5146",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1263724175
|
Not able to add users from ADFS SSO
Setup
Rancher version: 2.6.4
SSO: ADFS
Screenshots
SSO User -
Defined Users in rancher itself -
Describe the bug
Successfully completed ADFS SSO setup with Rancher from the official docs and could log in as the configured user. But how can we manage the SSO-based users? The above ADFS provider user is not listed in the user management in the Rancher console: there should be 3 users in total, but instead it shows 2. Is there any way to achieve this?
Context
As this would need to manage different SSO users as per their access profile/roles through jumpcloud.
I don't think the dashboard shows all users in the users list by default (some instances could have many thousands of users). If you try to assign a role to one (cluster member, project member, etc.), I think only then do they appear in that list.
@richard-cox
I have a SSO user configured from JumpCloud. Facing two issues -
It is having "admin" access by default.
When we try to update the cluster level access from rancher, it does not fetch the user.
The local auth provider will always be the default member; it will have access to everything.
That looks like the root of the issue. Does it happen when you try to create an RKE2/K3S cluster (you don't have to actually create it, just check if the users are shown when searching for them)?
The local auth provider will always be the default member; it will have access to everything.
Alright.
Does it happen when you try to create an RKE2/K3S cluster (you don't have to actually create it, just check if the users are shown when searching for them)?
For the logged-in user it shows "owner access", but no, I could not fetch the users from there while adding members.
You experience this problem because of the UID field mapping on the Authentication provider config page. JumpCloud is sending an empty value for it. Change it to email or something else. After that, users will start appearing in the user list after their first login. The same goes for created groups.
Note that you need to be logged in via ADFS and be a member of all groups so the dropdown menu shows them. Rancher does not have AD search implemented, so it only searches the current user list inside Rancher.
|
gharchive/issue
| 2022-06-07T18:49:35 |
2025-04-01T04:35:39.679742
|
{
"authors": [
"gauravkcldcvr",
"richard-cox",
"simcomp2003"
],
"repo": "rancher/dashboard",
"url": "https://github.com/rancher/dashboard/issues/6116",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1305312308
|
Create button gets disabled for workloads/projects after an error
Setup
Rancher version: 2.6-head 3506e43
Browser type & version: Chrome
Describe the bug
When we submit an incorrect form for workloads/projects, the create button errors out. After a few seconds the create button is disabled, and we need to navigate back to the workloads/projects page to create resources.
To Reproduce
Create a project from projects/namespaces with the name test;;;.
Click on the create button; the button changes to Error.
Wait for a few seconds.
Result
Create button is disabled.
Expected Result
Expected create button to be enabled after an error.
Screenshots
It seems like something related to form validation @Sean-McQ
Yup! They can just click the "x" in the error message at the top to get the create button back but I can make some adjustments to make API errors not block the create button.
@Sean-McQ Please do that
Reopening for QA validation.
Verified on v2.6-head 07ee596
After an error, the create button changes to Error and then back to Create.
|
gharchive/issue
| 2022-07-14T21:36:34 |
2025-04-01T04:35:39.684766
|
{
"authors": [
"Sean-McQ",
"anupama2501",
"catherineluse",
"nwmac"
],
"repo": "rancher/dashboard",
"url": "https://github.com/rancher/dashboard/issues/6378",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2416944682
|
[charts - ecm] release config
Scope
Implement configuration options for charts release into emc-distro-tools.
Parent issue:
Architect import charts into ecm-distro-tools
Implement new options on the config.json file.
Example:
"charts":{
"workspace":"",
"charts_repo_url":"",
"charts_remote_upstream":"",
"charts_remote_fork":"",
}
Update the subcommands to reflect the new options added to the config.json file:
edit
gen
view
Expected result
Upon execution of:
release config gen
charts object is automatically added to config.json.
edit and view must reflect it also.
type ChartsRelease struct {
DevBranch string `json:"dev_branch"` // dev-v2.9
ReleaseBranch string `json:"release_branch"` // release-v2.9
Workspace string `json:"workspace"` // <path/to/charts/repo>
ChartsRepoOwner string `json:"charts_repo_owner"` // git user name
DryRun bool `json:"dry_run"` // just log and don't execute
}
Done
|
gharchive/issue
| 2024-07-18T17:04:08 |
2025-04-01T04:35:39.689157
|
{
"authors": [
"nicholasSUSE"
],
"repo": "rancher/ecm-distro-tools",
"url": "https://github.com/rancher/ecm-distro-tools/issues/444",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2102338291
|
Gitjob returns 500 to a webhook with wrong credentials instead of 401
Not so critical but it would be nice if gitjob returned 401 instead of 500 on a webhook POST with wrong credentials.
The credentials were set on local cluster by: (doc)
$ kubectl create secret generic gitjob-webhook -n cattle-fleet-system --from-literal=azure-username=user --from-literal=azure-password=password
$ kubectl create secret generic gitjob-webhook2 -n cattle-fleet-system --from-literal=github=secretsecret
Noted on rancher:v2.8.1 / fleet 0.9.0 on RKE2:1.27 when testing webhooks from Azure Devops and Github with custom built rancher/gitjob image from release/fleet/v0.9 branch.
This is from Azure WH test:
closing as duplicate of https://github.com/rancher/fleet/issues/2271
This has not been fixed as part of https://github.com/rancher/fleet/issues/2271
Authentication errors should return a 4xx status code.
|
gharchive/issue
| 2024-01-26T14:47:33 |
2025-04-01T04:35:39.708276
|
{
"authors": [
"manno",
"thehejik"
],
"repo": "rancher/gitjob",
"url": "https://github.com/rancher/gitjob/issues/423",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
856616356
|
Update image format hint for gz and xz fomat
Compressed image formats are not supported in the current implementation. Support for them is tracked in https://github.com/rancher/harvester/issues/681
We need to update the hint on the image creation page:
Validation passed in version v0.2.0-rc2
On latest master-head
Create an image with an unsupported extension; the error message still says compressed images are supported.
Currently we don't support compressed images in the backend, but an image can still be created from a URL with a compressed file extension.
Compressed images are not supported on the image editing page, and the tips about compressed images were removed.
validated on master-0423.iso
|
gharchive/issue
| 2021-04-13T06:19:51 |
2025-04-01T04:35:39.713041
|
{
"authors": [
"DaiYuzeng",
"TangStone",
"futuretea",
"gitlawr"
],
"repo": "rancher/harvester",
"url": "https://github.com/rancher/harvester/issues/682",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
410512798
|
CLI: Parsing errors for rancher cluster ls command
Rancher CLI version: v2.2.0-rc7 /v2.2.0-rc8
Rancher version used: Master
Steps:
Create a rancher setup with master and have multiple DO clusters in it.
Perform export of one of the existing clusters using cli and save the yml in cluster.yml
Example cluster.yml file:
Version: v3
clusters:
test-532531:
dockerRootDir: /var/lib/docker
enableNetworkPolicy: false
localClusterAuthEndpoint: {}
rancherKubernetesEngineConfig:
addonJobTimeout: 30
authentication:
strategy: x509
authorization: {}
bastionHost: {}
cloudProvider: {}
dns: {}
ignoreDockerVersion: true
ingress:
provider: nginx
kubernetesVersion: v1.13.1-rancher1-1
monitoring:
provider: metrics-server
network:
plugin: canal
restore: {}
services:
etcd:
backupConfig:
intervalHours: 12
retention: 6
creation: 12h
extraArgs:
election-timeout: "5000"
heartbeat-interval: "500"
retention: 72h
snapshot: false
kubeApi:
serviceNodePortRange: 30000-32767
kubeController: {}
kubelet: {}
kubeproxy: {}
scheduler: {}
nodePools:
np-gbpb41:
clusterId: test-532531
controlPlane: true
etcd: true
hostnamePrefix: testauto-417921
nodeTemplateId: test-73882
quantity: 1
worker: true
Create a cluster using CLI with rancher up command
./rancher up --file cluster.yml
After creating a cluster, do an ls as ./rancher cluster ls command
It displays parsing errors as below:
rancher-v2.2.0-rc8 soumya$ ./rancher cluster ls
strconv.ParseFloat: parsing "5700Mi": invalid syntax
strconv.ParseFloat: parsing "1329952Ki": invalid syntax
strconv.ParseFloat: parsing "1900Mi": invalid syntax
CURRENT ID STATE NAME PROVIDER NODES CPU RAM PODS
c-cckbs active dosinganodeaaa Rancher Kubernetes Engine 1 0.54/2 0.14/3.76 GB 9/110
c-hhpmx active dosingnodenew1 Rancher Kubernetes Engine 1 0.54/2 0.14/3.76 GB 9/110
c-hvtt9 provisioning test-532531 Rancher Kubernetes Engine 1 0.00/0 0.00/0.00 GB 0/0
c-j8fvh active dosingnodenew101 Rancher Kubernetes Engine 1 0.54/2 0.14/3.76 GB 9/110
c-ltllp active test-54701 Rancher Kubernetes Engine 8 1.30/6 0.24/0.00 GB 15/330
* c-nbgs8 active dosingnode Rancher Kubernetes Engine 1 0.54/2 0.14/3.76 GB 10/110
c-qgjzb active soumyaexamplecluster Google Kubernetes Engine 3 1.34/2820m 0.00/3.38 GB 18/330
c-tzmfv active dosingnodenew100 Rancher Kubernetes Engine 1 0.54/2 0.14/3.76 GB 9/110
c-xrdvt active test-62917 Rancher Kubernetes Engine 1 0.54/2 0.14/0.00 GB 9/110
rancher-v2.2.0-rc8 soumya$
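(For context: the parse errors come from feeding Kubernetes quantity strings such as 5700Mi into strconv.ParseFloat. A sketch of the usual fix, assuming the CLI can vendor k8s.io/apimachinery; this is an illustration, not the actual patch:)
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// strconv.ParseFloat("5700Mi", 64) fails on the unit suffix;
	// resource.ParseQuantity understands Ki/Mi/Gi and friends.
	q, err := resource.ParseQuantity("5700Mi")
	if err != nil {
		panic(err)
	}
	fmt.Println(q.Value()) // 5976883200 (bytes)
}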
Tested with CLI version v2.2.0-rc9 using Master build from Feb 19.
Performed the steps as mentioned above. rancher cluster ls command is functional with no parsing errors.
$ ./rancher cluster ls
CURRENT ID STATE NAME PROVIDER NODES CPU RAM PODS
c-9nxpw active nosnaphotfin Rancher Kubernetes Engine 1 0.54/2 0.14/1.86 GB 9/110
c-b76ng active test-96265 Rancher Kubernetes Engine 1 0.54/2 0.14/1.86 GB 9/110
c-fxntx active doclusternew Rancher Kubernetes Engine 1 0.26/2 0.02/3.76 GB 7/110
c-p8sfr active snaphottruefin Rancher Kubernetes Engine 1 0.54/2 0.14/1.86 GB 9/110
c-qx9xh active docluster Rancher Kubernetes Engine 1 0.64/2 0.14/3.76 GB 13/110
* c-r9zb2 active test-96886 Rancher Kubernetes Engine 1 0.54/2 0.14/1.86 GB 9/110
c-wtlsx active snaphotfalsefin Rancher Kubernetes Engine 1 0.54/2 0.14/1.86 GB 9/110
|
gharchive/issue
| 2019-02-14T22:03:01 |
2025-04-01T04:35:39.719170
|
{
"authors": [
"soumyalj"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/18128",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
428935065
|
Backport: Panic when attempting to view metrics for workload.
Backport https://github.com/rancher/rancher/issues/19358
Steps to reproduce
Import a standalone Pod by Import YAML.
Import a deployment.
View metrics of the deployment in Rancher UI
Results:
You will see a red error popup
apiVersion: v1
kind: Pod
metadata:
name: example
spec:
containers:
- name: busybox
image: busybox:1.25
volumeMounts:
- name: quobytevolume
mountPath: /persistent
command:
- /bin/sh
- -ec
- |
if [ -f /persistent/log ]; then
echo -n "Found old state starting at "; head -n1 /persistent/log
else
echo -n "Starting with a fresh state"
fi
while sleep 5; do date | tee -a /persistent/log; done
volumes:
- name: quobytevolume
quobyte:
registry: ignored:7861
volume: testVolume
readOnly: false
user: root
group: root
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
Can be tested with rancher/rancher:v2.2.2-rc2
The bug is reproduced in Rancher v2.1.1
Setup: Rancher is a single-node install, with a DO cluster of 4 nodes added (1 etcd, 1 control plane, 2 workers)
Following the steps in the previous comment from @loganhz, the Rancher UI shows a red error message and the log shows a "panic serving api request" error.
2019/04/04 18:16:04 [ERROR] Panic serving api request:
goroutine 99923 [running]:
runtime/debug.Stack(0xc01834fb10, 0x33da1e0, 0x6a45ef0)
/usr/local/go/src/runtime/debug/stack.go:24 +0xa7
github.com/rancher/rancher/vendor/github.com/rancher/norman/api.(*Server).ServeHTTP.func1(0x419c280, 0xc016e7fea0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/api/server.go:175 +0x67
panic(0x33da1e0, 0x6a45ef0)
/usr/local/go/src/runtime/panic.go:513 +0x1b9
github.com/rancher/rancher/pkg/api/customization/monitor.parseMetricParams(0xc005224b00, 0x416bc80, 0xc014b65f58, 0xc00d359b08, 0x8, 0xc012f3c450, 0x7, 0xc00ed92c60, 0xc01962f280, 0x12, ...)
/go/src/github.com/rancher/rancher/pkg/api/customization/monitor/handler_utils.go:253 +0x1594
github.com/rancher/rancher/pkg/api/customization/monitor.graph2Metrics(0xc005224b00, 0x4212840, 0xc0036fc000, 0xc012f3c450, 0x7, 0xc00d359b08, 0x8, 0xc01962f680, 0x1a, 0xc00b4051d0, ...)
/go/src/github.com/rancher/rancher/pkg/api/customization/monitor/cluster_graph_action.go:146 +0xf3
github.com/rancher/rancher/pkg/api/customization/monitor.(*ProjectGraphHandler).QuerySeriesAction(0xc00121e9c0, 0xc019bac225, 0x5, 0xc0055aaf20, 0xc00e8585a0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/pkg/api/customization/monitor/project_graph_action.go:89 +0x79b
github.com/rancher/rancher/pkg/api/customization/monitor.(*ProjectGraphHandler).QuerySeriesAction-fm(0xc019bac225, 0x5, 0xc0055aaf20, 0xc00e8585a0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/pkg/api/server/managementstored/setup.go:511 +0x52
github.com/rancher/rancher/vendor/github.com/rancher/norman/api.handleAction(0xc0055aaf20, 0xc00e8585a0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/api/server.go:263 +0x62
github.com/rancher/rancher/vendor/github.com/rancher/norman/api.(*Server).handle(0xc000aadd90, 0x419c280, 0xc016e7fea0, 0xc010781900, 0xc00a9ff200, 0xc005227ef0, 0xc0062e97d8)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/api/server.go:251 +0xd2
github.com/rancher/rancher/vendor/github.com/rancher/norman/api.(*Server).ServeHTTP(0xc000aadd90, 0x419c280, 0xc016e7fea0, 0xc010781900)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/api/server.go:180 +0x7d
github.com/rancher/rancher/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc004698a80, 0x419c280, 0xc016e7fea0, 0xc010781900)
/go/src/github.com/rancher/rancher/vendor/github.com/gorilla/mux/mux.go:159 +0xf1
github.com/rancher/rancher/pkg/auth/requests.authHeaderHandler.ServeHTTP(0x41666c0, 0xc0057fdc20, 0x415bec0, 0xc004698a80, 0x417a340, 0xc003af5000, 0x4166680, 0xc005692630, 0x419c280, 0xc016e7fea0, ...)
/go/src/github.com/rancher/rancher/pkg/auth/requests/filter.go:110 +0x60d
github.com/rancher/rancher/pkg/audit.auditHandler.ServeHTTP(0x415a800, 0xc005077640, 0x0, 0x419c280, 0xc016e7fea0, 0xc010781700)
/go/src/github.com/rancher/rancher/pkg/audit/filter.go:31 +0x46a
github.com/rancher/rancher/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc004698a10, 0x419c280, 0xc016e7fea0, 0xc010781700)
/go/src/github.com/rancher/rancher/vendor/github.com/gorilla/mux/mux.go:159 +0xf1
github.com/rancher/rancher/pkg/dynamiclistener.(*Server).cacheIPHandler.func1(0x419c280, 0xc016e7fea0, 0xc010781500)
/go/src/github.com/rancher/rancher/pkg/dynamiclistener/server.go:419 +0x101
net/http.HandlerFunc.ServeHTTP(0xc0006a1260, 0x419c280, 0xc016e7fea0, 0xc010781500)
/usr/local/go/src/net/http/server.go:1964 +0x44
net/http.serverHandler.ServeHTTP(0xc00769b450, 0x419c280, 0xc016e7fea0, 0xc010781500)
/usr/local/go/src/net/http/server.go:2741 +0xab
net/http.(*conn).serve(0xc017399680, 0x41a5900, 0xc0148d0e00)
/usr/local/go/src/net/http/server.go:1847 +0x646
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2851 +0x2f5
The bug fix is validated in Rancher v2.2.2-rc2
Setup: Rancher is single installed, add a DO cluster with 4 nodes(1 etcd, 1 control plane, 2 workers)
|
gharchive/issue
| 2019-04-03T19:25:46 |
2025-04-01T04:35:39.724850
|
{
"authors": [
"alena1108",
"jiaqiluo",
"loganhz"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/19371",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
478503067
|
Improve agent error when cacerts is not JSON
What kind of request is this (question/bug/enhancement/feature request):
bug
Steps to reproduce (least amount of steps as possible):
put HA Rancher Cluster behind Cloudflare Proxy
Result:
cattle-node-agent pods crashing. rancher/rancher-agent:v2.2.7
Other details that may be helpful:
I've installed the cluster via RKE. Maybe I need to get the hash of the Cloudflare certificate and replace the cert in the environment variables of the cattle-node-agent workload in the system project? How could I obtain the hash?
Environment information
Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): rancher/rancher:2.2.7
Installation option (single install/HA): HA
Cluster information
Cluster type (Hosted/Infrastructure Provider/Custom/Imported): imported
Machine type (cloud/VM/metal) and specifications (CPU/memory): 3 VMs (8 Cores 16 GB RAM)
Kubernetes version (use kubectl version): 1.14.5
(paste the output here)
Docker version (use docker version):
19.03
INFO: Environment: CATTLE_ADDRESS=91.132.145.79 CATTLE_AGENT_CONNECT=true CATTLE_CA_CHECKSUM=2e76e834397ca1db5b57bfc5edc5754b732ba0285f51b5391d2f5a94b49c6549 CATTLE_CLUSTER=false CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=91.132.145.79 CATTLE_SERVER=https://my.domain.com
INFO: Using resolv.conf: nameserver ########### search domain.com
INFO: https://###########/ping is accessible
INFO: my.domain.com resolves to 104.24.121.210 104.24.120.210
parse error: Invalid numeric literal at line 1, column 10
I've now generated the hash from Cloudflare's certificate with
openssl x509 -noout -fingerprint -sha256 -inform pem -in cloudflare.cer
then removed the : between the letters and wrote the string into the CATTLE_CA_CHECKSUM env variable, but this didn't solve the problem.
Hope someone knows what's going wrong. Currently the continuously crashing cattle-node-agent does not affect my cluster, but it would be good to get it working again.
Any help?
@superseb any thoughts on this?
This seems to break on:
curl --insecure -s -fL $CATTLE_SERVER/v3/settings/cacerts | jq -r '.value | select(length > 0)' > $temp
Can you run curl --insecure -s -fL $CATTLE_SERVER/v3/settings/cacerts on the host and in the container to see what the response is?
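(The jq "Invalid numeric literal" error usually means the endpoint returned HTML, e.g. a Cloudflare access page, instead of JSON. A quick manual check, as an illustration, not the agent's actual code:)
resp=$(curl --insecure -s -fL "$CATTLE_SERVER/v3/settings/cacerts")
if ! echo "$resp" | jq -e . >/dev/null 2>&1; then
  echo "response is not JSON, first bytes: ${resp:0:80}"
fi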
Well, it was my fault: I configured Cloudflare's access control feature with Google authentication, and my IP bypass configuration was wrong.
Nevertheless, many thanks.
Hi guys, was this issue solved?
I am having the same issue in k8s.
INFO: Environment: CATTLE_ADDRESS=x.x.x.x CATTLE_CA_CHECKSUM=c634d3a288462e204a3da64f04030333d5db4f58aba05b099f420a9e99b63c02 CATTLE_CLUSTER=true CATTLE_FEATURES=dashboard=true CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-5485b55f76-grl5l CATTLE_SERVER=https://rancher.my.org INFO: Using resolv.conf: search cattle-system.svc.cluster.local svc.cluster.local cluster.local mshome.net nameserver x.x.x.x options ndots:5 INFO: https://rancher.my.org/ping is accessible INFO: rancher.my.org resolves to x.x.x.x parse error: Invalid numeric literal at line 1, column 10
Persistent CrashLoop
You probably followed the tutorial and used the standard hostname rancher.my.org, it seems; that resolves to an existing public IP, but it should resolve to the local Rancher IP.
This command may fix your issue :
kubectl -n cattle-system patch deployments cattle-cluster-agent --patch '{
"spec": {
"template": {
"spec": {
"hostAliases": [
{
"hostnames":
[
"rancher.my.org"
],
"ip": "172.27.0.2"
}
]
}
}
}
}'
I found it somewhere in another git post; there are other solutions, like changing the name to a working local address, but then DNS in your container should also resolve properly.
You also need to add the name to your local DNS with that IP (or /etc/hosts)
|
gharchive/issue
| 2019-08-08T14:22:00 |
2025-04-01T04:35:39.737019
|
{
"authors": [
"MSandro",
"VGerris",
"cjellick",
"superseb",
"yoliander"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/22063",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
484318998
|
RFE: Allow changing Rancher Server URL
Issue:
Currently, this is not supported as published in the docs:
https://rancher.com/docs/rancher/v2.x/en/admin-settings/
After you set the Rancher Server URL, we do not support updating it. Set the URL with extreme care.
RFE:
Add support for updating the Rancher Server URL
Rancher version:
v2.2.x
This is covered in https://github.com/rancher/rancher/issues/14731; either should be merged/closed.
Close in favor of https://github.com/rancher/rancher/issues/14731
|
gharchive/issue
| 2019-08-23T04:05:45 |
2025-04-01T04:35:39.740514
|
{
"authors": [
"jambajaar",
"loganhz",
"superseb"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/22377",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
487829605
|
Error master: Unschedulable
Please help me. My cluster's master (custom cluster) shows Unschedulable.
My master node has 2GB RAM and a 40GB disk.
The master status shows: kubelet has no disk pressure
And below are the details:
{
"allocatable": {
"cpu": "1",
"ephemeral-storage": "34776022983",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "1760808Ki",
"pods": "110"
},
"annotations": {
"flannel.alpha.coreos.com/backend-data": "{"VtepMAC":"0e:45:0f:a5:7f:11"}",
"flannel.alpha.coreos.com/backend-type": "vxlan",
"flannel.alpha.coreos.com/kube-subnet-manager": "true",
"flannel.alpha.coreos.com/public-ip": "192.168.94.130",
"node.alpha.kubernetes.io/ttl": "0",
"rke.cattle.io/external-ip": "192.168.94.130",
"rke.cattle.io/internal-ip": "192.168.94.130",
"volumes.kubernetes.io/controller-managed-attach-detach": "true"
},
"baseType": "node",
"capacity": {
"cpu": "1",
"ephemeral-storage": "36850Mi",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "1863208Ki",
"pods": "110"
},
"clusterId": "c-6xxcm",
"conditions": [ 7 items
{
"status": "True",
"type": "Initialized"
},
{
"message": "registered with kubernetes",
"status": "True",
"type": "Registered"
},
{
"status": "True",
"type": "Provisioned"
},
{
"lastHeartbeatTime": "2019-09-01T00:49:49Z",
"lastHeartbeatTimeTS": 1567298989000,
"lastTransitionTime": "2019-08-31T15:57:09Z",
"lastTransitionTimeTS": 1567267029000,
"message": "kubelet has sufficient memory available",
"reason": "KubeletHasSufficientMemory",
"status": "False",
"type": "MemoryPressure"
},
{
"lastHeartbeatTime": "2019-09-01T00:49:49Z",
"lastHeartbeatTimeTS": 1567298989000,
"lastTransitionTime": "2019-08-31T15:57:09Z",
"lastTransitionTimeTS": 1567267029000,
"message": "kubelet has no disk pressure",
"reason": "KubeletHasNoDiskPressure",
"status": "False",
"type": "DiskPressure"
},
{
"lastHeartbeatTime": "2019-09-01T00:49:49Z",
"lastHeartbeatTimeTS": 1567298989000,
"lastTransitionTime": "2019-08-31T15:57:09Z",
"lastTransitionTimeTS": 1567267029000,
"message": "kubelet has sufficient PID available",
"reason": "KubeletHasSufficientPID",
"status": "False",
"type": "PIDPressure"
},
{
"lastHeartbeatTime": "2019-09-01T00:49:49Z",
"lastHeartbeatTimeTS": 1567298989000,
"lastTransitionTime": "2019-08-31T15:58:40Z",
"lastTransitionTimeTS": 1567267120000,
"message": "kubelet is posting ready status",
"reason": "KubeletReady",
"status": "True",
"type": "Ready"
}
],
"controlPlane": true,
"created": "2019-08-31T15:51:04Z",
"createdTS": 1567266664000,
"creatorId": null,
"customConfig": {
"address": "192.168.94.130",
"type": "/v3/schemas/customConfig"
},
"dockerInfo": {
"debug": false,
"experimentalBuild": false,
"type": "/v3/schemas/dockerInfo"
},
"etcd": true,
"externalIpAddress": "192.168.94.130",
"hostname": "master1",
"id": "c-6xxcm:m-d5802d05bbf0",
"imported": true,
"info": {
"cpu": {
"count": 1
},
"kubernetes": {
"kubeProxyVersion": "v1.14.6",
"kubeletVersion": "v1.14.6"
},
"memory": {
"memTotalKiB": 1863208
},
"os": {
"dockerVersion": "1.13.1",
"kernelVersion": "3.10.0-957.27.2.el7.x86_64",
"operatingSystem": "CentOS Linux 7 (Core)"
}
},
"ipAddress": "192.168.94.130",
"labels": {
"beta.kubernetes.io/arch": "amd64",
"beta.kubernetes.io/os": "linux",
"kubernetes.io/arch": "amd64",
"kubernetes.io/hostname": "master1",
"kubernetes.io/os": "linux",
"node-role.kubernetes.io/controlplane": "true",
"node-role.kubernetes.io/etcd": "true"
},
"links": {
"remove": "…/v3/nodes/c-6xxcm:m-d5802d05bbf0",
"self": "…/v3/nodes/c-6xxcm:m-d5802d05bbf0",
"update": "…/v3/nodes/c-6xxcm:m-d5802d05bbf0"
},
"name": "",
"namespaceId": null,
"nodeName": "master1",
"nodePoolId": "",
"nodeTemplateId": null,
"podCidr": "10.42.0.0/24",
"requested": {
"cpu": "250m",
"pods": "4"
},
"requestedHostname": "master1",
"sshUser": "root",
"state": "active",
"taints": [ 2 items
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/controlplane",
"type": "/v3/schemas/taint",
"value": "true"
},
{
"effect": "NoExecute",
"key": "node-role.kubernetes.io/etcd",
"type": "/v3/schemas/taint",
"value": "true"
}
],
"transitioning": "no",
"transitioningMessage": "",
"type": "node",
"unschedulable": false,
"uuid": "23d25408-cc07-11e9-990b-0242ac110002",
"worker": false
}
Useful Info:
Versions: Rancher v2.2.8, UI: v2.2.94
Route: authenticated.cluster.monitoring.node-detail
etcd and controlplane nodes are tainted, this is shown as Unschedulable in the UI. Because these are the main components of the cluster, you don't want to run workloads on those nods by default. More information can be found on https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/production/
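To see those taints directly on the node (and hence why regular workloads skip it), assuming kubectl is pointed at the cluster:

kubectl get node master1 -o jsonpath='{.spec.taints}'
# [{"effect":"NoSchedule","key":"node-role.kubernetes.io/controlplane","value":"true"},
#  {"effect":"NoExecute","key":"node-role.kubernetes.io/etcd","value":"true"}]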
|
gharchive/issue
| 2019-09-01T01:58:16 |
2025-04-01T04:35:39.770905
|
{
"authors": [
"superseb",
"taibc"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/22586",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
607688567
|
[BUG] Alert Expression Metrics Not Populating
What kind of request is this (question/bug/enhancement/feature request):
Bug
Steps to reproduce (least amount of steps as possible):
Install Rancher v2.4.2
Enable Monitoring
Create New Alert
Select Expression radio button.
Attempt to open the drop-down list of metrics.
Result:
Metrics fields are not populating in the drop-down.
Other details that may be helpful:
See attached image
Environment information
Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): v2.4.2
Installation option (single install/HA): HA/RKE
@maggieliu from discussion with @westlywright this appears to be a UI bug
@codyrancher Can you add the PR for 2.4 release branch to this issue?
https://github.com/rancher/ui/pull/3943
Rancher version master-head (04/28/2020) master-2821-head commit id: 14013a8d0
I was able to select expression; the dropdown gets populated within a second, and there are no errors in the browser devtools console or network tab.
Rule was successfully added and I was able to also remove it.
|
gharchive/issue
| 2020-04-27T16:35:09 |
2025-04-01T04:35:39.777213
|
{
"authors": [
"izaac",
"js422",
"mrajashree",
"sangeethah"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/26829",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
120307705
|
Uncaught TypeError when clicking on dropdown for Scheduling
Version - Master 12/03
Steps;
Add a couple of labels to your hosts
Make sure Developer Tools are open
Create a service and under Advanced Options/Scheduling click on the drop-down for Key or Value
Results:
The service still works and the rule is followed but the Debugger pauses on caught exception - Uncaught TypeError: Cannot read property 'relatedTarget' of undefined.
Expected:
Probably shouldn't do this.
Can you still reproduce this? I tried...
Version - master 3/22
Verified fixed
|
gharchive/issue
| 2015-12-04T00:51:47 |
2025-04-01T04:35:39.779680
|
{
"authors": [
"tfiduccia",
"vincent99"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/2910",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1069958850
|
[2.5] EKS upgrades loop without a default AMI list for v1.19 onwards
Backport https://github.com/rancher/rancher/issues/35295
SURE-3342
Setup For Reproduction
Rancher version: v2.5.11
Rancher install: docker
Downstream clusters: EKS
Kubernetes version: 1.18/1.19
Setup For Validation
Rancher version: v2.5-head(07a745a)
Rancher install: docker
Downstream clusters: EKS
Kubernetes version: 1.18/1.19
Steps For Reproduction
Go to https://rancherserver/g/clusters/add/launch/amazoneks
select regions as well as kubernetes versions: us-east2 and k8s: v1.18/v1.19 (One cluster for each version of k8s)
Proceed with all other defaults.
Create cluster
wait for the error to show up, which will be listed as follows (this will happen when provisioning the v1.19 cluster and when upgrading the v1.18 cluster to v1.19):
Default ami of region us-east-2 for kubernetes version 1.19 is not set
Steps For Validation
Go to https://rancherserver/g/clusters/add/launch/amazoneks
select regions as well as kubernetes versions: us-east2 and k8s: v1.18/v1.19 (One cluster for each version of k8s)
Proceed with all other defaults.
Create cluster
The v1.19 cluster will become active with no issues and once you upgrade the v1.18 cluster to v1.19 it should also become active with no issues.
Results
The clusters provision correctly and have a default ami set in aws for the v1.19 kubernetes version
Notes
It should be noted that https://rancherserver/g/clusters/add/launch/amazoneksv2 allows for clusters to be provisioned in v2.5.11.
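As a possible workaround (my suggestion, not an official fix), the EKS-optimized AMI for a given Kubernetes version can be looked up via AWS's public SSM parameter and then set explicitly on the node group:

aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.19/amazon-linux-2/recommended/image_id \
  --region us-east-2 --query 'Parameter.Value' --output text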
Same error in eu-west-1 region
@throrin19 This change was included in v2.5.12, which was just released.
I see that, but how do I remove the error and correctly launch the update of my cluster? I tried to re-edit the cluster and validate, but nothing happens.
|
gharchive/issue
| 2021-12-02T20:35:29 |
2025-04-01T04:35:39.787902
|
{
"authors": [
"Auston-Ivison-Suse",
"deniseschannon",
"thedadams",
"throrin19"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/35725",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1792172901
|
[RFE] TLS 1.3 Support for Rancher Manager Ingress
Currently we do not support TLS 1.3 and a customer is asking that we do. Some applications that are part of the cluster could have TLS 1.3 enabled but doing so can end up breaking things.
The business case is that we will eventually have to do this, as TLS v1.3 adoption will at some point eclipse TLS v1.2.
Versions to be dropped:
TLS 1.0 and TLS 1.1
Support to TLS 1.0 and TLS 1.1 must be dropped. These versions are considered insecure and were deprecated in 2021, see RFC 8996.
The deprecation must happen and be supported in all places and services used by Rancher:
Source code - https://github.com/rancher/rancher/blob/release/v2.7/pkg/tls/base.go#L17-L21.
Single Docker install / Helm install / Ingress - https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-references/tls-settings.
The configuration must be applied to all TLS services started by Rancher.
Versions to be maintained: TLS 1.2
In the same places listed above.
Versions to be added: TLS 1.3
In the same places listed above.
Notes
Regarding the ciphers to be supported by both TLS 1.2 and 1.3, we should use the compatible list provided by Cloudflare. Cloudflare's list aligns with Go's ciphers for TLS 1.3. Those ciphers offer a good compromise between client compatibility and security. Old or legacy ciphers must not be supported.
Ingress' default TLS configuration might have to be tailored according to the listed ciphers.
When a PR is ready for review, the Security team will run a TLS scan to confirm that all Rancher's TLS related services/ports do properly support TLS 1.2 and 1.3 and disallow TLS 1.0 and 1.1.
Would be good to get feedback from Product regarding if we should completely remove TLS 1.0 and 1.1 or deprecate them first, giving that this might impact customers (in case we have customers actually using those versions in their environments).
The changes must be release noted.
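For validation, an external handshake check is a quick complement to the Security team's scan; a sketch using openssl, assuming rancher.example.com is the server hostname:

# Should succeed once TLS 1.3 is enabled
openssl s_client -connect rancher.example.com:443 -tls1_3 </dev/null 2>/dev/null \
  | grep -E 'Protocol|Cipher'
# Should be refused once TLS 1.0/1.1 support is dropped
openssl s_client -connect rancher.example.com:443 -tls1_1 </dev/null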
SURE-3166
QA Testing
Root cause
Rancher doesn't allow setting tls-min-version to 1.3. It also doesn't allow to set any 1.3 ciphers in tls-ciphers.
What was fixed, or what changes have occurred
Rancher's validation of the two settings now allows min version to be 1.3 and enforces 1.3 ciphers if the min version is 1.3.
Areas or cases that should be tested
What areas could experience regressions?
Steps
Run Rancher without changing TLS-related settings (as before)
Note that Rancher starts up without errors. Make sure the UI works fine.
Inspect the value of the tls-min-version, ensure the value is the default 1.2.
Run Rancher by specifying tls-min-version as 1.3.
Note that Rancher fails to start up because the default ciphers are for lower versions of TLS.
Run Rancher by specifying tls-min-version as 1.3 and tls-ciphers as "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384"
Note that Rancher starts up without errors. Make sure the UI works fine.
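To confirm what the running server actually picked up, the values are exposed as Rancher settings; a sketch assuming kubectl access to the local cluster (the setting names are my assumption of the env-var mapping):

kubectl get settings.management.cattle.io tls-min-version -o jsonpath='{.value}'
kubectl get settings.management.cattle.io tls-ciphers -o jsonpath='{.value}'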
:muscle: Test Cases
#
Priority
Description & Link
PASS/FAIL
1
P0
Start Rancher with only CATTLE_TLS_MIN_VERSION=1.3 Should Fail
✅ PASS
2
P0
Run Rancher Without Specifying CATTLE_TLS_MIN_VERSION
✅ PASS
3
P0
Run with CATTLE_TLS_MIN_VERSION & CATTLE_TLS_CIPHERS
✅ PASS
4
P0
Run with CATTLE_TLS_MIN_VERSION & Wrong CATTLE_TLS_CIPHERS
✅ PASS
5
P0
Upgrade
✅ PASS
🚨 5 test cases... CLICK TO EXPAND! (For table links to work) ⬅️
1 Start Rancher with only CATTLE_TLS_MIN_VERSION=1.3 Should Fail / Status: ✅ PASS
:small_red_triangle: back to top
Test 1 details... Click to expand
Test Steps for Validation
Attempt to start Rancher on v2.8-head
Specify CATTLE_TLS_MIN_VERSION as 1.3
-e CATTLE_TLS_MIN_VERSION=1.3
SSH onto the instance where Rancher is running
Run:
docker ps
docker logs -f $CONTAINER_ID_GOES_HERE
In the logs notice the following:
2023/10/05 20:58:01 [FATAL] failed to setup TLS listener: unsupported cipher
As expected, Rancher fails to start
Can use if needed: https://github.com/brudnak/linode-docker-cattle
provisioner "remote-exec" {
inline = [
"sudo apt update",
"sudo curl https://releases.rancher.com/install-docker/20.10.sh | sh",
"docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged -e CATTLE_BOOTSTRAP_PASSWORD=${var.rancher_bootstrap_password} -e CATTLE_TLS_MIN_VERSION=1.3 rancher/rancher:${var.rancher_instances[count.index].rancher_version} --acme-domain ${random_pet.random_pet[count.index].id}.${var.aws_route53_fqdn}",
]
}
✅ Expected Outcome
For there to be an error message in the logs and Rancher not to start.
✅ Actual Outcome
Error logs were present and Rancher didn't come up active
2 Run Rancher Without Specifying CATTLE_TLS_MIN_VERSION / Status: ✅ PASS
:small_red_triangle: back to top
Test 2 details... Click to expand
Test Steps for Validation
Install Rancher on v2.8-head without setting CATTLE_TLS_MIN_VERSION
Ensure functionality
Enable auth provider (GitHub OAuth)
Add two standard users (user1, and user2)
Login as user1
Create a downstream RKE1 Linode cluster as user1
Create a project
Create a namespace
Create a deployment
Ensure the deployment comes up active
Add user2 as a project owner
Login as user2 and ensure access to the project
Check Rancher logs for anything related to tls version
No errors observed
✅ Expected Outcome
For Rancher to start up and function without issue
✅ Actual Outcome
No issues observed
3 Run with CATTLE_TLS_MIN_VERSION & CATTLE_TLS_CIPHERS / Status: ✅ PASS
:small_red_triangle: back to top
Test 3 details... Click to expand
Test Steps for Validation
Install Rancher on 2.8-head and set CATTLE_TLS_MIN_VERSION and CATTLE_TLS_CIPHERS
Set CATTLE_TLS_MIN_VERSION=1.3 & CATTLE_TLS_CIPHERS=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384
Can use: https://github.com/brudnak/linode-docker-cattle
provisioner "remote-exec" {
inline = [
"sudo apt update",
"sudo curl https://releases.rancher.com/install-docker/20.10.sh | sh",
"docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged -e CATTLE_BOOTSTRAP_PASSWORD=${var.rancher_bootstrap_password} -e CATTLE_TLS_MIN_VERSION=1.3 -e CATTLE_TLS_CIPHERS=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384 rancher/rancher:${var.rancher_instances[count.index].rancher_version} --acme-domain ${random_pet.random_pet[count.index].id}.${var.aws_route53_fqdn}",
]
}
Check Rancher logs for any errors
Login to Rancher and verify functionality
Create two standard users user1, user2
Login as user1
Create a downstream Linode K3s cluster
Create a project in the cluster
Create a namespace in the project
Create a deployment in the namespace
Assign user2 as a project-owner
✅ Expected Outcome
No error logs regarding tls and Rancher working as expected.
✅ Actual Outcome
No errors observed in logs regarding tls, Rancher was working as expected.
4 Run with CATTLE_TLS_MIN_VERSION & Wrong CATTLE_TLS_CIPHERS / Status: ✅ PASS
:small_red_triangle: back to top
Test 4 details... Click to expand
Test Steps for Validation
Attempt to start Rancher on v2.8-head
Specify CATTLE_TLS_MIN_VERSION as 1.3
-e CATTLE_TLS_MIN_VERSION=1.3
CATTLE_TLS_CIPHERS=TLS_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
SSH onto the instance where Rancher is running
Run:
docker ps
docker logs -f $CONTAINER_ID_GOES_HERE
In the logs notice the following:
2023/10/06 00:26:47 [FATAL] failed to setup TLS listener: unsupported cipher TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, must be one or more of: TLS_AES_128_GCM_SHA256
TLS_AES_256_GCM_SHA384
As expected, Rancher fails to start
Can use if needed: https://github.com/brudnak/linode-docker-cattle
provisioner "remote-exec" {
inline = [
"sudo apt update",
"sudo curl https://releases.rancher.com/install-docker/20.10.sh | sh",
"docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged -e CATTLE_BOOTSTRAP_PASSWORD=${var.rancher_bootstrap_password} -e CATTLE_TLS_MIN_VERSION=1.3 -e CATTLE_TLS_CIPHERS=TLS_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 rancher/rancher:${var.rancher_instances[count.index].rancher_version} --acme-domain ${random_pet.random_pet[count.index].id}.${var.aws_route53_fqdn}",
]
}
✅ Expected Outcome
Rancher should fail to start and error with a message about needing the correct cipher.
✅ Actual Outcome
Rancher failed to start and was erroring about needing a set list of ciphers.
5 Upgrade / Status: ✅ PASS
:small_red_triangle: back to top
Test 5 details... Click to expand
Test Steps for Validation
Start with Rancher on v2.7.7
Upgrade Rancher to v2.8-head
When upgrading pass the following:
CATTLE_TLS_MIN_VERSION=1.3
CATTLE_TLS_CIPHERS=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384
After upgrade check that Rancher is accessible and there are no warnings about TLS ciphers
✅ Expected Outcome
Rancher to be usable after upgrading from v2.7.7 to v2.8-head and providing
CATTLE_TLS_MIN_VERSION=1.3
CATTLE_TLS_CIPHERS=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384
✅ Actual Outcome
Rancher comes up active after upgrade and is usable. No warnings seen for TLS cipher.
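For reference, the upgrade in test 5 follows the usual single-node Docker procedure; a sketch assuming the old container is named rancher and its data is kept via a volumes container:

docker stop rancher
docker create --volumes-from rancher --name rancher-data rancher/rancher:v2.7.7
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged \
  --volumes-from rancher-data \
  -e CATTLE_TLS_MIN_VERSION=1.3 \
  -e CATTLE_TLS_CIPHERS=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384 \
  rancher/rancher:v2.8-head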
|
gharchive/issue
| 2023-07-06T20:15:55 |
2025-04-01T04:35:39.902292
|
{
"authors": [
"brudnak",
"maxsokolovsky",
"samjustus"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/42027",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1802932274
|
[BUG]
Rancher Server Setup
Rancher version: rancher/rancher:latest (2.7.5)
Installation option (Docker install/Helm Chart): docker
If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc):
Proxy/Cert Details:
Information about the Cluster
Kubernetes version:
Cluster Type (Local/Downstream):
If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider):
User Information
What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom)
If custom, define the set of permissions:
Describe the bug
Hey, trying to get Rancher up and running; unfortunately it's not coming up because of the following errors.
Tried different versions with the same result.
Is there a problem with https://git.rancher.io?
fatal: unable to access 'https://git.rancher.io/system-charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:29:38 [ERROR] error syncing 'library': handler catalog: Update failed: fatal: unable to access 'https://git.rancher.io/charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:29:51 [ERROR] error syncing 'system-library': handler catalog: Clone failed: Cloning into 'management-state/catalog-cache/6bda4fa402575032915dc735ad66131a2f8a36ff38c6fb593402ad315fbfdf85'...
fatal: unable to access 'https://git.rancher.io/system-charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:30:12 [ERROR] error syncing 'library': handler catalog: Update failed: fatal: unable to access 'https://git.rancher.io/charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:30:22 [ERROR] error syncing 'system-library': handler catalog: Clone failed: Cloning into 'management-state/catalog-cache/6bda4fa402575032915dc735ad66131a2f8a36ff38c6fb593402ad315fbfdf85'...
fatal: unable to access 'https://git.rancher.io/system-charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:30:47 [ERROR] error syncing 'library': handler catalog: Update failed: fatal: unable to access 'https://git.rancher.io/charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:30:53 [ERROR] error syncing 'system-library': handler catalog: Clone failed: Cloning into 'management-state/catalog-cache/6bda4fa402575032915dc735ad66131a2f8a36ff38c6fb593402ad315fbfdf85'...
fatal: unable to access 'https://git.rancher.io/system-charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:31:21 [ERROR] error syncing 'library': handler catalog: Update failed: fatal: unable to access 'https://git.rancher.io/charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:31:25 [ERROR] error syncing 'system-library': handler catalog: Clone failed: Cloning into 'management-state/catalog-cache/6bda4fa402575032915dc735ad66131a2f8a36ff38c6fb593402ad315fbfdf85'...
fatal: unable to access 'https://git.rancher.io/system-charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:31:58 [ERROR] error syncing 'system-library': handler catalog: Clone failed: Cloning into 'management-state/catalog-cache/6bda4fa402575032915dc735ad66131a2f8a36ff38c6fb593402ad315fbfdf85'...
fatal: unable to access 'https://git.rancher.io/system-charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:32:04 [ERROR] error syncing 'library': handler catalog: Update failed: fatal: unable to access 'https://git.rancher.io/charts/': The requested URL returned error: 502
: exit status 128, requeuing
2023/07/13 12:32:29 [ERROR] error syncing 'system-library': handler catalog: Clone failed: Cloning into 'management-state/catalog-cache/6bda4fa402575032915dc735ad66131a2f8a36ff38c6fb593402ad315fbfdf85'...
fatal: unable to access 'https://git.rancher.io/system-charts/': The requested URL returned error: 502
: exit status 128, requeuing
To Reproduce
Result
Expected Result
Screenshots
Additional context
If you need more information, let me know.
Regards,
Michel
i have the same issue :(
Even I have the same issue.
time="2023-07-17T17:09:18Z" level=error msg="error syncing 'rancher-partner-charts': handler helm-clusterrepo-ensure: git -C /var/lib/rancher-data/local-catalogs/v2/rancher-partner-ch │
│ arts/8f17acdce9bffd6e05a58a3798840e408c4ea71783381ecd2e9af30baad65974 fetch origin -- 53b05e3afbc42f183efc9e61a6484171b3abc579 error: exit status 128, detail: fatal: unable to access │
│ 'https://git.rancher.io/partner-charts/': The requested URL returned error: 502\n, requeuing" │
Also, if I clone the repo, I end up with a fatal error:
git clone --depth=1 -n --branch master https://git.rancher.io/partner-charts
Cloning into 'partner-charts'...
fatal: unable to access 'https://git.rancher.io/partner-charts/': The requested URL returned error: 502
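Note that a 502 is a server-side (or proxy) response, which points away from local DNS; to tell the two apart from the box running Rancher, something like:

getent hosts git.rancher.io                                # no output -> DNS problem
curl -sI https://git.rancher.io/system-charts/ | head -1   # status line -> DNS is fine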
👍🏼 bump
Another me too. Saw it on Rancher 2.7.5, then downgraded to 2.7.2 and still saw it.
Then it started working again.
Same here. I'm submitting a support ticket to see how to fix.
Today it is working fine for me; I'm not getting 502 errors. Maybe the git repo was offline and now it is fixed?
Rancher 2.9, same issue
Rancher 2.8, having the same issue
Same Issue with latest -> V2.9 (31.07.2024)
how I managed to fix that: sudo echo "nameserver 192.168.1.1" >> /etc/resolv.conf
it was an issue with not resolving on the DNS ;)
how I managed to fix that: sudo echo "nameserver 192.168.1.1" >> /etc/resolv.conf it was an issue with not resolving on the DNS ;)
I don't think that's the solution. "https://git.rancher.io" can be resolved; "https://git.rancher.io/system-charts/" gets me a 404, so this is a dead link and not a DNS problem.
Rancher 2.8.5, 2.8.4, having the same issue
Same here with Rancher "stable" 2.9.1
2024/09/05 07:21:51 [ERROR] error syncing 'helm3-library': handler catalog: Clone failed: Cloning into 'management-state/catalog-cache/f341cfdfa521a9aa2b993cb34b26bb91b2d173ef1a7df8d41b8921b0e4f82788'...
fatal: unable to access 'https://git.rancher.io/helm3-charts/': OpenSSL/3.1.4: error:80000002:system library::No such file or directory
And from my browser the URL is not reachable too.
Rancher v2.8.5
time="2024-12-19T00:13:28Z" level=error msg="error syncing 'rancher-partner-charts': handler helm-clusterrepo-download: update failure: git -C /var/lib/rancher-data/local-catalogs/v2/rancher-partner-charts/8f17...5974 fetch origin -- main error: exit status 128, detail: fatal: unable to access 'https://git.rancher.io/partner-charts/': The requested URL returned error: 502\n, requeuing"
|
gharchive/issue
| 2023-07-13T12:36:13 |
2025-04-01T04:35:39.918679
|
{
"authors": [
"00r2",
"Juventinooo0911",
"Sispheor",
"alexsalex",
"gajapathi28",
"jcox10",
"jmckeown2",
"max5800",
"mbathe19",
"ricardosilva86",
"vonhutrong",
"zlingqu"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/42075",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1927053845
|
[RFE] Add auth config field that allows Rancher to user service account to search for users
Is your feature request related to a problem? Please describe.
This is a feature request required to solve the issue filed in this bug ticket:
https://github.com/rancher/rancher/issues/35259
Describe the solution you'd like
Stolen from a comment on the original bug ticket.
For background, Rancher today (in the case of most auth providers) always attempts to scope its searches to the permissions of the requesting user. It does this so that when users attempt to add new permissions to a user/group for a cluster/project, they can only see the objects that they could see outside of Rancher. This way, Rancher doesn't become an "attack vector", allowing users to effectively escalate their permissions in the auth providers.
However, we recognize that this approach does not account for typical setups (such as LDAP) where users have no permissions in the authentication provider and the service account is used as a way of tightly controlling read permissions from the application. We see these configurations as legitimate use cases, and want to better support them.
To this end, we are going to consider a more comprehensive solution to this issue. One potential solution would be to expose a field on the auth config (search_using_service_account, a bool) which will cause Rancher to always use the service account (and never the user) to search for users/groups. This value will default to false, to keep Rancher's current behavior for our existing consumers. This will allow users who use the more typical setups to use Rancher in the method that they desire, and implementing an explicit behavior will give us the chance to ensure that this behavior is consistent and up to our standards of quality.
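For illustration only (the field below is the one proposed above and does not exist yet), enabling it on, say, the OpenLDAP auth config might look like:

# hypothetical field; camelCase form of the proposed search_using_service_account
kubectl patch authconfig openldap --type=merge \
  -p '{"searchUsingServiceAccount": true}'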
Describe alternatives you've considered
This PR was merged and then reverted
https://github.com/rancher/rancher/pull/40391
It looks like the work done in https://github.com/rancher/rancher/pull/40391 would still be applicable for this. We would just add another field to the authconfig to give the choice of whether or not to use the serviceaccount fallback option.
Adding a field here would also require a small UI change to display the new field and to pass that new value in when enabling the auth provider.
Neither the frontend nor backend changes are particularly large.
This would probably be a reasonable first or second issue for someone to take on.
Das to pick this up for QA - QA/M
|
gharchive/issue
| 2023-10-04T22:07:45 |
2025-04-01T04:35:39.924319
|
{
"authors": [
"MKlimuszka",
"Priyashetty17",
"crobby"
],
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/43064",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|