712236085
updated the logo A logo with a circular background was needed, so I have made a high-resolution logo, sized 60x60 px with a white background as per the instructions. Do let me know if you need any further changes here. Glad to help :) I'm attaching the file here so that it is easy for you to review. Make the border white or remove the black one. @reveurguy please check the new commit.
gharchive/pull-request
2020-09-30T19:53:59
2025-04-01T04:55:32.782995
{ "authors": [ "aditimaurya", "reveurguy" ], "repo": "Py-Contributors/py-contributors.github.io", "url": "https://github.com/Py-Contributors/py-contributors.github.io/pull/71", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1007204518
Remove dictionary from default install The current dictionary is quite big and probably not every user is going to use it. Also, PyThaiNLP may consider including the nlpO3 Python binding as one of its word tokenization options, which would mean two exact copies of the same dictionary get installed. The dictionary could instead be installed only if the user explicitly asks for it via a feature flag, maybe something like: nlpo3 = {version = "1.3.0", features = ["dict-pythainlp", "dict-libthai"]} More dictionary options can be offered this way as well. If the user just installs it without the feature flag: nlpo3 = {version = "1.3.0"} they have to supply a dictionary of their own. #43 merged. Close.
gharchive/issue
2021-09-25T21:50:47
2025-04-01T04:55:32.894601
{ "authors": [ "bact" ], "repo": "PyThaiNLP/nlpo3", "url": "https://github.com/PyThaiNLP/nlpo3/issues/42", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1137935009
classwisewrapper doesn't support prefix 🐛 Bug Hi, good afternoon. I have defined ClasswiseWrapper inside a MetricCollection, which looks like this:

```python
train_metrics = MetricCollection({
    # weighted acc
    "WA": Accuracy(num_classes=num_classes, threshold=0.5, average=avg),
    # unweighted acc
    "UA": Accuracy(num_classes=num_classes, threshold=0.5, average="micro"),
    "Precision": Precision(num_classes=num_classes, threshold=0.5, average=avg),
    "Recall": Recall(num_classes=num_classes, threshold=0.5, average=avg),
    "UAR": Recall(num_classes=num_classes, threshold=0.5, average="micro"),
    # weighted F1
    "WF1": F1Score(num_classes=num_classes, threshold=0.5, average=avg),
    # unweighted F1
    "UF1": F1Score(num_classes=num_classes, threshold=0.5, average="micro"),
    "MAE": MeanAbsoluteError(),
    "Corr": Corr(compute_on_step=False),
    "acc": ClasswiseWrapper(Accuracy(num_classes=num_classes, average=None), labels),
    "r": ClasswiseWrapper(Precision(num_classes=num_classes, average=None), labels),
    "p": ClasswiseWrapper(Recall(num_classes=num_classes, average=None), labels),
    "f1": ClasswiseWrapper(F1Score(num_classes=num_classes, average=None), labels),
}, prefix="train/epoch/")
```

But it seems that when I call metrics = train_metrics.compute(), although I get the other metrics with the prefix all right (e.g. train/epoch/UF1), ClasswiseWrapper doesn't take the prefix into account, and I get acc_label1, acc_label2, etc. Expected behavior: I think once ClasswiseWrapper is inside a MetricCollection it should produce something like train/epoch/acc_label1. Environment: PyTorch Version (e.g., 1.0): 1.8.2; OS (e.g., Linux): Linux; How you installed PyTorch (conda, pip, source): conda; Python version: 3.7.11; Any other relevant information: torchmetrics installed from latest master (0.8.0.dev0). Additional context: Thank you for this awesome project!! @cnut1648 thanks for reporting this. I'll take a look at it today (should be a simple fix). Great to know that the wrapper is useful to some users :]
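Until the fix lands, the missing prefixes can be re-applied to the compute() output by hand. A minimal sketch of that workaround; the dict contents below are hypothetical values mimicking the shapes described in the report, not real metric results:

```python
# Hypothetical compute() output reproducing the reported behaviour:
# prefixed keys for plain metrics, unprefixed keys from ClasswiseWrapper.
metrics = {"train/epoch/UF1": 0.91, "acc_label1": 0.88, "acc_label2": 0.93}

prefix = "train/epoch/"
# Re-apply the collection's prefix to any key that is missing it.
metrics = {(k if k.startswith(prefix) else prefix + k): v for k, v in metrics.items()}
```

After this pass every key carries the collection prefix, matching the expected behavior in the report.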
gharchive/issue
2022-02-14T22:41:27
2025-04-01T04:55:32.899662
{ "authors": [ "SkafteNicki", "cnut1648" ], "repo": "PyTorchLightning/metrics", "url": "https://github.com/PyTorchLightning/metrics/issues/842", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1402896230
Fix markdown for .env section Due to consecutive markdown blocks not being rendered correctly inside lists (only when accessing the page from the Pycord Guide), I removed one of the two examples that were basically showing the same thing in the same place. In addition, I improved the explanation so that it is clearer for beginners. Please fix merge conflicts
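The guide's .env section covers loading secrets (such as a bot token) from a file. A stdlib-only sketch of the idea; this is not the guide's actual example, which may well use the python-dotenv package instead:

```python
import pathlib
import tempfile

def load_env(path):
    """Parse simple KEY=VALUE lines from a .env file; comments and blank lines are skipped."""
    env = {}
    for line in pathlib.Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"')
    return env

# Demo: write a small .env file to a temporary directory and load it back.
env_file = pathlib.Path(tempfile.mkdtemp()) / ".env"
env_file.write_text('# bot credentials\nTOKEN="abc123"\n\nDEBUG=true\n')
config = load_env(env_file)
```

Keeping the token in a .env file (and out of version control) is the point of the section the PR cleans up.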
gharchive/pull-request
2022-10-10T09:51:39
2025-04-01T04:55:32.912449
{ "authors": [ "BobDotCom", "Elitesparkle" ], "repo": "Pycord-Development/guide", "url": "https://github.com/Pycord-Development/guide/pull/227", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2621366744
Use a sphinx theme that supports dark mode natively Currently we are using the "Read The Docs" theme for our Sphinx-generated documentation pages. However, there are many additional theme options available to explore at https://sphinx-themes.readthedocs.io/en/latest/. Personally, I prefer PyData or Furo. Both support dark mode natively and, in my view, offer a cleaner, more modern style. For example, Qualtran uses the PyData theme for their docs, which has a really nice look. Do you have any preferences or thoughts on this? No issue from my end if you have a strong preference and would prefer to change the style. On Tue, Oct 29, 2024, 7:47 AM Adrien Suau wrote: Both seem fine to me. I agree that RTD is not perfect, and having a native dark theme is a nice feature.
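Switching the Sphinx theme is a one-line change in conf.py. A sketch of the options discussed, assuming the chosen theme package (furo or pydata-sphinx-theme) is added to the docs requirements:

```python
# conf.py fragment (sketch): pick exactly one theme
# html_theme = "sphinx_rtd_theme"      # current Read the Docs theme
html_theme = "furo"                    # Furo: native dark mode
# html_theme = "pydata_sphinx_theme"   # PyData: also supports dark mode
```

The `html_theme` option is standard Sphinx configuration; only the theme package names above are choices under discussion in the issue.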
gharchive/issue
2024-10-29T14:02:52
2025-04-01T04:55:32.948614
{ "authors": [ "afowler", "inmzhang" ], "repo": "QCHackers/tqec", "url": "https://github.com/QCHackers/tqec/issues/382", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1042959009
Dond: Only set parameter if not already at setpoint. The current implementation of dond will ask every instrument to set the setpoint at every measurement point. This leads to an excessive amount of instrument communication, as well as adding up all post_delays at every measurement point. Especially for magnet parameters this becomes a huge issue, as setting the parameter even to its current value can take several seconds due to internal PID loops in the magnet power supply. Change the behaviour to only set the parameter if it isn't already at the desired value as known by qcodes. This ensures we only ping instruments that need to get the setpoint updated during the run. closed in favor of #3534 which is functionally the same change.
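The proposed behaviour can be sketched with a stand-in parameter class. The real QCoDeS Parameter API differs; this only illustrates the "skip redundant set calls" idea from the PR:

```python
# Stand-in for an instrument parameter; counts how many writes reach the instrument.
class FakeParam:
    def __init__(self, value=0.0):
        self._value = value
        self.set_count = 0  # number of instrument writes

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        self.set_count += 1

def maybe_set(param, value):
    # Only talk to the instrument when the known value differs from the setpoint,
    # avoiding both the communication round-trip and the post_delay.
    if param.get() != value:
        param.set(value)

magnet_field = FakeParam(1.0)
maybe_set(magnet_field, 1.0)  # already at setpoint: no write
maybe_set(magnet_field, 2.0)  # differs: exactly one write
```

For a slow magnet power supply, skipping the no-op write is what removes the multi-second PID settling delay at each measurement point.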
gharchive/pull-request
2021-11-03T00:19:34
2025-04-01T04:55:32.950658
{ "authors": [ "ThorvaldLarsen" ], "repo": "QCoDeS/Qcodes", "url": "https://github.com/QCoDeS/Qcodes/pull/3543", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
327638905
unable to install jenkins Sir, when I tried to install Jenkins it's showing me the following error, please help me with that.
If plugin install fails, the only option is to reinstall. If the reinstall is failing, remove the JENKINS_HOME directory & retry.
Sir, how to remove the JENKINS_HOME directory? It's saying no such file or directory.
JENKINS_HOME is the environment variable for the Jenkins home directory, generally present at /var/lib/jenkins. Please understand what you are trying to do before implementing any stuff. First type the command echo $JENKINS_HOME and you will find the Jenkins home directory.
Sorry sir, can you please tell me how to remove the JENKINS_HOME directory?
rm -rf /var/lib/jenkins
Sir, when I use the command rm -rf /var/lib/jenkins it says permission denied. I have tried a new Ubuntu server, still the same issue. Sir, unable to connect to Jenkins.
Restart your Jenkins service and try again.
Yes sir, it was happening due to an internet issue, so I have created a new instance and done the installation; it's working.
gharchive/issue
2018-05-30T09:05:21
2025-04-01T04:55:32.979028
{ "authors": [ "babasmd", "chittalalk", "shaikkhajaibrahim", "vijaybhaskarreddyy" ], "repo": "QT-DevOps/DevOpsIssues", "url": "https://github.com/QT-DevOps/DevOpsIssues/issues/22", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2192945283
Refactoring Python code Main Changes :warning: This is still WIP TODOs [ ] Move hw_test_receiver.py to the test folder [x] RC: remove any methods that send/receive data to/from ESP32; move them to EspBridge [ ] Populate constants.json with all constants [x] Delete old altitude_Kalman_filter.py [ ] Documentation is needed for all functions [x] Probably we'll have to get rid of all (or most) caches in main.py [x] euler_angles: move to bzzz.util [x] process_radio_data and process_ESP_data: this functionality will be mostly moved to EspBridge [x] We need to figure out why we need a time.sleep statement in the main loop Associated Issues Closes #109 Closes #118 Closes #179 Addresses #193 Tests None To stop this PR getting too big it might be worth merging soon? To stop this PR getting too big it might be worth merging soon? @alphaville Sure. Can we just tick all the boxes in our todo list? @jamie-54 where do we stand on this? I suppose, as you said, we need to cross out the first item on our todo list. @jamie-54 where do we stand on this? I suppose, as you said, we need to cross out the first item on our todo list. I think it might be worth merging the working version of this and then addressing the other issues separately? @alphaville @Runway27 @ejb-11 pretty sure this is ready to be merged, please take a look
gharchive/pull-request
2024-03-18T18:19:22
2025-04-01T04:55:32.985196
{ "authors": [ "alphaville", "jamie-54" ], "repo": "QUB-ASL/bzzz", "url": "https://github.com/QUB-ASL/bzzz/pull/215", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1196542475
Build frontpage table by storing ids only Right now frontpage table is filled with huge json fields. It'd be better to keep a simple table with 50 ids, and then join it with questions in getFrontpage. I'm pretty sure the performance won't degrade. (And frontpage_full is entirely unnecessary, since the same effect can be achieved with SELECT * FROM questions). PS: This is not really important, but it's part of the work I'd like to do on normalizing the database. I'm pretty sure the performance won't degrade No strong opinions, happy to defer to you
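The proposed normalization (store ids only, join at read time) can be sketched with an in-memory SQLite database. Table and column names here are assumptions for illustration, not metaforecast's actual schema:

```python
import sqlite3

# The frontpage table stores only question ids; getFrontpage then joins
# against the questions table instead of duplicating huge JSON fields.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE questions (id TEXT PRIMARY KEY, title TEXT);
CREATE TABLE frontpage (id TEXT PRIMARY KEY REFERENCES questions(id));
INSERT INTO questions VALUES ('q1', 'Will X happen?'), ('q2', 'Will Y happen?');
INSERT INTO frontpage VALUES ('q1');
""")
rows = con.execute(
    "SELECT q.id, q.title FROM frontpage f JOIN questions q ON q.id = f.id"
).fetchall()
```

Since the join only touches ~50 rows, the performance claim in the issue is plausible: the work is bounded by the frontpage size, not the questions table.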
gharchive/issue
2022-04-07T20:34:44
2025-04-01T04:55:32.987448
{ "authors": [ "NunoSempere", "berekuk" ], "repo": "QURIresearch/metaforecast", "url": "https://github.com/QURIresearch/metaforecast/issues/49", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2722584540
add some files 2 commits, check if https://github.com/sequoia-pgp/fast-forward works /fast-forward
gharchive/pull-request
2024-12-06T09:59:46
2025-04-01T04:55:32.995636
{ "authors": [ "Qbicz" ], "repo": "Qbicz/fast-forward-rebase-merge", "url": "https://github.com/Qbicz/fast-forward-rebase-merge/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2176095832
Very Slow Performance The Ubuntu image worked well (using a 64 GB SD card), but the Ubuntu system is very slow when installing certain libraries and packages such as the Spyder IDE. The system appears to hang many times. Additionally, the free space is only 4.6 GB, whereas the SD card has a capacity of 64 GB. That's due to the Jetson Nano's quad-core ARM Cortex-A57. It isn't fast. To be precise, with a clock of 1.4 GHz, it can be slower than a Raspberry Pi 4. The great advantage of the Jetson family is their CUDA acceleration, but CPU-wise they are not the fastest in town. Is there a way to make this image (Ubuntu 20.04) faster?
gharchive/issue
2024-03-08T13:58:02
2025-04-01T04:55:33.002624
{ "authors": [ "Qengineering", "adeljalalyousif" ], "repo": "Qengineering/Jetson-Nano-Ubuntu-20-image", "url": "https://github.com/Qengineering/Jetson-Nano-Ubuntu-20-image/issues/74", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1296178403
Attempting to leverage for multilingual data. I am trying to leverage the HiStruct architecture for multilingual data (the Hindi language in this case). I have modified the existing codebase, which can be found here, with all instructions to run the code. However, the following issue occurs when I attempt to start model training:

```
[2022-06-27 13:40:25,965 INFO] Loading train dataset from data_hiwiki/data_hiwiki_roberta/hiwiki.train.0.bert.pt, number of examples: 7
Traceback (most recent call last):
  File "histruct/src/train.py", line 159, in <module>
    train_ext(args, device_id)
  File "/vc_data/users/gmanish/table2Text/histruct/histruct/src/train_extractive.py", line 463, in train_ext
    train_single_ext(args, device_id)
  File "/vc_data/users/gmanish/table2Text/histruct/histruct/src/train_extractive.py", line 505, in train_single_ext
    trainer.train(train_iter_fct, args.train_steps)
  File "/vc_data/users/gmanish/table2Text/histruct/histruct/src/models/trainer_ext.py", line 152, in train
    self._gradient_accumulation(
  File "/vc_data/users/gmanish/table2Text/histruct/histruct/src/models/trainer_ext.py", line 511, in _gradient_accumulation
    sent_scores, mask = self.model(src, segs, clss, mask, mask_cls, sent_struct_vec, tok_struct_vec, section_names)
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/vc_data/users/gmanish/table2Text/histruct/histruct/src/models/model_builder.py", line 518, in forward
    top_vec = self.bert(src, mask_src)
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/vc_data/users/gmanish/table2Text/histruct/histruct/src/models/model_builder.py", line 194, in forward
    top_vec = self.model(x, attention_mask=mask, position_ids=position_ids).last_hidden_state
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 815, in forward
    encoder_outputs = self.encoder(
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 508, in forward
    layer_outputs = layer_module(
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 395, in forward
    self_attention_outputs = self.attention(
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 323, in forward
    self_outputs = self.self(
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/gmanish/miniconda3/envs/table2Text_histruct/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 243, in forward
    attention_scores = attention_scores / math.sqrt(self.attention_head_size)
RuntimeError: CUDA out of memory. Tried to allocate 18.85 GiB (GPU 0; 39.59 GiB total capacity; 20.30 GiB already allocated; 16.78 GiB free; 21.85 GiB reserved in total by PyTorch)
```

Considering that the data sample I am trying to use is quite small, I am unable to understand the reason for this error, as this seems to be an exponential increase in the resource requirements compared to the original (monolingual) architecture. Any insight from your end would be deeply appreciated. Thanking you in anticipation. Hi anupampatil44! Many thanks for your interest in using HiStruct. I am not sure if the problem is with the multilingual model. I had no problem training with the vanilla Longformer on 24GB GPUs and fine-tuning RoBERTa and BERT (with input lengths 512 and 1024). If I get it right, you froze the LM (i.e., -finetune_bert false) when taking longer inputs and didn't add token-level hierarchical position embeddings (HPE) (-add_tok_struct_emb false)? Did you try to clear GPU memory before training? Maybe you can also try reducing the input length (e.g. -max_pos) and the number of sentence-level HPE (i.e., you can count the number of sentences in your inputs (truncated by max_pos), set the minimum possible value for max_nsent, and adjust -max_npara and -max_nsent_in_para accordingly if necessary). Best, Qian
gharchive/issue
2022-07-06T17:48:13
2025-04-01T04:55:33.008370
{ "authors": [ "QianRuan", "anupampatil44" ], "repo": "QianRuan/histruct", "url": "https://github.com/QianRuan/histruct/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1993358903
running_programs_using_decorators fails Steps to reproduce the problem: Install the quantum middleware locally (M1 Mac in my case). Execute 'running_programs_using_decorators' in the Jupyter notebook. What is the current behavior? "Hello, Qiskit!" fails. job.logs() shows:

```
Traceback (most recent call last):
  File "/tmp/ray/session_2023-11-14_10-23-00_770226_8/runtime_resources/working_dir_files/_ray_pkg_6635b2a1eae383f6/entrypoint_a80fb97c.py", line 11, in <module>
    function = cloudpickle.load(file)
AttributeError: Can't get attribute '_function_setstate' on <module 'cloudpickle.cloudpickle' from '/home/ray/anaconda3/lib/python3.9/site-packages/cloudpickle/cloudpickle.py'>
```

What is the expected behavior? Completes without error.
This may be unique to M1 Mac.
The notebook image gets cloudpickle 3.0.0 installed and the Ray node gets cloudpickle 2.2.1 installed. This causes a serialization mismatch and the failure. @IceKhan13 @Tansito @psschwei Is this only for M1 Mac (arm64)? Can somebody try this on an amd64 system? Thanks!
Pinning cloudpickle to 2.2.1 fixes this, but I'm not sure if it's the right fix yet.
when you say install locally, are you building from source or pip install quantum-serverless?
I built it from the main branch.
and running locally using docker compose?
I installed with helm.
even better (that's what I started trying to install with)
The python package is installed at image build time and won't change after that (I believe).
I can test it on M2 now 😂
it failed for me, but not for the reason it failed for you:

```
Failed to pull image "registry.access.redhat.com/ubi8/openssl:8.8-9": rpc error: code = Unknown desc = failed to pull and unpack image "registry.access.redhat.com/ubi8/openssl:8.8-9": failed to resolve reference "registry.access.redhat.com/ubi8/openssl:8.8-9": pulling from host registry.access.redhat.com failed with status code [manifests 8.8-9]: 502 Bad Gateway
```

that (should be) temporary... let me try again
same error:

```
Traceback (most recent call last):
  File "/tmp/ray/session_2023-11-14_12-46-08_676930_14/runtime_resources/working_dir_files/_ray_pkg_60b88e2da40ac6db/entrypoint_a8565cd6.py", line 8, in <module>
    function = cloudpickle.load(file)
AttributeError: Can't get attribute '_function_setstate' on <module 'cloudpickle.cloudpickle' from '/home/ray/anaconda3/lib/python3.9/site-packages/cloudpickle/cloudpickle.py'>
```

My error 😂:

```
 => ERROR [gateway 7/8] RUN chown -R 1000:100 /usr/src/app && mkdir /usr/src/app/media && chown 1000:100 /usr/src/app/media 22.5s
 => ERROR [scheduler 7/8] RUN chown -R 1000:100 /usr/src/app && mkdir /usr/src/app/media && chown 1000:100 /usr/src/app/media 22.5s
 > [gateway 7/8] RUN chown -R 1000:100 /usr/src/app && mkdir /usr/src/app/media && chown 1000:100 /usr/src/app/media:
 22.52 mkdir: cannot create directory ‘/usr/src/app/media’: File exists
 > [scheduler 7/8] RUN chown -R 1000:100 /usr/src/app && mkdir /usr/src/app/media && chown 1000:100 /usr/src/app/media:
 22.44 mkdir: cannot create directory ‘/usr/src/app/media’: File exists
```

@Tansito you have to delete the /gateway/media directory before you build the gateway image.
Oh, true! Thanks @akihikokuroda 👍
@psschwei So it's not M1-unique. We can pin cloudpickle to 2.2.1 for now in requirements.txt in the client directory. WDYT?
I confirm, it happens to me too, yeah.
works for me. do we need to pin in the notebook requirements file too?
It's only in client/requirements.txt. I'll change it. Thanks for the help @psschwei @Tansito
Thanks to you @akihikokuroda for detecting it 👍
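The root cause above is version skew: the notebook side pickled with cloudpickle 3.0.0 and the Ray side unpickled with 2.2.1. A hypothetical pre-flight check illustrating why the pin matters; this helper is not part of quantum-serverless, just a sketch of the idea:

```python
def versions_match(client_version: str, worker_version: str) -> bool:
    # Conservative rule: require identical cloudpickle versions on both sides,
    # since a stream written by 3.0.0 failed to load under 2.2.1 in this issue.
    return client_version == worker_version

# The mismatch reported here would be caught before job submission.
assert not versions_match("3.0.0", "2.2.1")
```

Pinning both images to the same version (2.2.1 in client/requirements.txt) makes the check trivially pass.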
gharchive/issue
2023-11-14T18:53:09
2025-04-01T04:55:33.021422
{ "authors": [ "Tansito", "akihikokuroda", "psschwei" ], "repo": "Qiskit-Extensions/quantum-serverless", "url": "https://github.com/Qiskit-Extensions/quantum-serverless/issues/1090", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1931371553
Incorrectly displayed primitive result plot in Step 4 of Hello World URL to the relevant documentation https://docs.quantum-computing.ibm.com/start/hello-world Select all that apply [ ] typo [ ] code bug [ ] add new content [ ] out-of-date content [X] it's just wrong [ ] other What is the documentation issue/request? Wrong plot in Step 4 for the code block from Step 3. Should look like the below. Just now I ran the code both on the simulator and on manila, and both times I got the plot we see in the current version, rather than the one @jerrymchow shows. Anyone know what's wrong in the code? @frankharkins I also can't reproduce (using qiskit==0.44.2 and qiskit_ibm_runtime==0.12.2). @jerrymchow can you give more information about how you ran the code? I spoke with Jerry and he determined his outcome was different because of how he initialized qc -- so our graph is ok, and I'll close this issue. Thanks all!
gharchive/issue
2023-10-07T14:25:42
2025-04-01T04:55:33.025694
{ "authors": [ "abbycross", "frankharkins", "jerrymchow" ], "repo": "Qiskit/documentation", "url": "https://github.com/Qiskit/documentation/issues/95", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2084971454
Define U gate before parsing This adds a bogus gate definition for U, which is required to be built in. This appears as the first statement in the ASG. This is perhaps not the best way to handle the built-in gate, but it doesn't touch much code, and it can be changed to something better without too much trouble. Since U is required by the language spec, I am thinking it may be better to treat this specially rather than always sticking it in the ASG. This would require an explicit check in a gate call, and also an explicit check every time a symbol is bound (if I recall correctly, you are not allowed to shadow U). If I recall correctly you are not allowed to shadow U I think the scoping rules are no different from any other global-scope identifier (if I remember right), in that you can shadow it with a variable in an inner scope, but not in the global scope. But equally, if it's easier to treat it as reserved for the time being, I don't think that's an onerous request for people. My mention of shadowing is a bit of a red herring. In fact, we can treat U as reserved (with special handling for checking semantics of calls) and at the same time allow it to be shadowed where the spec allows. Closing this in favor of #68 #68 is cleaner, smaller, and more efficient.
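The scoping rule discussed above (U behaves like any global-scope identifier: shadowable in inner scopes, not redefinable in the global scope) can be illustrated with a toy symbol table. The names and structure here are illustrative only, not the parser's actual implementation (which is in Rust):

```python
# Toy symbol table: the global scope is pre-populated with the built-in U gate.
class SymbolTable:
    def __init__(self):
        self.scopes = [{"U": "builtin-gate"}]  # scopes[0] is the global scope

    def push(self):
        self.scopes.append({})  # enter an inner scope

    def bind(self, name, value):
        # Rebinding U is rejected only in the global scope.
        if len(self.scopes) == 1 and name == "U":
            raise ValueError("cannot redefine built-in U in the global scope")
        self.scopes[-1][name] = value

    def lookup(self, name):
        # Innermost binding wins, so an inner-scope U shadows the built-in.
        for scope in reversed(self.scopes):
            if name in scope:
                return scope[name]
        raise KeyError(name)
```

With this reading, a gate-call check only needs to consult `lookup("U")`, which resolves to the built-in unless an inner scope has shadowed it.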
gharchive/pull-request
2024-01-16T21:25:40
2025-04-01T04:55:33.029268
{ "authors": [ "jakelishman", "jlapeyre" ], "repo": "Qiskit/openqasm3_parser", "url": "https://github.com/Qiskit/openqasm3_parser/pull/35", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
911433578
[WIP] Spectroscopy and calibrations integration V2 Summary This is a WIP PR designed to show and discuss how calibrations and experiments could be integrated through the analysis class. An alternative option is shown in #80. Details and comments The user would run the calibration experiment as follows:

```python
calibrations = BackendCalibrations(backend)
...
qubit = 5
qubit_freq = calibrations.get_qubit_frequencies()[qubit]
frequencies = np.linspace(qubit_freq - 10e6, qubit_freq + 10e6)
spec = QubitSpectroscopy(qubit, frequencies)
spec.set_analysis_options(calibrations=calibrations, force_update=True, calibration_group="my_group")
spec.run(backend)
```

The pros of this integration are: The BackendCalibrations class will remain lightweight. The cons of this integration are: The analysis class may not be reusable for other types of peak fits; extra logic will be needed to support other peak-fit experiments such as spectroscopy on 1<->2. The user needs to write a lot of code. When you compare the number of lines that the user has to write in #79 and #80, note that there is something "unfair" for #79: #79 contains two lines for calculating frequencies, while frequencies is just None in #80. So in the code snippet above, these two lines should be removed for a fair comparison. By the way, in both #79 and #80, if we want to calculate the frequencies from the calibrations, we can do it internally for the user. Closed by #88
gharchive/pull-request
2021-06-04T12:06:50
2025-04-01T04:55:33.039130
{ "authors": [ "eggerdj", "yaelbh" ], "repo": "Qiskit/qiskit-experiments", "url": "https://github.com/Qiskit/qiskit-experiments/pull/79", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1420184021
SparseLabelOp doesn't validate input when copy=False Environment Qiskit Nature version: 40f18174ed8246c9d0fa02981ffda7f127c30a45 Python version: N/A Operating system: N/A What is happening? As you can see, the validation is not performed if copy is False. https://github.com/Qiskit/qiskit-nature/blob/40f18174ed8246c9d0fa02981ffda7f127c30a45/qiskit_nature/second_q/operators/sparse_label_op.py#L76-L81 How can we reproduce the issue? N/A What should happen? Validation should be performed even if copy=False. Any suggestions? No response This is done by design, to completely avoid having to iterate over the data contents when copy=False. This is also stated in the documentation of both the SparseLabelOp baseclass here and the FermionicOp class here.
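The design decision can be sketched in a few lines. This is a minimal stand-in, not the real SparseLabelOp class: copy=False opts out of both copying and validation, trading safety for speed by never iterating over the input data.

```python
def build(data, copy=True):
    # copy=True: validate while copying (requires one pass over the data).
    if copy:
        for label in data:  # validation happens only on the copying path
            if not label:
                raise ValueError(f"invalid label: {label!r}")
        return dict(data)
    # copy=False: trusted input, so no iteration, no copy, no validation.
    return data

good = build({"+_0": 1.0})                 # validated copy
unchecked = build({"": 1.0}, copy=False)   # invalid label slips through by design
```

Whether this tradeoff is acceptable is exactly what the documented behaviour asks the caller to decide up front.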
gharchive/issue
2022-10-24T04:10:38
2025-04-01T04:55:33.046796
{ "authors": [ "kevinsung", "mrossinek" ], "repo": "Qiskit/qiskit-nature", "url": "https://github.com/Qiskit/qiskit-nature/issues/912", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
440393690
CouplingMap.reduce raises with connected subgraphs I might be missing something important here, in which case a friendly explanation would suffice to close this issue. The test test.python.transpiler.test_coupling.CouplingTest.test_failed_reduced_map attempts to create a coupling map that is not connected and, therefore, CouplingMap.reduce should raise. However, the resulting candidate (reduced_cmap) is [[1, 2], [0, 1]], which is connected. What am I missing? but it does not connect in qubit 4 Oh! Thanks.
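The resolution hinges on connectivity being relative to the full qubit set, not just the qubits mentioned in the edge list. A stand-alone sketch of the connectivity check (pure Python; the real CouplingMap uses a graph library internally, and the qubit counts below are illustrative):

```python
def is_connected(num_qubits, edges):
    # Build an undirected adjacency map over all declared qubits.
    adj = {i: set() for i in range(num_qubits)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    # Depth-first search from qubit 0.
    seen, stack = set(), [0]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adj[node] - seen)
    return len(seen) == num_qubits
```

Over three qubits the edge list [[1, 2], [0, 1]] is connected, but the same edges in a larger map leave the extra qubits (such as qubit 4 in the test) unreachable, which is why reduce still raises.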
gharchive/issue
2019-05-05T02:58:24
2025-04-01T04:55:33.048718
{ "authors": [ "1ucian0", "jaygambetta" ], "repo": "Qiskit/qiskit-terra", "url": "https://github.com/Qiskit/qiskit-terra/issues/2313", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1620691643
Routing pass does not account for calibrations when assessing gate direction Environment Qiskit Terra version: 0.23.2 Python version: 3.10.9 Operating system: Fedora Linux 37 What is happening? When transpiling a circuit with a custom gate with a calibration, the gate direction pass does not check the circuit calibrations for the gate. It only checks the target and a set of special-cased gates. How can we reproduce the issue?

```python
from qiskit import QuantumCircuit, pulse, transpile
from qiskit.circuit.gate import Gate
from qiskit.circuit.library import CXGate
from qiskit.transpiler import Target

# Placeholder schedule because the schedule content does not matter
sched = pulse.ScheduleBlock()

# Custom target with one two qubit gate added so that the target coupling map is connected
target = Target(num_qubits=2)
target.add_instruction(CXGate(), properties={(0, 1): None})

gate = Gate("my_2q_gate", 2, [])
circ = QuantumCircuit(2)
circ.append(gate, (0, 1))
circ.add_calibration(gate, (0, 1), sched)

transpile(circ, target=target, optimization_level=0)
```

Running this code produces:

```
Traceback (most recent call last):
  File "/reverse.py", line 16, in <module>
    transpile(circ, target=target, optimization_level=1)
  File "/lib/python3.10/site-packages/qiskit/compiler/transpiler.py", line 381, in transpile
    _serial_transpile_circuit(
  File "/lib/python3.10/site-packages/qiskit/compiler/transpiler.py", line 474, in _serial_transpile_circuit
    result = pass_manager.run(circuit, callback=callback, output_name=output_name)
  File "/lib/python3.10/site-packages/qiskit/transpiler/passmanager.py", line 528, in run
    return super().run(circuits, output_name, callback)
  File "/lib/python3.10/site-packages/qiskit/transpiler/passmanager.py", line 228, in run
    return self._run_single_circuit(circuits, output_name, callback)
  File "/lib/python3.10/site-packages/qiskit/transpiler/passmanager.py", line 283, in _run_single_circuit
    result = running_passmanager.run(circuit, output_name=output_name, callback=callback)
  File "/lib/python3.10/site-packages/qiskit/transpiler/runningpassmanager.py", line 125, in run
    dag = self._do_pass(pass_, dag, passset.options)
  File "/lib/python3.10/site-packages/qiskit/transpiler/runningpassmanager.py", line 173, in _do_pass
    dag = self._run_this_pass(pass_, dag)
  File "/lib/python3.10/site-packages/qiskit/transpiler/runningpassmanager.py", line 202, in _run_this_pass
    new_dag = pass_.run(dag)
  File "/lib/python3.10/site-packages/qiskit/transpiler/passes/utils/gate_direction.py", line 300, in run
    return self._run_target(dag, layout_map)
  File "/lib/python3.10/site-packages/qiskit/transpiler/passes/utils/gate_direction.py", line 270, in _run_target
    raise TranspilerError(
qiskit.transpiler.exceptions.TranspilerError: "Flipping of gate direction is only supported for ['cx', 'cz', 'ecr'] at this time, not 'my_2q_gate'."
```

What should happen? Transpilation should run without error. It should leave the circuit unmodified. Any suggestions? In gate_direction.py, right before/after the target is checked in _run_target, the circuit's calibrations should be checked for the gate, and no exception should be raised if the calibration is there. Additionally, there may be a further bug here, because the error message is confusingly about flipping the gate direction when the calibration has the right direction present. A "gate not found" type of message would be more appropriate. I wonder if there are other ways to trigger the "flipping the gate" message where the issue is that the gate was not found at all. This issue was first noticed in the qiskit-dynamics PR for the DynamicsBackend tutorial. In most cases, we are working only with gates already in the Target, so this issue does not come up. Probably we could copy added calibrations to the target in the running pass manager, which might be the easiest fix.
Alternatively, we could update every pass to check calibration -- so far calibration was ignored in many places and we needed to manually update basis gates to respect added calibration. Target must be copied through serialization so mutating the target per circuit doesn't trigger race condition. https://github.com/Qiskit/qiskit-terra/blob/2ce129a14279a746d309f00e311b930ddbfe633c/qiskit/transpiler/passmanager.py#L264-L266 I've made #9786 to address the bulk of this issue. For the error messages: when running in CouplingMap mode, the pass can't know if an instruction is actually supported or not, so the "flipping of gate direction" thing is the right call - it's up to translation passes to sort that stuff out first agreed that the transpiler error messages can be confusing; if a gate isn't supported in the flipped direction either, we should say that rather than saying "we don't know how to flip this gate". I've also made #9787 to address the error-message concerns of this issue. Yes, that sounds reasonable. PR also looks good to me. Thanks Jake.
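The suggested fix — consult the circuit's own calibrations before raising — can be sketched independently of Qiskit's internals. In this standalone sketch, `target_ops` and `calibrations` are hypothetical stand-ins for `Target` instruction support and the circuit's calibration table; the function name and shapes are illustrative, not Qiskit's actual API.

```python
def gate_supported(name, qubits, target_ops, calibrations):
    """Return True if a 2q gate is usable as-is on these qubits:
    either the target defines it there, or the circuit carries a
    calibration for exactly that (gate, qubits) pair."""
    if qubits in target_ops.get(name, set()):
        return True
    # Proposed addition: a custom calibration also makes the gate valid,
    # so the direction pass should not raise for it.
    return (name, qubits) in calibrations

target_ops = {"cx": {(0, 1)}}
calibrations = {("my_2q_gate", (0, 1))}

assert gate_supported("cx", (0, 1), target_ops, calibrations)
assert gate_supported("my_2q_gate", (0, 1), target_ops, calibrations)
# Reversed direction with no calibration is still unsupported.
assert not gate_supported("my_2q_gate", (1, 0), target_ops, calibrations)
```

The point of the sketch is only the ordering: the calibration lookup happens as a fallback after the target check, so circuits whose custom gates ship their own schedules pass through unmodified.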
gharchive/issue
2023-03-13T03:31:23
2025-04-01T04:55:33.056619
{ "authors": [ "jakelishman", "nkanazawa1989", "wshanks" ], "repo": "Qiskit/qiskit-terra", "url": "https://github.com/Qiskit/qiskit-terra/issues/9783", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1759149628
Remove list argument broadcasting and simplify transpile() Summary This commit updates the transpile() function to no longer support broadcast of lists of arguments. This functionality was deprecated in the 0.23.0 release. As part of this removal the internals of the transpile() function are simplified so we don't need to handle broadcasting, building preset pass managers, parallel dispatch, etc anymore as this functionality (without broadcasting) already exists through the transpiler API. Besides greatly simplifying the transpile() code and using more aspects of the public APIs that exist in the qiskit.transpiler module, this commit also should fix the overhead we have around parallel execution due to the complexity of supporting broadcasting. This overhead was partially addressed before in #7789 which leveraged shared memory to minimize the serialization time necessary for IPC but by using PassManager.run() internally now all of that overhead is removed as the initial fork will have all the necessary context in each process from the start. Three seemingly unrelated changes made here were necessary to support our current transpile() API without relying on custom pass manager construction. ~The first is the handling of layout from intlist. The current Layout class is dependent on a circuit because it maps Qubit~ ~objects to a physical qubit index. Ideally the layout structure would just map virtual indices to physical indices (see #8060~ ~for a similar issue, also it's worth noting this is how the internal NLayout and QPY represent layout), but because of the~ ~existing API the construction of a Layout is dependent on a circuit. For the initial_layout argument when running with~ ~multiple circuits to avoid the need to broadcasting the layout construction for supported input types that need the circuit~ ~to lookup the Qubit objects the SetLayout pass now supports taking in an int list and will construct a Layout object at run~ ~time. 
This effectively defers the Layout object creation for initial_layout to run time so it can be built as a function of the~ ~circuit as the API demands.~ (this was handled separately in https://github.com/Qiskit/qiskit-terra/pull/10344) The second is the FakeBackend class used in some tests was constructing invalid backends in some cases. This wasn't caught in the previous structure because the backends were not actually being parsed by transpile() previously which masked this issue. This commit fixes that issue because PassManagerConfig.from_backend() was failing because of the invalid backend construction. The third issue is a new _skip_target private argument to generate_preset_pass_manager() and PassManagerConfig. This was necessary to recreate the behavior of transpile() when a user provides a BackendV2 and either basis_gates or coupling_map arguments. In general the internals of the transpiler treat a target as higher priority because it has more complete and restrictive constraints than the basis_gates/coupling map objects. However, for transpile() if a backendv2 is passed in for backend paired with coupling_map and/or basis_gates the expected workflow is that the basis_gates and coupling_map arguments take priority and override the equivalent attributes from the backend. To facilitate this we need to block pulling the target from the backend This should only be needed for a short period of time as when #9256 is implemented we'll just build a single target from the arguments as needed. Details and comments Fixes #7741 TODO: [x] Fix last failing tests (dt handling for scheduling) The test failure was a flakiness in the CI suite that I've made #10439 to address. This looks a jillion times better than the old code. Given that this PR claims to close #7741, do you have approximate timings for the type of thing in that issue? I don't have the timings handy. 
I think that I tried to run the recreate from the issue when I wrote this a month ago and I was having trouble reproducing the issue locally. I'll give it a try again in a bit and see if I can get real numbers.
gharchive/pull-request
2023-06-15T16:05:27
2025-04-01T04:55:33.065519
{ "authors": [ "jakelishman", "mtreinish" ], "repo": "Qiskit/qiskit-terra", "url": "https://github.com/Qiskit/qiskit-terra/pull/10291", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2061727719
Qasm3 loader fails on feed-forward circuit Environment Python 3.10.12 openqasm3 0.5.0 qiskit-qasm3-import 0.4.1 pytket-qiskit 0.46.0 qiskit 0.45.1 qiskit-aer 0.13.1 qiskit-algorithms 0.2.1 qiskit-dynamics 0.4.2 qiskit-experiments 0.5.4 qiskit-ibm-experiment 0.3.5 qiskit-ibm-provider 0.7.2 qiskit-ibm-runtime 0.17.0 qiskit-terra 0.45.1 What is happening? The exporting and importing of Qasm3 uses different Qasm3 version/convention: qasmInst=qiskit.qasm3.dumps(qcT) qc3=qiskit.qasm3.loads( qasmInst) This is the Qasm3 circuit I want to parse with qiskit.qasm3.loads(.) OPENQASM 3; include "stdgates.inc"; bit[5] c0; rz(pi/2) $13; sx $13; rz(pi/2) $13; cx $13, $14; cx $13, $12; barrier $13, $14, $12, $0; c0[0] = measure $13; c0[1] = measure $14; c0[2] = measure $12; barrier $13, $14, $12, $0; if (c0[0] & c0[1] | c0[0] & c0[2] | c0[1] & c0[2]) { x $0; } c0[3] = measure $0; The error is: File "/usr/local/lib/python3.10/dist-packages/qiskit_qasm3_import/exceptions.py", line 20, in raise_from_node raise ConversionError(message, node) qiskit_qasm3_import.exceptions.ConversionError: 14,4: unhandled binary operator '|' How can we reproduce the issue? Attached code demonstrates I can save a transpiled circuit with feed-forward operations as Qasm3, but I can't read it back. https://bitbucket.org/balewski/quantummind/src/master/Qiskit/issues/issue27_qasm3_IO.py What should happen? I should get back the original circuit Any suggestions? There is also a 2nd issue. When I switch to a simpler circuit w/o feed-forward logic, like GHZ, then qiskit.qasm3.loads( .) works, but it forgets the qubit mapping assigned by the transpiler and counts qubits from 0 to N-1. Such a read-in circuit would not run on the HW properly. The extended binary-operation conditions aren't currently supported in the OpenQASM 3 import; we just haven't added that capability to the converter yet, because we're in the process of completely rewriting it to switch to a more performant and easier to extend parser. 
This feature is very much on our roadmap, it's just likely a few months away, since it's taking a backseat right now in favour of changing the foundations of OQ3 import. For the second issue: it's hard to know exactly what you mean without a reproducible example. If you would like to open a second issue about that, please do.

Thanks for the explanation. How should I export a feed-forward circuit constructed in Qiskit to other frameworks, like TKet or cuQuantum? Until this issue, using QASM allowed convenient conversion of circuits. I have filed a new ticket, #11480, regarding the lost qubit IDs.

Well, the export still works, it's just our import that doesn't at the moment. I wasn't aware that tket had any support for the dynamic-circuits features of OpenQASM 3 yet (the circuit-runtime operations on classical data in your condition), so I'm not sure if they'd be able to import OpenQASM 3 themselves, but if they've added it recently, then the import should work. I'm not very familiar with cuQuantum's offerings, but I believe they have a functional OpenQASM 3 converter to other formats, which I assume handles these constructs itself.
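For reference, the condition the importer chokes on — `c0[0] & c0[1] | c0[0] & c0[2] | c0[1] & c0[2]` — is a 2-of-3 majority vote on the measured syndrome bits. The classical logic (not the importer itself) can be checked in a few lines:

```python
def majority(b0, b1, b2):
    # Same expression as in the OpenQASM 3 condition:
    # (b0 & b1) | (b0 & b2) | (b1 & b2)
    return (b0 & b1) | (b0 & b2) | (b1 & b2)

# Exhaustive check: true exactly when at least two bits are set.
for b0 in (0, 1):
    for b1 in (0, 1):
        for b2 in (0, 1):
            assert majority(b0, b1, b2) == (1 if b0 + b1 + b2 >= 2 else 0)
```

This is only a truth-table sanity check of the condition's meaning; fixing the import still requires the converter to grow support for the `|` operator.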
gharchive/issue
2024-01-01T19:19:08
2025-04-01T04:55:33.073466
{ "authors": [ "balewski", "jakelishman" ], "repo": "Qiskit/qiskit", "url": "https://github.com/Qiskit/qiskit/issues/11474", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2291380540
Avoid losing precision when scaling frequencies

Summary
Classes in pulse_instruction.py scale frequency values to GHz by multiplying a ParameterExpression with the float 1e9. This can lead to numerical errors on some systems due to symengine "rounding" errors. Instead, this scaling can be done by multiplying by the integer 10**9.

Details and comments
In this unit test https://github.com/Qiskit/qiskit/blob/235e581b1f76f29add0399989ed47f47a4e98bb8/test/python/qobj/test_pulse_converter.py#L343 the frequency string "f / 1000" gets converted to ParameterExpression(1000000.0*f) after ParameterExpression(f/1000) is multiplied by 1e9. For some unknown reason, when the symbol f is later substituted with the value 3.14 and the RealDouble is converted to float, an error is introduced that can't be fixed by https://github.com/Qiskit/qiskit/blob/235e581b1f76f29add0399989ed47f47a4e98bb8/qiskit/pulse/utils.py#L71-L74

This fixes: https://github.com/Qiskit/qiskit/issues/12359#issuecomment-2104426621
Upstream issue: https://github.com/symengine/symengine.py/issues/476

Pull Request Test Coverage Report for Build 9052405331
Details: 2 of 2 (100.0%) changed or added relevant lines in 1 file are covered. 5 unchanged lines in 2 files lost coverage. Overall coverage increased (+0.02%) to 89.65%

Files with Coverage Reduction (New Missed Lines, %):
- qiskit/transpiler/passes/synthesis/unitary_synthesis.py: 2, 88.2%
- crates/qasm2/src/lex.rs: 3, 92.62%

Totals: change from base Build 9037095581: 0.02%. Covered Lines: 62216. Relevant Lines: 69399
💛 - Coveralls

@Mergifyio backport stable/0.46
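The gist of the fix can be illustrated with exact rational arithmetic in the standard library (this is an analogy for the symengine behavior, not a reproduction of it): multiplying by the integer `10**9` keeps a symbolic-style expression exact, whereas the float literal `1e9` forces a lossy coercion to floating point.

```python
from fractions import Fraction

# Stand-in for ParameterExpression(f / 1000) with f = 3.14,
# kept exact as the rational 3.14/1000.
expr = Fraction(314, 100) / 1000

exact = expr * 10**9            # integer scaling: stays an exact rational
assert isinstance(exact, Fraction)
assert exact == Fraction(3_140_000)

lossy = expr * 1e9              # float scaling: result is coerced to float
assert isinstance(lossy, float)
```

The Fraction/float split here mirrors the PR's point: an integer factor lets the symbolic engine keep the expression exact until the final substitution, while a float factor bakes binary rounding into the stored expression.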
gharchive/pull-request
2024-05-12T15:22:10
2025-04-01T04:55:33.084900
{ "authors": [ "1ucian0", "coveralls", "iyanmv" ], "repo": "Qiskit/qiskit", "url": "https://github.com/Qiskit/qiskit/pull/12392", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
582478554
Add Qiskit Pulse Benchmarks

Summary
Partially fixes #794.

Schedule construction benchmarks, with the following parameters:
- [x] number of unique pulses
- [x] number of channels
- [x] ~number of uses of each pulse~

Options — pulse types:
- [x] Parametric
- [x] Sample

Building methods:
- [x] Append instruction
- [x] Insert instruction right to left
- [x] Convert the instruction to a schedule and then take the union of all schedules.

Details and comments
I understand the pulse api is in flux and currently being refactored. But what is concerning me here is that the main value of asv is showing performance over time as we make changes. If/when we need to update the benchmark code to adapt to deprecation removals in the future, that will invalidate the old results, not only in spirit (by changing the measurement) — the benchmarks are all actually implicitly versioned with the hash of the code, so changing the code treats the benchmark as a new version that can't be compared to the old one. The question I have is: while we probably should update the api here to be the non-deprecated version so we can measure consistent results moving forward, is this new api going to stay around for longer than 3 months? Rebuilding the historical data set every time there is an api change won't scale (back-filling old data takes a very long time).

The core content of the tests will remain relevant over time; I imagine this revision of the pulse API should stay current for > 3 months even as we move to the new builder interface, which will sit on top of the IR these tests use.

Ok, sounds good — then let's update the API usage in the benchmarks here to avoid the deprecation warnings and then this is good to go from my perspective.
Lint error:
```
************* Module test.benchmarks.pulse.schedule_to_instruction_conversion
test/benchmarks/pulse/schedule_to_instruction_conversion.py:20:0: E0611: No name 'Play' in module 'qiskit.pulse' (no-name-in-module)
************* Module test.benchmarks.pulse.schedule_construction
test/benchmarks/pulse/schedule_construction.py:19:0: E0611: No name 'Play' in module 'qiskit.pulse' (no-name-in-module)
```
Will be fixed after the Qiskit Terra release.
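For readers unfamiliar with asv's benchmark shape, the parameterized benchmarks above follow its convention of a class with `params`/`param_names` and `time_*` methods, where asv runs every combination of the parameters. This standalone sketch uses plain lists as stand-ins for real `qiskit.pulse` schedules, so the class and method names are illustrative only:

```python
class ScheduleConstructionBench:
    # asv benchmarks every cross-product of these parameter lists:
    # (number of unique pulses) x (number of channels)
    params = ([8, 64], [2, 8])
    param_names = ["unique_pulses", "channels"]

    def setup(self, unique_pulses, channels):
        # Stand-in "instructions": one (pulse, channel) pair per combination.
        self.instructions = [
            (p, c) for p in range(unique_pulses) for c in range(channels)
        ]

    def time_append_instruction(self, unique_pulses, channels):
        # The timed body: build the schedule by appending one instruction
        # at a time (mirrors the "Append instruction" building method).
        sched = []
        for inst in self.instructions:
            sched.append(inst)
        return sched

b = ScheduleConstructionBench()
b.setup(8, 2)
assert len(b.time_append_instruction(8, 2)) == 16
```

Because asv hashes the benchmark code, as the discussion above notes, any edit to such a class starts a fresh result series — which is exactly why the API-stability question mattered here.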
gharchive/pull-request
2020-03-16T17:30:01
2025-04-01T04:55:33.090941
{ "authors": [ "SooluThomas", "mtreinish", "taalexander" ], "repo": "Qiskit/qiskit", "url": "https://github.com/Qiskit/qiskit/pull/848", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1037261920
Develop a public roadmap for a11y improvements

Summary
There is an existing GH project that Isabela, Tony, et al. have put together over time to track issues and work related to a11y on Jupyterlab. There is, however, no canonical roadmap to refer to and to track work and goals against.

Acceptance Criteria
- [x] The roadmap is published and shared with the broader community
- [ ] Write a blogpost to announce the creation of the roadmap

Tasks to complete
- [ ] #33
- [x] #34
- [x] #35
- [ ] #36

@tonyfast will help Isabela to get this into Jupyter a11y

This is in jupyter/accessibility as #64, but I haven't merged it for reasons I listed on this PR. Many of these come from the fact that I don't know how the docs are (partially) set up on this repo. From the PR:
- [ ] Create a file to list completed roadmap items (also good so people can double check that changes are working)
- [ ] Check if I need to update the docs index (or if this is grouped in fine with the meeting notes)
- [ ] Make sure the intro has proper context for this repository. Maybe link to the proposal or other issues on this repo that are relevant.
2021-10-27T10:43:37
2025-04-01T04:55:33.118537
{ "authors": [ "isabela-pf", "trallard" ], "repo": "Quansight-Labs/jupyter-a11y-mgmt", "url": "https://github.com/Quansight-Labs/jupyter-a11y-mgmt/issues/2", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
926426308
Docs not built properly, not showing all pages?

It seems that not all the pages are visible on qhub.dev, for example the following:
https://github.com/Quansight/qhub/blob/main/docs/source/08_integrations/
https://github.com/Quansight/qhub/blob/main/docs/source/04_how_to_guides/7_qhub_gpu.md

As @costrouc said: there are latest and stable versions of our docs. We recently changed the default to stable. Did you visit https://docs.qhub.dev/en/latest/?
gharchive/issue
2021-06-21T17:09:54
2025-04-01T04:55:33.120891
{ "authors": [ "aktech" ], "repo": "Quansight/qhub", "url": "https://github.com/Quansight/qhub/issues/676", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2074223220
Decouple prepare from chat WIP I'm ok with this as an initial implementation when my comments are addressed and CI is green. What I dislike here is that Corpus is a second-class citizen. It is more like a dataclass, and one can only interact with it through the Chat. However, an admin in the "managed Ragna" use case which we are targeting here only wants to interact with the Corpus and not the Chat. Meaning, they need to have an assistant present and meet its requirements before they can even prepare a Corpus. In the presence of the demo assistant that is not terribly bad, but it is not sound design. I'm wondering if the source storage should be a component of the Corpus and the assistant a component of the Chat. That way, the Corpus can stand on its own, e.g. can be prepared, without the need to ever create a Chat.
gharchive/pull-request
2024-01-10T12:00:46
2025-04-01T04:55:33.123649
{ "authors": [ "nenb", "pmeier" ], "repo": "Quansight/ragna", "url": "https://github.com/Quansight/ragna/pull/263", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1590777929
Difference between GeneralizedLinearRegressor and GeneralizedLinearRegressorCV for finding best alpha Hi there, I am currently using GeneralizedLinearRegressorCV for L1-penalized regression for variable selection purposes. Currently, I am leaving everything as default, which means there will be 100 alphas tried on 5 CV folds, resulting in the GLM being fit 500 times. This process is extremely slow for my dataset and I want to improve it. I notice GeneralizedLinearRegressor also allows specifying a list of alphas to try. I am wondering if I can use that as an alternative for better run time. Also, what are your criteria for finding the best alpha for GeneralizedLinearRegressor? Thanks!
gharchive/issue
2023-02-19T17:21:32
2025-04-01T04:55:33.127157
{ "authors": [ "lbittarello", "miaow27" ], "repo": "Quantco/glum", "url": "https://github.com/Quantco/glum/issues/608", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1875945499
Refresh rules button in proof panel This button should reload the rules in the toolbar in the proof panel. Completed in 73da7b8ebbd3eaf193cc76af572c036a5f3bdec5
gharchive/issue
2023-08-31T17:21:19
2025-04-01T04:55:33.127956
{ "authors": [ "RazinShaikh" ], "repo": "Quantomatic/zxlive", "url": "https://github.com/Quantomatic/zxlive/issues/98", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1068945648
Add type annotations Since you added docstrings, how about you add type annotations https://towardsdatascience.com/type-annotations-in-python-d90990b172dc https://dev.to/dstarner/using-pythons-type-annotations-4cfe https://github.com/ninjamar/pyaw/blob/main/pyaw/core.py#L19 https://stackoverflow.com/questions/38727520/how-do-i-add-default-parameters-to-functions-when-using-type-hinting Seems good! I'll add this in the future! Thanks Just... replit doesn't recognize "NoReturn" lol wdym? noreturn isn't defined. i think you mean None wdym? noreturn isn't defined. i think you mean None https://stackoverflow.com/a/48038206 Here. If I make a function: ```py def func(): #return is never used... print("okay...") It never returns anything in whatever case. So NoReturn should be used according to the answer wdym? noreturn isn't defined. i think you mean None https://stackoverflow.com/a/48038206 Here. If I make a function: def func(): #return is never used... print("okay...") It never returns anything in whatever case. So NoReturn should be used according to the answer from typing import NoReturn I https://pypi.org/project/typing/ For package maintainers, it is preferred to use typing;python_version<"3.5" I don't really wanna do that In https://pypi.org/project/typing/ For package maintainers, it is preferred to use typing;python_version<"3.5" I don't really wanna do that then don't support versions less than 3.5. python 3.5 is oldish In https://pypi.org/project/typing/ For package maintainers, it is preferred to use typing;python_version<"3.5" I don't really wanna do that then don't support versions less than 3.5. python 3.5 is oldish Oh, so my package will be fine if everyone uses python > 3.6? In https://pypi.org/project/typing/ For package maintainers, it is preferred to use typing;python_version<"3.5" I don't really wanna do that then don't support versions less than 3.5. python 3.5 is oldish Oh, so my package will be fine if everyone uses python > 3.6? Thanks! 
3.6+ will work. i think your code only needs requests which needs 3.6. typing needs 3.5. i personally just choose 3.7 when i don't know because it isn't too old but it isn't super new. I'd just use None lol I agree Ok, so I tried. https://github.com/Quantum-Codes/ScraGet/blob/0d9c0b3436826a7b2732df58cfc2972767872c3d/ScraGet/user.py#L42 In v0.1.7 I added it. But it does nothing... Ok, so I tried. https://github.com/Quantum-Codes/ScraGet/blob/0d9c0b3436826a7b2732df58cfc2972767872c3d/ScraGet/user.py#L42 In v0.1.7 I added it. But it does nothing... I just requested for user 100 like this user = ScraGet.get_user() user.updateScratch(100) #in int format No error was thrown :/ Unlike docstrings, replit also didn't say that str is supposed to be used. type hints. it doesn't change how to code is run but it provides help to a user using ScraGet . Ok, so I tried. https://github.com/Quantum-Codes/ScraGet/blob/0d9c0b3436826a7b2732df58cfc2972767872c3d/ScraGet/user.py#L42 In v0.1.7 I added it. But it does nothing... I just requested for user 100 like this user = ScraGet.get_user() user.updateScratch(100) #in int format No error was thrown :/ Unlike docstrings, replit also didn't say that str is supposed to be used. type hints. it doesn't change how to code is run but it provides help to a user using ScraGet. Can you give an example? I expected a popup like docstr Can you give an example? I expected a popup like docstr Ide's like replit show the docstring and the function definition (def updateScratch(self, user: str) -> None:) so when the user writes code, they know what type of argument to pass in. This also helps with documentation generators. https://stavshamir.github.io/python/the-other-benefit-of-python-type-annotations/ Oh thanks!!! I get it! I saw it in the pop-up too lol! Well, the union operator doesn't work. def hello(text : int | str) #unsupported operator | for type and type print(text) .``` Couldn't find info on google not sure Maybe an issue with annotations? 
Anyways I completed this except for that.. Sure https://stackoverflow.com/questions/33945261/how-to-specify-multiple-return-types-using-type-hints You mean replit isn't using py 3.10? replit python doesn't use 3.10 but u can use a nix replit and in the dependencies put python3.10
gharchive/issue
2021-12-01T22:49:45
2025-04-01T04:55:33.147197
{ "authors": [ "Quantum-Codes", "ninjamar" ], "repo": "Quantum-Codes/ScraGet", "url": "https://github.com/Quantum-Codes/ScraGet/issues/14", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
634043554
support expectation

An expectation semantic can be useful in compiling differentiable programs:

```julia
function vqe(theta)
    1 => Rx(theta)
    2 => Rx(theta)
    @expect 1:3 <op>
end
```

The @expect should be semantically equivalent to multiple @measure statements.

Suggested by @GiggleLiu: a better way to support this is via a primitive expect, which is equivalent to

```julia
ob = 0
for _ in 1:nshots
    ob += @measure 1:3 op
end
ob / nshots
```

in concept, but will be handled specially under different contexts.
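Conceptually, the proposed lowering of `expect` to repeated measurement is just a sample mean over shots. A language-neutral sketch (Python here so it runs standalone) with a deterministic fake `measure` — a real backend would sample eigenvalues from the quantum state:

```python
def expect(measure, nshots):
    """Estimate <op> by averaging nshots measurement outcomes."""
    ob = 0.0
    for _ in range(nshots):
        ob += measure()   # one shot's eigenvalue, e.g. +/-1 for a Pauli operator
    return ob / nshots

# Hypothetical recorded shot outcomes for illustration.
shots = iter([1, -1, 1, 1])
assert expect(lambda: next(shots), 4) == 0.5
```

The special handling mentioned above matters because in a simulator or a differentiable context the compiler can replace this sampling loop with an exact expectation value instead of running shots.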
gharchive/issue
2020-06-08T03:06:47
2025-04-01T04:55:33.149608
{ "authors": [ "Roger-luo" ], "repo": "QuantumBFS/YaoLang.jl", "url": "https://github.com/QuantumBFS/YaoLang.jl/issues/29", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2397627363
aarch64: userspace fails to parse vdso through auxv

Test code (note: should be GPL, but I forgot where the code was from): https://paste.sr.ht/blob/2bd3b5a16686b99fe4790d298d90d6a38c90f3a8

On native aarch64 (kernel 5.15):
```
$ ./test
VDSO @ 0xffff871d8000
__kernel_rt_sigreturn @ (nil)
__kernel_clock_getres @ 0xffff871d8760
__kernel_clock_gettime @ 0xffff871d8720
__kernel_gettimeofday @ 0xffff871d8740
```
With Quark (`docker run --rm --runtime=quark_d test`):
```
VDSO @ 0xa000001000
__kernel_rt_sigreturn @ (nil)
__kernel_clock_getres @ (nil)
__kernel_clock_gettime @ (nil)
__kernel_gettimeofday @ (nil)
```
(note: I didn't include @chl337 's vdso patch as it has nothing to do with userspace and auxv)

Interesting, I will give it a look later.

I can repro a similar issue on x86.

For x86, are you using arch-specific vdso symbol names? i.e. instead of
__kernel_rt_sigreturn
__kernel_gettimeofday
__kernel_clock_gettime
__kernel_clock_getres
use
__vdso_clock_gettime
__vdso_getcpu
__vdso_gettimeofday
__vdso_time

Yes. I did the change.

Looks like our vdso binary build has an issue. I tried to use https://github.com/enarx/vdso to parse our vdso.so and got a wrong result.

For x86, we need to change the test code to vdso_sym("LINUX_2.6", "__vdso_clock_gettime") instead of vdso_sym("LINUX_2.6.39", "__vdso_clock_gettime"). Then it will work.
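The arch split discussed in this thread — `__kernel_*` symbol names (and the `LINUX_2.6.39` version node) on aarch64 versus `__vdso_*` names (and `LINUX_2.6`) on x86-64 — can be captured in a small lookup table; a test program could select the right `vdso_sym` arguments per architecture roughly like this (sketch only; the real test is C):

```python
VDSO_SYMBOLS = {
    "aarch64": ("LINUX_2.6.39", ["__kernel_clock_gettime", "__kernel_gettimeofday",
                                 "__kernel_clock_getres", "__kernel_rt_sigreturn"]),
    "x86_64":  ("LINUX_2.6",    ["__vdso_clock_gettime", "__vdso_gettimeofday",
                                 "__vdso_clock_getres", "__vdso_time"]),
}

def symbols_for(machine):
    """(version, name) pairs to feed to vdso_sym() for this architecture."""
    version, names = VDSO_SYMBOLS[machine]
    return [(version, n) for n in names]

assert ("LINUX_2.6", "__vdso_clock_gettime") in symbols_for("x86_64")
assert ("LINUX_2.6.39", "__kernel_clock_gettime") in symbols_for("aarch64")
```

Baking the table into the test avoids exactly the mismatch hit above, where aarch64 symbol/version names were used to probe an x86-64 vdso.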
gharchive/issue
2024-07-09T09:16:52
2025-04-01T04:55:33.160728
{ "authors": [ "QuarkContainer", "chl337", "shrik3" ], "repo": "QuarkContainer/Quark", "url": "https://github.com/QuarkContainer/Quark/issues/1324", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
849848329
Environment setup script fails if SDK is checked out using git If git protocol is used to checkout the repository the hard-coded remote URL's fail to detect this and environment setup fails. I have this fixed locally with another nested if. Will submit a PR to fix shortly. Reviewed the way we were using the envsetup.sh, and now ensure that the script can be sourced from anywhere, but will always work relative to qorc-sdk dir, hence removing the need for these checks, making it more robust. Please open a new issue if you see problems with the latest master.
gharchive/issue
2021-04-04T10:48:16
2025-04-01T04:55:33.220590
{ "authors": [ "coolbreeze413", "whatnick" ], "repo": "QuickLogic-Corp/qorc-sdk", "url": "https://github.com/QuickLogic-Corp/qorc-sdk/issues/114", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1612039673
Better serialization support (KorGE) As discussed in #86 it seems to be problematic that the Fleks entity does not have the @Serialization annotation which makes it more complicated on user side to serialize/deserialize the snapshots of a world. The commit of #86 is part of a new branch: 2.3-korge-serialization Things that we need to consider: is kotlinx-serialization the standard to serialize/deserialize stuff? Or are there other libraries out there that people might use. What is LibGDX using? I know that I was able to simply convert the snapshot to a JSON string in LibGDX and store that in a preference (MysticGarden project) if we add kotlinx-serialization as dependency, how much bigger does the library become? Currently it is very small because it has no dependencies at all besides standard kotlin. I'd like to keep it that way and ideally don't require the strong dependency at all. Can we replace the entity value class with a typealias? What downsides does that have? Does normal Int then also get the entity extensions? If so, I don't like that solution because it polutes standard classes. Entity as an interface: what impact does that have to performance? Can we "generalize" the Fleks world and allow the user to override the used entity class and entity service? Per default it will use the current implementation. Maybe something like: val world = world { entityService = useDefault<UserCustomEntity>() // <-- default entity service with custom entity class // OR entityService = UserEntityService() // <-- custom service by the user with either his own entity class or the Fleks entity } // any custom entity class must provide an 'id' property to still work with the component index // any custom entity service must provide the current entity service functions like create/remove/... not sure how easy that is and if it is even possible because entity configuration and therefore also component configuration is happening in the current entity service. 
Not sure how a user could still provide this functionality in his own service. Maybe don't allow the complete service "replacement". Just allow the ID creation/removal and keep all the other things like component assignment, etc. in the Fleks service. So just a part of the current Fleks entity service can be customized by users. However, this approach might be good for some other issues that were discussed lately where people needed their own way to create/assign IDs. is kotlinx-serialization the standard to serialize/deserialize stuff? kotlinx.serialization is the standard for serializing in Kotlin multiplatform. It works for jvm, js/web and native targets. Downside is that it needs @Serializable annotation on each non-primitive object which should be serialized. So that the Kotlin compiler plugin can automatically create code needed for serialization of the objects. It is possible to write KSerializers for objects from 3rd party libs where it is not possible to add the annotation, but that requires some boilerplate code in your game serialization code. Thus, it would be good if we could either add the annotation in Fleks, reduce Entity to a primitive/interface type or decouple its creation from Fleks so that the Entity object can be annotated outside of Fleks. I guess these are the options we have. Also it depends on what your target is. If you use Fleks only on jvm than there might be better (easier) serialization libs. BTW I like the idea of "Just allow the ID creation/removal" outside of Fleks. I think it's worth looking into this. :) is kotlinx-serialization the standard to serialize/deserialize stuff? It is, if this library focuses on the kotlin ecosystem then it should be used. Even if it is jvm only, it can still be desirable. This is definitely the easy solution for now and I think it is what @jobe-m is doing.I think the library size also dies not grow by a lot (12kb or so?). 
But still, I want to look into the option that users can provide their own way of creating/removing and recycling an ID as mentioned in the opening post. Unfortunately I had no time so far but I still have it in the back of my head and won't forget it!

Unrelated, but I think we should also rename the world method. I already got a few comments where people got confused by it. They tried to call it with a capital W and got an error since that is not allowed. The World constructor is internal. I remember that IntelliJ did not suggest the constructor for me and that's why I thought it is a good enough solution, but it seems like others have a starting problem with it. Maybe something like newWorld or World.of or createWorld is better.

I just tried a quick & dirty implementation to allow users to override the way entities are created/removed and this is the minimum interface I could come up with:

interface EntityProvider {
    var nextId: Int

    fun create(): Entity

    fun prepareNextEntity(id: Int)

    fun remove(entity: Entity)

    fun isRemoved(entity: Entity): Boolean

    fun numEntities(): Int

    fun contains(entity: Entity): Boolean

    fun forEach(action: (Entity) -> Unit)

    fun reset()
}

So basically, someone needs to provide a way to create/remove entities and also a special way to create a specific entity the next time create is called (prepareNextEntity). This is needed for the loadSnapshot functions.

That's the default implementation which Fleks is currently using:

class DefaultEntityProvider(initialEntityCapacity: Int) : EntityProvider {
    /**
     * The id that will be given to a newly created [entity][Entity] if there are no [recycledEntities].
     */
    override var nextId = 0

    /**
     * Separate BitArray to remember if an [entity][Entity] was already removed.
     * This is faster than looking up the [recycledEntities].
     */
    @PublishedApi
    internal val removedEntities = BitArray(initialEntityCapacity)

    /**
     * The already removed [entities][Entity] which can be reused whenever a new entity is needed.
     */
    @PublishedApi
    internal val recycledEntities = ArrayDeque<Entity>()

    override fun numEntities(): Int = nextId - recycledEntities.size

    override fun create(): Entity {
        return if (recycledEntities.isEmpty()) {
            Entity(nextId++)
        } else {
            val recycled = recycledEntities.removeLast()
            removedEntities.clear(recycled.id)
            recycled
        }
    }

    override fun prepareNextEntity(id: Int) {
        recycledEntities.remove(Entity(id))
        recycledEntities.addLast(Entity(id))
    }

    override fun remove(entity: Entity) {
        removedEntities.set(entity.id)
        recycledEntities.add(entity)
    }

    override fun isRemoved(entity: Entity): Boolean = removedEntities[entity.id]

    override fun forEach(action: (Entity) -> Unit) {
        for (id in 0 until nextId) {
            val entity = Entity(id)
            if (removedEntities[entity.id]) {
                continue
            }
            action(entity)
        }
    }

    override fun reset() {
        nextId = 0
        recycledEntities.clear()
        removedEntities.clearAll()
    }

    override fun contains(entity: Entity): Boolean {
        return entity.id in 0 until nextId && !removedEntities[entity.id]
    }
}

I am not 100% convinced that this is the right way to go because I am not sure if a user knows all the details and information to provide a proper implementation for all methods. What do you guys think?

Sorry, I will respond a bit later since I am currently occupied with offtopic stuff ... But I want to test it out for sure! :)

From a quick read-through I saw that the Entity class is still part of Fleks. It is used as a type in the interface above. Do you plan to decouple the entity type from Fleks as well? At least that would be needed if we want to annotate it outside of Fleks for serialization. But I am also fine if it stays inside of Fleks and we have the annotation for serialization there. :)

Half offtopic remark.
One pretty big advantage of kotlinx-serialization is that it is compatible with graalvm native images (no reflections, no setup required). That is still desirable for a video game imo, as native images can drastically improve startup times (up to 10-20x). Your players want to wait as little as possible, and it gives you that with less effort. It can also possibly result in free console support, if there is a build of graalvm for the playstation for example. (yes I know, stuff like LWJGL still needs to be ported, but you get the gist of it)

Thanks for the info. Kotlinx serialization was already added now in #92 . So this part of the issue is resolved. Now I only want to look if there is a good solution to decouple ID (=entity) creation and allow a custom implementation by users.

Q: If Entity is the only serialized class and is simply an ID container, how would the state of its components be serialized?

I guess you could also phrase it as: how does world-state get serialized & replicated?

The actual serialization of the components of an entity is happening outside of Fleks.
It is done by serializing a "snapshot" of the Fleks ECS world. You can see how this works in the unit test of korge-fleks here:
https://github.com/korlibs/korge-fleks/blob/9c693b626aecafa74f9effa0946366b2ab3b2d7a/korge-fleks/src/commonTest/kotlin/com/soywiz/korgeFleks/components/CommonTestEnv.kt#L26

A little bit of magic is still needed around the serialization which is implemented here in korge-fleks:
https://github.com/korlibs/korge-fleks/blob/main/korge-fleks/src/commonMain/kotlin/korlibs/korge/fleks/utils/SnapshotSerializer.kt

I know that the @Serializable tag has been added to the Entity class (🥳) and that I'm late to the party, but I figured I'd mention my solution here in case kotlinx serialization gets yoinked from Fleks. I solved this without marking Entity as serializable by using a contextual serializer for the entity class instead, which serializes an entity as an Int - similar to what @jobe-m mentioned with the type alias.

object EntitySerializer : KSerializer<Entity> {
    override val descriptor: SerialDescriptor = PrimitiveSerialDescriptor("Entity", PrimitiveKind.INT)

    override fun serialize(encoder: Encoder, value: Entity) = encoder.encodeInt(value.id)

    override fun deserialize(decoder: Decoder): Entity = Entity(decoder.decodeInt())
}

Which one would add to their serializer module;

private val format = Json {
    serializersModule = SerializersModule {
        contextual(EntitySerializer)
    }
}

Which works perfectly, the only downside is that all instances of Entity in a serializable component need to be marked with @Contextual;

@Serializable
@SerialName("foo")
data class FooComponent(
    val otherEntities: MutableList<@Contextual Entity>
) : Component<Foo>, SerialisableComponent {
    //...
}

Thanks for sharing your solution as well! I try to finalize the new version of Fleks within the next two weeks.
I was super busy the last weeks with moving to a new apartment with my family, and my little son also demands more time than I had expected :D I don't have that much spare time anymore but I will try my best to deliver the new version soon!

I personally find it weird that it isn't so easy - at least it seems - to serialize/deserialize an entity. In the end it should be just a plain int. At least that is promised by Kotlin's value classes. I only have experience with LibGDX's serialization, which is basic JSON serialization/deserialization, and there I had no issues. I am not that familiar with kotlinx-serialization and KorGE, sorry. But I want to look into KorGE some more in the future and try to implement a simple game to see the differences to LibGDX and maybe I prefer it over LibGDX ;)

Let's keep it in because as discussed before it is the official way for kotlin multiplatform, it does not add a huge size to the bundled jar and it makes serialization life a little easier ;)

#93 is now merged into master. I will try to release version 2.4 in the upcoming weeks. This issue is resolved and we also added more flexibility with the new EntityProvider interface.

In my experience using an alternative would also be a clear downgrade. This is just so high quality.
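The thread above is all Kotlin, but the scheme it converges on — an entity serialized as nothing more than its integer id, with components stored separately in a world snapshot — is easy to sketch in a language-agnostic way. Below is a minimal illustration in Python; the Entity/snapshot names here are made up for the sketch and are not the Fleks API.

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    """Stand-in for an ECS entity: just an id, no state of its own."""
    id: int

def serialize_entity(entity: Entity) -> int:
    # Mirrors the contextual KSerializer above: the wire format is a bare int.
    return entity.id

def deserialize_entity(raw: int) -> Entity:
    return Entity(raw)

# Components are serialized separately, keyed by the entity id, so a snapshot
# round-trips without the Entity type itself needing to be "serializable".
snapshot = {
    "entities": [serialize_entity(e) for e in (Entity(0), Entity(3), Entity(7))],
    "components": {"3": {"Position": {"x": 1.0, "y": 2.0}}},
}
payload = json.dumps(snapshot)
restored = [deserialize_entity(i) for i in json.loads(payload)["entities"]]
assert restored == [Entity(0), Entity(3), Entity(7)]
```

The Kotlin EntitySerializer in the thread does exactly this encode/decode pair, just via kotlinx.serialization's Encoder/Decoder interfaces.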
gharchive/issue
2023-03-06T19:03:06
2025-04-01T04:55:33.247645
{ "authors": [ "Frontrider", "JonoAugustine", "Quillraven", "geist-2501", "jobe-m" ], "repo": "Quillraven/Fleks", "url": "https://github.com/Quillraven/Fleks/issues/87", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2172843378
The Qwen-7B-Chat model hangs on startup, even though flash-attention, layer_norm and rotary are all installed

是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?
[X] 我已经搜索过已有的issues和讨论 | I have searched the existing issues / discussions

该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?
[X] 我已经搜索过FAQ | I have searched FAQ

当前行为 | Current Behavior

==============================Langchain-Chatchat Configuration==============================
操作系统:Linux-3.10.0-1160.108.1.el7.x86_64-x86_64-with-glibc2.17.
python版本:3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
项目版本:v0.2.10
langchain版本:0.0.354. fastchat版本:0.2.35
当前使用的分词器:ChineseRecursiveTextSplitter
当前启动的LLM模型:['Qwen-7B-Chat'] @ cuda
{'device': 'cuda', 'gpus': '0,1', 'host': '0.0.0.0', 'infer_turbo': False, 'limit_worker_concurrency': 20, 'max_gpu_memory': '22GiB', 'model_path': '/home/chatglm3/chatglm3_model/Qwen-7B-Chat', 'model_path_exists': True, 'num_gpus': 2, 'port': 20002}
当前Embbedings模型: bge-large-zh-v1.5 @ cuda
==============================Langchain-Chatchat Configuration==============================

/home/user/anaconda3/envs/langchai-chat/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: 模型启动功能将于 Langchain-Chatchat 0.3.x重写,支持更多模式和加速启动,0.2.x中相关功能将废弃
warn_deprecated(
2024-03-07 10:35:54 | ERROR | stderr | INFO: Started server process [14373]
2024-03-07 10:35:54 | ERROR | stderr | INFO: Waiting for application startup.
2024-03-07 10:35:54 | ERROR | stderr | INFO: Application startup complete.
2024-03-07 10:35:54 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000/ (Press CTRL+C to quit)
/home/user/anaconda3/envs/langchai-chat/lib/python3.11/site-packages/transformers/utils/generic.py:260: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
/home/user/anaconda3/envs/langchai-chat/lib/python3.11/site-packages/transformers/utils/generic.py:260: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
2024-03-07 10:35:55 | INFO | model_worker | Loading the model ['Qwen-7B-Chat'] on worker ad7d6481 ...
2024-03-07 10:35:56 | WARNING | transformers_modules.Qwen-7B-Chat.modeling_qwen | Try importing flash-attention for faster inference...
2024-03-07 10:35:56 | WARNING | transformers_modules.Qwen-7B-Chat.modeling_qwen | Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary

期望行为 | Expected Behavior

Starts normally (正常启动).

复现方法 | Steps To Reproduce

Start Langchain-Chatchat with the qwen-7b-chat model.

运行环境 | Environment

- OS: CentOS Linux release 7.9.2009 (Core)
- Python: Python 3.11.7
- CUDA Version: 12.2

备注 | Anything else?

(The same configuration and startup log as shown under "Current Behavior" above.)

Closed in favor of https://github.com/QwenLM/Qwen/issues/1124
gharchive/issue
2024-03-07T03:03:21
2025-04-01T04:55:33.277027
{ "authors": [ "Andy1018", "jklj077" ], "repo": "QwenLM/Qwen", "url": "https://github.com/QwenLM/Qwen/issues/1123", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1929335351
Data format for continuous pre-training

Thanks for providing the SFT scripts, but I found that the data is going to be formatted in the instruction / chat finetune format. Details here:
https://github.com/QwenLM/Qwen/blob/main/finetune.py#L123

I would like to continuously pre-train the model for around 40b tokens before doing the final instruction finetune, and would like to ask which part of the data format code I should modify to make that possible? Thanks.

Our pretraining follows the implementation of conventional gpt training, which accepts inputs with an eod token as the indicator of the end of a document. You can consider concatenating different documents, but our finetuning script does not support this.

One more question: after adding the eod token, do we need to mask it out (attention mask's value = 0) to do continual pre-training?
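The maintainer's answer — concatenate documents with an eod token between them — can be sketched in a few lines. This is a hedged illustration of conventional GPT-style packing, not the official Qwen pipeline; the token ids and the eod id below are made up:

```python
def pack_documents(docs, eod_id, block_size):
    """Concatenate tokenized documents, separated by an end-of-document token,
    then chunk the stream into fixed-length pretraining samples."""
    stream = []
    for doc in docs:              # each doc is a list of token ids
        stream.extend(doc)
        stream.append(eod_id)     # eod marks the document boundary
    # drop the ragged tail so every sample has exactly block_size tokens
    n_blocks = len(stream) // block_size
    return [stream[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]

docs = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
samples = pack_documents(docs, eod_id=0, block_size=4)
# stream is [1,2,3,0,4,5,0,6,7,8,9,0] -> three samples of length 4
assert samples == [[1, 2, 3, 0], [4, 5, 0, 6], [7, 8, 9, 0]]
```

A sample can then span pieces of several documents, with the eod token telling the model where one document ends and the next begins.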
gharchive/issue
2023-10-06T02:31:41
2025-04-01T04:55:33.280105
{ "authors": [ "JustinLin610", "tiendung" ], "repo": "QwenLM/Qwen", "url": "https://github.com/QwenLM/Qwen/issues/394", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
922611661
Exclude IPs option This is a handy little app and it works well! Is it still maintained? If so, an IP exclusion configuration option would be extremely helpful so admin machines or vulnerability scanners don't trigger an alert. Hi there! Yes, still maintained when I have time. IP exclusions is the #1 thing I want add. Right now I'm working on changing up the format of the configuration file and re-writing the GUI portion so that it doesn't use any 3rd party libraries. I'm hoping to have a new release out in 1 - 2 months. Very cool! Looking forward to it and keep up the great work!
gharchive/issue
2021-06-16T13:26:09
2025-04-01T04:55:33.286251
{ "authors": [ "R-Smith", "oneoffdallas" ], "repo": "R-Smith/tcpTrigger", "url": "https://github.com/R-Smith/tcpTrigger/issues/5", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1834272191
Troops lost after committing secret mission

Describe the bug
After committing the secret mission "attack the lord", in which you can only bring hero units with you, all common troops disappear and only hero units are left.

Mod Version
Numeric Version: 1.2.7.6-1

Evidence
[ ] The mod is present in the callstack;
[ ] The issue only happens with the mod activated;
[x] You tested it with the least amount of other mods as possible.

Crash Report
not a crash issue

The "attack a lord" secret mission is part of the SueMoreSpouses mod and is not associated with bannerkings. Troops not reappearing has been a known issue with that mod for years, so I do not believe that it is caused by a conflict with bannerkings.
gharchive/issue
2023-08-03T04:57:14
2025-04-01T04:55:33.289095
{ "authors": [ "Fengzi2333", "TheCab121" ], "repo": "R-Vaccari/bannerlord-banner-kings", "url": "https://github.com/R-Vaccari/bannerlord-banner-kings/issues/153", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1135795357
Listen server host cannot save demos

Bug
Hosting a listen server seems to prevent the host from saving demos

Steps to reproduce

1. Make sure you have demos enabled (join a server, play a round, close game, check Titanfall2\r2\ if the demo is in there)

   log 1
   demo_enabledemos 1
   demo_autoRecord 1
   demo_autoRecordName demo
   demo_writeLocalFile 1

2. Host a listen server, play a round, close game, check whether the demo was saved

No demo was created from when hosting the server even though it should be there

Specifications

Northstar version: 1.5.0-rc3
Platform: Steam

Tested again to make sure and can confirm. Demo worked on joining a random server but not when hosting my own listen server.

As demos basically just record and play back network packets, and as a listen server doesn't talk over the network stack to its own client, there's nothing to record. Forcing it to use the network stack via net_usesocketsforloopback 1 does result in demos being recorded. However we do not want to set that by default as it also applies to e.g. singleplayer, which can cause players to lag in singleplayer.
gharchive/issue
2022-02-13T12:45:29
2025-04-01T04:55:33.311665
{ "authors": [ "GeckoEidechse" ], "repo": "R2Northstar/Northstar", "url": "https://github.com/R2Northstar/Northstar/issues/194", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1701182287
dll with Trojan:Win32/Wacatac.B!ml

I just downloaded the .zip:

Trojan:Win32/Wacatac.B!ml
containerfile: E:\R3nzSkin.zip
file: E:\R3nzSkin.zip->R3nzSkin.dll
webfile: E:\R3nzSkin.zip|https://objects.githubusercontent.com/github-production-release-asset-2e65be/410126695/4478fabe-d7db-4410-ab11-301ef61a8615?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230509%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230509T014132Z&X-Amz-Expires=300&X-Amz-Signature=52624c70ac46f7dd5668e039bc9a569fcfab9b6af984ec7f7cc97301f12ec3a4&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=410126695&response-content-disposition=attachment%3B%20filename%3DR3nzSkin.zip&response-content-type=application%2Foctet-stream|pid:14800,ProcessStart:133280701107189464

That is a false positive. The skin changer as well as the injector uses some Win32 APIs that aren't commonly used in normal programs. The project is open-source, so you are welcome to compile the skin changer on your own; otherwise disable your antivirus and add the folder with the binaries to the exclusion list.

So I don't have a fucking wacatac virus on my pc? This shit shook the hell out of me
gharchive/issue
2023-05-09T01:55:00
2025-04-01T04:55:33.317573
{ "authors": [ "hotline1337", "pequenahimeka", "tettad" ], "repo": "R3nzTheCodeGOD/R3nzSkin", "url": "https://github.com/R3nzTheCodeGOD/R3nzSkin/issues/496", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
272571575
Device type creation: Dynamic source registration

Cannot set the dynamic source registration when creating a new device type, either when creating or editing. There should be a checkbox to enable or disable dynamic source registration.

Correction: clicking the label 'Dynamic source registration' and then saving actually toggles the state, so there might be a display issue.

@dennyverbeeck I tested this on the latest dev and I do not see any problems. Clicking on the Dynamic source registration checkbox should change the state, and so should the display.
gharchive/issue
2017-11-09T13:53:12
2025-04-01T04:55:33.321514
{ "authors": [ "dennyverbeeck", "nivemaham" ], "repo": "RADAR-CNS/ManagementPortal", "url": "https://github.com/RADAR-CNS/ManagementPortal/issues/125", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
911737088
Can not use decoder in The Things Stack (v3)

If I try to use the decoder I get:

{
  "code": 3,
  "message": "error:pkg/applicationserver:field_value (invalid value of field `formatters.up_formatter_parameter`)",
  "details": [
    {
      "@type": "type.googleapis.com/ttn.lorawan.v3.ErrorDetails",
      "namespace": "pkg/applicationserver",
      "name": "field_value",
      "message_format": "invalid value of field `{field}`",
      "attributes": { "field": "formatters.up_formatter_parameter" },
      "correlation_id": "98a4150f65d04bff8c86a4765e384506",
      "cause": {
        "namespace": "pkg/applicationserver",
        "name": "formatter_script_too_large",
        "message_format": "formatter script size exceeds maximum allowed size",
        "attributes": { "max_size": 4096, "size": 4207 },
        "code": 3
      },
      "code": 3
    }
  ],
  "request_details": {
    "url": "/as/applications/iiie-trackers/devices/wistrio-tracker-1",
    "method": "put",
    "stack_component": "as"
  }
}

Running it through a minifier does the trick.

I did it with the next code:

function decodeUplink(input) {
    var hexString = bin2HexStr(input.bytes);
    return {
        data: { rakData: rakSensorDataDecode(hexString) }
    };
}

// convert array of bytes to hex string.
// e.g: 0188053797109D5900DC140802017A0768580673256D0267011D040214AF0371FFFFFFDDFC2E
function bin2HexStr(bytesArr) {
    var str = "";
    for (var i = 0; i < bytesArr.length; i++) {
        var tmp = (bytesArr[i] & 0xff).toString(16);
        if (tmp.length == 1) {
            tmp = "0" + tmp;
        }
        str += tmp;
    }
    return str;
}

// convert string to short integer
function parseShort(str, base) {
    var n = parseInt(str, base);
    return (n << 16) >> 16;
}

// convert string to triple bytes integer
function parseTriple(str, base) {
    var n = parseInt(str, base);
    return (n << 8) >> 8;
}

// decode Hex sensor string data to object
function rakSensorDataDecode(hexStr) {
    var str = hexStr;
    var myObj = {};
    while (str.length > 4) {
        var flag = parseInt(str.substring(0, 4), 16);
        switch (flag) {
            case 0x0768: // Humidity
                myObj.humidity = parseFloat(((parseShort(str.substring(4, 6), 16) * 0.01 / 2) * 100).toFixed(1)) + "%RH"; // unit: %RH
                str = str.substring(6);
                break;
            case 0x0673: // Atmospheric pressure
                myObj.barometer = parseFloat((parseShort(str.substring(4, 8), 16) * 0.1).toFixed(2)) + "hPa"; // unit: hPa
                str = str.substring(8);
                break;
            case 0x0267: // Temperature
                myObj.temperature = parseFloat((parseShort(str.substring(4, 8), 16) * 0.1).toFixed(2)) + "°C"; // unit: °C
                str = str.substring(8);
                break;
            case 0x0188: // GPS
                myObj.latitude = parseFloat((parseTriple(str.substring(4, 10), 16) * 0.0001).toFixed(4)) + "°"; // unit: °
                myObj.longitude = parseFloat((parseTriple(str.substring(10, 16), 16) * 0.0001).toFixed(4)) + "°"; // unit: °
                myObj.altitude = parseFloat((parseTriple(str.substring(16, 22), 16) * 0.01).toFixed(1)) + "m"; // unit: m
                str = str.substring(22);
                break;
            case 0x0371: // Triaxial acceleration
                myObj.acceleration_x = parseFloat((parseShort(str.substring(4, 8), 16) * 0.001).toFixed(3)) + "g"; // unit: g
                myObj.acceleration_y = parseFloat((parseShort(str.substring(8, 12), 16) * 0.001).toFixed(3)) + "g"; // unit: g
                myObj.acceleration_z = parseFloat((parseShort(str.substring(12, 16), 16) * 0.001).toFixed(3)) + "g"; // unit: g
                str = str.substring(16);
                break;
            case 0x0402: // air resistance
                myObj.gasResistance = parseFloat((parseShort(str.substring(4, 8), 16) * 0.01).toFixed(2)) + "KΩ"; // unit: KΩ
                str = str.substring(8);
                break;
            case 0x0802: // Battery Voltage
                myObj.battery = parseFloat((parseShort(str.substring(4, 8), 16) * 0.01).toFixed(2)) + "V"; // unit: V
                str = str.substring(8);
                break;
            case 0x0586: // gyroscope
                myObj.gyroscope_x = parseFloat((parseShort(str.substring(4, 8), 16) * 0.01).toFixed(2)) + "°/s"; // unit: °/s
                myObj.gyroscope_y = parseFloat((parseShort(str.substring(8, 12), 16) * 0.01).toFixed(2)) + "°/s"; // unit: °/s
                myObj.gyroscope_z = parseFloat((parseShort(str.substring(12, 16), 16) * 0.01).toFixed(2)) + "°/s"; // unit: °/s
                str = str.substring(16);
                break;
            case 0x0902: // magnetometer x
                myObj.magnetometer_x = parseFloat((parseShort(str.substring(4, 8), 16) * 0.01).toFixed(2)) + "μT"; // unit: μT
                str = str.substring(8);
                break;
            case 0x0a02: // magnetometer y
                myObj.magnetometer_y = parseFloat((parseShort(str.substring(4, 8), 16) * 0.01).toFixed(2)) + "μT"; // unit: μT
                str = str.substring(8);
                break;
            case 0x0b02: // magnetometer z
                myObj.magnetometer_z = parseFloat((parseShort(str.substring(4, 8), 16) * 0.01).toFixed(2)) + "μT"; // unit: μT
                str = str.substring(8);
                break;
            default:
                str = str.substring(7);
                break;
        }
    }
    return myObj;
}

but nothing.

This worked for me!

function decodeUplink(input) {
    var hexString = bin2HexStr(input.bytes);
    return {
        data: {
            bytes: input.bytes,
            payload: rakSensorDataDecode(hexString)
        },
        warnings: [],
        errors: []
    };
}

(bin2HexStr, parseShort, parseTriple and rakSensorDataDecode as above)

Forget it, it worked with the simulated uplink, but not with the real uplink, idk why :disappointed:
gharchive/issue
2021-06-04T18:17:50
2025-04-01T04:55:33.343117
{ "authors": [ "perezmeyer", "wero1414" ], "repo": "RAKWireless/RUI_LoRa_node_payload_decoder", "url": "https://github.com/RAKWireless/RUI_LoRa_node_payload_decoder/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2009063252
fix: Use example command: amounts

`amount` should be `amounts`, and the value should be `sequence`.

thanks for spotting this!
gharchive/pull-request
2023-11-24T04:08:01
2025-04-01T04:55:33.358727
{ "authors": [ "lshoo", "zoedberg" ], "repo": "RGB-Tools/rgb-lightning-node", "url": "https://github.com/RGB-Tools/rgb-lightning-node/pull/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
333878964
Review and improve tests

Many elements have bare minimum testing. Review each high-priority component and make sure they all have at least one test of substance. More than one for elements with sufficient complexity.

High-priority components:

[x] rh-card
[x] rh-icon
[x] rh-icon-panel
[x] rh-button
[x] rh-cta

(rh-tabs is also priority but is already well tested)

I also added a test to every element and to the generator that verifies the element was successfully upgraded.
gharchive/issue
2018-06-19T23:57:30
2025-04-01T04:55:33.364592
{ "authors": [ "mwcz" ], "repo": "RHElements/rhelements", "url": "https://github.com/RHElements/rhelements/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2245055520
🛑 Varicol is down In d1c0fc3, Varicol (https://varicol.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Varicol is back up in 9ada028 after 15 minutes.
gharchive/issue
2024-04-16T04:15:59
2025-04-01T04:55:33.432068
{ "authors": [ "RJVCA" ], "repo": "RJVCA/uptime", "url": "https://github.com/RJVCA/uptime/issues/1146", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1768662401
🛑 Activated Male is down In df88d13, Activated Male (http://activatedmale.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Activated Male is back up in fbfe795.
gharchive/issue
2023-06-21T23:23:12
2025-04-01T04:55:33.434496
{ "authors": [ "RJVCA" ], "repo": "RJVCA/uptime", "url": "https://github.com/RJVCA/uptime/issues/35", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1817094531
🛑 Cleanse Drops Kidney is down In 63de9f8, Cleanse Drops Kidney (http://cleansedropskidney.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Cleanse Drops Kidney is back up in c1b87a9.
gharchive/issue
2023-07-23T11:34:51
2025-04-01T04:55:33.436773
{ "authors": [ "RJVCA" ], "repo": "RJVCA/uptime", "url": "https://github.com/RJVCA/uptime/issues/99", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
165442833
Valve: Stop making gambling sites via Valve infrastructure http://store.steampowered.com/news/22883/ In-Game Item Trading Update [13 Jul] In 2011, we added a feature to Steam that enabled users to trade in-game items as a way to make it easier for people to get the items they wanted in games featuring in-game economies. Since then a number of gambling sites started leveraging the Steam trading system, and there’s been some false assumptions about our involvement with these sites. We’d like to clarify that we have no business relationships with any of these sites. We have never received any revenue from them. And Steam does not have a system for turning in-game items into real world currency. These sites have basically pieced together their operations in a two-part fashion. First, they are using the OpenID API as a way for users to prove ownership of their Steam accounts and items. Any other information they obtain about a user's Steam account is either manually disclosed by the user or obtained from the user’s Steam Community profile (when the user has chosen to make their profile public). Second, they create automated Steam accounts that make the same web calls as individual Steam users. Using the OpenID API and making the same web calls as Steam users to run a gambling business is not allowed by our API nor our user agreements. We are going to start sending notices to these sites requesting they cease operations through Steam, and further pursue the matter as necessary. Users should probably consider this information as they manage their in-game item inventory and trade activity. -Erik Johnson Clearly we need to add a patch that checks the hostname of the machine running the library and refuses to run if it detects anything gambling related. :) Haha. Well, let's all be careful. This opening shot against gambling sites is but just the first step Valve could take that could negatively impact all of us. 
It would be a shame if they end up just doing a blanket ban on all kinds of bots. Thing is, I could actually see them going that way. For the moment they still rely on bots to organize their open qualifiers (I think FaceIt uses @paralin 's library), however now that they're starting to experiment with the weekend tourneys, I could very well see them organizing these themselves in the future. Once They don't rely on them anymore, they could start banning them... Absolutely :( It would be a shame though, there's a lot of communities that are maintained by bots, lots of cool functionality as well. Not to mention quite a few streamers who use them to inform chat of MMR stats etc.. (I think FaceIt uses @paralin 's library) @Crazy-Duck May I ask which library it is? I'm looking at his github profile and he has a lot of repositories, is it protobuf.js? I'd love to take a look at the code. @Esssport https://github.com/paralin/dota2 @paralin Thank you very much.
gharchive/issue
2016-07-13T23:15:51
2025-04-01T04:55:33.442977
{ "authors": [ "Crazy-Duck", "Esssport", "howardchung", "jimmydorry", "paralin" ], "repo": "RJacksonm1/node-dota2", "url": "https://github.com/RJacksonm1/node-dota2/issues/308", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1948041921
🛑 CEOTR Home Page loads is down In c215362, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in b99c6f8 after 2 minutes.
gharchive/issue
2023-10-17T18:48:22
2025-04-01T04:55:33.445823
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/11415", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1953984193
🛑 CEOTR Home Page loads is down In 3d36241, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in a94bde2 after 2 minutes.
gharchive/issue
2023-10-20T10:19:55
2025-04-01T04:55:33.448179
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/12143", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1955982134
🛑 CEOTR Home Page loads is down In 8a398bc, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in dbdd9f6 after 2 minutes.
gharchive/issue
2023-10-22T17:18:12
2025-04-01T04:55:33.450518
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/12810", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1815324441
🛑 Sensor Tracker login is down In 20e7c15, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 502 Response time: 106 ms Resolved: Sensor tracker login is back up in b78cf89.
gharchive/issue
2023-07-21T07:41:27
2025-04-01T04:55:33.453090
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/1495", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1976154040
🛑 Sensor Tracker login is down In 2938219, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 0 Response time: 0 ms Resolved: Sensor tracker login is back up in a177749 after 1 minute.
gharchive/issue
2023-11-03T12:53:42
2025-04-01T04:55:33.455813
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/16035", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1994080313
🛑 Sensor Tracker login is down In 97db876, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 0 Response time: 0 ms Resolved: Sensor tracker login is back up in 8257665 after 1 minute.
gharchive/issue
2023-11-15T05:35:43
2025-04-01T04:55:33.458273
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/18971", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1997752993
🛑 Sensor Tracker login is down In 105666e, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 0 Response time: 0 ms Resolved: Sensor tracker login is back up in b664c70 after 1 minute.
gharchive/issue
2023-11-16T20:48:33
2025-04-01T04:55:33.460633
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/19365", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2000244690
🛑 Sensor Tracker login is down In 587ef55, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 0 Response time: 0 ms Resolved: Sensor tracker login is back up in 7a3134d after 1 minute.
gharchive/issue
2023-11-18T05:28:15
2025-04-01T04:55:33.462975
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/19674", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1898961168
🛑 CEOTR Home Page loads is down In fd79743, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 597d6e5 after 2 minutes.
gharchive/issue
2023-09-15T19:26:20
2025-04-01T04:55:33.465834
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/2380", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1903958568
🛑 CEOTR Home Page loads is down In 7e44d2a, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 32a64f3 after 2 minutes.
gharchive/issue
2023-09-20T02:08:12
2025-04-01T04:55:33.468147
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/3632", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1888177855
🛑 CEOTR Home Page loads is down In b6cab0f, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 06438ba after 2 minutes.
gharchive/issue
2023-09-08T19:00:43
2025-04-01T04:55:33.470430
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/370", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1906842768
🛑 CEOTR Home Page loads is down In 66888af, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 58fe500 after 13 days, 22 hours, 57 minutes.
gharchive/issue
2023-09-21T12:16:29
2025-04-01T04:55:33.472759
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/4035", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1916318594
🛑 Sensor Tracker login is down In 208f532, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 0 Response time: 0 ms Resolved: Sensor tracker login is back up in cec6e44 after .
gharchive/issue
2023-09-27T21:03:06
2025-04-01T04:55:33.475507
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/5835", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1806749861
🛑 Sensor Tracker login is down In ef73d24, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 502 Response time: 43 ms Resolved: Sensor tracker login is back up in 48d0ca7.
gharchive/issue
2023-07-16T23:07:43
2025-04-01T04:55:33.478048
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/596", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1847530452
🛑 CEOTR Home Page loads is down In 8ac776e, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in f2dbcb0.
gharchive/issue
2023-08-11T23:07:19
2025-04-01T04:55:33.480314
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/6477", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1852578643
🛑 CEOTR Home Page loads is down In 0d03007, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 4c07526.
gharchive/issue
2023-08-16T06:12:51
2025-04-01T04:55:33.482595
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/7496", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1854717780
🛑 Sensor Tracker login is down In dbb1ec4, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 502 Response time: 106 ms Resolved: Sensor tracker login is back up in 17e6621.
gharchive/issue
2023-08-17T10:24:17
2025-04-01T04:55:33.485232
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/7775", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2747705802
Reproducing Fig. 12 Hi, I am currently working on reproducing the results presented in Fig. 12 for the ICM method and have encountered some challenges. Specifically, according to Fig. 12, the ICM reward appears to converge to approximately 10+ after 1e7 steps. However, when running the notebook provided at https://github.com/RLE-Foundation/RLeXplore/blob/main/1 rlexplore_with_rllte.ipynb and setting the rewards to intrinsic rewards only (instead of the combined intrinsic and extrinsic rewards), I observed a reward of 30 at 5e6 steps. Specifically, I change 'self.storage.rewards += intrinsic_rewards.to(self.device)' to 'self.storage.rewards = intrinsic_rewards.to(self.device)' at https://github.com/RLE-Foundation/rllte/blob/eeefdedb2ceee3ae1abfe88896cae3b8b62b4c05/rllte/common/prototype/on_policy_agent.py#L168. This discrepancy has led me to question whether my understanding of Fig. 12 is correct. Could you kindly clarify the methodology or provide guidance on how to replicate the ICM results as depicted in Fig. 12? Thank you for your time and for making such valuable resources available to the community. I appreciate any insights or suggestions you may offer. Have a nice day. Share your email with me so I can send the exp code to u. Thanks a lot. annezhu1212@outlook.com Share your email with me so I can send the exp code to u. Thanks a lot. annezhu1212@outlook.com sent via email.
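The one-line change described above can be sketched in isolation. This is a minimal plain-Python illustration of the reward-combination step; the `intrinsic_only` flag is my own naming for the `+=` vs `=` edit, not something from the rllte source:

```python
def combine_rewards(extrinsic, intrinsic, intrinsic_only=False):
    """Mirror of the reward update in on_policy_agent.py.

    intrinsic_only=False corresponds to the stock
    `self.storage.rewards += intrinsic_rewards` (extrinsic + intrinsic);
    intrinsic_only=True corresponds to replacing `+=` with `=`,
    i.e. training on the intrinsic signal alone.
    """
    if intrinsic_only:
        return list(intrinsic)
    return [e + i for e, i in zip(extrinsic, intrinsic)]


ext = [1.0, 0.0, 2.0]
intr = [0.5, 0.5, 0.5]
print(combine_rewards(ext, intr))                       # [1.5, 0.5, 2.5]
print(combine_rewards(ext, intr, intrinsic_only=True))  # [0.5, 0.5, 0.5]
```

Note that with `intrinsic_only=True` the plotted curves measure only the curiosity signal, so the absolute scale depends on how the ICM reward is normalized — one plausible source of the 10 vs. 30 discrepancy.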
gharchive/issue
2024-12-18T12:37:03
2025-04-01T04:55:33.489662
{ "authors": [ "Acedorkz", "AnneZhu1020", "yuanmingqi" ], "repo": "RLE-Foundation/RLeXplore", "url": "https://github.com/RLE-Foundation/RLeXplore/issues/26", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
335397654
Add the list of ease types to the wiki It would be nice to add the list of ease types to the wiki; right now it's a bit difficult to explain them to people because they can only be found by searching the Discord server. As a reminder, according to the message on Discord, they are: Linear Quad Cubic Quart Quint Sine Expo Circ Back Elastic Bounce by adding :In, :Out or :InOut in front. Should be resolved by: [RME] Commands' documentation (#264) (I'll close this issue once the referenced one is done 😄)
gharchive/issue
2018-06-25T13:03:33
2025-04-01T04:55:33.492805
{ "authors": [ "acs-l", "aureliendossantos" ], "repo": "RMEx/RME", "url": "https://github.com/RMEx/RME/issues/398", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
474739803
IMU timestamps go backwards with dashing (1.0.2 OpenCR) - crashes cartographer ISSUE TEMPLATE ver. 0.2.0 Which TurtleBot3 you have? [ ] Waffle Pi Which SBC(Single Board Computer) is working on TurtleBot3? [ ] Raspberry Pi 3 Which OS you installed in SBC? [ ] Raspbian Which OS you installed in Remote PC? [ ] OS X Write down software version and firmware version Software version: [???] Firmware version: [1.0.2]. -- firmware from here: https://github.com/ROBOTIS-GIT/OpenCR-Binaries/blob/master/turtlebot3/ROS2/1.0.2/opencr_update_1.0.2.tar.bz2 This contains the workaround from https://github.com/eProsima/Micro-XRCE-DDS-Agent/issues/99 Write down the commands you used in order Raspberry pi: run MicroXRCE agent for serial & udp, run turtlebot_lidar remote PC: ros2 topic echo imu | tee imu_data_jul30.txt' TB3 waffle pi is sitting idle no movement and still timestamp (nanoseconds) is rolling backwards... Copy and Paste your error message on terminal There is no error message on terminal, but IMU data has timestamps going backwards which causes multiple crashes in cartographer Please, describe detailedly what difficulty you are in cartographer crashes all the time and is totally unusable due to hitting asserts in code because time is going backwards Example IMU timestamp going backwards: header: stamp: sec: 1564510494 nanosec: 375714706 <-- nanoseconds here frame_id: imu_link orientation: x: 0.013068560510873795 ... header: stamp: sec: 1564510494 nanosec: 15053893 <-- nanoseconds here is smaller than before... frame_id: imu_link orientation: x: 0.011846276931464672 ... This causes a number of asserts in cartographer to be hit and cannot complete mapping process. We opened new issue page to summary every issues related turtlebot3 ROS2 #460 I will comment about it later to check it. Hi, @oneattosecond This is probably due to network environment/performance and QoS settings. 
This is an issue that occurs when ROS2 uses DDS, and you can add/modify QoS by modifying the ~/tb3_sbc_settings/tb3_fastrtps_protile.refs file. My guess is that now KEEP_LAST is set to 10, so the previous data is coming up later. Therefore, please change this to 1 and test it. Please refer to the eProsima document below for QoS settings. About XML Representation on Micro XRCE-DDS Client XML profiles of Fast-RTPS @OpusK I compiled Micro-XRCE from source on top of stock Raspbian Buster, so I do not have tb3_fastrtps_protile.refs and cannot try your suggestion. What is the default QOS profile for Micro-XRCE DDS client? It would seem I am using the default, as I do not have tb3_fastrtps_protile.refs However, I do not think your suggestion will help. The timestamp is created on OpenCR side, in function publishImu() which calls: msg->header.stamp = ros2::now(); Unless MicroXRCE is changing the timestamp in msg->header.stamp, your suggestion cannot help, and the root cause is more likely the timestamp coming from ros2::now() is broken. Can you please investigate further? Regards ~/Micro-XRCE-DDS-Agent/test/agent.refs This file had a KEEP_LAST value set to 10, I changed it to 1, and still saw timestamps go backwards. @oneattosecond I compiled Micro-XRCE from source on top of stock Raspbian Buster, so I do not have tb3_fastrtps_protile.refs and cannot try your suggestion. It seems to me that you did not proceed with the installation according to the e-manual. Is that correct? Please refer to the e-manual installation instructions and try again. TB3 ros2 Yes, I followed the wiki quite strictly. However there are as you know, many steps. I followed all the steps in install.sh, Can you confirm you never see timestamps go backwards? I am just doing "ros2 topic echo imu" for 10 minutes and observing timestamp goes backwards. Do you plan to publish a Raspberry Pi .img for Dashing? That would help reduce possible errors. @oneattosecond Yes, I followed the wiki quite strictly. 
However there are as you know, many steps. I followed all the steps in install.sh, If you followed the instructions in the e-manual, the tb3_fastrtps_profile.refs file must exist. In e-manual, the following compressed file is downloaded. In this compressed file, the tb3_fastrtps_profile.refs file exists. tb3_sbc_settings.tar.bz2 Can you confirm you never see timestamps go backwards? By the time I tested, I haven't found any problems yet. Do you plan to publish a Raspberry Pi .img for Dashing? That would help reduce possible errors. As @routiful mentioned(#460), we are planning to apply other communication methods than XRCE. After this, we will consider it with the team. OK, I have deleted everything I did before and run install.sh, double check the wiki for any missing steps (none found). Then I modified the KEEP_ALIVE to 1 in tb3_fastrtps_profile.refs and am running all the software as expected via run.sh I double checked and my OpenCR is running 1.0.2 which has the workaround for eProsima issue. I am still seeing many timestamps go backwards over a period of 10 minutes... I wrote a quick script to verify, and the timestamp went backwards (nanoseconds_cur < nanosec_prev within same second) 30 times in 10 minutes. Can you please investigate? I am suspecting the below code is giving wrong timestamp: msg->header.stamp = ros2::now(); @oneattosecond I am still seeing many timestamps go backwards over a period of 10 minutes... I wrote a quick script to verify, and the timestamp went backwards (nanoseconds_cur < nanosec_prev within same second) 30 times in 10 minutes. Can you please investigate? I am suspecting the below code is giving wrong timestamp: msg->header.stamp = ros2::now(); I've never seen this, but I'll check the code anyway Note, as shown in #460, we are trying to apply a different communication method to TB3. In that case, TB3 will not use ros2arduino. So, after this development, I'll check the issue with ros2::now(). 
We have now released TB3 ROS 2 Dashing! Please refer to the e-manual. The IMU timestamp issues will be solved by downloading the new firmware.
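The "quick script to verify" mentioned above isn't shown in the thread. A stand-alone sketch of such a check over the textual output of `ros2 topic echo imu` could look like this (the `sec:`/`nanosec:` field names follow the echoed `std_msgs/Header`; the rest is my own illustration, not code from the repository):

```python
import re

def count_backward_stamps(echo_text):
    """Count adjacent header stamps where time decreases, given the
    textual output of `ros2 topic echo imu` (e.g. the
    imu_data_jul30.txt capture described above)."""
    stamps = []
    sec = None
    for raw in echo_text.splitlines():
        line = raw.strip()
        m = re.match(r"sec:\s*(\d+)$", line)
        if m:
            sec = int(m.group(1))
            continue
        m = re.match(r"nanosec:\s*(\d+)$", line)
        if m and sec is not None:
            # Combine into a single nanosecond timestamp for comparison.
            stamps.append(sec * 1_000_000_000 + int(m.group(1)))
            sec = None
    return sum(1 for a, b in zip(stamps, stamps[1:]) if b < a)


sample = """
header:
  stamp:
    sec: 1564510494
    nanosec: 375714706
header:
  stamp:
    sec: 1564510494
    nanosec: 15053893
"""
print(count_backward_stamps(sample))  # 1
```

Running it over a ten-minute capture gives a regression count comparable to the "30 times in 10 minutes" figure reported above.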
gharchive/issue
2019-07-30T18:27:33
2025-04-01T04:55:33.529579
{ "authors": [ "OpusK", "oneattosecond", "routiful" ], "repo": "ROBOTIS-GIT/turtlebot3", "url": "https://github.com/ROBOTIS-GIT/turtlebot3/issues/458", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2099626845
Unable to detect GPUs on MI100 GPU
Problem Description
On our HPC cluster, we are using pytorch/1.13.0 which was installed with rocm/5.2.3. After loading those modules I am unable to detect the AMD GPU:
[ nvidia_amd_benchmarking]$ python3
Python 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
False
When running rocminfo I get:
[ nvidia_amd_benchmarking]$ rocminfo
ROCk module is loaded
Unable to open /dev/kfd read-write: Permission denied
Failed to get user name to check for video group membership
When running the test from here I get the following:
Checking ROCM support... BAD: No ROCM devices found.
Checking PyTorch... GOOD: PyTorch is working fine.
Checking user groups... Cannot find rocminfo command information.
Unable to determine if AMDGPU drivers with ROCM support were installed.
rocm-smi shows:
======================= ROCm System Management Interface =======================
================================= Concise Info =================================
GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
0    33.0c  35.0W   300Mhz  1200Mhz  0%   auto  290.0W  0%     0%
================================================================================
============================= End of ROCm SMI Log ==============================
Operating System: Red Hat Enterprise Linux 8.4 (Ootpa)
CPU: AMD EPYC 7543 32-Core Processor
GPU: AMD Instinct MI100
ROCm Version: ROCm 5.5.0
ROCm Component: rocminfo
Steps to Reproduce: No response
(Optional for Linux users) Output of /opt/rocm/bin/rocminfo --support: No response
Additional Information: No response
ROCM version is rocm/5.2.3. It did not show when looking at the drop-down menu. An internal ticket has been created to investigate the issue. Hi, could you check your user privileges? 
Here I have just tried rocminfo for a new user, for example:
$ sudo useradd -g users test
$ id test
uid=1001(test) gid=100(users) groups=100(users)
$ rocminfo
ROCk module is loaded
Unable to open /dev/kfd read-write: Permission denied
rocm is a member of the render group:
$ id rocm
uid=1000(rocm) gid=1000(rocm) groups=1000(rocm),4(adm),24(cdrom),27(sudo),30(dip),44(video),46(plugdev),110(render),122(lpadmin),135(lxd),136(sambashare)
Can you also try to run the commands with sudo? Thanks. So, I do not have user privileges on the cluster; I have reached out to our admins on the cluster regarding the rocminfo permission-denied issue. Will get back to you guys. The issue has been fixed! I got removed from the video group without realizing it. Once they put me back in the video group everything worked seamlessly.
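Since the resolution came down to group membership, a small self-check along those lines can be sketched as follows. The `video`/`render` group names match the ROCm install documentation; the helper itself is only an illustration (note that checking `gr_mem` misses a user's primary group, so treat a "missing" report as a hint, not proof):

```python
def missing_groups(member_of, required=("video", "render")):
    """Given the set of group names a user belongs to, return the
    ROCm device-access groups (for /dev/kfd and /dev/dri) that are
    missing."""
    return [g for g in required if g not in set(member_of)]


if __name__ == "__main__":
    import getpass
    import grp  # Unix-only

    user = getpass.getuser()
    # Supplementary group membership only; primary group is not listed here.
    member_of = {g.gr_name for g in grp.getgrall() if user in g.gr_mem}
    print(missing_groups(member_of) or "all required groups present")
```

In the case above, `missing_groups` would have flagged `video` after the user was silently dropped from it.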
gharchive/issue
2024-01-25T05:59:00
2025-04-01T04:55:33.538162
{ "authors": [ "kf-cuanschutz", "nartmada", "vstempen" ], "repo": "ROCm/ROCm", "url": "https://github.com/ROCm/ROCm/issues/2838", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1427352944
Fbar, the kernel's barriers limitation Hi, I would like to get more details about the fbar information reported by rocprof. What does it mean? How can it be interpreted? Is it related to: Max fbarriers/Workgrp: 32, which is given by rocminfo? Thanks @etiennemlb Apologies for the lack of response. Do you still need assistance with this ticket? If not, please close the ticket. Thanks! Yes, I would like more/extensive documentation on the counters the AMD GPUs provide. Do you have a document I could rely on? Thanks! Hi @etiennemlb, thanks for your patience. Here is a link to the documentation for rocprofv1. However, note that fbar is a deprecated metric from rocprofv1. Moreover, rocprofv1 is no longer under development and there is a new rocprofv3 released in ROCm 6.2 as a beta, which is built on top of the new rocprofiler-sdk. I strongly suggest using rocprofv3 since it is very close to having feature parity, has a lower overhead than v1 and v2, and is significantly better tested. Here is a link to the documentation for rocprofv3 (see the "Using rocprofv3" section). You can also refer to this link for information on relevant performance counters and metrics, or try "rocprofv3 --list-metrics" for a list of basic HW counters on your system. Please let me know if you need assistance with anything else! Great! And if you encounter any issues with rocprofv3, you can create another ticket under the rocprofiler-sdk repository.
gharchive/issue
2022-10-28T14:31:40
2025-04-01T04:55:33.550163
{ "authors": [ "etiennemlb", "ppanchad-amd", "sohaibnd" ], "repo": "ROCm/rocprofiler", "url": "https://github.com/ROCm/rocprofiler/issues/99", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1251220956
Conv Driver fails with default vector_length parameter Conv Driver throws an error with #1532 PR. Example:
./bin/MIOpenDriver convfp16 -c 1 -H 700 -W 700 -y 5 -x 20 -k 32 -n 2 -p 0 -q 0 -u 1 -v 1 -l 1 -j 1 -F 1 -V 1 -i 1 -t 1
MIOpenDriver convfp16 -c 1 -H 700 -W 700 -y 5 -x 20 -k 32 -n 2 -p 0 -q 0 -u 1 -v 1 -l 1 -j 1 -F 1 -V 1 -i 1 -t 1
MIOpen(HIP): Info [get_device_name] Raw device name: gfx1030
MIOpen(HIP): Info [Handle] stream: 0x12a6060, device_id: 0
Invalid Tensor Vectorization Parameter Value - vector_dim:0vector_length:1
It seems the error is in the conv_driver, and the check should be (vector_length == 1 && vector_dim == 0): https://github.com/ROCmSoftwarePlatform/MIOpen/blob/93d1476463d281ff731e8aea394c44b09cdc1da7/driver/conv_driver.hpp#L575-L588 Thanks @Slimakanzer for reporting the problem, I made a stupid mistake. Thanks @carlushuang for implementing the hot fix.
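The fix amounts to treating the scalar default as valid. A restatement in Python (the driver itself is C++, and the set of accepted vector lengths beyond 1 here is an assumption for illustration — the key point is only that the default `vector_dim=0, vector_length=1` must pass the guard):

```python
def vectorization_error(vector_dim, vector_length, allowed_lengths=(1, 4, 8)):
    """Return an error message for an invalid (vector_dim, vector_length)
    pair, or None when the pair is acceptable.

    The buggy guard rejected the default (0, 1); per the issue, the
    condition should treat (vector_length == 1 && vector_dim == 0)
    as valid.
    """
    if vector_dim == 0 and vector_length in allowed_lengths:
        return None
    return ("Invalid Tensor Vectorization Parameter Value - "
            f"vector_dim:{vector_dim} vector_length:{vector_length}")


print(vectorization_error(0, 1))  # None
```

With this shape of check, the default driver invocation above no longer errors out.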
gharchive/issue
2022-05-27T20:05:47
2025-04-01T04:55:33.552446
{ "authors": [ "Slimakanzer", "aska-0096" ], "repo": "ROCmSoftwarePlatform/MIOpen", "url": "https://github.com/ROCmSoftwarePlatform/MIOpen/issues/1564", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1363202302
add support for Evolution X tiramisu [skip ci] Signed-off-by: NRanjan-17 nalinishranjan05@gmail.com @Apon77 Merge Please
gharchive/pull-request
2022-09-06T12:22:20
2025-04-01T04:55:33.566760
{ "authors": [ "NRanjan-17" ], "repo": "ROM-builders/temporary", "url": "https://github.com/ROM-builders/temporary/pull/13123", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
159172619
Motor with enable control
Motors on the new MotoZero board require 3 pins to control them: a forward pin, a backward pin and an enable pin. They can be controlled with GPIO Zero by using Motor for the forward and backward pins, and OutputDevice for the enable pin like so:
>>> from gpiozero import Motor, OutputDevice
>>> motor = Motor(24, 27)
>>> motor_enable = OutputDevice(5)
>>> motor.forward()  # nothing happens
>>> motor_enable.on()  # the motor goes forward
>>> motor_enable.off()  # the motor stops
>>> motor.backward()  # nothing happens
>>> motor_enable.on()  # the motor goes backward
>>> motor.value = 0.5  # the motor goes forward at half speed
>>> motor.stop()  # the motor stops
My instinct is to want to remove the enabling step, and just let Motor turn on the enable pin on init, and never change its state, allowing forward(), backward(), stop() and value to be used to control the state of the motor. However, Keith has vocalised his views that the enable pin is particularly useful:
Nice benefit of using enable pin is that it can be PWMed to control the speed, independent of dir.
and
Enable pin 0 = motor off, Enable pin 1 = Motor full speed. Enable pin PWM 0.5 = Motor 1/2 speed
But I don't think this makes it any easier than gpiozero already does:
>>> motor = Motor(17, 18)
>>> motor.forward()
>>> motor.value = 0.5
>>> motor.backward()
>>> motor.value = -0.5
>>> motor.reverse()
I propose we add an optional third pin to Motor and set it high on init. This will allow it to easily fit into Motor and Robot. Any thoughts?
Yes, for compatibility, adding a third optional pin makes perfect sense. My experience was a while ago with RPi.GPIO when things were not as easy as they are now with gpiozero. However, it would be good to be able to set the enable pin low to enable a debug mode, i.e. set enable low and allow code to run without having to detach power to the motors.
It's much easier to do PWM in gpiozero than in RPi.GPIO. So yeah. 
From a bit of internet searching, it seems like you can control the L293D by either treating the IN1 and IN2 pins as digital, and then PWM the speed on ENABLE1; or you can treat the ENABLE1 pin as digital and then PWM on the IN1 and IN2 pins. For microcontrollers with limited PWM pins it's obviously better to only PWM on ENABLE1, but I couldn't find any definitive references saying whether one approach was better than the other or not. So perhaps either approach is equally valid? (In contrast, the datasheet for the L298N explicitly recommends only PWMing the ENABLE pin)
Ah I see, the (Digital_forwards, Digital_backwards, PWM_speed) scheme makes sense now. In which case, I'd rather stick with one implementation than two. Either we opt for #367, and PWM is done on the forward or backward pin, or we opt for having a different motor class for these types of motors, which are controlled via the enable pin. I prefer the former solution, but if there's enough support for doing it the latter way, we might have to.
From the perspective of an L293 or equivalent, I don't think it matters how you PWM it; it's just a case of implementation. Since gpiozero does the implementation, we do not need to care any more, so the solution in #367 works well.
I was not aware the L298 should be PWMed on the enable pin. Our PiWars 2014 entry used an L298 PWMed on the drive pins, and it seemed to work well. After reviewing the data sheet today, the only reason I can see for having to PWM on the enable pin is the 'Fast Stop' feature. For 'Fast Stop' to be engaged, the enable pin must be high and the drive pins must be equal. Therefore, if the drive pins are being PWMed, on each cycle of the PWM the 'Fast Stop' feature will kick in. If PWM is on the enable pin, the motors will 'free spin' between cycles. In critical applications this may be important, but in terms of hobby electronics, I'm not sure how important it is. It might reduce the life cycle of the motor driver or the motor.
L298 data sheet attached, refer to page 6 for 'Fast Stop' feature. en.CD00000240.pdf @waveform80 what do you think? We also need to consider the four motor robot. Is that a trivial extension? Ahh, I was looking at a different datasheet previously - see https://github.com/RPi-Distro/python-gpiozero/issues/300#issuecomment-215999378 Although now I read it again, I see it refers to both L293 and L298 in the same category, so it seems I was wrong to assume they needed to be treated differently. *shrug*
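To make the proposal concrete without needing hardware, the behaviour under discussion can be modelled in plain Python. This is not gpiozero code — just a state model of a three-pin motor where the enable line is driven high on init and gates whatever direction/speed the drive pins set:

```python
class EnableMotorModel:
    """Model of the proposed Motor with an optional enable pin.

    value in [-1, 1] encodes direction and speed; the motor only
    actually turns while the enable line is high.
    """

    def __init__(self):
        self.enabled = True  # proposal: enable pin set high on init
        self.value = 0.0

    def forward(self, speed=1.0):
        self.value = speed

    def backward(self, speed=1.0):
        self.value = -speed

    def stop(self):
        self.value = 0.0

    @property
    def motion(self):
        """Effective motor output after gating by the enable line."""
        return self.value if self.enabled else 0.0


m = EnableMotorModel()
m.forward(0.5)
print(m.motion)    # 0.5
m.enabled = False  # Keith's "debug mode": direction set, motor stays off
print(m.motion)    # 0.0
```

This also captures the debug-mode use case above: setting enable low lets the rest of the code run with the motors physically idle.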
gharchive/issue
2016-06-08T14:07:45
2025-04-01T04:55:33.684036
{ "authors": [ "bennuttall", "keithellis74", "lurch" ], "repo": "RPi-Distro/python-gpiozero", "url": "https://github.com/RPi-Distro/python-gpiozero/issues/366", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1344938713
9GAG - Trending failed with error 403
Uncaught Exception HttpException: 403 Forbidden at lib/contents.php line 132
#0 index.php:35
#1 actions/DisplayAction.php:134
#2 bridges/NineGagBridge.php:144
#3 lib/contents.php:132
Query string: action=display&bridge=NineGagBridge&context=Popular&d=trending&video=none&p=5&format=Html
Version: git.master.372eccd
Os: Linux
PHP version: 7.4.3
I can reproduce this. It's cloudflare antibot.
It was working fine before. Is there any progress on this issue? @mbnoimi you would need to bypass the cloudflare checks somehow. I'm using the curl-impersonate project as per the dockerfile from this pull request: https://github.com/RSS-Bridge/rss-bridge/pull/2941 You can do it yourself by cloning wrobelda's fork, checking out the curlimpersonate branch and building the image. Example yaml below:
version: '2'
services:
  rss-bridge:
    container_name: rss-bridge
    # image: rssbridge/rss-bridge
    build: ./rss-bridge/
    volumes:
      - ./volumes/whitelist.txt:/app/whitelist.txt
      - ./volumes/config.ini.php:/app/config.ini.php
    ports:
      - 127.0.0.1:9009:80
    restart: always
gharchive/issue
2022-08-19T22:16:56
2025-04-01T04:55:33.717829
{ "authors": [ "dugite-code", "dvikan", "mbnoimi" ], "repo": "RSS-Bridge/rss-bridge", "url": "https://github.com/RSS-Bridge/rss-bridge/issues/2972", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
1143272521
fix nordbayern bridge This time: Add support for Nürnberger Nachrichten & Nürnberger Zeitung @theScrabi Can you fix? @theScrabi
gharchive/pull-request
2022-02-18T15:34:46
2025-04-01T04:55:33.718994
{ "authors": [ "dvikan", "theScrabi" ], "repo": "RSS-Bridge/rss-bridge", "url": "https://github.com/RSS-Bridge/rss-bridge/pull/2463", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
1386001110
[RedditBridge] Search for specific flairs Was looking for a way to get posts with a specific flair from a subreddit. The goal was to get an RSS feed for such an URL: https://www.reddit.com/r/selfhosted/?f=flair_name%3A"Proxy" I've been running it like this for 2 weeks now and works as expected. adds an optional input field for the "single" feed in the RedditBridge if that input field is filled it adds flair:"_INPUT_" to the search URL Reddits search then returns only posts with those flairs I don't think this pr supports search by multiple flairs. Is that an intended feature? A single flair works like this flair:"Proxy" But flair:"Proxy, Need Help" nor flair:"Proxy Need Help" nor flair:"Proxy" flair:"Need Help" works. The multi-reddit search in this bridge also just starts multiple CURLs, don't think reddits query language supports that unless I'm missing something. Why do you replace , with %20? No particular reason, was just a leftover from copy&pasting the lines above. This update appears to break existing feeds, any that don't have the new flair &f= in the feed URL are now broken (blank) Apologies, had no previous Reddit feeds on my server, that's why I didn't see the issue. Opened #3087 which fixes that.
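The flair term is just another token in reddit's search query, so the URL construction can be sketched as below. The exact endpoint the bridge hits is not copied from the PR; treat this as an illustration of the `flair:"..."` syntax and its URL encoding:

```python
from urllib.parse import urlencode

def reddit_search_url(subreddit, flair=None, sort="new"):
    """Build a subreddit search URL restricted to one flair.

    Reddit's query language matches a single flair per `flair:"..."`
    term; as noted above, combining several flairs in one query does
    not appear to work, which is why multi-feed setups issue one
    request per subreddit/flair instead.
    """
    params = {"restrict_sr": "on", "sort": sort}
    if flair:
        params["q"] = f'flair:"{flair}"'
    return f"https://www.reddit.com/r/{subreddit}/search.json?{urlencode(params)}"


print(reddit_search_url("selfhosted", "Proxy"))
# https://www.reddit.com/r/selfhosted/search.json?restrict_sr=on&sort=new&q=flair%3A%22Proxy%22
```

`urlencode` percent-encodes the `:` and `"` characters, which is the same encoding seen in the `flair_name%3A"Proxy"` style of URL from the issue description.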
gharchive/pull-request
2022-09-26T12:32:03
2025-04-01T04:55:33.724829
{ "authors": [ "dvikan", "joshinat0r", "kinoushe" ], "repo": "RSS-Bridge/rss-bridge", "url": "https://github.com/RSS-Bridge/rss-bridge/pull/3067", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
936300439
Bug fixes for sky130 scripts Fixed bugs with setting the correct path for the target files for generate_fill and check_density. Made syntax corrections for check_antenna. Added error catching to all files involving the .rcfiles and checking whether some files existed or not. Merged and pulled on opencircuitdesign.com; github mirror will update overnight.
gharchive/pull-request
2021-07-03T17:56:55
2025-04-01T04:55:33.761857
{ "authors": [ "RTimothyEdwards", "emayecs" ], "repo": "RTimothyEdwards/open_pdks", "url": "https://github.com/RTimothyEdwards/open_pdks/pull/135", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1955486469
Add the new FCPE F0 method
I can see from the official so-vits-svc repository that they are implementing a new F0 detection method that is designed for real-time voice conversion. Here are the official links to this method:
Official repository: https://github.com/CNChTu/FCPE
Pretrained model: https://huggingface.co/datasets/ylzz1997/rmvpe_pretrain_model/blob/main/fcpe.pt
It could be a nice addition to the actual RVC project; maybe it could support or replace the current RMVPE, which is already good.
This is in the plan. At present, it is found that the anti-noise ability of fcpe is still not ideal. We are adding data to train some better weights for testing.
Alright, that's good news. Keep up the hard work!
Are there any updates?
#1618 It's already implemented.
As I understand it, it has been implemented only for the real-time GUI. Is there a way to use it for regular audio file processing?
In 2024, between rmvpe and fcpe, which is the best f0 predictor for real-time inferring?
As I understand it, it has been implemented only for the real-time GUI. Is there a way to use it for regular audio file processing?
I would like to know too.
As I understand it, it has been implemented only for the real-time GUI. Is there a way to use it for regular audio file processing?
https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/pull/1934
gharchive/issue
2023-10-21T13:08:47
2025-04-01T04:55:33.773173
{ "authors": [ "GabryB03", "Kamikadashi", "Tps-F", "cheezebone", "rsxdalv", "ykk648", "yxlllc" ], "repo": "RVC-Project/Retrieval-based-Voice-Conversion-WebUI", "url": "https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/1457", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2352833165
Spontaneous error when separating vocal/accompaniment.
This error happens to me spontaneously both on Windows and Linux. On Windows I managed to do separation, training and inference before this error occurred. After it did, I wasn't able to fix it. I decided to switch to Linux - separation worked once but now I see the same result. Here is the error text:
audio.mp3.reformatted.wav->Traceback (most recent call last):
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/librosa/core/audio.py", line 176, in load
    y, sr_native = __soundfile_load(path, offset, duration, dtype)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/librosa/core/audio.py", line 209, in __soundfile_load
    context = sf.SoundFile(path)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/soundfile.py", line 658, in __init__
    self._file = self._open(file, mode_int, closefd)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/soundfile.py", line 1216, in _open
    raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening '/workspace/Retrieval-based-Voice-Conversion-WebUI/TEMP/audio.mp3.reformatted.wav': System error.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/infer/modules/uvr5/modules.py", line 74, in uvr
    pre_fun._path_audio_(
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/infer/modules/uvr5/vr.py", line 63, in _path_audio_
    ) = librosa.core.load(  # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/librosa/core/audio.py", line 184, in load
    y, sr_native = __audioread_load(path, offset, duration, dtype)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/librosa/util/decorators.py", line 59, in __wrapper
    return func(*args, **kwargs)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/librosa/core/audio.py", line 240, in __audioread_load
    reader = audioread.audio_open(path)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/audioread/__init__.py", line 127, in audio_open
    return BackendClass(path)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/audioread/rawread.py", line 59, in __init__
    self._fh = open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/Retrieval-based-Voice-Conversion-WebUI/TEMP/audio.mp3.reformatted.wav'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/librosa/core/audio.py", line 176, in load
    y, sr_native = __soundfile_load(path, offset, duration, dtype)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/librosa/core/audio.py", line 209, in __soundfile_load
    context = sf.SoundFile(path)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/soundfile.py", line 658, in __init__
    self._file = self._open(file, mode_int, closefd)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/soundfile.py", line 1216, in _open
    raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening '/workspace/Retrieval-based-Voice-Conversion-WebUI/TEMP/audio.mp3.reformatted.wav': System error.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/infer/modules/uvr5/modules.py", line 82, in uvr
    pre_fun._path_audio_(
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/infer/modules/uvr5/vr.py", line 63, in _path_audio_
    ) = librosa.core.load(  # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/librosa/core/audio.py", line 184, in load
    y, sr_native = __audioread_load(path, offset, duration, dtype)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/librosa/util/decorators.py", line 59, in __wrapper
    return func(*args, **kwargs)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/librosa/core/audio.py", line 240, in __audioread_load
    reader = audioread.audio_open(path)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/audioread/__init__.py", line 127, in audio_open
    return BackendClass(path)
  File "/workspace/Retrieval-based-Voice-Conversion-WebUI/venv/lib/python3.10/site-packages/audioread/rawread.py", line 59, in __init__
    self._fh = open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/Retrieval-based-Voice-Conversion-WebUI/TEMP/audio.mp3.reformatted.wav'
Reinstalled ffmpeg. Now it works, but only when I upload music as a file, not when I specify a folder path.
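The chained errors above bottom out in a missing /TEMP/audio.mp3.reformatted.wav, which is consistent with ffmpeg being absent, since RVC shells out to ffmpeg to produce that reformatted intermediate file. A small pre-flight check (a sketch, not part of RVC itself):

```python
import shutil

def ffmpeg_available():
    # True if an ffmpeg executable is on PATH; without it the
    # *.reformatted.wav intermediate file is never created, and librosa
    # falls through to the FileNotFoundError seen above.
    return shutil.which("ffmpeg") is not None

if not ffmpeg_available():
    print("ffmpeg not found on PATH; install it before running separation")
```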
gharchive/issue
2024-06-14T08:26:57
2025-04-01T04:55:33.776312
{ "authors": [ "PMykhailo" ], "repo": "RVC-Project/Retrieval-based-Voice-Conversion-WebUI", "url": "https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/2131", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
137381039
Persist task error messages Persist task error messages in the graph document. This makes it easier to debug failed workflows. Minor feature regression. @RackHD/corecommitters @VulpesArtificem test this please +1 +1
gharchive/pull-request
2016-02-29T21:24:58
2025-04-01T04:55:33.790140
{ "authors": [ "VulpesArtificem", "benbp", "jlongever" ], "repo": "RackHD/on-core", "url": "https://github.com/RackHD/on-core/pull/92", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
125562655
modify generate enclosure job to not override other relation type PR to 1.1.0 branch, the same as https://github.com/RackHD/on-tasks/pull/102 that has been merged into master. Without this PR, the enclosure node that doesn't have "encloses" relationType previously will not be updated to include information about compute nodes. This problem is only exposed with the fix of ODR-314 (https://github.com/RackHD/on-http/pull/88, https://github.com/RackHD/on-core/pull/62) about adding the deleting and updating enclosure/compute node. :+1: Can one of the admins verify this patch? :+1:
gharchive/pull-request
2016-01-08T08:03:29
2025-04-01T04:55:33.796889
{ "authors": [ "anhou", "heckj", "iceiilin", "jlongever" ], "repo": "RackHD/on-tasks", "url": "https://github.com/RackHD/on-tasks/pull/104", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1644750717
Adding linters for Markdown and RST files for documentation Goal: Try to have these for all ROCm repositories with documentation https://github.com/RadeonOpenCompute/ROCm/pull/2207
gharchive/issue
2023-03-28T22:38:08
2025-04-01T04:55:33.842896
{ "authors": [ "samjwu" ], "repo": "RadeonOpenCompute/ROCm", "url": "https://github.com/RadeonOpenCompute/ROCm/issues/1998", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
322929713
How do I make a toolkit
I'm wondering how I can start creating my own toolkit. I have seen many with .sh (bash). Can someone explain it to me?
It depends on the needs you have. What do you want to do?
gharchive/issue
2018-05-14T18:28:29
2025-04-01T04:55:33.848583
{ "authors": [ "RafaelAybar", "sqkli" ], "repo": "RafaelAybar/Bash-toolkit", "url": "https://github.com/RafaelAybar/Bash-toolkit/issues/10", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2137819490
Create SPECS_English.md Translated Portuguese to English using Google Docs. Please add this English version of docs. Thank you!!
gharchive/pull-request
2024-02-16T03:46:58
2025-04-01T04:55:33.854143
{ "authors": [ "richlysakowski" ], "repo": "RaffaeleFiorillo/Fast_and_Curious", "url": "https://github.com/RaffaeleFiorillo/Fast_and_Curious/pull/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
971839678
Leading kset to kset join operator fails to parse in DSL
Describe the bug
The following line of code:
m += ( raft::kset( g0, g1, g2 ) >> raft::kset( a, b, c ) ) >= ( j >> print );
should parse according to the grammar; indeed, there is a test case in the dev branch to ensure that this is the case. The syntax, however, fails to parse in the master branch. To fix, pull the test cases to the master branch and debug from there.
Steps To Reproduce
Run test case ksetContContJoin.cpp
Expected behavior
M producers in the first kset should be joined to M consumers in the second kset
Started new branch, will tag when I have some commits for this issue specifically.
gharchive/issue
2021-08-16T15:10:38
2025-04-01T04:55:33.859590
{ "authors": [ "jonathan-beard" ], "repo": "RaftLib/RaftLib", "url": "https://github.com/RaftLib/RaftLib/issues/158", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
63976156
Attachment problem on latest version v1.8.2.291
If I try to send an email with an attachment, the email is not sent:
Working fine on my end. Depending on your connection stability, sometimes it can fail the first time and then work, if you insist on it. Other than that, make sure the upload size limit set on the server is enough.
My upload size limit is 50 MB, but the attached file is only 500 KB. Furthermore, this error occurs only after the last update...
Try to enable and check the log file. I just tested sending a ~700KB file to myself and it worked fine.
Working fine for me... (17 MB test) Have you checked on your admin panel http://installation_URL/?admin if the settings are all right?
Yes, in the admin panel the settings are correct. There is no trace of the attachment operation in the log file, and if I save the message as a draft, the attachment is no longer present in the saved email:
I think the upload is not actually working, even though it seems to.
@glardone What is your web server? nginx?
My web server is apache.
gharchive/issue
2015-03-24T11:47:33
2025-04-01T04:55:33.909660
{ "authors": [ "RainLoop", "arturbonnett", "glardone", "megadr01d" ], "repo": "RainLoop/rainloop-webmail", "url": "https://github.com/RainLoop/rainloop-webmail/issues/550", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
150514190
PHP Parse Error - PHP 5.3.10 on Ubuntu 12.04.5 LTS
When running the scanner with a valid domain name, I get the following error:
php app.php -u http://www.domain-name.com -d
PHP Parse error: syntax error, unexpected '[' in /home/user/Wordpress-scanner/app.php on line 64
PHP Version: PHP 5.3.10-1ubuntu3.22
OS: Ubuntu 12.04.5 LTS
Disregard - source notes 5.4+ is required (the short array syntax [] that triggers this parse error was only added in PHP 5.4).
gharchive/issue
2016-04-23T03:33:18
2025-04-01T04:55:34.024004
{ "authors": [ "mrunyon" ], "repo": "RamadhanAmizudin/Wordpress-scanner", "url": "https://github.com/RamadhanAmizudin/Wordpress-scanner/issues/39", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2561558867
Restructure Project to Add Backend in Express (MVC Architecture)
Describe the feature
@RamakrushnaBiswal, The current project structure is solely focused on the frontend. To accommodate the development of backend routes as outlined in Issues #33 and #30, we need to restructure the project to include a backend layer built using Express.js, following a Model-View-Controller (MVC) architecture.
Proposed Changes:
Create a new directory: Create a new directory named backend at the project's root level.
Initialize Express.js: Within the backend directory, initialize a new Express.js application using npm init express or a similar command.
Implement MVC Structure:
Models: Create a models directory within backend to store JavaScript files representing data objects (e.g., User.js, Product.js). These models will interact with the database to retrieve and store data.
Controllers: Create a controllers directory within backend to house JavaScript files containing functions that handle incoming requests, process data using models, and render appropriate responses (e.g., UserController.js, ProductController.js).
Routes: Define routes in a routes directory within backend to map HTTP methods (GET, POST, PUT, DELETE) to specific controller functions.
Add ScreenShots Record
[X] I agree to follow this project's Code of Conduct
[X] I'm a gssoc-24-extd contributor
[X] I want to work on this issue
@samar12-rad assigned
solved via #45
gharchive/issue
2024-10-02T12:51:55
2025-04-01T04:55:34.028214
{ "authors": [ "RamakrushnaBiswal", "samar12-rad" ], "repo": "RamakrushnaBiswal/PlayCafe", "url": "https://github.com/RamakrushnaBiswal/PlayCafe/issues/35", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
852937190
Error when searching for old service orders (OS)
Hello, the database was updated to version 4.32.1, composer was run on a local installation of the same version, and the folder was copied to the hosting. After that, the problem appeared when switching the service order (O.S.) tabs. It only happens on that tab; financial, customers, products and services are fine.
@Triaca Did you update your database using the .sql files inside the updates folder, or from within the system?
Using the .sql files. I copied them one by one up to the current version.
@Triaca Try looking in the application/logs folder to see if there isn't some error log.
The problem continues. When I search for old service orders they don't show up; if I click next or on a page number, the orders simply don't appear. However, if I click search with a service order status, they appear, but it won't let me change the page and check the old ones.
@Triaca I'll investigate this problem over the weekend
gharchive/issue
2021-04-08T00:12:32
2025-04-01T04:55:34.036484
{ "authors": [ "Pr3d4dor", "Triaca" ], "repo": "RamonSilva20/mapos", "url": "https://github.com/RamonSilva20/mapos/issues/1339", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2114628930
Make "return to menu" button more visual As you can see the return to menu button is not visible in the game view: To address this: Consider adding an appropriate background color to enhance visibility. You can adjust its position for better visual prominence. Hey @Ramzi-Abidi, I am so sorry I didn't add a comment to ask to be assigned this issue, but I hope my pr is still acceptable. I will ensure to ask you beforehand in the future. PR to change styling for "return to home" button: https://github.com/Ramzi-Abidi/Pong/pull/79
gharchive/issue
2024-02-02T10:41:47
2025-04-01T04:55:34.040875
{ "authors": [ "Jawad-A02", "Ramzi-Abidi" ], "repo": "Ramzi-Abidi/Pong", "url": "https://github.com/Ramzi-Abidi/Pong/issues/52", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1097758825
Add Perspective library and Open CSV data file with Perspective Viewer option Perspective JS repo: https://github.com/finos/perspective Perspective viewer setup loading sample superstore.arrow data file: new Open With ... -> Perspective/Tabular Data Viewer option: closing this as all the required hooks are now added to table view and vscode configs. Will finish actual Perspective view implementation to match current Tabular table view in #79.
gharchive/issue
2022-01-10T11:03:57
2025-04-01T04:55:34.103396
{ "authors": [ "RandomFractals" ], "repo": "RandomFractals/tabular-data-viewer", "url": "https://github.com/RandomFractals/tabular-data-viewer/issues/65", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
464998856
Headers or URL Params Break Caching?
I can't get caching to work and the only thing I can think of is that the headers or URL parameters are somehow preventing this from working. I imagine this is probably user error of some sort, but I can't see what I'm doing wrong. Any help would be appreciated. I'm using the latest version (2.3.3) from npm and here is my code:
const setup = require("axios-cache-adapter").setup;

window.axios = setup({
  cache: {
    maxAge: 15 * 60 * 1000,
    //readHeaders: false,
    //ignoreCache: false,
  }
});

// Here is the actual ajax call:
return axios.get(routesUrl, {
  params: {
    status: $store.state.group.show_inactive_pairs_routes ? '' : 'active',
    groupId: !_.isEmpty(groupId) ? groupId : $store.state.group.nid
  }
})
I've tried it with and without the readHeaders/ignoreCache arguments in the setup arguments. It makes no difference and I cannot get caching to work at all. I'm using this code in a Vue.js application which has a button that triggers an Ajax call. I would expect to see a network request the first time I click the button, and no network call on the second and subsequent clicks. But, instead, no caching occurs. Ever. Here is what one of these API calls looks like:
Any ideas?
I ended up using this cache adapter instead (https://github.com/kuitos/axios-extensions) and it worked right away. I'm pretty sure there is either a bug here or maybe I was doing something wrong with the setup. But if anyone has a similar issue, at least you can try the other project, which worked for me without any issues.
Hey @DrLongGhost ! Sorry it took me so long to get back to you. Glad you found another solution that is working for you. Just in case someone has this issue: for now, by default axios-cache-adapter does not cache requests with query parameters. You can enable query parameter caching using the exclude.query = false option.
import { setup } from 'axios-cache-adapter'

const axios = setup({
  cache: {
    maxAge: 15 * 60 * 1000,
    exclude: {
      query: false
    }
  }
})
This will change in v3. Sorry for this not being very clear in the documentation. Cheers 🍻
Will close issue for now. If you have any extra feedback, please do give it 🚀
gharchive/issue
2019-07-07T22:14:46
2025-04-01T04:55:34.169678
{ "authors": [ "DrLongGhost", "RasCarlito" ], "repo": "RasCarlito/axios-cache-adapter", "url": "https://github.com/RasCarlito/axios-cache-adapter/issues/104", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
495183567
embedding_response_selector has not been installed with the installation procedures mentioned in the documentation
I have used the below command line to install the new version of rasa:
$ pip install rasa-x --extra-index-url https://pypi.rasa.com/simple
The embedding_response_selector has not been installed in rasa.nlu.selectors. To be specific, no selectors folder was found after installation.
@Sarvin92 rasa.nlu.selectors was added in Rasa 1.3.0. Since Rasa-X is currently not compatible with Rasa 1.3.0, pip install fetches Rasa 1.2.5. We are working on getting Rasa-X compatible with Rasa 1.3.0 and releasing it.
Thanks for the clarification @dakshvar22
The installation was achieved by building from source, using the following commands:
$ git clone https://github.com/RasaHQ/rasa.git
$ cd rasa
$ pip install -r requirements.txt
$ pip install -e .
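A quick way to confirm which situation you are in is to probe for the module path named above; a small Python sketch (everything beyond the module path is illustrative):

```python
import importlib.util

def has_response_selectors():
    # rasa.nlu.selectors was added in Rasa 1.3.0; on a Rasa 1.2.5 install
    # (what pip resolves while Rasa-X pins an older Rasa) this returns False.
    try:
        return importlib.util.find_spec("rasa.nlu.selectors") is not None
    except ModuleNotFoundError:
        # rasa (or one of its parent packages) is not installed at all
        return False

print("rasa.nlu.selectors available:", has_response_selectors())
```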
gharchive/issue
2019-09-18T12:06:39
2025-04-01T04:55:34.172589
{ "authors": [ "Sarvin92", "dakshvar22" ], "repo": "RasaHQ/rasa", "url": "https://github.com/RasaHQ/rasa/issues/4482", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
762154137
Don't allow finetuning core with previously unseen labels that were in the domain.
Rasa version: 2.2
Issue:
Steps to reproduce:
1. We have labels that are in the domain but not in any stories.
2. We run rasa train core
3. We add a story with one of the aforementioned labels.
4. We run rasa train core --finetune
Expected result: Exit gracefully.
Actual result: Allows the model to be finetuned.
@m-vdb I think this issue is very much a feature related issue and should stay in Enable's inbox?
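A minimal sketch of the guard this issue asks for: compare the labels present in the fine-tuning data against those the base model actually saw, and exit gracefully on unseen ones. The names here are illustrative, not Rasa's real API:

```python
def check_finetune_labels(seen_labels, finetune_labels):
    # Hypothetical guard: a label can sit in the domain without appearing in
    # any story, so domain membership alone is not enough; the base model
    # must have seen the label during the original training run.
    unseen = sorted(set(finetune_labels) - set(seen_labels))
    if unseen:
        raise ValueError(
            "Cannot finetune: labels never seen during the original "
            "training: %s" % ", ".join(unseen)
        )

check_finetune_labels(["greet", "goodbye"], ["greet"])        # fine
# check_finetune_labels(["greet"], ["greet", "new_label"])    # raises ValueError
```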
gharchive/issue
2020-12-11T09:42:01
2025-04-01T04:55:34.175405
{ "authors": [ "dakshvar22", "joejuzl" ], "repo": "RasaHQ/rasa", "url": "https://github.com/RasaHQ/rasa/issues/7519", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
350484120
User Message
Rasa Core version: 0.10.0
Python version: Python 3.6.6 :: Anaconda
Operating system (windows, osx, ...): Windows 8
Issue: Hey everyone! I want to save all the user messages in order to use them to train the Rasa NLU model again. Is there any solution for this problem?
Content of domain file (if used & relevant):
Thanks for creating this issue, @tmbo will get back to you about it soon.
I am not sure how you run your model, but if you are using the rasa_nlu.server script, there is an argument called --response_log that will dump your incoming queries and the predictions of the model to a file. Does that solve your issue?
@abirhajji do you need any more help or can we close this issue?
I solved it, thank you
gharchive/issue
2018-08-14T15:37:36
2025-04-01T04:55:34.179228
{ "authors": [ "abirhajji", "akelad", "tmbo" ], "repo": "RasaHQ/rasa_core", "url": "https://github.com/RasaHQ/rasa_core/issues/874", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
874850479
$ pip install https://s3-ap-southeast-1.amazonaws.com/posit-speedgo/tensorflow_posit-1.11.0.0.0.1.dev1-cp36-cp36m-linux_x86_64.whl
Not able to get this installed. Can you please specify the requirements of the machine and the versions of the software?
Which error exactly do you get at installation? Running the following commands in order gets to a successful installation on an Ubuntu 18.04 system:
pip install numpy-posit
pip install requests
pip install https://s3-ap-southeast-1.amazonaws.com/posit-speedgo/tensorflow_posit-1.11.0.0.0.1.dev1-cp36-cp36m-linux_x86_64.whl
pip install softposit
If you get an error with the TensorFlow version on your system, you can check with the different wheels that are found in https://github.com/xman/tensorflow/wiki
Which python and tensorflow version is suggested?
Python >= 3.6
A modified version of TensorFlow is installed with the wheel file and pip command mentioned in the issue title. If you have TensorFlow previously installed, I guess some problems will arise, since it is not a differently-named package such as numpy-posit. I suggest installing these packages in a virtual environment.
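Since the wheel above is a cp36 build and the thread settles on Python >= 3.6, a trivial sanity check before installing into a fresh virtual environment (purely illustrative):

```python
import sys

def python_version_ok(minimum=(3, 6)):
    # The tensorflow_posit cp36 wheel above targets Python >= 3.6.
    return sys.version_info[:2] >= minimum

print("Python %d.%d OK: %s" % (sys.version_info[0], sys.version_info[1],
                               python_version_ok()))
```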
gharchive/issue
2021-05-03T19:38:18
2025-04-01T04:55:34.184862
{ "authors": [ "Amithvassit", "RaulMurillo" ], "repo": "RaulMurillo/deep-pensieve", "url": "https://github.com/RaulMurillo/deep-pensieve/issues/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1602368287
[Other] Developer group
We have been receiving a lot of high-quality PRs lately, and since there is still no group for development discussion, I figured I'd create one (mainly so I can sneakily learn dotnet skills from you all 😶).
The previous WeChat group was positioned as a casual chat group; it has been banned over reports and migrated four or five times. Some bans were over explicit images, but most were over ideology...
This time I'm creating a developer group, mainly for talking about tech, dotnet, bili, cloud, and the like. Binary thinkers and people not interested in tech, please don't join (you can go to the chat group); friends interested in tech are welcome, especially those who have contributed code before.
Also, I've found that some resources can (or rather should) be shared with past contributors:
JetBrains free license
I previously used this open-source repository to apply for a free JetBrains license, which includes ReSharper and is quite useful for dotnet development; this really ought to be shared with contributors. But I think the license count may have been set to 1 last time; I'll keep an eye on it when renewing this year.
bili API docs
There is currently an API organization created with Apifox, used for debugging and development; this can also be shared so we can maintain and use it together.
Expired, could you send a new invite?
Expired, can you share it again?
Expired
Updated; developers only, non-developers will be asked to leave.
Can we have a TG group?
Expired again!
Expired, please update it.
The QR code has expired! Please update it.
How about a tg group?
Expired QAQ
Expired
Can it still be updated?
Expired, can you share it again?
Expired, can you share it again?
Expired, please share it again~
Expired
Could you post the QR code again?
Could you post the QR code again?
TG group: https://t.me/+TproSmcI8P44MzU1
Hope the author can update the docker image; either GitHub or Docker Hub works.
Please update the WeChat group too, thanks
gharchive/issue
2023-02-28T04:57:20
2025-04-01T04:55:34.210988
{ "authors": [ "2077872725", "KGLongWamg", "LeeHuas", "Leon19960120", "RayWangQvQ", "Taokyla", "WhiteCjy", "YYYet", "chenliu1993", "elricccc", "fenglindubu", "ichenc", "iven98", "jackview", "jiuchu", "rxxcy", "w32x42y", "xiaowugui-117", "xisuo67" ], "repo": "RayWangQvQ/BiliBiliToolPro", "url": "https://github.com/RayWangQvQ/BiliBiliToolPro/issues/469", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
981950840
Feature Request: Support for the Magic Items module.
The spells cast from the Magic Items module have their own popup where you can select the spell level, which affects how many item charges are used. It would be awesome if this module could update that dialog as well, potentially replacing the "spell slots remaining" number with the number of charges casting at that level will cost.
It looks like the dialog has a render event similar to the one you're currently hooking into (renderMagicItemUpcastDialog). If there's something else that needs changing on the Magic Items module side, let me know and I can go and add a feature request on that repo as well.
Hey dude, thanks for the feedback, I'll definitely work on that next weekend!
gharchive/issue
2021-08-28T23:35:20
2025-04-01T04:55:34.214739
{ "authors": [ "Mejari", "Rayuaz" ], "repo": "Rayuaz/spell-level-buttons-for-dnd5e", "url": "https://github.com/Rayuaz/spell-level-buttons-for-dnd5e/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
183231389
Errors using Java 8, Spigot 1.8.9.
[22:30:31] [Server thread/ERROR]: Could not pass event InventoryClickEvent to DecoHeads v1.3
org.bukkit.event.EventException
    at org.bukkit.plugin.java.JavaPluginLoader$1.execute(JavaPluginLoader.java:310) ~[spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:62) ~[spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at org.bukkit.plugin.SimplePluginManager.fireEvent(SimplePluginManager.java:502) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:487) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at net.minecraft.server.v1_8_R3.PlayerConnection.a(PlayerConnection.java:1630) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at net.minecraft.server.v1_8_R3.PacketPlayInWindowClick.a(SourceFile:31) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at net.minecraft.server.v1_8_R3.PacketPlayInWindowClick.a(SourceFile:9) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at net.minecraft.server.v1_8_R3.PlayerConnectionUtils$1.run(SourceFile:13) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_72]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_72]
    at net.minecraft.server.v1_8_R3.SystemUtils.a(SourceFile:44) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at net.minecraft.server.v1_8_R3.MinecraftServer.B(MinecraftServer.java:715) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at net.minecraft.server.v1_8_R3.DedicatedServer.B(DedicatedServer.java:374) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at net.minecraft.server.v1_8_R3.MinecraftServer.A(MinecraftServer.java:654) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at net.minecraft.server.v1_8_R3.MinecraftServer.run(MinecraftServer.java:557) [spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72]
Caused by: java.lang.NullPointerException
    at com.rayzr522.decoheads.DHListener.onInventoryClick(DHListener.java:39) ~[?:?]
    at sun.reflect.GeneratedMethodAccessor549.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_72]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_72]
    at org.bukkit.plugin.java.JavaPluginLoader$1.execute(JavaPluginLoader.java:306) ~[spigot-1.8.9.jar:git-Spigot-db6de12-07c3001]
    ... 15 more
Sorry, this is actually caused by something I didn't know about at the time of writing the plugin (originally). I assume this happened while you were in creative?
Yes. :(
Sorry for the inconvenience. I will fix it tomorrow and then give you a dev build. Bukkit is stupid sometimes, and at the time of writing the plugin I didn't know that. I'll fix it up for ya :D
Oh okay, thanks ! :D And when do you think the 1.4 will be released ? :D (yeah I'm so impatient xD) Thanks a lot ! :D
Not sure, tbh I haven't even touched it in a month or two. Now that I have some stuff to fix / feature requests from you I will start working on it again. So probably a week or two, I work pretty quick :wink:
Well I might not have the categories system done at that point... it depends, I may or may not be able to modify the GUI to allow that functionality. We'll see :grin:
Oh yeah, thanks ! :D Okay good luck ! :) Fantastic plugin, the best I saw.
Thank you :grin: People like you are what make developing worth it :smile:
:D. Any news ? :D
Sorry! I've been busy with my dev group, and the last few days I've been (shudder) cleaning out my hard drive. I used to have only 98GB of free space, now I have something like 240GB! Anyways, I should have more free time, I'll try and get to this in the next 7 days. Thanks for your patience :P
Okay, no problem. :dagger: Thanks.
You're welcome. :D
I've fixed this specific issue and localized the messages/text in the plugin into a config file.
I still need to tweak a couple things, but after that I'll release v1.3.1. It should be out on Monday. If you know how to use Maven, you can compile itself by simply cloning the project, going into the project directory with your terminal/command prompt, and running mvn clean package. Do keep in mind that it's not ready for release yet, and I will most likely change some things with the localization config. I'm not a huge fan of the current formatting. Also just one more note, I will be changing to use Java 8 after this release. I'm sorry if some people don't have Java 8, but it is ridiculous to ignore the huge benefit to me as a developer just for a few people, especially since nearly all server hosts are using Java 8 now and people with homebrew servers can upgrade in a matter of 15-Ish minutes. Big wall o' text for ya :P Hello, I don't know how to use Maven. :( I will wait for the release so. ;) I use Java 8 so this isn't a problem for me. :D Thanks ! :) Fixed in v1.3.1 Yep.
gharchive/issue
2016-10-15T20:31:58
2025-04-01T04:55:34.223123
{ "authors": [ "Rayzr522", "TheIntelloBox" ], "repo": "Rayzr522/DecoHeads", "url": "https://github.com/Rayzr522/DecoHeads/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
416101216
Sideboard guide Is it possible to add an a easier way to add sidebaord guide. Like when we edit the deck, we add a matchup deck name, and we can add the card we get in and out, using the same way we add card to sideboard for exemple, or an other way ? Thank when we go to the deck, it shows a sheet of the sideboard guide ? Thanks Hi!Thank you for your message. Sideboard is one of the main parts of playing events. But for now, i can not see how can we add some sideboard guides.
gharchive/issue
2019-03-01T13:23:34
2025-04-01T04:55:34.225069
{ "authors": [ "Aquaman88", "SebaFR7474" ], "repo": "Razviar/mtga-pro-tracker", "url": "https://github.com/Razviar/mtga-pro-tracker/issues/351", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2760730324
update n8n logo The n8n logo was redesigned in 2022, and this PR updates the corresponding graphic files to align with the new design. Thank you for your PR !
gharchive/pull-request
2024-12-27T10:51:43
2025-04-01T04:55:34.225898
{ "authors": [ "Rbillon59", "csuermann" ], "repo": "Rbillon59/hass-n8n", "url": "https://github.com/Rbillon59/hass-n8n/pull/141", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }