id | text | source | created | added | metadata
stringlengths 4 to 10 | stringlengths 4 to 2.14M | stringclasses 2 values | timestamp[s] date 2001-05-16 21:05:09 to 2025-01-01 03:38:30 | stringdate 2025-04-01 04:05:38 to 2025-04-01 07:14:06 | dict
---|---|---|---|---|---
176338229 | custommedia extension markup validation failed (W3C)
Hi
I am using the custom media extension with media="--small" and so on, like in the examples.
Unfortunately this markup is not valid HTML and is therefore flagged by the W3C Validator:
"Bad value --small for attribute media on element source: Expected ( or letter at start of a media query part but saw - instead."
"Stray end tag source."
Is this normal behavior, or am I doing something wrong? How could this be avoided?
Thanks,
Frank
You are doing everything right. Custom media queries may become a standard feature in the future, so the validator does not accept them yet. We basically polyfill this with lazySizes. It doesn't harm your page.
You can avoid this by using the data-media attribute instead:
<source
srcset="data:,pixel"
media="(max-width: 1px)"
data-srcset="your_srces"
data-media="--small" />
| gharchive/issue | 2016-09-12T10:03:24 | 2025-04-01T06:37:43.748666 | {
"authors": [
"LDSign",
"aFarkas"
],
"repo": "aFarkas/lazysizes",
"url": "https://github.com/aFarkas/lazysizes/issues/304",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1949196360 | Tests are not running
Closes #12
looks good
| gharchive/pull-request | 2023-10-18T08:41:44 | 2025-04-01T06:37:43.765009 | {
"authors": [
"aabarmin",
"naXa777"
],
"repo": "aabarmin/epam-microservices-training-2022",
"url": "https://github.com/aabarmin/epam-microservices-training-2022/pull/37",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2343612249 | Improve logging
Description of changes
[x] Add user id to the logs
[x] Separate db and normal logs
[x] Log when calling A+
[x] Document log format
[x] Improve logging documentation
[x] Remove unnecessary log files
Related issues
Closes #725
Closes #247
server/logs is not gitignored
Not necessary as log files won't be created. In the production environment the docker container forwards the cli logs automatically.
| gharchive/pull-request | 2024-06-10T11:06:20 | 2025-04-01T06:37:43.774925 | {
"authors": [
"bntti"
],
"repo": "aalto-grades/aalto-grades",
"url": "https://github.com/aalto-grades/aalto-grades/pull/735",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
448696810 | HTML showing in post
The feed of Peter Rukavina's blog shows fine in my feed reader (image below), but in Monocle it shows the HTML content of the post. When I check the debug screen, I see there's HTML in the JSON as well. I'm not sure whether this is bad rendering on the side of Peter's feed, leniency in Inoreader, sanitization in Aperture, or maybe something else?
I need a little more information in order to track this down:
What Microsub server are you using? It looks like it's not Aperture.
What feed of his are you following? I looked at his site and he has Microformats as well as an RSS feed.
Looking at the debug JSON you posted, I see a couple problems with the data there, which is a problem with whatever Microsub server you're using.
author should never be a string, it should always be an object with the name, url and photo properties. https://indieweb.org/Microsub-spec#card
summary is a plaintext value, and cannot contain HTML, which is why you're seeing the HTML tags in Monocle. I need to know what feed of his you're following and which Microsub server you're using in order to tell exactly where the problem is here.
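For reference, the shape those two points describe would look roughly like the following. This is a hedged sketch with placeholder values (written as a Python dict for illustration), not data from Peter's actual feed:
item = {
    "type": "entry",
    "url": "https://example.com/posts/1",  # placeholder URL
    "author": {  # always an object, never a bare string
        "type": "card",
        "name": "Example Author",
        "url": "https://example.com",
        "photo": "https://example.com/photo.jpg",
    },
    "summary": "A plain-text summary with no HTML tags.",  # plaintext only
    "content": {
        "html": "<p>Full HTML content belongs here.</p>",
        "text": "Full HTML content belongs here.",
    },
}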
Thanks. I thought I had Aperture enabled but later today I found out I still had Yarns activated in my WordPress environment. So that's the Microsub server I use for this feed. I am subscribed to https://ruk.ca/rss/feedburner.xml on this server. I will do some more testing with both servers and feed. I'll post my findings later.
@frankmeeuwsen, This is definitely a Yarns issue then, I'll check it out and will update you when I have it fixed
I just switched to Monocle to see what would happen. I am subscribed to the same feed, now Monocle show the correct parsed HTML. Thanks for the update @jackjamieson2. Do you need me to file an issue at your repo?
Thanks Frank, Aaron already filed https://github.com/jackjamieson2/yarns-microsub-server/issues/75
| gharchive/issue | 2019-05-27T07:00:28 | 2025-04-01T06:37:43.789995 | {
"authors": [
"aaronpk",
"frankmeeuwsen",
"jackjamieson2"
],
"repo": "aaronpk/Monocle",
"url": "https://github.com/aaronpk/Monocle/issues/40",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
161068735 | Generated schema filename times are not UTC
When multiple team members are in different timezones, migrations may get out of sync because the times used in the generated filenames are local.
I changed the print_new command to use UTC datetime string. It's in 0.7.1 release.
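The change boils down to generating the timestamp part of the filename in UTC rather than local time. A minimal sketch of the idea (the function name and filename pattern here are illustrative, not necessarily the exact code shipped in 0.7.1):
from datetime import datetime, timezone

def new_migration_name(description):
    # UTC ensures filenames sort the same way regardless of a developer's timezone.
    ts = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return "m{}_{}.sql".format(ts, description)

print(new_migration_name("add_users_table"))  # e.g. m20160619113556_add_users_table.sql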
Thanks!
| gharchive/issue | 2016-06-19T11:35:56 | 2025-04-01T06:37:43.799425 | {
"authors": [
"RX14",
"aartur"
],
"repo": "aartur/mschematool",
"url": "https://github.com/aartur/mschematool/issues/6",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1772482972 | Fail to init BITStarPlanner
It seems that bit_star_planner.py is not fully finished; for example, some abstract methods are not implemented and it uses undeclared variables.
There is another issue: in bit_star_planner.ipynb, this line:
result_refined = BITStarPlanner(num_batch=200, stop_when_success=False).plan(env, start, goal, timeout=('time', 10))
cannot return a valid result.
In addition, in the GNN planner, is the path smoother module missing?
| gharchive/issue | 2023-06-24T06:07:06 | 2025-04-01T06:37:43.802120 | {
"authors": [
"devinluo27"
],
"repo": "aaucsd/lemp",
"url": "https://github.com/aaucsd/lemp/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1817928112 | Didn't find parameter 'mode' in signature of z2ui5_cl_xml_view->generictile
error complains no parameter mode found in generictile
error resolved, thanks!
| gharchive/issue | 2023-07-24T08:39:12 | 2025-04-01T06:37:43.805962 | {
"authors": [
"zhang928160849"
],
"repo": "abap2UI5/abap2UI5-samples",
"url": "https://github.com/abap2UI5/abap2UI5-samples/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1181722966 | Docker Service.docker fails in linux build validation pipeline
https://github.com/abbgrade/psdocker/actions/workflows/build-validation.yml?query=branch%3Adevelop
Message:
ProcessCommandException: Cannot find a process with the name "com.docker.service". Verify the process name and call the cmdlet again.
Linux and Windows agents are affected
| gharchive/issue | 2022-03-26T12:22:48 | 2025-04-01T06:37:43.826066 | {
"authors": [
"abbgrade"
],
"repo": "abbgrade/psdocker",
"url": "https://github.com/abbgrade/psdocker/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2276194767 | Retag existing image instead of building image without additional layers
Closes https://github.com/abcxyz/github-metrics-aggregator/pull/262
Sigh https://github.com/apache/beam/pull/31010
Sigh apache/beam#31010
Is it time to give up on Beam and just build some simple cloud run jobs for our pipelines? We aren't doing anything fancy and we've only ever had one customer look at them as a reference.
Is it time to give up on Beam and just build some simple cloud run jobs for our pipelines? We aren't doing anything fancy and we've only ever had one customer look at them as a reference.
I was ready to give up on day 1, but yes. I think we should give up on Beam and use Cloud Run jobs instead. Any performance benefit we're getting using Beam is outweighed by lost engineering cycles.
| gharchive/pull-request | 2024-05-02T18:19:55 | 2025-04-01T06:37:43.828907 | {
"authors": [
"bradegler",
"sethvargo"
],
"repo": "abcxyz/github-metrics-aggregator",
"url": "https://github.com/abcxyz/github-metrics-aggregator/pull/263",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1362216721 | feat: home page integrate api:
extract components
add redux toolkit
add RTK query(TBC)
Reporting in, Patrick
Forgot there's also...
Reporting in, no problem Patrick
I completely forgot there's also the CORS issue, really sorry
I'll come back and fix it in a few days
RTKQ is a pretty cumbersome thing
Tons of boilerplate, but also lots of features
Its most practical feature is its cache
Like on YouTube: we search for videos -> click into one to watch -> hit back to return to the search results
Those results show up immediately; if you look at Chrome's network tab you'll see it never actually sent a request to fetch the data, because it used the cached data
Caching is hard to get right, but RTKQ has done it for us
Its core concept is binding API actions/state to the store
It really is just RTK + Query, literally
RTK is the Redux side of things, which you already know
Query is everything related to calling the API
That includes endpoint definitions (URL / HTTP method / parameters and so on), plus response handlers, etc.
Endpoint behavior falls into two kinds: query and mutation
A query endpoint is for reads; after getting a response, the result can be cached
A mutation endpoint is for writes; after getting a response, it can modify the contents of the existing cache
Every endpoint gets turned into a hook, and each of these hooks has its own state
Because that state lives in the store, every endpoint hook's state is shared across the whole app, the same concept as us keeping user data in the Redux store
For example, I might have a useGetArticlesQuery hook, which exposes state like isLoading and data
It also exposes methods like refetch that you can call
The experience isn't great, and it's pretty hard to understand at first glance
But if you're interested you can still give it a try; it is a major framework, after all
Our pagination types come in two kinds: basic and cursor
basic is ordinary pagination: you pick which page of data to fetch and how many items per page
For example, /articles?page=2&limit=25 means fetch the 3rd page (our index starts at 0) with 25 items per page, so I get items 51 ~ 75
This style of pagination goes well with Bootstrap-style pagination controls
The other kind, cursor, is for infinite scroll
Like Instagram: every time you scroll to the bottom it fetches another N items
At first glance it looks like basic pagination could do this, but it actually can't
Because once new data is inserted, the fetched results become wrong
For example, the earlier /articles?page=2&limit=25 returns items 51 ~ 75; assume the list is sorted newest to oldest
Now 5 new records get added, and I then call /articles?page=3&limit=25
page = 3, limit = 25 should give me items 76 ~ 100, but because there are 5 new records at the front
my items 76 ~ 80 are actually the last 5 of the earlier 51 ~ 75, so items with duplicate ids show up on screen
I'm not sure whether that concept comes across
Cursor pagination exists to solve exactly this problem
Our cursor represents a record's id; it's a number
The API only returns records whose id is greater than that number
It's like a book: suppose there are 200 pages to start with, 0 on the left and 200 on the right, increasing from left to right
I put my bookmark at page 25 and only ever flip to the right
That way, no matter how many pages get added to the left of 0, I'll never pick those up, because I only look to the right
So as long as the cursor is set correctly and the ranges never overlap, the results I get will never repeat
That said, now that you mention it, we really should let users decide whether the cursor moves left or right
so that API consumers can choose for themselves whether to display newest-to-oldest or oldest-to-newest
I'll add that later
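To make the idea concrete, here is a small illustrative sketch in Python of how a cursor-based endpoint can page through records without ever repeating them (illustrative only, not the project's actual API code):
RECORDS = [{"id": i, "title": "article {}".format(i)} for i in range(1, 101)]

def get_articles(cursor=0, limit=5):
    # Return up to `limit` records whose id is greater than `cursor`, plus the next cursor.
    candidates = sorted(RECORDS, key=lambda r: r["id"])
    page = [r for r in candidates if r["id"] > cursor][:limit]
    next_cursor = page[-1]["id"] if page else cursor
    return page, next_cursor

page1, cursor = get_articles(cursor=0)
page2, cursor = get_articles(cursor=cursor)  # never overlaps page1, because it only looks past the last returned id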
I've also found some nice pictures recently
Consider them provisions ahead of the Mid-Autumn Festival. Please enjoy
This style of pagination goes well with Bootstrap-style pagination controls
Reporting in, Steven
I can understand the part where basic pagination gets messed up after new data is added
But in that case, wouldn't even Bootstrap-style pagination get messed up too??
Say 25 per page: I'm on page 2, and before I move to page 3 another 26 records get added, so page 3 would still show what used to be page 2
Or is it that because each page re-fetches its data, and doesn't keep previously fetched data the way infinite scroll does, there's no duplicated data on screen, so it doesn't really matter??
It's like a book: suppose there are 200 pages to start with, 0 on the left and 200 on the right, increasing from left to right
I put my bookmark at page 25 and only ever flip to the right
So the flow would be:
Initially cursor = 0
page = 0
limit = 5
When I scroll down to the 5th item (cursor = 5), it fetches the next 5 items
page = 1
limit = 5
When I keep scrolling to the 10th item (cursor = 10), it fetches the next 5 items again
Is that how it works?
I've also found some nice pictures recently. Consider them provisions ahead of the Mid-Autumn Festival, please enjoy
Those double eyelids are really cute, Steven
Double eyelids with big eyes versus single eyelids with small eyes is like busty versus flat-chested
It really is hard to choose
It would be great if I could just have both
Also, Steven, I'd like to discuss something with you
When the browser width shrinks to around 700px
the text on the article thumbnails gets squeezed together, and some of it even overflows
Should we push the board on the left further to the left, or give the article area on the right a min-width?
Reporting in, Patrick. After N years of waiting, we're finally back
When the browser width shrinks to around 700px
the updated-time part overflows
Should we give that part an ellipsis too, or move the left board further to the left?
How about letting the "category ‧ author ‧ publish time" row wrap? Then it won't blow up
Do whatever's easiest, keep it simple
Reporting in, Patrick
Snapshot testing compares the entire HTML; if even a single character differs from the last snapshot, the test fails
Its core idea is to check whether the UI after our change is exactly the same as last time; if it isn't, we look at whether the change was intentional or accidental
If it was intentional, we replace the old snapshot with the new one; if it was accidental, it means something got broken
So it's better suited to low-level UI pieces that only change once in a long while
You can use it on big pages too, it's just that small changes happen more often on big pages
For example, text somewhere changing from #1 to No.1, or a span becoming a label (things that don't affect functionality) will still make snapshot testing fail
So doing snapshot testing on big pages means you constantly have to update that snapshot
The usual development flow is to finish the work and open a PR right away, and you only find out whether all the tests pass after GitHub Actions runs
Because there are so many tests, most of the time it's hard to know which test files you need to run for the file you changed
That said, Jest's snapshot testing also supports accessing component props/methods
So it can also do the kind of e2e testing Cypress does, but you'd definitely end up updating snapshots all the time, which isn't as convenient
e2e testing means simulating the user's environment and usage scenario, and walking through the user's real operation flow once
So it only cares whether things exist, whether they can be clicked, and whether clicking them has an effect
Suppose a page's flow is like this:
Entering the page shows the product image and the product description
Clicking the quantity left/right arrows decreases/increases the purchase quantity
Clicking Buy takes you to the next page
E2E testing might then look like this:
Get the image element and check whether the image URL equals https://.../xxx.jpg
Get the product description element and check whether the text equals ...
Click the left/right arrows and check whether the product quantity decreases/increases
Click the Buy button and check whether the page URL changes from xxx to ooo
These elements are usually grabbed by class or id, so it doesn't care whether the button is really a button or a span
Nor does it care whether the text is a p or a div
As long as the element can be found and clicking it works, the test passes
Snapshot testing, on the other hand, compares the HTML of all of these from top to bottom, which isn't that important to the user
Because what users care about is whether things work and whether the whole flow hangs together
Jest's website has a "Why Snapshot Testing" section you can also take a look at
Also, if everyone connects to the same database, testing becomes essentially impossible
Because the data keeps changing, tests that depend on the data contents will fail all the time
So the old-school approach is for everyone to install a database on their own machine and connect to the local backend for development/testing
Every time the backend starts up, it resets the local database with a fixed data bundle (fixture)
That guarantees engineers don't interfere with each other, and the data is the same every time
The newer way is to use Docker, so engineers don't have to install the database and set up the backend environment themselves
One Docker setup packs everything in, which is really convenient
We're not using Docker here though; you can try writing a local mock API yourself on the frontend
Taking axios as an example, we can make it do a fixed thing every time the request URL is xxx
For the concrete approach, see axios-mock-adapter
That way, even without a backend and a database, we can simulate on the frontend what it looks like to be connected to one
| gharchive/pull-request | 2022-09-05T15:53:55 | 2025-04-01T06:37:43.853460 | {
"authors": [
"PatrickKuei",
"abemscac"
],
"repo": "abemscac/inet-PatrickKuei",
"url": "https://github.com/abemscac/inet-PatrickKuei/pull/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1684489603 | UnicodeDecodeError: 'utf-8' codec can't decode byte
For trial to have streaming of response, below error occurs
File [~/miniconda/envs/fastapi/lib/python3.10/site-packages/llama_cpp/llama.py:482](https://file+.vscode-resource.vscode-cdn.net/Users/jinwhan/Documents/Coding/Solidity/Page/cloudRun/cloudrun-fastapi/app/~/miniconda/envs/fastapi/lib/python3.10/site-packages/llama_cpp/llama.py:482), in Llama._create_completion(self, prompt, suffix, max_tokens, temperature, top_p, logprobs, echo, stop, repeat_penalty, top_k, stream)
473 self._completion_bytes.append(text[start:])
474 ###
475 yield {
476 "id": completion_id,
477 "object": "text_completion",
478 "created": created,
479 "model": self.model_path,
480 "choices": [
481 {
--> 482 "text": text[start:].decode("utf-8"),
483 "index": 0,
...
488 }
490 if len(completion_tokens) >= max_tokens:
491 text = self.detokenize(completion_tokens)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xec in position 0: unexpected end of data
My code snippet was prepared as below, referring to the example.
from llama_cpp import Llama
import json
model_path = "/my/model/path/for/ko_vicuna_7b/ggml-model-q4_0.bin"
prompt = "Tell me about Korea in english"
llm = Llama(model_path=model_path, n_ctx=4096, seed=0)
stream = llm(
f"Q: {prompt} \nA: ",
max_tokens=512,
stop=["Q:", "\n"],
stream=True,
temperature=0.1,
)
for output in stream:
print(output['choices'][0]["text"], end='')
Not only 0xec, but also 0xed and 0xf0 occurred in other trials. I cannot be sure, but it may be caused by the language of the model, which is fine-tuned for Korean from Vicuna 7B.
For your reference, several characters are generated, but then it stops suddenly with the above error.
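The root cause is that a token boundary can fall in the middle of a multi-byte UTF-8 character, so decoding each piece on its own fails. A standalone sketch of the failure and of the buffering approach such a fix typically takes (illustrative Python, not llama-cpp-python's actual code):
import codecs

# "한국" is 6 bytes in UTF-8; a token boundary can split a single character.
data = "한국".encode("utf-8")
chunks = [data[:2], data[2:5], data[5:]]  # simulate partial multi-byte pieces

# Decoding each chunk independently fails exactly like the reported error:
#   chunks[0].decode("utf-8")  ->  UnicodeDecodeError: unexpected end of data

# An incremental decoder buffers incomplete byte sequences between chunks instead.
decoder = codecs.getincrementaldecoder("utf-8")()
text = "".join(decoder.decode(chunk) for chunk in chunks)
print(text)  # -> 한국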
May be fixed in #118
| gharchive/issue | 2023-04-26T08:04:09 | 2025-04-01T06:37:43.864748 | {
"authors": [
"gjmulder",
"mozzipa"
],
"repo": "abetlen/llama-cpp-python",
"url": "https://github.com/abetlen/llama-cpp-python/issues/116",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
774265251 | Create prowjobs-lint-presubmits.yaml
Add linting presubmit job.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: abhay-krishna
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS [abhay-krishna]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@abhay-krishna: Updated the job-config configmap in namespace default at cluster default using the following files:
key prowjobs-lint-presubmits.yaml using file jobs/aws/eks-distro-prow-jobs/prowjobs-lint-presubmits.yaml
In response to this:
Add linting presubmit job.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
| gharchive/pull-request | 2020-12-24T08:25:40 | 2025-04-01T06:37:43.871084 | {
"authors": [
"abhay-krishna"
],
"repo": "abhay-krishna/eks-distro-prow-jobs",
"url": "https://github.com/abhay-krishna/eks-distro-prow-jobs/pull/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2314963144 | [Question]: Open multiple same cameras, ValueError: Invalid source with no decodable audio or video stream provided. Aborting!
Issue guidelines
[X] I've read the Issue Guidelines and wholeheartedly agree.
Issue Checklist
[X] I have searched open or closed issues for my problem and found nothing related or helpful.
[X] I have read the Documentation and found nothing related to my problem.
Describe your Question
Open multiple same cameras, ValueError: Invalid source with no decodable audio or video stream provided. Aborting!
Terminal log output(Optional)
ValueError: Invalid source with no decodable audio or video stream provided. Aborting!
Python Code(Optional)
decoder = FFdecoder("0", frame_format="bgr24",custom_ffmpeg="./ffmpeg/bin", verbose=True,**ffparams).formulate()
decoder = FFdecoder("1", frame_format="bgr24",custom_ffmpeg="./ffmpeg/bin", verbose=True,**ffparams).formulate()
decoder = FFdecoder("3", frame_format="bgr24",custom_ffmpeg="./ffmpeg/bin", verbose=True,**ffparams).formulate()
DeFFcode Version
0.2.3
Python version
3.10
Operating System version
windows
Any other Relevant Information?
No response
ValueError: Invalid source with no decodable audio or video stream provided. Aborting!
@GavinJIAW This error means that one of the source value is invalid, meaning there's no device at "0" or "1" or "3" index, check logs to get more insight. And it is perfectly fine to open multiple cameras, as long as your system allows it.
| gharchive/issue | 2024-05-24T09:52:46 | 2025-04-01T06:37:43.876413 | {
"authors": [
"GavinJIAW",
"abhiTronix"
],
"repo": "abhiTronix/deffcode",
"url": "https://github.com/abhiTronix/deffcode/issues/47",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
955960617 | NetGear_Async: Add bi-directional data transfer mode
Detailed Description
This issue will oversee implementation of NetGear's bi-directional data transfer exclusive mode for NetGear_Async API.
Context
Currently, NetGear_Async is far more efficient than its NetGear counterpart, but lacks flexibility in terms of Exclusive Modes. This issue will attempt to introduce the first of them: NetGear's bidirectional data transfer exclusive mode for the NetGear_Async API.
Your Current Environment
VidGear version: v0.2.2-dev
Branch: development
Python version: all
Operating System and version: all
Any Other Important Information
https://abhitronix.github.io/vidgear/latest/gears/netgear/advanced/bidirectional_mode/
Where is the new code? Is it on the development branch?
@rubar-tech not yet. Only after #106 is resolved; be patient.
@rubar-tech This was harder to pull off but I did it. Please wait till I upload related doc commits. But the bare-minimum example for bidirectional video-frames transfer will be as follows:
Server End
# import library
from vidgear.gears.asyncio import NetGear_Async
import cv2, asyncio
import numpy as np
options = {"bidirectional_mode": True}
# initialize Server without any source
server = NetGear_Async(logging=True, **options)
# Create a async frame generator as custom source
async def my_frame_generator():
# !!! define your own video source here !!!
# Open any video stream
stream = cv2.VideoCapture("foo.mp4")
# loop over stream until its terminated
while True:
# read frames
(grabbed, frame) = stream.read()
# check if frame empty
if not grabbed:
# if True break the infinite loop
break
# do something with the frame to be sent here
recv_data = await server.transceive_data()
if not (recv_data is None):
if isinstance(recv_data, np.ndarray):
cv2.imshow("Reciever Frame", recv_data)
key = cv2.waitKey(1) & 0xFF
else:
print(recv_data)
target_data = "Hello, I am a Server."
# yield frame
yield (target_data, frame)
# sleep for sometime
await asyncio.sleep(0)
stream.release()
if __name__ == "__main__":
# set event loop
asyncio.set_event_loop(server.loop)
# Add your custom source generator to Server configuration
server.config["generator"] = my_frame_generator()
# Launch the Server
server.launch()
try:
# run your main function task until it is complete
server.loop.run_until_complete(server.task)
except (KeyboardInterrupt, SystemExit):
# wait for interrupts
pass
finally:
# finally close the server
server.close()
Client End
# import libraries
from vidgear.gears.asyncio import NetGear_Async
import cv2, asyncio
options = {"bidirectional_mode": True}
# define and launch Client with `receive_mode=True`
client = NetGear_Async(receive_mode=True, logging=True, **options).launch()
# Create a async function where you want to show/manipulate your received frames
async def main():
# !!! define your own video source here !!!
# Open any video stream
stream = cv2.VideoCapture("big_buck_bunny_720p_1mb.mp4")
# loop over Client's Asynchronous Frame Generator
async for (data, frame) in client.recv_generator():
if not(data is None):
print(data)
# {do something with received frames here}
# Show output window
cv2.imshow("Output Frame", frame)
key = cv2.waitKey(1) & 0xFF
(grabbed, target_data) = stream.read()
# check if frame empty
if grabbed:
# if True, send the grabbed frame back to the Server
await client.transceive_data(data=target_data)
# await before continuing
await asyncio.sleep(0)
stream.release()
if __name__ == "__main__":
# Set event loop to client's
asyncio.set_event_loop(client.loop)
try:
# run your main function task until it is complete
client.loop.run_until_complete(main())
except (KeyboardInterrupt, SystemExit):
# wait for interrupts
pass
# close all output window
cv2.destroyAllWindows()
# safely close client
client.close()
@rubar-tech Btw, if not clear yet, you can install vidgear from source as follows:
# clone the repository and get inside
git clone https://github.com/abhiTronix/vidgear.git && cd vidgear
# checkout the latest development branch
git checkout development
# install normally
pip install .
# OR install with asyncio support
pip install .[asyncio]
Bidirectional data transfer mode for NetGear_Async API successfully added and merged in commit: https://github.com/abhiTronix/vidgear/commit/c73428dff35aded6b598f3486e11d53477468ec1
Related Doc is available here: https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/netgear_async/advanced/bidirectional_mode/
| gharchive/issue | 2021-07-29T15:28:53 | 2025-04-01T06:37:43.884974 | {
"authors": [
"abhiTronix",
"rubar-tech"
],
"repo": "abhiTronix/vidgear",
"url": "https://github.com/abhiTronix/vidgear/issues/235",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
659120123 | Apply regressions to find board-image homography
This is a continuation of issue #37 .
Here, you have to develop a RANSAC-like algorithm with linear and geometric regressions to find indices of vertical and horizontal lines from the Hough transform and assign each remaining image point to an integer pair representing a board point.
I'll take up this issue.
| gharchive/issue | 2020-07-17T10:26:41 | 2025-04-01T06:37:43.886467 | {
"authors": [
"04RR",
"abhidxt299"
],
"repo": "abhidxt299/RoManOV_Automation",
"url": "https://github.com/abhidxt299/RoManOV_Automation/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1404475880 | Updates
Github actions version updates.
Test refactoring.
build.yml - Node version 18.x addition.
Codecov Report
Base: 99.02% // Head: 99.02% // No change to project coverage :thumbsup:
Coverage data is based on head (d0c0a74) compared to base (a1597bc).
Patch has no changes to coverable lines.
Additional details and impacted files
@@ Coverage Diff @@
## main #18 +/- ##
=======================================
Coverage 99.02% 99.02%
=======================================
Files 1 1
Lines 103 103
=======================================
Hits 102 102
Misses 1 1
:umbrella: View full report at Codecov.
| gharchive/pull-request | 2022-10-11T11:30:37 | 2025-04-01T06:37:43.894984 | {
"authors": [
"abhinavminhas",
"codecov-commenter"
],
"repo": "abhinavminhas/qtest-mstest-parser",
"url": "https://github.com/abhinavminhas/qtest-mstest-parser/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
659696231 | Add my profile README
What type of PR is this? (check all applicable)
[x] 🚀 Added Name
[ ] ✨ Feature
[ ] ✅ Joined Community
[ ] 🌟 Starred the repo
[ ] 🐛 Grammatical Error
[ ] 📝 Documentation Update
[ ] 🚩 Other
Description
I added my own profile README.md
Add Link of GitHub Profile
Done
| gharchive/pull-request | 2020-07-17T22:33:06 | 2025-04-01T06:37:43.898810 | {
"authors": [
"oussamabouchikhi"
],
"repo": "abhisheknaiidu/awesome-github-profile-readme",
"url": "https://github.com/abhisheknaiidu/awesome-github-profile-readme/pull/83",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1206368069 | Office staff update
closes #15
@utkarsh-tf141 This is not the kind of link we use in emails, mailto: has to be used. Also, don't put any HTML content inside objects. Render the emails inside anchor tags inside the Description tag.
| gharchive/pull-request | 2022-04-17T12:56:01 | 2025-04-01T06:37:43.899887 | {
"authors": [
"abhishekshree",
"utkarsh-tf141"
],
"repo": "abhishekshree/spo-website",
"url": "https://github.com/abhishekshree/spo-website/pull/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
134089418 | django.utils.importlib is removed in Django 1.9
The module sdjango.py imports django.utils.importlib, which has been deprecated since Django 1.7 and was removed in Django 1.9. Django's importlib had been a copy of Python 2.7's importlib, so the fix should be relatively straightforward.
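Since django.utils.importlib was essentially a copy of the standard library module, the usual fix is a small compatibility shim like the one below (a hedged sketch, not necessarily the exact change in the commit referenced in the next comment):
try:
    from django.utils import importlib  # Django < 1.9
except ImportError:
    import importlib  # Python standard library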
Should be fixed by acf095b78208edb59b5873662653e12773add3cc. Can we close this ticket?
| gharchive/issue | 2016-02-16T20:32:11 | 2025-04-01T06:37:43.915245 | {
"authors": [
"hartwork",
"jakebuhler"
],
"repo": "abourget/gevent-socketio",
"url": "https://github.com/abourget/gevent-socketio/issues/234",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1339743996 | Chore: Upgrade GraphiQL to 1.11.4
So, this is the first step on a bigger journey to get the GraphiQL client(s) up to date.
I might need a little guidance here and there since I only started working with the codebase this evening, and not everything might be optimal (looking at you, build_asset_path/2).
I also have a semi-functional version of graphiql 2.0 behind a new :beta interface key, but that one needs some more work to get headers to work, and to figure out whether we still need the @absinthe/absinthe-socket libs or whether createGraphiQLFetcher() from @graphiql/toolkit can now solve, out of the box, whatever prompted the creation of that in the first place.
If someone has more context/knows, why some decisions were made, that would be greatly appreciated!
This PR
the package still used the 5-year-old graphiql@0.11.10, which has a bunch of security issues and could benefit from an update in general (bundle size reduction from react@16 among others). This PR updates graphiql and introduces a way to have two react versions in the asset list, so graphiql-workspace (the :advanced interface) stays functional, since it breaks with newer react versions.
Next Steps
The next big step would be to get graphiql@2 working, especially with headers and subscriptions. There is quite a bit of plumbing going on atm, which I will need to figure out first, plus the new createGraphiQLFetcher function is not shipped in a browser-compatible version, so we might need another intermediary package... 😨
Once 2.0 ships and graduates from beta, we can probably remove/alias the :advanced interface, since graphiql provides those enhancements out of the box. Also, graphiql-workspace hasn't seen an update in ~4 years, so I wouldn't expect an update on that front.
Happy about any and all feedback !
@benwilson512 can't request a review formally yet, would be great if you could take a look at this after your parental leave :)
Thanks for your work on this @luhagel, I don't have time to help with the initial work, but can definitely try it when you think it's ready for testing as we are using the old version at the moment.
Adding to the convo to say this would be a huge boost for Absinthe on my team. Due to the age and limitations of the available GraphiQL interfaces we are currently dropping in Apollo Sandbox. Now that GraphiQL is up to a stable 2.2.0 release it might be reasonable to target an update to 2.x.
Bump. Curious if an upgrade will be merged in the near term?
cc: @benwilson512 @luhagel
Bump. Curious if an upgrade will be merged in the near term? cc: @benwilson512 @luhagel
Probably needs an upgrade at this point, given that graphiql 3 is underway, I'll try and find some time to get this all cleaned up in the next few weeks @derekbrown
Looking forward to this very needed refresh.
| gharchive/pull-request | 2022-08-16T02:58:13 | 2025-04-01T06:37:43.958508 | {
"authors": [
"aglassman",
"derekbrown",
"drewble",
"duksis",
"luhagel"
],
"repo": "absinthe-graphql/absinthe_plug",
"url": "https://github.com/absinthe-graphql/absinthe_plug/pull/273",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2129181865 | 🛑 AbsorFlix is down
In 1c4a25b, AbsorFlix (https://player.absor.top) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AbsorFlix is back up in ea8b8ed after 22 hours, 23 minutes.
| gharchive/issue | 2024-02-11T21:40:36 | 2025-04-01T06:37:43.961954 | {
"authors": [
"absortian"
],
"repo": "absortian/AbsorStatus",
"url": "https://github.com/absortian/AbsorStatus/issues/227",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2028303657 | I need to create POST requests with different body each one
Hello
I have a big model (generated using the Bogus lib; it has a unique email, id, etc.) and I want to run POST requests to create a lot of entities. Is there any way to implement this?
I've found you advised using a PreProcessor here (https://github.com/abstracta/jmeter-java-dsl/issues/7), but there is no PreProcessor class in the dotnet DSL.
Thanks!
Hello, thank you for bringing this into attention!
We can easily add the jsr223PreProcessor to the dotnet library with support for specifying code in groovy. Providing support for using c# code and dotnet libraries is not that simple though.
Can you provide an example code of what you would like the DSL to support or how would your think you could express in code what you need?
One alternative for generated and distinct data might also be implementing the csvDataSet element in dotnet library (easy to do as well) so you can generate in dotnet/c# code a CSV with whatever content you may like (and using whatever library you desire) and then just use the proper variables in the HTTP Post request.
In any case, I would like to understand a little better what would be the ideal scenario for your use case and then provide a good approximation to it .
Tnx for fast reply, this is my code:
.............
using Bogus;
[Fact]
public async Task LoadTest()
{
// Arrange
var requestDto = Create_CreateCompanyRequestBogus();
var request = JsonSerializer.Serialize(requestDto);
// Act
var stats = TestPlan(
ThreadGroup(2, 5,
HttpSampler("https://tst-api.purpleunity.dev/company-and-contact/api/companies")
.Post(request, new MediaTypeHeaderValue(MediaTypeNames.Application.Json))
.Header("Authorization", AuthToken)
),
JtlWriter("jtls")
).Run();
stats.Overall.SampleTimePercentile99.Should().BeLessThan(TimeSpan.FromSeconds(5));
}
private CreateCompanyRequest Create_CreateCompanyRequestBogus()
{
var faker = new Faker<CreateCompanyRequest>()
.CustomInstantiator(f =>
new CreateCompanyRequest(
IdentityId: Guid.NewGuid().ToString().OrNull(f, 0.2f),
Name: f.Company.CompanyName(),
VatNumber: f.Random.Replace("??#########").OrNull(f, 0.2f),
Iban: f.Random.Replace("??######################").OrNull(f, 0.2f),
Bic: f.Finance.Bic().OrNull(f, 0.2f),
ChamberOfCommerceNumber : f.Random.Replace("??-???-########").OrNull(f, 0.2f),
ExternalId: Guid.NewGuid().ToString().OrNull(f, 0.2f),
IsBuyerEvaluated: f.Random.Bool().OrNull(f, 0.2f),
Remarks: f.Lorem.Sentences(1).OrNull(f, 0.2f),
AddressLine: f.Address.StreetAddress().OrNull(f, 0.2f),
Zipcode: f.Address.ZipCode().OrNull(f, 0.2f),
City: f.Address.City().OrNull(f, 0.2f),
Region: f.Address.State().OrNull(f, 0.2f),
CountryCode: f.Address.CountryCode().OrNull(f, 0.2f),
Phone: f.Phone.PhoneNumber().OrNull(f, 0.2f),
Mobile: f.Phone.PhoneNumber().OrNull(f, 0.2f),
Fax: f.Phone.PhoneNumber().OrNull(f, 0.2f),
Email: f.Internet.Email().OrNull(f, 0.2f),
WebsiteUrl: new Uri(f.Internet.Url()),
DefaultContact: null,
UseTheSameAddressForPostal: f.Random.Bool().OrNull(f, 0.2f),
PostalAddressLine: f.Address.StreetAddress().OrNull(f, 0.2f),
PostalZipCode: f.Address.ZipCode().OrNull(f, 0.2f),
PostalCity: f.Address.City().OrNull(f, 0.2f),
PostalRegion: f.Address.State().OrNull(f, 0.2f),
PostalCountry: f.Address.CountryCode().OrNull(f, 0.2f)
)
);
return faker.Generate();
}
I'd like to second this question. A new request body must be created for each request in every iteration.
My code:
[Test]
public void RegisterTest()
{
var stats =
TestPlan(
ThreadGroup(5, 2,
HttpSampler("http://localhost/BonusWebApi/api/processing/register/")
.Post(GetRegRequest(), new MediaTypeHeaderValue("application/json")
))).Run();
Assert.That(stats.Overall.SampleTimePercentile99, Is.LessThan(TimeSpan.FromSeconds(2)));
}
//Random random = new Random();
private SecureRandom random = new SecureRandom();
public string GetCardCode()
{
var getCard = random.Next(0, cardsCount - 1);
logger.Debug(getCard);
var cardCode = cards.Skip(getCard).Take(1).FirstOrDefault()?.CardCode;
logger.Debug(cardCode);
while (cardCode == null)
{
cardCode = cards.Skip(random.Next(0, cardsCount - 1)).Take(1).FirstOrDefault()?.CardCode;
}
return cardCode;
}
public string GetRegRequest()
{
var license = cashes.ToList().ElementAt(random.Next(0, cashCount - 1));
Guid accessTokenGuid = license.AccessTokenGuid;
var cardCode = GetCardCode();
return JsonConvert.SerializeObject(new RegisterRequestDto
{
LicenseGuid = license.LicenseGuid,
AccessTokenGuid = accessTokenGuid,
CardCode = cardCode,
CardRegisterDateTime = DateTime.Now,
RegisterDetailDtos = new List<RegisterDetailDto>
{
new RegisterDetailDto
{
PositionId="1",
ProductCode="12345",
Quantity=1,
TotalPrice=100
}
}
});
}`
Great, thank you! The information you two provide is very helpful and we have some ideas that we would like to implement to support these scenarios. It is great to see the community interest in this feature.
If some other have similar interests please let us know!
We just released 0.4 which includes CsvDataSet and Jsr223PreProcessor.
None of these are optimal solutions for your use cases, but you can use them in the meantime as workarounds or approximations while we come up with some better solution.
Regards
| gharchive/issue | 2023-12-06T11:10:28 | 2025-04-01T06:37:43.974550 | {
"authors": [
"alexchernyavskiy",
"destructorvad",
"rabelenda"
],
"repo": "abstracta/jmeter-dotnet-dsl",
"url": "https://github.com/abstracta/jmeter-dotnet-dsl/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
169937731 | using PostGateway for authorize.net API as HTTP GET requests not supported
using PostGateway for authorize.net API as HTTP GET requests not supported
Hi thank you for merging this in ...
If you're working towards a new version can you give us an idea of when it will be available via the
PIP packaging system?
| gharchive/pull-request | 2016-08-08T14:31:52 | 2025-04-01T06:37:43.991241 | {
"authors": [
"lsensible"
],
"repo": "abunsen/Paython",
"url": "https://github.com/abunsen/Paython/pull/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
421760515 | Question: logbook dependency
Quick question, why is the logbook used instead of logging? I didn't dig through the codebase much, but it doesn't look like the features of logbook are being used in a way that necessitates using it?
At some point logging was deprecated and logbook was the replacement. I made the change, but then started using warnings more and kept logging for more informational output.
If you have a compelling reason to change things, I have a deprecation warning that needs to be propagated from the develop to the master branch and am willing to do some additional stylistic fine-tuning.
logging was deprecated? I can't find any information about this with a Google search. It's included in python 3.7 even: https://docs.python.org/3/library/logging.html
Mostly just curious. The most compelling reason to use logging instead of logbook is that users don't need to install another dependency if logging is used, especially if none of the logbook functionality is being used.
I wish I could point you somewhere, but it was a few years ago and I don't remember. Clearly it didn't happen.
Removing dependencies is a good enough reason for me. I will put it on the to-do list.
No worries. I am mostly curious because we use logging a lot and if it were deprecated I would want to be moving away from it to something else!
Addressed in new branch, will be out in the next version.
| gharchive/issue | 2019-03-16T02:35:58 | 2025-04-01T06:37:43.995000 | {
"authors": [
"aburrell",
"asreimer"
],
"repo": "aburrell/aacgmv2",
"url": "https://github.com/aburrell/aacgmv2/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2329786579 | Changing number of states does not change model fitting
Changing states to 3 produces model fitting with 4 states:
Could you please add where you changed number of states and what code you ran to fit the models and generate the plots?
I ran fit_with_fit_handler.py and changed only the states variable to 3
Could you please attach the updated scripts you used here. Thanks.
Issue selecting correct model
https://github.com/abuzarmahmood/pytau/blob/d0a2d6e697579bc934ae802a26a41fd2fa059e33/pytau/how_to/scripts/fit_with_fit_handler.py#L57C1-L61C49
dframe = fit_database.fit_database
wanted_exp_name = 'pytau_test'
wanted_frame = dframe.loc[dframe['exp.exp_name'] == wanted_exp_name]
# Pull out a single data_directory
pkl_path = wanted_frame['exp.save_path'].iloc[0]
Plots came out fine once correct model was selected.
| gharchive/issue | 2024-06-02T17:41:35 | 2025-04-01T06:37:43.998060 | {
"authors": [
"abuzarmahmood",
"cmazzio"
],
"repo": "abuzarmahmood/pytau",
"url": "https://github.com/abuzarmahmood/pytau/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1244589668 | 🛑 AceBlock Swagger UI is down
In d525056, AceBlock Swagger UI ($ACEBLOCK_SWAGGER_UI) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AceBlock Swagger UI is back up in 0d54921.
| gharchive/issue | 2022-05-23T05:10:40 | 2025-04-01T06:37:44.038771 | {
"authors": [
"aceblockID"
],
"repo": "aceblockID/monitoring",
"url": "https://github.com/aceblockID/monitoring/issues/1096",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1248821474 | 🛑 AceBlock Swagger UI is down
In 6ba8e20, AceBlock Swagger UI ($ACEBLOCK_SWAGGER_UI) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AceBlock Swagger UI is back up in bce7efb.
| gharchive/issue | 2022-05-25T22:28:32 | 2025-04-01T06:37:44.040956 | {
"authors": [
"aceblockID"
],
"repo": "aceblockID/monitoring",
"url": "https://github.com/aceblockID/monitoring/issues/1139",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1200174599 | 🛑 AceBlock SSI - IPFS endpoint is down
In 3041bd9, AceBlock SSI - IPFS endpoint ($ACEBLOCK_SSI_IPFS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AceBlock SSI - IPFS endpoint is back up in 28ddecc.
| gharchive/issue | 2022-04-11T15:49:35 | 2025-04-01T06:37:44.043513 | {
"authors": [
"aceblockID"
],
"repo": "aceblockID/monitoring",
"url": "https://github.com/aceblockID/monitoring/issues/409",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2297952992 | 🛑 Aceblock SSI registration is down
In e7c2eff, Aceblock SSI registration ($ACEBLOCK_SSI_REGISTRATION) was down:
HTTP code: 503
Response time: 13961 ms
Resolved: Aceblock SSI registration is back up in 185ef1f after 11 minutes.
| gharchive/issue | 2024-05-15T13:37:24 | 2025-04-01T06:37:44.045713 | {
"authors": [
"aceblockID"
],
"repo": "aceblockID/monitoring",
"url": "https://github.com/aceblockID/monitoring/issues/6069",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1222056638 | 🛑 AceBlock Swagger UI is down
In f19f035, AceBlock Swagger UI ($ACEBLOCK_SWAGGER_UI) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AceBlock Swagger UI is back up in e1bb1ce.
| gharchive/issue | 2022-05-01T07:28:36 | 2025-04-01T06:37:44.048419 | {
"authors": [
"aceblockID"
],
"repo": "aceblockID/monitoring",
"url": "https://github.com/aceblockID/monitoring/issues/743",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2220833811 | ERROR about cv_reader when using MSVD
I converted the video according to the method you provided. I found that some errors occurred in the batch of videos(num_workers=12, 4 were correct and 8 were wrong) , the wrong videos are:
./dataset/msvd/videos_240_h264_keyint_60/Nd45qJn61Dw_0_10.avi
./dataset/msvd/videos_240_h264_keyint_60/5P6UU6m3cqk_57_75.avi
./dataset/msvd/videos_240_h264_keyint_60/PD6eQY7yCfw_32_37.avi
./dataset/msvd/videos_240_h264_keyint_60/77iDIp40m9E_159_181.avi
./dataset/msvd/videos_240_h264_keyint_60/9Wr48VFhZH8_45_50.avi
./dataset/msvd/videos_240_h264_keyint_60/HxRK-WqZ5Gk_30_50.avi
./dataset/msvd/videos_240_h264_keyint_60/UgUFP5baQ9Y_0_7.avi
./dataset/msvd/videos_240_h264_keyint_60/PqSZ89FqpiY_65_75.avi
and I converted these wrong videos to mp4 but got the same error.
I wonder if there is something wrong with the MSVD dataset or cv_reader (I can train normally on MSRVTT).
Your help will be greatly appreciated.
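One way to check whether the converted files themselves are decodable, independently of the training pipeline, is a quick OpenCV read test (a hedged diagnostic sketch, not part of CoCap):
import cv2

video_paths = [
    "./dataset/msvd/videos_240_h264_keyint_60/Nd45qJn61Dw_0_10.avi",
    "./dataset/msvd/videos_240_h264_keyint_60/5P6UU6m3cqk_57_75.avi",
]

def can_decode(path):
    # True only if OpenCV can open the file and read at least one frame.
    cap = cv2.VideoCapture(path)
    ok, _ = cap.read()
    cap.release()
    return ok

for p in video_paths:
    if not can_decode(p):
        print("unreadable:", p)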
Hello, I met the same problem on the MSVD dataset. Did you solve this problem? How can it be solved?
I find that it can run normally on MSRVTT. Does the MSVD dataset have some problems?
It was never resolved, so I only ran the experiments on MSRVTT o(╥﹏╥)o
Could we add each other on WeChat? I also work on caption generation, and some of my colleagues are in a group chat. Shall we set up a group so we can all discuss together?
Refer to https://github.com/acherstyx/CoCap/issues/13.
| gharchive/issue | 2024-04-02T15:32:41 | 2025-04-01T06:37:44.117330 | {
"authors": [
"NCU-MC",
"Xiyu-AI",
"acherstyx"
],
"repo": "acherstyx/CoCap",
"url": "https://github.com/acherstyx/CoCap/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1942315323 | soundSensor.ino changed
I have added an ambulance sensor to the program.
reviewing!
@sumukhacharya03 could you please come meet me in room 001?
| gharchive/pull-request | 2023-10-13T17:13:14 | 2025-04-01T06:37:44.192429 | {
"authors": [
"bun137",
"sumukhacharya03"
],
"repo": "acmpesuecc/Intelligent_Traffic_Light_System",
"url": "https://github.com/acmpesuecc/Intelligent_Traffic_Light_System/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2650594995 | Knowledge - Ingestion time field overlaps with the "Unsupported" status.
Steps to reproduce the problem:
Create agent with knowledge files that are unsupported like .xlsx files.
Notice that Ingestion time field overlaps with the "Unsupported" status when ingestion status is reported for these files.
We changed the UX a bit, so this should no longer be relevant.
This issue is not relevant anymore.
| gharchive/issue | 2024-11-11T22:59:29 | 2025-04-01T06:37:44.208943 | {
"authors": [
"StrongMonkey",
"sangee2004"
],
"repo": "acorn-io/acorn",
"url": "https://github.com/acorn-io/acorn/issues/542",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2233441836 | Some solvers ind CMF can yield negative source strength results
including:
FISTA
Split-Bregman
LassoLars
LassoLarsBIC(?)
Reproduce with:
import acoular
from acoular import L_p, Calib, MicGeom, Environment, PowerSpectra, \
RectGrid, MaskedTimeSamples,BeamformerCMF, SteeringVector
# other imports
from os import path
from pylab import figure, imshow, colorbar, title
# files
datafile = 'example_data.h5'
calibfile = 'example_calib.xml'
micgeofile = path.join( path.split(acoular.__file__)[0],'xml','array_56.xml')
#octave band of interest
cfreq = 4000
t1 = MaskedTimeSamples(name=datafile)
t1.start = 0 # first sample, default
t1.stop = 16000 # last valid sample = 15999
invalid = [1,7] # list of invalid channels (unwanted microphones etc.)
t1.invalid_channels = invalid
t1.calib = Calib(from_file=calibfile)
m = MicGeom(from_file=micgeofile)
m.invalid_channels = invalid
g = RectGrid(x_min=-0.6, x_max=-0.0, y_min=-0.3, y_max=0.3, z=0.68,
increment=0.05)
env = Environment(c = 346.04)
st = SteeringVector(grid=g, mics=m, env=env)
f = PowerSpectra(time_data=t1,
window='Hanning', overlap='50%', block_size=128, #FFT-parameters
ind_low=8, ind_high=16) #to save computational effort, only
# frequencies with indices 8..15 are used
bcmf = BeamformerCMF(freq_data=f, steer=st, method='LassoLars')
figure(1,(10,6))
smap = bcmf.synthetic(cfreq,1)
print(smap.min())
For sklearn solvers there is a new positive parameter which can be set to enforce strictly non-negative results.
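Assuming that parameter is exposed as a keyword/trait on the beamformer (the exact name and placement may differ; check the acoular documentation), enabling it would look roughly like this:
bcmf = BeamformerCMF(freq_data=f, steer=st, method='LassoLars', positive=True)  # 'positive=True' placement is an assumption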
| gharchive/issue | 2024-04-09T13:27:06 | 2025-04-01T06:37:44.219825 | {
"authors": [
"adku1173",
"esarradj"
],
"repo": "acoular/acoular",
"url": "https://github.com/acoular/acoular/issues/204",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1765208208 | Use Sonar GitHub Action
Remove downloading sonar build exexutable and running action manually, instead use out of box action provided by Sonar source
https://github.com/marketplace/actions/sonarcloud-scan
There is no Sonar action available out of the box for classic .NET projects. The default Sonar action only works on the Ubuntu image and can be used for .NET Core projects, but not for Windows-based .NET projects.
| gharchive/pull-request | 2023-06-20T11:29:28 | 2025-04-01T06:37:44.236774 | {
"authors": [
"abhijeetnarvekar"
],
"repo": "acrolinx/sidebar-sdk-dotnet",
"url": "https://github.com/acrolinx/sidebar-sdk-dotnet/pull/26",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2277537619 | The horse rider case
A special question in Path Check that trickles down elsewhere
This is mostly done. The formulas on row 475 of Lookups1 are complex. If PA25 is N/A, then it uses PA22. If PA29 is N/A, then PA26. To sanely match this behavior, we need the unit tests to be working, and vary some of these things.
| gharchive/issue | 2024-05-03T12:02:24 | 2025-04-01T06:37:44.244833 | {
"authors": [
"dabreegster"
],
"repo": "acteng/inspectorate_tools",
"url": "https://github.com/acteng/inspectorate_tools/issues/39",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
194790758 | Debug logs need review
One example
ActionsSdkAssistant.prototype.ask = function (inputPrompt, possibleIntents, dialogState) {
debug('ask: inputPrompt=%s, possibleIntents=%s, dialogState=%s',
inputPrompt, possibleIntents, dialogState);
all variables are Objects, so the output is pretty meaningless if the variables are not "stringified"
Thanks. Will fix in next update. Also, see new method signature: ActionsSdkAssistant.prototype.ask = function (inputPrompt, dialogState)
| gharchive/issue | 2016-12-10T20:09:46 | 2025-04-01T06:37:44.252013 | {
"authors": [
"entertailion",
"rotero"
],
"repo": "actions-on-google/actions-on-google-nodejs",
"url": "https://github.com/actions-on-google/actions-on-google-nodejs/issues/4",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
763652841 | "Unable to reserve cache with key"
Hi, I've got a workflow where I'm using this action, however, it doesn't seem to be caching anything. I get the error:
Unable to reserve cache with key Linux-xmltv-cache, another job may be creating this cache.
Obviously, because of it, subsequent jobs do not find any caches.
Cache usage (full workflow):
- name: XMLTV cache
id: cache-xmltv
uses: actions/cache@v2
with:
path: /config/.xmltv/cache
key: ${{ runner.os }}-xmltv-cache
Link to workflow
Hi @fugkco, this is usually a symptom indicating the creation of the cache failed. To avoid wasting unnecessary time creating the cache if it already exists, the cache is first "reserved", then the runner creates the compressed tar with the cache contents, and finally that is uploaded to the server. If something breaks between when the cache is reserved and when the upload finishes, the cache gets stuck in the "reserved" state. You will then see the "Unable to reserve cache...another job may be creating this cache" message until that cache is automatically removed (it could take up to 24 hours but work is underway to reduce this delay).
Instead of waiting that long, you can also change the cache key, for example:
key: ${{ runner.os }}-xmltv-cache-v2
After making this change, take a look at the Post Cache step on the next run. It will show an error if there is any issue creating the cache.
If I had to guess, the failure may have to do with the cache being rooted at /config. If you see any errors related to that path, try moving that folder to within the GitHub workspace (this is the default folder each step runs in).
I realise that I saw this response, and used gave it a cache key, which worked, but I forgot to reply and close this issue. Anyway, so it worked. Thanks for the help!
@dhadka what is the point of creating a new cache key? If this is what it seems, the best course of action would be to ignore the error. You start X concurrent jobs, they all produce the same artifact, then they try to save it using the same cache key. You need only one of these to succeed.
Ideally, you should be able to run only one of those jobs, then make the other X-1 wait on it to get its artifact from the cache, but AFAIK, there is no easy way to do this.
| gharchive/issue | 2020-12-12T11:51:02 | 2025-04-01T06:37:44.261262 | {
"authors": [
"dhadka",
"fugkco",
"mmomtchev"
],
"repo": "actions/cache",
"url": "https://github.com/actions/cache/issues/485",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2573752004 | Restore original behavior of cache-hit output
#1262
#1466
As described in #1466, #1404 broke workflows that relied on the previous behavior of cache-hit being an empty string when there was a cache miss:
if: ${{ steps.cache-restore.outputs.cache-hit }}
If cache-hit is an empty string, this would work as expected. When cache-hit is false, this is a non-empty string and treated as "true".
Actions outputs do not support proper boolean values, and we can't guarantee how users are interpreting this string output.
This PR reverts #1404 and updates the README to clarify all possible values of cache-hit:
'true' if there's an exact match
'false' if there's a cache hit with a restore key
'' if there's a cache miss
This is likely confusing to users, but we should maintain the existing behavior to avoid breaking existing workflows.
@joshmgross
Thank you 🙏
@joshmgross
Thank you!
This is likely confusing to users, but we should maintain the existing behavior to avoid breaking existing workflows.
I can see the reasoning here but this shouldn't mean a weird behaviour should be kept just because it's breaking. IMO having it in the release notes and making the necessary change in version number should be enough, which was already done. Having this PR merged feels like a step in the wrong direction and there should be a plan to properly handle that breaking change.
IMO having it in the release notes and making the necessary change in version number should be enough, which was already done.
Per https://semver.org/, a breaking change needs to be a major version bump (i.e. v5 of this action).
I agreed that we should fix this confusing behavior, but we're not ready to create a new major version of this action.
https://github.com/actions/cache/issues/1466#issuecomment-2400480945
| gharchive/pull-request | 2024-10-08T17:02:43 | 2025-04-01T06:37:44.267385 | {
"authors": [
"AnimMouse",
"Jason3S",
"joshmgross",
"ulgens"
],
"repo": "actions/cache",
"url": "https://github.com/actions/cache/pull/1467",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1167726547 | [docs] remove star search from yarn.lock
i dunno, some recent change to cache caused the **/yarn.lock lookup to start failing for me. I just have a standard project with a yarn.lock in the top level and this fixed it.
@lookfirst Changing the path of yarn file in example may not help everyone. That path is for the yarn.lock file which can be at different location for different projects.
Thanks for your contribution.
@lookfirst Can you try it again with the current recommendation and latest version of cache? It seems to work fine for me
| gharchive/pull-request | 2022-03-13T23:28:54 | 2025-04-01T06:37:44.269187 | {
"authors": [
"Phantsure",
"lookfirst"
],
"repo": "actions/cache",
"url": "https://github.com/actions/cache/pull/765",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1350440134 | Update Android NDK to r25b LTS
Tool name
Android NDK
Tool license
Same as before
Add or update?
[ ] Add
[X] Update
Desired version
25.1.8937393
Approximate size
No response
Brief description of tool
No response
URL for tool's homepage
No response
Provide a basic test case to validate the tool's functionality.
No response
Platforms where you need the tool
[ ] Azure DevOps
[X] GitHub Actions
Virtual environments where you need the tool
[ ] Ubuntu 18.04
[X] Ubuntu 20.04
[X] Ubuntu 22.04
[X] macOS 10.15
[X] macOS 11
[X] macOS 12
[ ] Windows Server 2019
[ ] Windows Server 2022
Can this tool be installed during the build?
No response
Tool installation time in runtime
No response
Are you willing to submit a PR?
No response
Will be deployed next week.
Deployed
| gharchive/issue | 2022-08-25T07:21:13 | 2025-04-01T06:37:44.278413 | {
"authors": [
"Javernaut",
"mikhailkoliada"
],
"repo": "actions/runner-images",
"url": "https://github.com/actions/runner-images/issues/6141",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1774255295 | Timeout waiting for SSH.
Description
Syntax-only check passed. Everything looks okay.
Show Packer Version
1.9.1
Build ubuntu2204 VM
==> azure-arm.build_vhd: Running builder ...
==> azure-arm.build_vhd: Getting tokens using client secret
==> azure-arm.build_vhd: Getting tokens using client secret
azure-arm.build_vhd: Creating Azure Resource Manager (ARM) client ...
==> azure-arm.build_vhd: Warning: You are using Azure Packer Builder to create VHDs which is being deprecated, consider using Managed Images. Learn more https://www.packer.io/docs/builders/azure/arm#azure-arm-builder-specific-options
==> azure-arm.build_vhd: Getting source image id for the deployment ...
==> azure-arm.build_vhd: Creating resource group ...
==> azure-arm.build_vhd: Validating deployment template ...
==> azure-arm.build_vhd: Deploying deployment template ...
==> azure-arm.build_vhd: Getting the VM's IP address ...
==> azure-arm.build_vhd: Waiting for SSH to become available...
==> azure-arm.build_vhd: Timeout waiting for SSH.
==> azure-arm.build_vhd:
==> azure-arm.build_vhd: Deleting individual resources ...
Platforms affected
[ ] Azure DevOps
[X] GitHub Actions - Standard Runners
[ ] GitHub Actions - Larger Runners
Runner images affected
[ ] Ubuntu 20.04
[X] Ubuntu 22.04
[ ] macOS 11
[ ] macOS 12
[ ] macOS 13
[ ] Windows Server 2019
[ ] Windows Server 2022
Image version and build link
Current hosted runner is version: '2.305.0'
Is it regression?
n/a
Expected behavior
no errors
Actual behavior
it's broken
Repro steps
see description
@sivakolisetti this means there is a problem in your setup: packer cannot connect to the newly created VM, so it is outside of our scope
@sivakolisetti as already said, it looks like the issue is not on our side. Our builds work fine at the same time. Please provide more info about what you are trying to do.
Hello @sivakolisetti!
I am forced to close the issue due to the fact that it is not repeated in our infrastructure and on test runners. If you still have any questions related to this problem, then you can ask them here - we can always reopen the issue. In case of other problems, please open a new issue.
Separately, I can add that the described problem can be both transient and related to local settings in your organization, but it is not related to the work of the scripts themselves at the moment.
@mikhailkoliada is correct: our vnet does not have internet connectivity once the VM is created. We created a separate vnet and then the actions ran successfully.
Thanks to the folks who replied to this issue.
Hi @sivakolisetti, what did you add to the created vnet to accommodate SSH from the created VM? Did you need to add an outbound or inbound SSH rule?
Thanks.
| gharchive/issue | 2023-06-26T08:46:41 | 2025-04-01T06:37:44.287952 | {
"authors": [
"erik-bershel",
"greadtm",
"mikhailkoliada",
"sivakolisetti"
],
"repo": "actions/runner-images",
"url": "https://github.com/actions/runner-images/issues/7784",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1861571943 | Heavy cache miss count
Description:
A large collection of "cache miss" and only a handful of "cache hit".
Action version:
actions/setup-node@v3
Platform:
[x] Ubuntu
[ ] macOS
[ ] Windows
Runner type:
[x] Hosted
[ ] Self-hosted
Tools version:
- name: Setup node
uses: actions/setup-node@v3
with:
node-version: 18
cache: 'npm'
Repro steps:
- name: Setup node
uses: actions/setup-node@v3
with:
node-version: ${{ inputs.node-version }}
cache: 'npm'
- run: npm ci --verbose
shell: bash
Expected behavior:
A restore of previously downloaded artifacts
Actual behavior:
npm http fetch GET 200 https://registry.npmjs.org/locate-path 2237ms (cache miss)
npm http fetch GET 200 https://registry.npmjs.org/json-stable-stringify-without-jsonify 2232ms (cache miss)
npm http fetch GET 200 https://registry.npmjs.org/yocto-queue/-/yocto-queue-0.1.0.tgz 777ms (cache hit)
npm http fetch GET 200 https://registry.npmjs.org/update-browserslist-db 2424ms (cache miss)
npm http fetch GET 200 https://registry.npmjs.org/mime-types 2300ms (cache miss)
npm http fetch GET 200 https://registry.npmjs.org/lodash.merge 2297ms (cache miss)
npm http fetch GET 200 https://registry.npmjs.org/json-schema-traverse 2291ms (cache miss)
npm http fetch GET 200 https://dl.fontawesome.com/signed/fontawesome-pro/npm/fortawesome/fontawesome-common-types/6.4.0/fortawesome-fontawesome-common-types-6.4.0.tgz?created=xxx&expires=xxx&source=xxx&signature=xxx 6157ms (cache miss)
tons and tons of cache miss.
Like why was yocto-queue a hit but not fontawesome?
So this is being reported because you get charged for bandwidth when using Font Awesome. Sadly, in our case the basic example used here does not properly cache all items and I'm not following why. We get a hit restoring the cache blob, but it appears it's quite small compared to what I expect.
/home/runner/.npm
Received 24691139 of 24691139 (100.0%), 30.4 MBs/sec
Cache Size: ~24 MB (24691139 B)
Cache restored successfully
Cache restored from key: node-cache-Linux-npm-912ac010d33c9c8d3694535a04cbea9be930f020a6b09d60b707ae2ce2d4db8b
I know this action does NOT cache node_modules, but then I struggle to understand the benefit I'm gaining here. It's clearly caching and restoring something, but it doesn't cache the thing that costs us money :(
We then get billed for overages and Font Awesome support just tells us to follow this guide which we clearly do.
So the large collection of misses was resolved by manually deleting/purging all the caches from that repository. The default cache key I assume is the lockfile hash, so why this happened - I'm not sure. Now the only thing not caching is the links from a private registry.
This private registry (Font Awesome) does not appear to ever successfully cache itself. I assume this is because the non-node_modules cache is HTTP responses, and these expire after 20 minutes. So the cache is respecting the TTL and these become misses on builds.
I don't know if this means we should go back to rolling our own cache (caching node_modules, skipping install on a cache hit, etc.) or if I'm just missing something here.
Hello, @iBotPeaches ! Thanks for reporting this issue, we will take a closer look and see what can be done :)
I too am experiencing this same issue with FontAwesome. It is having a large impact on us and our budgets with overage charges.
We have many repositories that are facing this issue.
The only difference that I can see is that we are using yarn rather than npm.
@codepuncher - I dug into this a lot with FA, who honestly just kept redirecting me back to official GitHub docs on caching. You can't really use setup-node, etc with FA Pro and expect to keep your bandwidth.
We mixed two forms of cache now.
- name: Setup node
uses: actions/setup-node@v3
with:
node-version: ${{ inputs.node-version }}
cache: 'npm'
- uses: actions/cache@v3
id: npm-cache
with:
path: node_modules
key: ${{ inputs.node-version }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: ${{ inputs.node-version }}-node-${{ hashFiles('**/package-lock.json') }}
- run: npm ci
if: steps.npm-cache.outputs.cache-hit != 'true'
working-directory: ${{ inputs.directory }}
shell: bash
(this is in a composite action)
We restore cache using regular setup-node, but then we attempt to restore a 1-1 cache based on the exact hash of the lock file. If we find a match for that, we skip npm ci - which is what would eventually hit FA Pro servers again and bleed bandwidth due to what I described above.
If that cache is a miss, we already restored the npm global cache (the 1st cache) so our cold/miss timing is still pretty fast as we are just installing the FA Pro, the private packages and whatever caused the miss (new version, etc)
This has kept us under the 2gb bandwidth limit for Sept.
Hello @iBotPeaches,
The problem is not reproducible
1st run https://github.com/akv-demo/setup-node-test/actions/runs/7251833566/job/19755011576#step:4:9
Run npm ci --verbose
...
npm http fetch GET 200 https://registry.npmjs.org/font-awesome/-/font-awesome-4.7.0.tgz 93ms (cache miss)
npm http fetch POST 200 https://registry.npmjs.org/-/npm/v1/security/advisories/bulk 357ms
2nd run
Run npm ci --verbose
...
npm http fetch POST 200 https://registry.npmjs.org/-/npm/v1/security/advisories/bulk 127ms
The idea of the action is to cache the ~/.npm directory, which contains downloaded packages. Caching node_modules does not seem a good idea because its content can be out of sync.
It is impossible to find the exact reason why the ~/.npm directory does not contain the whole set of packages, but in theory it can be caused by the repository having an outdated package-lock.json.
Did it help?
Hi @dsame,
My experience was more for the Pro/Private option of FA, which does not appear to be cachable. Your test appears to use the public version of Font Awesome, so I don't think it's a truly comparable, replicable test.
I'm not saying my method is the cleanest, but it did solve our immediate problem of exceeding rate limits
Since my comment I've yet to hit a month where we exceed limits. I don't want to cache node_modules, but I did what I had to stay under usage limits.
Hello @iBotPeaches, thank you for the clarification. I suspected there was something related to the private package and wanted to get confirmation.
Nevertheless, it puzzles me that the dependencies from https://registry.npmjs.org are not cached as well, since I expect them to be stored in the ~/.npm folder and cached.
I understand you've solved the problem with the workaround, but can you please double-check whether a build on the local host has to reload the Font Awesome npm package with its dependencies? I still suspect the action has some flaw in generating the .npmrc config that might cause the problem.
In case the local build has the same problem, or you don't have time, please let us close the issue as "it does not relate to the action".
Hi @dsame,
I ran a few tests in a private repo like your configuration. With each change I got closer to our affected application, but each time I re-ran it - it seemed fine.
npm verb cli /opt/hostedtoolcache/node/18.19.0/x64/bin/node /opt/hostedtoolcache/node/18.19.0/x64/bin/npm
npm info using npm@10.2.3
npm info using node@v18.19.0
npm verb title npm ci
npm verb argv "ci" "--loglevel" "verbose"
npm verb logfile logs-max:10 dir:/home/runner/.npm/_logs/2023-12-19T15_53_14_549Z-
npm verb logfile /home/runner/.npm/_logs/2023-12-19T15_53_14_549Z-debug-0.log
npm http fetch POST 200 https://registry.npmjs.org/-/npm/v1/security/advisories/bulk 494ms
npm info run @fortawesome/fontawesome-common-types@6.5.1 postinstall node_modules/@fortawesome/fontawesome-common-types node attribution.js
npm info run @fortawesome/fontawesome-svg-core@6.5.1 postinstall node_modules/@fortawesome/fontawesome-svg-core node attribution.js
npm info run @fortawesome/fontawesome-common-types@6.5.1 postinstall { code: 0, signal: null }
npm info run @fortawesome/fontawesome-svg-core@6.5.1 postinstall { code: 0, signal: null }
{
"dependencies": {
"@fortawesome/fontawesome-svg-core": "^6.5.1",
"@fortawesome/pro-solid-svg-icons": "^6.5.1"
}
}
@fortawesome:registry=https://npm.fontawesome.com
//npm.fontawesome.com/:_authToken=${FONTAWESOME_NPM_AUTH_TOKEN}
This worked fine. So I removed my workaround on the actual affected application and it was immediate misses again. Unfortunately the difference between the real affected application and this sample is 98,000~ different node packages.
So at this point - I can't replicate in my test, but my real application continues to replicate. I'm happy with a workaround, but I understand if you can't replicate - you can't fix.
I'll close this and if others stumble upon this with FA Pro and have the same bandwidth overage issue - maybe they can use my sample that works and/or discover the root cause themselves.
Thanks
@iBotPeaches / @dsame / @codepuncher - Is this issue still occurring using actions/setup-node@v4?
Occurring for me!
Hello Everyone, Thank you for bringing this to our attention. We understand your concerns regarding Font Awesome and the caching behavior you're seeing.
Based on our investigation, we were unable to reproduce the issue you're experiencing with actions/setup-node@v4. The caching mechanism in actions/setup-node is designed to cache local npm dependencies (i.e., those fetched from the npm registry), but it does not extend to external resources like CDN assets. Since CDN resources such as Font Awesome are not part of the npm ecosystem, they aren't included in the npm cache and therefore are not cached by the GitHub Actions workflow.
Regarding your concern about npm dependencies from https://registry.npmjs.org not being cached as expected, please note that actions/setup-node caches the npm package cache (~/.npm), but it does not directly cache the node_modules directory. This means that dependencies fetched from the npm registry are cached in the npm cache folder, but they may not be restored directly into the node_modules folder on every run. If local caching is properly configured, there should be no need to reload Font Awesome (or other dependencies) on every build.
For more information on the caching behavior in GitHub Actions and actions/setup-node, you can refer to the following documentation:
GitHub Actions Cache Documentation
setup-node Action Documentation
We hope this helps clarify the caching behavior. Please feel free to reach out if you have any further questions or need additional assistance!
Thank you @iBotPeaches for the suggested workaround. We will give it a try tomorrow.
Our workaround was to use Kits as a way to reduce the significant bandwidth usage caused by our CI/CD because of pro icon packages. Our plan was to configure the Kit to include only the specific icons used in our project, which would have allowed us to uninstall the larger packages containing all icons. Unfortunately, this idea also proved to be a dead end since Kits are not supported in React Native.
As a result, we're left with either caching the entire node_modules folder as you suggested, or disabling CI/CD entirely... for the sake of icons 🤷♂️
We hope this caching issue can be addressed soon.
Hello @iBotPeaches, Thank you for your response, and apologies for the delay. You're absolutely right that the behavior around CDN assets like Font Awesome isn't fully addressed in the current documentation. It’s also true that the caching mechanism in actions/setup-node is primarily designed for npm dependencies, not node_modules and external CDN assets.
I’m glad to hear that your workaround caching node_modules alongside setup-node is working well for you. I agree that using a cold cache to handle potential misses is a solid approach, as it ensures both your npm dependencies and external assets are reliably available. However, please note that caching node_modules or external CDN assets (e.g., Font Awesome) is not implemented by default to avoid inconsistencies across environments (OS, Node.js versions).
We are actively working to update the documentation to provide more clarity on caching behaviors, and we’ll definitely consider your feedback when addressing future caching use cases.
If anything changes or you need further assistance, please don't hesitate to reach out.
Hello @iBotPeaches, Thanks for the response! The setup-node action already keys the cache based on variables such as Node.js version, OS, and architecture. This ensures that the cache is environment-specific, helping to avoid issues with mismatched dependencies and improving cache hit rates across different Node.js versions and platforms.
We are proceeding to close this issue, but we will keep it open for a documentation update in the setup-node action.
| gharchive/issue | 2023-08-22T14:21:29 | 2025-04-01T06:37:44.322098 | {
"authors": [
"MostefaKamalLala",
"aparnajyothi-y",
"codepuncher",
"dsame",
"dusan-trickovic",
"gzurbach",
"iBotPeaches"
],
"repo": "actions/setup-node",
"url": "https://github.com/actions/setup-node/issues/835",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
572790713 | Update/Add azure cli to 2.1.0
Tool information
Tool name: az cli
Add or update? update
Desired version: 2.1.0
Area for Triage:
Virtual environments affected
[ ] macOS 10.15
[ ] Ubuntu 16.04 LTS
[x ] Ubuntu 18.04 LTS
[ ] Windows Server 2016 R2
[x ] Windows Server 2019
Can this tool be installed during the build?
Don't know
Are you willing to submit a PR?
No
Hi @gregorybleiker!
Az-Cli 2.1.0 is already rolling out. You can find it in pre-release readmes here:
https://github.com/actions/virtual-environments/releases
Feel free to reopen the issue if you have any concerns.
Thanks!
Can I use this in my pipelines somehow? What is the ETA for the rollout?
| gharchive/issue | 2020-02-28T14:29:17 | 2025-04-01T06:37:44.347851 | {
"authors": [
"gregorybleiker",
"miketimofeev"
],
"repo": "actions/virtual-environments",
"url": "https://github.com/actions/virtual-environments/issues/479",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1265157954 | Update arithmeticController.js
We modified suma2
done...
saved
| gharchive/pull-request | 2022-06-08T18:51:05 | 2025-04-01T06:37:44.348893 | {
"authors": [
"oruiz01"
],
"repo": "actionsdemos/calculator",
"url": "https://github.com/actionsdemos/calculator/pull/545",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
978188779 | ⚠️ Processors has degraded performance
In c248588, Processors ($STATUS_URL) experienced degraded performance:
HTTP code: 200
Response time: 94 ms
Resolved: Processors performance has improved in 411518f.
| gharchive/issue | 2021-08-24T14:53:46 | 2025-04-01T06:37:44.444087 | {
"authors": [
"max-acumen"
],
"repo": "acumenlabs/status-page",
"url": "https://github.com/acumenlabs/status-page/issues/1428",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1401724352 | ⚠️ Processors has degraded performance
In b6e451f, Processors ($STATUS_URL) experienced degraded performance:
HTTP code: 200
Response time: 95 ms
Resolved: Processors performance has improved in 26696f8.
| gharchive/issue | 2022-10-07T22:44:09 | 2025-04-01T06:37:44.445552 | {
"authors": [
"danielshir"
],
"repo": "acumenlabs/status-page",
"url": "https://github.com/acumenlabs/status-page/issues/4287",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1571662857 | ⚠️ Processors has degraded performance
In 6bb5d44, Processors ($STATUS_URL) experienced degraded performance:
HTTP code: 200
Response time: 77 ms
Resolved: Processors performance has improved in e49e544.
| gharchive/issue | 2023-02-05T23:32:57 | 2025-04-01T06:37:44.446965 | {
"authors": [
"danielshir"
],
"repo": "acumenlabs/status-page",
"url": "https://github.com/acumenlabs/status-page/issues/4557",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2362459015 | Update Qleverfile.pubchem
Fixing error:
qlever index --system=native
To enable autocompletion, run the following command, and consider adding it to your `.bashrc` or `.zshrc`:
eval "$(register-python-argcomplete qlever)" && export QLEVER_ARGCOMPLETE_ENABLED=1
Command: index
echo '{ "languages-internal": [], "prefixes-external": [""], "ascii-prefixes-only": false, "num-triples-per-batch": 1000000 }' > pubchem.settings.json
ulimit -Sn 1048576; zcat pubchem.additional-ontologies.nt.gz nt.2024-02-03/*.nt.gz | IndexBuilderMain -F ttl -f - -i pubchem -s pubchem.settings.json --stxxl-memory 10G | tee pubchem.index-log.txt
No file matching "pubchem.additional-ontologies.nt.gz" found
Did you call `qlever get-data`? If you did, check GET_DATA_CMD and INPUT_FILES in the QLeverfile
@eltonfss @Qup42 The problem is that there are many of these ontologies and they have to be downloaded from various websites, which change frequently. Hence the comment at the top of the Qleverfile. As is, the file should be removed from INPUT_FILES, as @eltonfss suggests. Right now, I am trying out whether it still works. If it does, I will make the change and close this.
@eltonfss There is now a brand-new Qleverfile for PubChem, which also downloads the latest version of the ontologies, see 589d5b97ba2ffb68681423c3d844e4c249ae094a. Please try it out and let me know if it works for you.
| gharchive/pull-request | 2024-06-19T13:55:38 | 2025-04-01T06:37:44.450851 | {
"authors": [
"eltonfss",
"hannahbast"
],
"repo": "ad-freiburg/qlever-control",
"url": "https://github.com/ad-freiburg/qlever-control/pull/47",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2354212981 | Infinite Loop Trying to run listdir Example
When attempting to run the listdir example https://github.com/adafruit/Adafruit_CircuitPython_BLE_File_Transfer/blob/main/examples/ble_file_transfer_listdirs.py it's resulting in getting stuck in an infinite loop inside of here: https://github.com/adafruit/Adafruit_CircuitPython_BLE_File_Transfer/blob/main/adafruit_ble_file_transfer.py#L147 I added print statements and can see that the value of read is always 0. I also printed buffer and it was filled with \x00s
It's unclear to me if this worked in the past and perhaps the workflow API was changed, but the library was not, or perhaps this functionality was not ever working.
I tested with a Feather Sense with BLE Workflow enabled and an Itsy Bitsy nrf52840 as the client running the listdir script. I did originally notice this behavior first on the PC using Blinka_Bleio, but then moved off the PC to using two MCUs and confirmed seeing the same infinite looping behavior.
Did you figure this out?
I think I understand it a bit better, but wouldn't say I figured it out completely.
When everything is working as expected this library does run successfully on the micro-controller and does not infinitely loop.
It does still have an infinite loop when run on a PC under blinka_bleio. So this could be closed and if there is a desire to support that environment then an issue could be created over in that repo. For now I've moved to just using bleak module directly without blinka and blinka_bleio and I'm making progress that way.
The times that I did see this library loop infinitely on the MCU (mentioned in the original comment), I believe it was because the "server device" (the one running the BLE workflow) had gotten into a "bad state", ultimately caused by partially broken implementations while working on the PC / blinka_bleio version. I did not document specific examples that led to the bad state, but I did experience it a few times. I found that using ctrl+C / ctrl+D to restart the code.py or REPL would typically get it back into a good state. I think maybe once I ended up having to unplug / replug to let it fully reboot.
I'll close this one.
| gharchive/issue | 2024-06-14T23:03:49 | 2025-04-01T06:37:44.463241 | {
"authors": [
"FoamyGuy",
"tannewt"
],
"repo": "adafruit/Adafruit_CircuitPython_BLE_File_Transfer",
"url": "https://github.com/adafruit/Adafruit_CircuitPython_BLE_File_Transfer/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2254704856 | Different pool, different ConnectionManager
Update ConnectionManager to support multiple different pools at the same time.
Testing steps:
Have a device with onboard WiFi and either an ESP32SPI or a WIZNET5K
Connect both and make requests on both and see that they are from different devices.
You can also have a WIZNET5K create its request session first, and then disconnect it and verify the other device still works.
@anecdata here's another one that would be awesome if you could test (with everything else in main)
Since you easily have the ability to see where requests are coming from
Looks good from a testing perspective, maybe someone wants to code review.
@dhalbert once this is merged, I can take care of any conflicts and open up the 3x use new socketpool PRs
@justmobilize is this a bugfix version bump, or minor or major version bump, in terms of behavior?
@dhalbert I would consider it minor. It shouldn't break anything as it's only changing how pseudo private data is stored.
| gharchive/pull-request | 2024-04-20T20:22:12 | 2025-04-01T06:37:44.472244 | {
"authors": [
"anecdata",
"dhalbert",
"justmobilize"
],
"repo": "adafruit/Adafruit_CircuitPython_ConnectionManager",
"url": "https://github.com/adafruit/Adafruit_CircuitPython_ConnectionManager/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
653942875 | rtc libary in visual studio code
Hey,
I am trying to use the example code, but every time I get this error.
I think a library is missing in Visual Studio Code but I don't know which one (I already installed TinyWireM).
Can anyone help me?
These are the errors I get:
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'void USI_TWI_Master_Initialise()':
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:64:3: error: 'PORT_USI' was not declared in this scope
PORT_USI |=
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:66:11: error: 'PIN_USI_SDA' was not declared in this scope
<< PIN_USI_SDA); // Enable pullup on SDA, to set high as released state.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:69:11: error: 'PIN_USI_SCL' was not declared in this scope
<< PIN_USI_SCL); // Enable pullup on SCL, to set high as released state.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:71:3: error: 'DDR_USI' was not declared in this scope
DDR_USI |= (1 << PIN_USI_SCL); // Enable SCL as output.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:74:3: error: 'USIDR' was not declared in this scope
USIDR = 0xFF; // Preload dataregister with "released level" data.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:3: error: 'USICR' was not declared in this scope
USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:17: error: 'USISIE' was not declared in this scope
USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:33: error: 'USIOIE' was not declared in this scope
USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:76:17: error: 'USIWM1' was not declared in this scope
(1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:76:33: error: 'USIWM0' was not declared in this scope
(1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:77:17: error: 'USICS1' was not declared in this scope
(1 << USICS1) | (0 << USICS0) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:77:33: error: 'USICS0' was not declared in this scope
(1 << USICS1) | (0 << USICS0) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:78:17: error: 'USICLK' was not declared in this scope
(1 << USICLK) | // Software stobe as counter clock source
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:79:17: error: 'USITC' was not declared in this scope
(0 << USITC);
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:3: error: 'USISR' was not declared in this scope
USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:17: error: 'USISIF' was not declared in this scope
USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:33: error: 'USIOIF' was not declared in this scope
USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:49: error: 'USIPF' was not declared in this scope
USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:81:17: error: 'USIDC' was not declared in this scope
(1 << USIDC) | // Clear flags,
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:82:19: error: 'USICNT0' was not declared in this scope
(0x0 << USICNT0); // and reset counter.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'unsigned char USI_TWI_Start_Transceiver_With_Data(unsigned char*, unsigned char)':
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:13: error: 'USISIF' was not declared in this scope
(1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:29: error: 'USIOIF' was not declared in this scope
(1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:45: error: 'USIPF' was not declared in this scope
The terminal process terminated with exit code: 1
Terminal will be reused by tasks, press any key to close it.
Executing task: platformio run <
Processing uno (platform: atmelavr; board: uno; framework: arduino)
Verbose mode can be enabled via -v, --verbose option
CONFIGURATION: https://docs.platformio.org/page/boards/atmelavr/uno.html
PLATFORM: Atmel AVR 2.2.0 > Arduino Uno
HARDWARE: ATMEGA328P 16MHz, 2KB RAM, 31.50KB Flash
DEBUG: Current (simavr) On-board (simavr)
PACKAGES:
framework-arduino-avr 5.0.0
toolchain-atmelavr 1.50400.190710 (5.4.0)
LDF: Library Dependency Finder -> http://bit.ly/configure-pio-ldf
LDF Modes: Finder ~ chain, Compatibility ~ soft
Found 9 compatible libraries
Scanning dependencies...
Dependency Graph
|-- 1.10.0
| |-- 1.1.0
| |-- 1.0
|-- 1.0.12
Building in release mode
Compiling .pio/build/uno/lib9ac/TinyWireM_ID797/USI_TWI_Master.cpp.o
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'void USI_TWI_Master_Initialise()':
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:64:3: error: 'PORT_USI' was not declared in this scope
PORT_USI |=
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:66:11: error: 'PIN_USI_SDA' was not declared in this scope
<< PIN_USI_SDA); // Enable pullup on SDA, to set high as released state.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:69:11: error: 'PIN_USI_SCL' was not declared in this scope
<< PIN_USI_SCL); // Enable pullup on SCL, to set high as released state.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:71:3: error: 'DDR_USI' was not declared in this scope
DDR_USI |= (1 << PIN_USI_SCL); // Enable SCL as output.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:74:3: error: 'USIDR' was not declared in this scope
Compiling .pio/build/uno/liba29/Wire/utility/twi.c.o
USIDR = 0xFF; // Preload dataregister with "released level" data.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:3: error: 'USICR' was not declared in this scope
USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:17: error: 'USISIE' was not declared in this scope
USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:33: error: 'USIOIE' was not declared in this scope
USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:76:17: error: 'USIWM1' was not declared in this scope
(1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:76:33: error: 'USIWM0' was not declared in this scope
(1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:77:17: error: 'USICS1' was not declared in this scope
(1 << USICS1) | (0 << USICS0) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:77:33: error: 'USICS0' was not declared in this scope
(1 << USICS1) | (0 << USICS0) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:78:17: error: 'USICLK' was not declared in this scope
(1 << USICLK) | // Software stobe as counter clock source
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:79:17: error: 'USITC' was not declared in this scope
(0 << USITC);
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:3: error: 'USISR' was not declared in this scope
USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:17: error: 'USISIF' was not declared in this scope
USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:33: error: 'USIOIF' was not declared in this scope
USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:49: error: 'USIPF' was not declared in this scope
USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:81:17: error: 'USIDC' was not declared in this scope
(1 << USIDC) | // Clear flags,
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:82:19: error: 'USICNT0' was not declared in this scope
(0x0 << USICNT0); // and reset counter.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'unsigned char USI_TWI_Start_Transceiver_With_Data(unsigned char*, unsigned char)':
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:13: error: 'USISIF' was not declared in this scope
(1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:29: error: 'USIOIF' was not declared in this scope
(1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:45: error: 'USIPF' was not declared in this scope
(1 << USISIF) | (1 << USIOIF) | (1 << USIPF) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:162:13: error: 'USIDC' was not declared in this scope
(1 << USIDC) | // Prepare register value to: Clear flags, and
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:163:15: error: 'USICNT0' was not declared in this scope
(0x0 << USICNT0); // set USI to shift 8 bits i.e. count 16 clock edges.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:229:7: error: 'PORT_USI' was not declared in this scope
PORT_USI &= ~(1 << PIN_USI_SCL); // Pull SCL LOW.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:229:26: error: 'PIN_USI_SCL' was not declared in this scope
PORT_USI &= ~(1 << PIN_USI_SCL); // Pull SCL LOW.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:230:7: error: 'USIDR' was not declared in this scope
USIDR = *(msg++); // Setup data.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:234:7: error: 'DDR_USI' was not declared in this scope
DDR_USI &= ~(1 << PIN_USI_SDA); // Enable SDA as input.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:234:25: error: 'PIN_USI_SDA' was not declared in this scope
DDR_USI &= ~(1 << PIN_USI_SDA); // Enable SDA as input.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:266:7: error: 'DDR_USI' was not declared in this scope
DDR_USI &= ~(1 << PIN_USI_SDA); // Enable SDA as input.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:266:25: error: 'PIN_USI_SDA' was not declared in this scope
DDR_USI &= ~(1 << PIN_USI_SDA); // Enable SDA as input.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:272:9: error: 'USIDR' was not declared in this scope
USIDR = 0xFF; // Load NACK to confirm End Of Transmission.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:274:9: error: 'USIDR' was not declared in this scope
USIDR = 0x00; // Load ACK. Set data register bit 7 (output for SDA) low.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'unsigned char USI_TWI_Master_Transfer(unsigned char)':
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:295:3: error: 'USISR' was not declared in this scope
USISR = temp; // Set USISR according to temp.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:297:16: error: 'USISIE' was not declared in this scope
temp = (0 << USISIE) | (0 << USIOIE) | // Interrupts disabled
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:297:32: error: 'USIOIE' was not declared in this scope
temp = (0 << USISIE) | (0 << USIOIE) | // Interrupts disabled
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:298:16: error: 'USIWM1' was not declared in this scope
Compiling .pio/build/uno/lib864/RTClib_ID83/RTClib.cpp.o
(1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:298:32: error: 'USIWM0' was not declared in this scope
(1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:299:16: error: 'USICS1' was not declared in this scope
(1 << USICS1) | (0 << USICS0) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:299:32: error: 'USICS0' was not declared in this scope
(1 << USICS1) | (0 << USICS0) |
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:300:16: error: 'USICLK' was not declared in this scope
(1 << USICLK) | // Software clock strobe as source.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:301:16: error: 'USITC' was not declared in this scope
(1 << USITC); // Toggle Clock Port.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:304:5: error: 'USICR' was not declared in this scope
USICR = temp; // Generate positve SCL edge.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:305:14: error: 'PIN_USI' was not declared in this scope
while (!(PIN_USI & (1 << PIN_USI_SCL)))
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:305:30: error: 'PIN_USI_SCL' was not declared in this scope
while (!(PIN_USI & (1 << PIN_USI_SCL)))
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:309:28: error: 'USIOIF' was not declared in this scope
} while (!(USISR & (1 << USIOIF))); // Check for transfer complete.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:312:10: error: 'USIDR' was not declared in this scope
temp = USIDR; // Read out data.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:314:3: error: 'DDR_USI' was not declared in this scope
DDR_USI |= (1 << PIN_USI_SDA); // Enable SDA as output.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:314:20: error: 'PIN_USI_SDA' was not declared in this scope
DDR_USI |= (1 << PIN_USI_SDA); // Enable SDA as output.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'unsigned char USI_TWI_Master_Start()':
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:324:3: error: 'PORT_USI' was not declared in this scope
PORT_USI |= (1 << PIN_USI_SCL); // Release SCL.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:324:21: error: 'PIN_USI_SCL' was not declared in this scope
PORT_USI |= (1 << PIN_USI_SCL); // Release SCL.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:330:22: error: 'PIN_USI_SDA' was not declared in this scope
PORT_USI &= ~(1 << PIN_USI_SDA); // Force SDA LOW.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:336:9: error: 'USISR' was not declared in this scope
if (!(USISR & (1 << USISIF))) {
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:336:23: error: 'USISIF' was not declared in this scope
if (!(USISR & (1 << USISIF))) {
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'unsigned char USI_TWI_Master_Stop()':
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:349:3: error: 'PORT_USI' was not declared in this scope
PORT_USI &= ~(1 << PIN_USI_SDA); // Pull SDA low.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:349:22: error: 'PIN_USI_SDA' was not declared in this scope
PORT_USI &= ~(1 << PIN_USI_SDA); // Pull SDA low.
^
Compiling .pio/build/uno/lib631/SimpleDHT_ID849/SimpleDHT.cpp.o
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:350:21: error: 'PIN_USI_SCL' was not declared in this scope
PORT_USI |= (1 << PIN_USI_SCL); // Release SCL.
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:351:12: error: 'PIN_USI' was not declared in this scope
while (!(PIN_USI & (1 << PIN_USI_SCL)))
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:358:9: error: 'USISR' was not declared in this scope
if (!(USISR & (1 << USIPF))) {
^
/home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:358:23: error: 'USIPF' was not declared in this scope
if (!(USISR & (1 << USIPF))) {
^
*** [.pio/build/uno/lib9ac/TinyWireM_ID797/USI_TWI_Master.cpp.o] Error 1
============================================= [FAILED] Took 1.02 seconds =============================================
The terminal process terminated with exit code: 1
Terminal will be reused by tasks, press any key to close it.
Why are you using TinyWireM? This is a Wire replacement for ATtiny microcontrollers that do not have a real I2C port. It instead relies on the “universal serial interface” (USI) to provide this functionality.
The Uno does not have an USI, but it does have a regular I2C port: you should use the regular Wire library with it. Note that RTClib.cpp uses the TinyWireM library only when being compiled for the ATtiny85, which is clearly not your case.
Hey,
I am not using the TinyWireM library, I just added the library.
This is the example code I am using:
#include <Wire.h>
#include "RTClib.h"

RTC_DS1307 rtc;
char daysOfTheWeek[7][12] = {"Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"};
void setup () {
Serial.begin(9600);
#ifndef ESP8266
while (!Serial); // wait for serial port to connect. Needed for native USB
#endif
if (! rtc.begin()) {
Serial.println("Couldn't find RTC");
Serial.flush();
abort();
}
if (! rtc.isrunning()) {
Serial.println("RTC is NOT running, let's set the time!");
// When time needs to be set on a new device, or after a power loss, the
// following line sets the RTC to the date & time this sketch was compiled
rtc.adjust(DateTime(F(__DATE__), F(__TIME__)));
// This line sets the RTC with an explicit date & time, for example to set
// January 21, 2014 at 3am you would call:
// rtc.adjust(DateTime(2014, 1, 21, 3, 0, 0));
}
// When time needs to be re-set on a previously configured device, the
// following line sets the RTC to the date & time this sketch was compiled
// rtc.adjust(DateTime(F(__DATE__), F(__TIME__)));
// This line sets the RTC with an explicit date & time, for example to set
// January 21, 2014 at 3am you would call:
// rtc.adjust(DateTime(2014, 1, 21, 3, 0, 0));
}
What do you mean by “i just add it the libary”? You should not need to “add” TinyWireM anywhere to compile this example. If you can't prevent platform.io from trying to compile it anyway, you should ask for help in a forum dedicated to platform.io.
What do you mean by “i just add it the libary”? You should not need to “add” TinyWireM anywhere to compile this example. If you can't prevent platform.io from trying to compile it anyway, you should ask for help in a forum dedicated to platform.io.
ok thank you for the help.
@edgar-bonet this is a feature in platformio where all dependencies are compiled, even when dependencies are not referenced in code for a platform. We've let @ivankravets know; for now folks have to manually remove the unused library.
Sorry for this issue. Please use the lib_ignore option to skip libraries from the build process.
This provided some more clarity for me on the TinyWireM issue in VS Code:
https://github.com/platformio/platformio-core/issues/3543
Here is an example of the suggested fix in the platformio.ini file:
[env:featheresp32]
platform = espressif32
board = featheresp32
framework = arduino
lib_ignore = TinyWireM
| gharchive/issue | 2020-07-09T10:15:22 | 2025-04-01T06:37:44.550197 | {
"authors": [
"edgar-bonet",
"ivankravets",
"jeroen24",
"joshlikespics",
"ladyada"
],
"repo": "adafruit/RTClib",
"url": "https://github.com/adafruit/RTClib/issues/175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
171557032 | Noob qstn
Can this be installed to sit on a website rather than a server?
How would I do that?
You would install this onto a server. The server will then expose an IP address:port to the public.
You can access that IP address:port in your web browser.
I'm afraid I can't help with this issue through GitHub issues. These issues are just for making suggestions or submitting bug reports with Staytus. I cannot help individually with installing or running the application in the wild. I'd suggest posting your issues on a site like Stack Overflow.
| gharchive/issue | 2016-08-17T01:52:21 | 2025-04-01T06:37:44.578346 | {
"authors": [
"SKSJeffrey",
"TomasHurtz",
"adamcooke"
],
"repo": "adamcooke/staytus",
"url": "https://github.com/adamcooke/staytus/issues/94",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2177490444 | 🛑 mastodon-relay.moew.science is down
In 8bb68c5, mastodon-relay.moew.science (https://mastodon-relay.moew.science/actor) was down:
HTTP code: 0
Response time: 0 ms
Resolved: mastodon-relay.moew.science is back up in 638770b after 29 minutes.
| gharchive/issue | 2024-03-10T02:29:23 | 2025-04-01T06:37:44.603078 | {
"authors": [
"adamus1red"
],
"repo": "adamus1red/ActivityPub-Relays",
"url": "https://github.com/adamus1red/ActivityPub-Relays/issues/11149",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2185684982 | 🛑 relay.froth.zone is down
In 7f563fb, relay.froth.zone (https://relay.froth.zone/actor) was down:
HTTP code: 503
Response time: 100 ms
Resolved: relay.froth.zone is back up in 60b155d after 23 minutes.
| gharchive/issue | 2024-03-14T08:00:09 | 2025-04-01T06:37:44.606154 | {
"authors": [
"adamus1red"
],
"repo": "adamus1red/ActivityPub-Relays",
"url": "https://github.com/adamus1red/ActivityPub-Relays/issues/11216",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1500080694 | 🛑 relay.fedi.agency is down
In d93d22d, relay.fedi.agency (https://relay.fedi.agency/actor) was down:
HTTP code: 502
Response time: 533 ms
Resolved: relay.fedi.agency is back up in ba983fe.
| gharchive/issue | 2022-12-16T11:51:00 | 2025-04-01T06:37:44.608805 | {
"authors": [
"adamus1red"
],
"repo": "adamus1red/ActivityPub-Relays",
"url": "https://github.com/adamus1red/ActivityPub-Relays/issues/1996",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1510174104 | 🛑 relay.homunyan.com is down
In 3662ded, relay.homunyan.com (https://relay.homunyan.com/actor) was down:
HTTP code: 0
Response time: 0 ms
Resolved: relay.homunyan.com is back up in f5a4338.
| gharchive/issue | 2022-12-24T19:01:50 | 2025-04-01T06:37:44.611859 | {
"authors": [
"adamus1red"
],
"repo": "adamus1red/ActivityPub-Relays",
"url": "https://github.com/adamus1red/ActivityPub-Relays/issues/2416",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1552106726 | 🛑 relay.wagnersnetz.de is down
In 71039ac, relay.wagnersnetz.de (https://relay.wagnersnetz.de/actor) was down:
HTTP code: 0
Response time: 0 ms
Resolved: relay.wagnersnetz.de is back up in 4146357.
| gharchive/issue | 2023-01-22T12:55:16 | 2025-04-01T06:37:44.615050 | {
"authors": [
"adamus1red"
],
"repo": "adamus1red/ActivityPub-Relays",
"url": "https://github.com/adamus1red/ActivityPub-Relays/issues/2902",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1575132118 | 🛑 relay.wagnersnetz.de is down
In e2f72dd, relay.wagnersnetz.de (https://relay.wagnersnetz.de/actor) was down:
HTTP code: 0
Response time: 0 ms
Resolved: relay.wagnersnetz.de is back up in 0a816a8.
| gharchive/issue | 2023-02-07T22:41:18 | 2025-04-01T06:37:44.618135 | {
"authors": [
"adamus1red"
],
"repo": "adamus1red/ActivityPub-Relays",
"url": "https://github.com/adamus1red/ActivityPub-Relays/issues/3292",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1795155439 | 🛑 relay.masto.social is down
In 73e53b6, relay.masto.social (https://relay.masto.social/actor) was down:
HTTP code: 0
Response time: 0 ms
Resolved: relay.masto.social is back up in 8247ee2.
| gharchive/issue | 2023-07-08T22:50:27 | 2025-04-01T06:37:44.620747 | {
"authors": [
"adamus1red"
],
"repo": "adamus1red/ActivityPub-Relays",
"url": "https://github.com/adamus1red/ActivityPub-Relays/issues/6452",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2021982675 | 🛑 relay.mstdn.live is down
In 20aa525, relay.mstdn.live (https://relay.mstdn.live/actor) was down:
HTTP code: 0
Response time: 0 ms
Resolved: relay.mstdn.live is back up in 609ea60 after 1 hour, 4 minutes.
| gharchive/issue | 2023-12-02T10:28:35 | 2025-04-01T06:37:44.623127 | {
"authors": [
"adamus1red"
],
"repo": "adamus1red/ActivityPub-Relays",
"url": "https://github.com/adamus1red/ActivityPub-Relays/issues/9723",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2033781309 | 🛑 relay.fedi.agency is down
In f621722, relay.fedi.agency (https://relay.fedi.agency/actor) was down:
HTTP code: 502
Response time: 4564 ms
Resolved: relay.fedi.agency is back up in 67e4fdb after 15 minutes.
| gharchive/issue | 2023-12-09T10:15:04 | 2025-04-01T06:37:44.625495 | {
"authors": [
"adamus1red"
],
"repo": "adamus1red/ActivityPub-Relays",
"url": "https://github.com/adamus1red/ActivityPub-Relays/issues/9936",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
113775310 | README: Update setup/development instructions
Verify that the current docs are correct and improve if we can.
@gauntface Current workflow I'm using:
Development:
nodemon server/app.js && gulp dev`
Production build:
gulp
Otherwise, we drop anything to do with AppEngine that's already in there. Anything else worth adding?
| gharchive/issue | 2015-10-28T09:18:57 | 2025-04-01T06:37:44.664482 | {
"authors": [
"addyosmani"
],
"repo": "addyosmani/app-shell",
"url": "https://github.com/addyosmani/app-shell/issues/13",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1316642301 | Is development stopped on this extension?
I want to know whether development has stopped on this extension.
Asking since the last update was more than 2 years ago.
Hi @akash07k - it's a good question.
It's true that no new functionality has been added for a while, but I believe the debugger still works in the latest version of VSCode and I'm happy to accept PRs for any bug fixes or new features people want to add.
Ok, so does the last release work well with latest versions of VS Code?
Also, will we get code autocompletion support?
Yes, the current release of the Android extension still works with the latest version of VSCode.
It includes:
Debugging support (breakpoints, stepping, evaluate local variables, etc)
View logcat
Java code-completion
Also, will we get code autocompletion support?
Java code-completion for standard Android libraries should work. Pressing ctrl + space should bring up relevant identifiers if they don't appear automatically.
| gharchive/issue | 2022-07-25T10:55:14 | 2025-04-01T06:37:44.670379 | {
"authors": [
"adelphes",
"akash07k"
],
"repo": "adelphes/android-dev-ext",
"url": "https://github.com/adelphes/android-dev-ext/issues/135",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
578043232 | DNS challenge with cloudflare issue
I am using your docker-compose file with latest letsencrypt-dns and cloudflare as dns provider.
When starting the docker container it get the following output:
letsencrypt-dns | 2020-03-09 16:18:55 circus[1] [INFO] Starting master on pid 1
letsencrypt-dns | 2020-03-09 16:18:55 circus[1] [INFO] Arbiter now waiting for commands
letsencrypt-dns | 2020-03-09 16:18:55 circus[1] [INFO] crond started
letsencrypt-dns | 2020-03-09 16:18:55 circus[1] [INFO] watch-domains started
letsencrypt-dns | 2020-03-09 16:18:55 [17] | #### Registering Let's Encrypt account if needed ####
letsencrypt-dns | 2020-03-09 16:18:55 [17] | Saving debug log to /etc/letsencrypt/logs/letsencrypt.log
letsencrypt-dns | 2020-03-09 16:18:55 [17] | There is an existing account; registration of a duplicate account with this command is currently unsupported.
letsencrypt-dns | 2020-03-09 16:18:55 [17] | #### Clean autorestart/autocmd jobs
letsencrypt-dns | 2020-03-09 16:18:55 [17] | #### Creating missing certificates if needed (~1min for each) ####
letsencrypt-dns | 2020-03-09 16:18:55 [17] | >>> Creating a certificate for domain(s): -d nextcloud.mydomain.de
letsencrypt-dns | 2020-03-09 16:18:56 [17] | Saving debug log to /etc/letsencrypt/logs/letsencrypt.log
letsencrypt-dns | 2020-03-09 16:18:56 [17] | Plugins selected: Authenticator manual, Installer None
letsencrypt-dns | 2020-03-09 16:18:57 [17] | Obtaining a new certificate
letsencrypt-dns | 2020-03-09 16:18:57 [17] | Performing the following challenges:
letsencrypt-dns | 2020-03-09 16:18:57 [17] | dns-01 challenge for nextcloud.mydomain.de
letsencrypt-dns | 2020-03-09 16:18:57 [17] | Running manual-auth-hook command: /var/lib/letsencrypt/hooks/authenticator.sh
letsencrypt-dns | 2020-03-09 16:18:59 [17] | manual-auth-hook command "/var/lib/letsencrypt/hooks/authenticator.sh" returned error code 1
letsencrypt-dns | 2020-03-09 16:18:59 [17] | Error output from manual-auth-hook command authenticator.sh:
letsencrypt-dns | 2020-03-09 16:18:59 [17] | Traceback (most recent call last):
letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/bin/lexicon", line 8, in <module>
letsencrypt-dns | 2020-03-09 16:18:59 [17] | sys.exit(main())
letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/cli.py", line 117, in main
letsencrypt-dns | 2020-03-09 16:18:59 [17] | results = client.execute()
letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/client.py", line 77, in execute
letsencrypt-dns | 2020-03-09 16:18:59 [17] | self.provider.authenticate()
letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/providers/base.py", line 69, in authenticate
letsencrypt-dns | 2020-03-09 16:18:59 [17] | return self._authenticate()
letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 32, in _authenticate
letsencrypt-dns | 2020-03-09 16:18:59 [17] | payload = self._get('/zones', {
letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/providers/base.py", line 142, in _get
letsencrypt-dns | 2020-03-09 16:18:59 [17] | return self._request('GET', url, query_params=query_params)
letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 146, in _request
letsencrypt-dns | 2020-03-09 16:18:59 [17] | response.raise_for_status()
letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
letsencrypt-dns | 2020-03-09 16:18:59 [17] | raise HTTPError(http_error_msg, response=self)
letsencrypt-dns | 2020-03-09 16:18:59 [17] | requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.de&status=active
letsencrypt-dns | 2020-03-09 16:18:59 [17] | Waiting for verification...
letsencrypt-dns | 2020-03-09 16:19:01 [17] | Challenge failed for domain nextcloud.mydomain.de
letsencrypt-dns | 2020-03-09 16:19:01 [17] | dns-01 challenge for nextcloud.mydomain.de
letsencrypt-dns | 2020-03-09 16:19:01 [17] | Cleaning up challenges
I am a little bit lost as this is the first time I use your docker container and also the first time I use Cloudflare. I created the DNS zone at Cloudflare very minimalistically (just an A record for the main domain, but no A record for nextcloud.mydomain.de, as that should not be required when doing DNS challenges). As far as I understand, it is not even necessary that Cloudflare is my active nameserver in this phase of the process (seeding the TXT record) - only at validation of the DNS challenge does Cloudflare need to be the active nameserver for my domain.
At Cloudflare this token has all possible permissions (account + zone) across all resources (accounts and zones), so I think it is not a permission problem.
I made the Cloudflare nameservers active for my domain (confirmed as active by the Cloudflare web UI) for a second run but still get the same issue (do I have to wait 24h to be sure that the Let's Encrypt CA is picking up the right nameservers?).
Please help.
OMG!
I filled the env variable LEXICON_PROVIDER_OPTIONS with --auth-username=myname@mydomain.de --auth-token=cloudflare_api_token, but instead of a Cloudflare API Token I should use the Cloudflare API Key for --auth-token - the naming is a bit confusing here...
Now it works. Maybe we should improve the documentation with more examples (at least for all DNS providers with confusing naming).
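For anyone else who trips over this, here is a minimal sketch of the relevant docker-compose environment entries using the global API key (LEXICON_PROVIDER_OPTIONS is the variable discussed above; LEXICON_PROVIDER is assumed to be the companion variable for selecting the provider, and the key value is of course a placeholder):
environment:
  - LEXICON_PROVIDER=cloudflare
  - LEXICON_PROVIDER_OPTIONS=--auth-username=myname@mydomain.de --auth-token=<global API key, not a scoped API token>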
A user-friendly error message instead of a stacktrace would be cool.
Yes, I agree that the error is absolutely not obvious here. Cloudflare is especially pathological in this regard, since you have both global API keys and the more regular scoped API tokens. Currently only global API keys are supported here, and an upstream issue is open to get the scoped API tokens (https://github.com/AnalogJ/lexicon/pull/460).
Getting a nice error from my Docker is quite difficult however, since Lexicon is not sending a fine-grained error about what was wrong. I think it would be quite useful to raise an issue on the Lexicon GitHub project (and continue there, since I am also one of the maintainers of that project).
I close the issue here, since the error is "technically" solved, but do not hesitate to give further feedbacks!
Hello guys, I tried to use the Global API Key as @daniela-waranie suggested but it didn't work for me either. I've been using the docker image for this but I don't know if there is any difference between native and docker run.
API Token log
dnsrobocert | Saving debug log to /etc/letsencrypt/logs/letsencrypt.log
dnsrobocert | Plugins selected: Authenticator manual, Installer None
dnsrobocert | Obtaining a new certificate
dnsrobocert | Performing the following challenges:
dnsrobocert | dns-01 challenge for api.mydomain.com
dnsrobocert | Running manual-auth-hook command: /usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"
dnsrobocert | manual-auth-hook command "/usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"" returned error code 1
dnsrobocert | Error output from manual-auth-hook command python3:
dnsrobocert | 2020-04-19 23:23:51 0f6824afd7a9 __main__[73] ERROR Error while executing the `auth` hook:
dnsrobocert | 2020-04-19 23:23:51 0f6824afd7a9 __main__[73] ERROR 400 Client Error: Bad Request for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active
dnsrobocert | Traceback (most recent call last):
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 48, in main
dnsrobocert | globals()[parsed_args.type](dnsrobocert_config, parsed_args.lineage)
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 63, in auth
dnsrobocert | _txt_challenge(profile, token, domain, action="create")
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 174, in _txt_challenge
dnsrobocert | Client(lexicon_config).execute()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/client.py", line 77, in execute
dnsrobocert | self.provider.authenticate()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 69, in authenticate
dnsrobocert | return self._authenticate()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 32, in _authenticate
dnsrobocert | payload = self._get('/zones', {
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 142, in _get
dnsrobocert | return self._request('GET', url, query_params=query_params)
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 146, in _request
dnsrobocert | response.raise_for_status()
dnsrobocert | File "/usr/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
dnsrobocert | raise HTTPError(http_error_msg, response=self)
dnsrobocert | requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active
dnsrobocert |
dnsrobocert | Waiting for verification...
dnsrobocert | Challenge failed for domain api.mydomain.com
dnsrobocert | dns-01 challenge for api.mydomain.com
dnsrobocert | Cleaning up challenges
dnsrobocert | Running manual-cleanup-hook command: /usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"
dnsrobocert | manual-cleanup-hook command "/usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"" returned error code 1
dnsrobocert | Error output from manual-cleanup-hook command python3:
dnsrobocert | 2020-04-19 23:23:56 0f6824afd7a9 __main__[75] ERROR Error while executing the `cleanup` hook:
dnsrobocert | 2020-04-19 23:23:56 0f6824afd7a9 __main__[75] ERROR 400 Client Error: Bad Request for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active
dnsrobocert | Traceback (most recent call last):
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 48, in main
dnsrobocert | globals()[parsed_args.type](dnsrobocert_config, parsed_args.lineage)
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 125, in cleanup
dnsrobocert | _txt_challenge(profile, token, domain, action="delete")
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 174, in _txt_challenge
dnsrobocert | Client(lexicon_config).execute()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/client.py", line 77, in execute
dnsrobocert | self.provider.authenticate()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 69, in authenticate
dnsrobocert | return self._authenticate()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 32, in _authenticate
dnsrobocert | payload = self._get('/zones', {
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 142, in _get
dnsrobocert | return self._request('GET', url, query_params=query_params)
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 146, in _request
dnsrobocert | response.raise_for_status()
dnsrobocert | File "/usr/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
dnsrobocert | raise HTTPError(http_error_msg, response=self)
dnsrobocert | requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active
dnsrobocert |
dnsrobocert | Some challenges have failed.
dnsrobocert | IMPORTANT NOTES:
dnsrobocert | - The following errors were reported by the server:
dnsrobocert |
dnsrobocert | Domain: api.mydomain.com
dnsrobocert | Type: dns
dnsrobocert | Detail: DNS problem: NXDOMAIN looking up TXT for
dnsrobocert | _acme-challenge.api.mydomain.com - check that a DNS record
dnsrobocert | exists for this domain
dnsrobocert | ----------
dnsrobocert | 2020-04-19 23:23:56 0f6824afd7a9 dnsrobocert.core.main[1] ERROR An error occurred while processing certificate config `{'domains': ['api.mydomain.com'], 'profile': 'maestro_profile'}`:
dnsrobocert | Command '['/usr/bin/python3', '-m', 'dnsrobocert.core.certbot', 'certonly', '-n', '--config-dir', '/etc/letsencrypt', '--work-dir', '/etc/letsencrypt/workdir', '--logs-dir', '/etc/letsencrypt/logs', '--manual', '--preferred-challenges=dns', '--manual-auth-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--manual-cleanup-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--manual-public-ip-logging-ok', '--expand', '--deploy-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t deploy -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--server', 'https://acme-staging-v02.api.letsencrypt.org/directory', '--cert-name', 'api.mydomain.com', '-d', 'api.mydomain.com']' returned non-zero exit status 1.
dnsrobocert | 2020-04-19 23:23:56 0f6824afd7a9 dnsrobocert.core.main[1] INFO Revoke and delete certificates if needed
Global API Key log
dnsrobocert | Saving debug log to /etc/letsencrypt/logs/letsencrypt.log
dnsrobocert | Plugins selected: Authenticator manual, Installer None
dnsrobocert | Obtaining a new certificate
dnsrobocert | Performing the following challenges:
dnsrobocert | dns-01 challenge for api.mydomain.com
dnsrobocert | Running manual-auth-hook command: /usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"
dnsrobocert | manual-auth-hook command "/usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"" returned error code 1
dnsrobocert | Error output from manual-auth-hook command python3:
dnsrobocert | 2020-04-19 23:30:44 0f6824afd7a9 __main__[81] ERROR Error while executing the `auth` hook:
dnsrobocert | 2020-04-19 23:30:44 0f6824afd7a9 __main__[81] ERROR 403 Client Error: Forbidden for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active
dnsrobocert | Traceback (most recent call last):
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 48, in main
dnsrobocert | globals()[parsed_args.type](dnsrobocert_config, parsed_args.lineage)
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 63, in auth
dnsrobocert | _txt_challenge(profile, token, domain, action="create")
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 174, in _txt_challenge
dnsrobocert | Client(lexicon_config).execute()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/client.py", line 77, in execute
dnsrobocert | self.provider.authenticate()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 69, in authenticate
dnsrobocert | return self._authenticate()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 32, in _authenticate
dnsrobocert | payload = self._get('/zones', {
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 142, in _get
dnsrobocert | return self._request('GET', url, query_params=query_params)
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 146, in _request
dnsrobocert | response.raise_for_status()
dnsrobocert | File "/usr/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
dnsrobocert | raise HTTPError(http_error_msg, response=self)
dnsrobocert | requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active
dnsrobocert |
dnsrobocert | Waiting for verification...
dnsrobocert | Challenge failed for domain api.mydomain.com
dnsrobocert | dns-01 challenge for api.mydomain.com
dnsrobocert | Cleaning up challenges
dnsrobocert | Running manual-cleanup-hook command: /usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"
dnsrobocert | manual-cleanup-hook command "/usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"" returned error code 1
dnsrobocert | Error output from manual-cleanup-hook command python3:
dnsrobocert | 2020-04-19 23:30:50 0f6824afd7a9 __main__[83] ERROR Error while executing the `cleanup` hook:
dnsrobocert | 2020-04-19 23:30:50 0f6824afd7a9 __main__[83] ERROR 403 Client Error: Forbidden for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active
dnsrobocert | Traceback (most recent call last):
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 48, in main
dnsrobocert | globals()[parsed_args.type](dnsrobocert_config, parsed_args.lineage)
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 125, in cleanup
dnsrobocert | _txt_challenge(profile, token, domain, action="delete")
dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 174, in _txt_challenge
dnsrobocert | Client(lexicon_config).execute()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/client.py", line 77, in execute
dnsrobocert | self.provider.authenticate()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 69, in authenticate
dnsrobocert | return self._authenticate()
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 32, in _authenticate
dnsrobocert | payload = self._get('/zones', {
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 142, in _get
dnsrobocert | return self._request('GET', url, query_params=query_params)
dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 146, in _request
dnsrobocert | response.raise_for_status()
dnsrobocert | File "/usr/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
dnsrobocert | raise HTTPError(http_error_msg, response=self)
dnsrobocert | requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active
dnsrobocert |
dnsrobocert | Some challenges have failed.
dnsrobocert | IMPORTANT NOTES:
dnsrobocert | - The following errors were reported by the server:
dnsrobocert |
dnsrobocert | Domain: api.mydomain.com
dnsrobocert | Type: dns
dnsrobocert | Detail: DNS problem: NXDOMAIN looking up TXT for
dnsrobocert | _acme-challenge.api.mydomain.com - check that a DNS record
dnsrobocert | exists for this domain
dnsrobocert | ----------
dnsrobocert | 2020-04-19 23:30:50 0f6824afd7a9 dnsrobocert.core.main[1] ERROR An error occurred while processing certificate config `{'domains': ['api.mydomain.com'], 'profile': 'maestro_profile'}`:
dnsrobocert | Command '['/usr/bin/python3', '-m', 'dnsrobocert.core.certbot', 'certonly', '-n', '--config-dir', '/etc/letsencrypt', '--work-dir', '/etc/letsencrypt/workdir', '--logs-dir', '/etc/letsencrypt/logs', '--manual', '--preferred-challenges=dns', '--manual-auth-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--manual-cleanup-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--manual-public-ip-logging-ok', '--expand', '--deploy-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t deploy -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--server', 'https://acme-staging-v02.api.letsencrypt.org/directory', '--cert-name', 'api.mydomain.com', '-d', 'api.mydomain.com']' returned non-zero exit status 1.
dnsrobocert | 2020-04-19 23:30:50 0f6824afd7a9 dnsrobocert.core.main[1] INFO Revoke and delete certificates if needed
As you may notice, the errors are as follows:
API Token: 400 Client Error: Bad Request for url
Global API KEY: 403 Client Error: Forbidden for url
I'm setting the variables directly in the config.yml as suggested in the documentation
draft: false
acme:
  email_account: devops@mydomain.com
  staging: true
profiles:
  - name: maestro_profile
    provider: cloudflare
    provider_options:
      auth_username: devops@mydomain.com
      auth_token: <global_api_key|api_token>
certificates:
  - domains:
      - api.mydomain.com
    profile: maestro_profile
Do you have any suggestions I could follow? I already checked the issues on the other repositories but they are not merged yet.
| gharchive/issue | 2020-03-09T16:44:50 | 2025-04-01T06:37:44.716711 | {
"authors": [
"adferrand",
"daniela-waranie",
"yonathan9669"
],
"repo": "adferrand/docker-letsencrypt-dns",
"url": "https://github.com/adferrand/docker-letsencrypt-dns/issues/78",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
228722996 | Disable --single-process option
Hey, thank you for the great work on compiling Chrome for Lambda.
I'm trying to run perf audits, but it seems the --single-process option breaks performance metrics significantly.
No --single-process:
With --single-process:
Could you explain why there's a --single-process option and how to avoid it? I've tried to run without it, but in that case http://127.0.0.1:9222/json returns an empty value and Chrome just does not work.
Hi @alekseykulikov,
I apologise for my brief reply. In short: I don't really know.
I haven't had a chance to dig into it much myself—but, like you've discovered, headless Chrome doesn't run correctly without the --single-process flag when running within the Lambda environment and running in single-process mode breaks or disables some reporting. For example, Chrome will log to stderr Started multiple compositor clients (Browser, Renderer) in one process. Some metrics will be disabled. when started with --single-process—these may be the metrics tools like Lighthouse rely on for some of their reporting.
My best guess is that it has something to do with the sandboxing of Chrome and it's processes. It's possible that there may also be a bug in headless Chrome itself. In the Lambda environment, AWS has things pretty restricted and I suspect some combination of things Chrome is trying to do to isolate / restrict the different layers of processes relies on Linux OS features which aren't available within Lambda. For example, if you listen to stderr on the spawned chrome process, without --single-process, Chrome will log a lot of prctl(PR_SET_NO_NEW_PRIVS) failed errors. This may be keeping Chrome from starting a separate process for a browser tab.
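For instance, one way to surface those messages is to listen to stderr on the spawned process. This is only a rough Node sketch - the binary path and flag set are illustrative, not the launcher's actual code:
const { spawn } = require('child_process')

// launch the headless shell with the flags discussed above (illustrative values)
const chrome = spawn('/opt/headless-chromium', [
  '--headless',
  '--no-sandbox',
  '--single-process',
  '--remote-debugging-port=9222',
])

// surfaces warnings such as "prctl(PR_SET_NO_NEW_PRIVS) failed"
chrome.stderr.on('data', (chunk) => console.error(chunk.toString()))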
I'll raise this issue on the headless-dev group and follow up here with any news.
Thank you very much @adieuadieu for the detailed reply!
Yes, let's see in the headless-dev group; maybe preventing the prctl(PR_SET_NO_NEW_PRIVS) failed errors will allow starting a separate browser tab process.
I've also found that without the --no-sandbox option, Chrome does not start at all.
I've asked about --single-process and --no-sandbox here.
This is a bit late, but does anyone have any update on running Lighthouse via headless Chrome on Lambda? Try as we might, we can't get it working due to the --single-process requirement.
Did someone manage to run the headless shell without --single-process on Lambda?
I am looking for the solution too
| gharchive/issue | 2017-05-15T13:55:22 | 2025-04-01T06:37:44.728101 | {
"authors": [
"adieuadieu",
"alekseykulikov",
"bluepeter",
"iamkhalidbashir",
"miroljub1995"
],
"repo": "adieuadieu/serverless-chrome",
"url": "https://github.com/adieuadieu/serverless-chrome/issues/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2136783797 | lexica.art API Too Many Requests
I can't find any documentation regarding the API
raise GeneratorException(f"Error while generating prompt: {e}") from e
dynamicprompts.generators.promptgenerator.GeneratorException: Error while generating prompt: 429 Client Error: Too Many Requests for url: https://lexica.art/api/v1/search?q=cat%20in%20wizard%20outfit%20casting%20a%20spell,%20dramatic,%20magic%20effects,%20cute
Do I need to slightly alter the search words each time?
I added some randomness to the search string each time and it seems to be working. Is this a new API change? It worked fine 2 days ago.
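Roughly what I mean by adding randomness - plain Python, independent of the library's own API, just perturbing the query string before it is sent:
import random
import string

base_query = "cat in wizard outfit casting a spell, dramatic, magic effects, cute"

# append a short random token so the search URL differs on every request
salt = "".join(random.choices(string.ascii_lowercase, k=4))
query = f"{base_query} {salt}"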
It's entirely possible it's a Lexica API change – we're not in control of the API, so we can't do much about this, sorry!
| gharchive/issue | 2024-02-15T15:12:50 | 2025-04-01T06:37:44.729922 | {
"authors": [
"akx",
"lingondricka2"
],
"repo": "adieyal/dynamicprompts",
"url": "https://github.com/adieyal/dynamicprompts/issues/119",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
375317766 | [feat/company-all-with-offset] improve Get all companies with other information
Add a new method #all_with_offset that returns not only results but also hasMore and offset, so it is possible to get all companies. For instance:
response = Hubspot::Company.all_with_offset(count: 100)
while response['hasMore']
# DO SOMETHING with response['results']
response = Hubspot::Company.all_with_offset(count: 100, offset: response['offset'])
end
Thanks for making those updates. In the near future I'm going to start using the Projects feature to create some high level tickets to track things that I would like to see get fixed or clarified or whatever. Then we can start making some of the breaking changes for v1.0.0 that will standardize some of the things that are currently not.
| gharchive/pull-request | 2018-10-30T04:36:56 | 2025-04-01T06:37:44.732319 | {
"authors": [
"CarolHsu",
"cbisnett"
],
"repo": "adimichele/hubspot-ruby",
"url": "https://github.com/adimichele/hubspot-ruby/pull/134",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1396696823 | newUI
newUI for desktop view and mobile view
see issue #46 for more
:)
Can you please address this issue
I used desktop view in mobile phone and this showed up
Because i made this responsive only for desktop and mobile. I think i have to make more media queries .
How can we code for desktop view in mobile . @aditya-singh9 , @Substancia
https://user-images.githubusercontent.com/96531798/193963955-98aa2594-ffd5-47f5-a18b-24dcc4f28202.mp4
need to push again the changes
| gharchive/pull-request | 2022-10-04T18:42:46 | 2025-04-01T06:37:44.755160 | {
"authors": [
"Rudresh-pandey",
"aditya-singh9"
],
"repo": "aditya-singh9/kekfinder",
"url": "https://github.com/aditya-singh9/kekfinder/pull/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
716930566 | convert all contributions to python 3.8 syntax
There are a few contributions which still include Python 2.7 syntax and need to be converted to Python 3.8 syntax.
Can you please list them so contributors can easily get started?
List includes
print " " should be replaced by print()
raw_input() should be replace by input()
more will pop up when you view this repo inside of an ide running python3
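For example, a typical Python 2 snippet and its Python 3 equivalent:
# Python 2 (old syntax)
# name = raw_input("Enter your name: ")
# print "Hello, " + name

# Python 3 (converted)
name = input("Enter your name: ")
print("Hello, " + name)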
Hey..
I just wanted to ask if someone was working on this issue currently.?
I'd like to take this if still available?
@Siddhant-K-code since u asked the question first, are you going to work on it ?
Hi,
I would like to contribute to this issue.
@Siddhant-K-code since u asked the question first, are you going to work on it ?
Yes, I Started working on it. & Will add some more problems of leetcode.
@adityaarakeri Please Assign Me!
Some programs like Leetcode/14.py are giving wrong answers !!
@adityaarakeri some files are giving wrong answers and errors, can I change them to my solutions?
Go ahead.
@adityaarakeri Done, Please check My PR #93
Hello, How can I contribute to it?
Thanks.
Hi,
Is this section still open for contributions?
Thanks
Hello are there any files which still need to be coverted to Python 3 ?
| gharchive/issue | 2020-10-08T00:14:17 | 2025-04-01T06:37:44.760986 | {
"authors": [
"ImagineZero0",
"JaniPaasiluoto",
"Shreyas-Gupta-21",
"Siddhant-K-code",
"adityaarakeri",
"ealvarez968",
"sowmya8900",
"vlozan"
],
"repo": "adityaarakeri/Interview-solved",
"url": "https://github.com/adityaarakeri/Interview-solved/issues/89",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
213186000 | sbJson 2.0 spatial
Mapping of sbJson spatial to mdTranslator internal data structure:
[ ] reader complete
[x] writer complete
{
"spatial": {
"representationalPoint": [123,23],
"representationalPointIsDerived": false,
"boundingBox": {
"maxY": 23.22,
"maxX": 12.12,
"minX": 23.21,
"minY": 43
}
}
}
Definition: This is the spatial representation of the item.
Mapping...
representationalPoint: A collection of two values (saved as Doubles). The first value is the longitude and must be within -180 & +180. The second value is the latitude and must be within -90 & +90.
Map to: [schema][metadata][resourceInfo][extents][n][geographicExtents][n][geographicElement][n][geographicElements][n].type = 'Feature'
view mdTools
Map to: [schema][metadata][resourceInfo][extents][n][geographicExtents][n][geographicElement][n][geographicElements][n].id= 'representationalPoint'
view mdTools
Map to: [schema][metadata][resourceInfo][extents][n][geographicExtents][n][geographicElement][n][geographicElements][n][geometryObject].type= 'Point'
view mdTools
Map to: [schema][metadata][resourceInfo][extents][n][geographicExtents][n][geographicElement][n][geographicElements][n][geometryObject].coordinates
view mdTools
representationalPointIsDerived: Whether or not the representational point is derived. This is generally false.
Map to: [schema][metadata][resourceInfo][extents][n][geographicExtents][n][geographicElement][n][geographicElements][n][properties].acquisitionMethod = 'notDerived'
view mdTools
boundingBox: The bounding box of an item. The maxY, minY, maxX, minX represent the four corners of the bounding box.
Map to: [schema][metadata][resourceInfo][extents][n][geographicExtents][n][geographicElement][n][geographicElements][n].bbox
view mdTools
Mapping to mdTranslator
def newBase
intObj = {
metadata: {
resourceInfo: {
extents: [
{
description: nil,
geographicExtents: [
{
containsData: true,
identifier: {},
boundingBox: {},
geographicElement: [
{
nativeGeoJson: [],
geographicElements: [
{
type: 'Feature',
id: 'representationalPoint',
bbox: [12.12, 43, 23.21, 23.22],
geometryObject: {
type: 'Point',
coordinates: [123, 23]
},
properties: {
featureNames: [],
description: nil,
identifiers: [],
featureScope: nil,
acquisitionMethod: 'notDerived'
},
computedBbox: []
}
],
computedBbox: {}
}
]
}
],
temporalExtents: [],
verticalExtents: []
}
]
}
}
}
end
Translation to mdJson
{
"metadata": {
"resourceInfo": {
"extent": [
{
"geographicExtent": [
{
"containsData": true,
"identifier": {},
"boundingBox": {},
"geographicElement": [
{
"type": "Feature",
"id": "representationalPoint",
"bbox": [12.12, 43, 23.21, 23.22],
"geometry": {
"type": "Point",
"coordinates": [123, 23]
},
"properties": {
"featureName": [],
"description": "",
"includesData": true,
"temporalElement": {},
"verticalElement": [],
"identifier": [],
"featureScope": "",
"featureAcquisitionMethod": "notDerived"
}
}
]
}
]
}
]
}
}
}
Spatial objects in mdJson are encoded in the GeoJSON standard. I have represented the spatial object as a single GeoJSON Feature class. This translation can be seen in the 'Translation to mdJson' section above. The same 'GeoJson' object will be inserted into the geographicElements's 'nativeGeoJson' object so that it can be used by other writers such as the html writer.
Since the derivation method is Boolean I coded the 'featureAcquisitionMethod' to be either 'derived' or 'notDerived'.
I need to know the order of the bounding box coordinates. min and max are possibly ambiguous. Is maxX the most East or West? Is maxY always the most North regardless of hemisphere?
boundingBox should map to http://mdtools.adiwg.org/#viewer-page?v=1-0-5-0-3-2-2-0-1-0-2
mdJson has multiple extents and a computed bbox for each extent. Compute a new bbox using all the computed bbox as geometries. Use this as the sbJson spatial bbox.
Ignore representational point.
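A rough Ruby sketch of the merge described above, assuming each computed bbox follows the GeoJSON [west, south, east, north] order used elsewhere in this issue (note: this naive min/max does not handle boxes crossing the antimeridian):
# bboxes is an array of [west, south, east, north] arrays
def merge_bboxes(bboxes)
  [
    bboxes.map { |b| b[0] }.min, # westernmost edge
    bboxes.map { |b| b[1] }.min, # southernmost edge
    bboxes.map { |b| b[2] }.max, # easternmost edge
    bboxes.map { |b| b[3] }.max  # northernmost edge
  ]
end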
| gharchive/issue | 2017-03-09T22:31:18 | 2025-04-01T06:37:44.785601 | {
"authors": [
"jlblcc",
"stansmith907"
],
"repo": "adiwg/mdTranslator",
"url": "https://github.com/adiwg/mdTranslator/issues/74",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
105066721 | Don't replace Doodles.
Shouldn't replace Google Doodles.
Example HTML for a Doodle:
<div style="height:233px;margin-top:89px" id="lga">
<div id="hplogo" style="display:inline-block;position:relative;height:173px;width:390px">
<a href="/search?site=webhp&q=nfl+scores&oi=ddle&ct=google-gameday-doodle-kickoff-5927321670254592&hl=en&sa=X&ved=0CAMQNmoVChMIgrvToeDsxwIVwYENCh0UWABb"><img alt="Google Gameday Doodle" border="0" height="173" src="/logos/doodles/2015/google-gameday-doodle-kickoff-5927321670254592.3-hp.gif" style="padding-top:28px" title="Google Gameday Doodle" width="390" onload="window.lol&&lol()"></a>
<div>
<style>._P7b{height:26px;opacity:0.8;position:absolute;width:26px}._P7b:hover{opacity:1}._aGb,._bGb{height:22px;position:absolute;width:22px}._aGb{border-radius:6px;left:0;top:0}._bGb{left:2px;top:2px}a:active ._aGb,a:active ._bGb{margin:2px}</style>
<div class="_P7b" style="left:349px;top:36px">
<a href="javascript:void(0);" data-async-trigger="ddlshare" title="Share" jsaction="async.u" data-ved="0CAQQ4zhqFQoTCIK706Hg7McCFcGBDQodFFgAWw">
<div class="_aGb" style="background-color:#ffffff;border:2px solid #ffffff;opacity:0"></div>
<input value="//g.co/doodle/4cqztw" class="ddl-shortlink" type="hidden"><img class="_bGb" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABYAAAAWCAYAAADEtGw7AAAAoklEQVR42mM48/8/AxnYAYhnAvEDKJ4JFYOrIdfQM0D8Hw2fQTacHINnYjEUhmdSYvADPAY/GHQGOwPxZVKDAl9MywDxSiQDbhEbebhi+izUgs9QPoiuAGIXYpMbvpiG4dVALEdssBETIZ+A2JXUSCYppskxGF9QXKbExbgi7xalYYwruYFiv5ySVEEIk52OaZrzhmchRNPymGY1CM3qPKIwABNk7AjGirNhAAAAAElFTkSuQmCC" border="0" height="22" width="22" onload="google.aft&&google.aft(this)">
</a>
</div>
<div style="display:none" data-jiis="up" data-async-type="ddlshare" id="ddlshare" class="y yp" jsaction="asyncFilled:ddls.show"></div>
</div>
</div>
</div>
| gharchive/issue | 2015-09-06T02:40:40 | 2025-04-01T06:37:44.788192 | {
"authors": [
"adjohnson916"
],
"repo": "adjohnson916/google-logo-fonts.user.js",
"url": "https://github.com/adjohnson916/google-logo-fonts.user.js/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
921959249 | 🛑 Massar Login is down
In 7071de5, Massar Login (https://massarservice.men.gov.ma/moutamadris/Account) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Massar Login is back up in fe980fe.
| gharchive/issue | 2021-06-16T00:58:56 | 2025-04-01T06:37:44.801337 | {
"authors": [
"adnane-X-tebbaa"
],
"repo": "adnane-X-tebbaa/DownTime-Score",
"url": "https://github.com/adnane-X-tebbaa/DownTime-Score/issues/255",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
177827224 | feature: make Timmy modular, Fuel.py as a module
make module wrapper in python to allow usage of Timmy with modules from other python projects
use the same wrapper in cli.py to avoid code duplication
done!
| gharchive/issue | 2016-09-19T16:08:57 | 2025-04-01T06:37:44.803012 | {
"authors": [
"f3flight"
],
"repo": "adobdin/timmy",
"url": "https://github.com/adobdin/timmy/issues/57",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1531037950 | Pre-signed UploadPart - Exposed Headers - ETag is not exposed
So here is my problem:
I created presigned URLs to upload parts of a multipart upload from the client side (browser), but when I need to retrieve the ETag from those responses, I can't access it because the ETag header is not exposed and there is no way (as far as I know) to configure CORS Access-Control-Expose-Headers with S3Mock (PutBucketCorsCommand is not available either).
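For context, the client-side step that fails is essentially this (browser fetch sketch; the presigned URL and part handling are simplified):
// upload one part to its presigned URL and collect the ETag for CompleteMultipartUpload
async function uploadPart(presignedPartUrl, partBlob) {
  const response = await fetch(presignedPartUrl, { method: 'PUT', body: partBlob })
  // returns null unless the server sends Access-Control-Expose-Headers: ETag
  return response.headers.get('ETag')
}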
So, is there any way to achieve this at the moment, or should I wait for a commit that implements this? Maybe ETag should always be exposed by default?
Thanks! I will gladly help or contribute if someone guides me or gives me a hint of what to do.
@rubencfu thanks for raising this issue.
I allowed all headers for CORS in the linked PR. Let me see if I can add some tests as well, then I'll release. :)
Thanks! Glad it helped, and thank you again for your amazing work here!
I will create more issues if I encounter more problems, but I don't think so, everything works like a charm so far.
@rubencfu I had problems pushing releases to hub.docker.com, but finally got those resolved this week.
I released 2.12.1 yesterday which includes this fix.
Thanks!
| gharchive/issue | 2023-01-12T16:40:42 | 2025-04-01T06:37:44.818075 | {
"authors": [
"afranken",
"rubencfu"
],
"repo": "adobe/S3Mock",
"url": "https://github.com/adobe/S3Mock/issues/1023",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
475484878 | Fixed issue with loading of swagger spec json
When deployed as action, swagger json was not correctly loaded.
Fixed this.
LGTM
| gharchive/pull-request | 2019-08-01T05:48:58 | 2025-04-01T06:37:44.819114 | {
"authors": [
"Himavanth",
"sandeep-paliwal"
],
"repo": "adobe/adobeio-cna-core-target",
"url": "https://github.com/adobe/adobeio-cna-core-target/pull/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
701324710 | Archetype 24 is not getting created, says "Archetype not found"
Getting the below error while creating an archetype:
[WARNING] Archetype not found in any catalog. Falling back to central repository.
[WARNING] Add a repository with id 'archetype' in your settings.xml if archetype's repository is elsewhere.
[WARNING] The POM for com.adobe.granite.archetypes:aem-project-archetype:jar:24 is missing, no dependency information available
Try the below command on any new machine where the archetype is not installed:
mvn org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate -B -DarchetypeGroupId=com.adobe.granite.archetypes -DarchetypeArtifactId=aem-project-archetype -DarchetypeVersion=24 -DfrontendModule=angular -DgroupId="John" -Dversion=1.0-SNAPSHOT -Dpackage="com.DummyProject" -DappId="DummyProject" -DartifactId="DummyProject" -DappTitle="DummyProject" -DincludeDispatcherConfig=n -DincludeErrorHandler=n -DincludeExamples=y
It throws below error:
Failed to execute goal org.apache.maven.plugins:maven-archetype-plugin:3.2.0:generate (default-cli) on project standalone-pom: The desired archetype does not exist (com.adobe.granite.archetypes:aem-project-archetype:24)
For some reason it seems that the aem-project-archetype is not listed in https://repo1.maven.org/maven2/archetype-catalog.xml. @vladbailescu Maybe you need to raise a ticket at Sonatype to ask why the catalog has not been automatically updated.
Starting with version 24, the archetype is deployed directly to Maven Central (via OSSRH). Please note that the groupId has changed to com.adobe.aem (via #406), which was documented at https://github.com/adobe/aem-project-archetype/commit/41c5905432872072ff421ce3e919b200f6737ee0#diff-04c6e90faac2675aa89e2176d2eec7d8L48
@vladbailescu Thank you for the update!!
I see the groupId is updated and now I was able to generate the Archetype 24 project locally.
Starting with version 24, the archetype is deployed directly to Maven Central (vis OSSRH). Please note that the groupId has changed to com.adobe.aem (via #406), which was documented at 41c5905#diff-04c6e90faac2675aa89e2176d2eec7d8L48
Thanks, it is getting created now on Windows as well as Linux. But it's failing on Ubuntu 16.04 while doing the build. I will create a separate bug for it.
| gharchive/issue | 2020-09-14T18:23:51 | 2025-04-01T06:37:44.829398 | {
"authors": [
"amit-gup",
"kwin",
"nikhilkumar24dev",
"vladbailescu"
],
"repo": "adobe/aem-project-archetype",
"url": "https://github.com/adobe/aem-project-archetype/issues/507",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2642648638 | Transfer AEPTestUtils to Core
Description
This PR transfers over AEPTestUtils to the Core repo as a separate module. At a high level the updates include:
Creation of new testutils module with current AEPTestUtils implementation from Android AEPTestUtils: https://github.com/timkimadobe/aepsdk-testutils-android
Gradle updates to:
Include testutils as a project module
Add AEPTestUtils to Core test and androidTest dependencies
Gradle config for AEPTestUtils
Include Core as a local project dependency
In publishing, add all other non-Core dependencies as Maven dependencies
Makefile updates
Add AEPTestUtil related make rules (excluding checkstyle)
API dump for AEPTestUtils - file created using gradle apiDump
All other new files are transfers from AEPTestUtils existing implementation (unchanged).
Tested this configuration by adding the following code snippets using a class from AEPTestUtils to Core tests:
Unit test: MobileCoreTests.kt - testDispatchEventSimple()
Functional test: DataMarshallerTests.kt - marshalDeepLinkData()
val latch = ADBCountDownLatch(1)
latch.countDown()
assertEquals(0, latch.initialCount - latch.currentCount)
Related Issue
Motivation and Context
How Has This Been Tested?
Screenshots (if appropriate):
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Checklist:
[ ] I have signed the Adobe Open Source CLA.
[ ] My code follows the code style of this project.
[ ] My change requires a change to the documentation.
[ ] I have updated the documentation accordingly.
[ ] I have read the CONTRIBUTING document.
[ ] I have added tests to cover my changes.
[ ] All new and existing tests passed.
Can you also update jitpack.yml file to include test utils?
| gharchive/pull-request | 2024-11-08T02:26:06 | 2025-04-01T06:37:44.838251 | {
"authors": [
"praveek",
"timkimadobe"
],
"repo": "adobe/aepsdk-core-android",
"url": "https://github.com/adobe/aepsdk-core-android/pull/723",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
806268238 | There's no difference between "aio app build" and "aio app deploy --skip-deploy"
Maybe it would make sense to remove "aio app deploy --skip-deploy" before it's too late ?
Hi @icaraps , that's exactly why we have kept it as an alternative ;-)
skip-deploy will be deprecated in the next release, should we close this one?
| gharchive/issue | 2021-02-11T10:40:58 | 2025-04-01T06:37:44.839763 | {
"authors": [
"icaraps",
"meryllblanchet",
"moritzraho"
],
"repo": "adobe/aio-cli-plugin-app",
"url": "https://github.com/adobe/aio-cli-plugin-app/issues/378",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
216850220 | Custom font is ignored
Prerequisites
[x] Can you reproduce the problem with Debug -> Reload Without Extensions?
[x] Did you perform a cursory search to see if your bug or enhancement is already reported?
[x] Did you read the Troubleshooting guide?
Description
A custom font set in View->Themes is ignored.
Steps to Reproduce
Open Brackets
View->Themes
Set any other font. In my case, Fira Code, so: Fira Code, FiraCode, 'SourceCodePro-Medium', MS ゴシック, 'MS Gothic', monospace
Expected behavior:
Custom font should be displayed in the editor.
Actual behavior:
Custom font is set, but only once, after committing the setting. After zooming in/out, after a restart, or a reload, the default font is displayed again.
Versions
Brackets 1.9
Windows 10
This is a regression from 1.8, where setting a custom font worked just fine.
I've been through the same situation!
Same situation... tried changing the defaultPreferences.json... Still no luck...
I'm seeing this also.
This issue is already posted here,
https://github.com/adobe/brackets/issues/13205
It's not working even in Ubuntu (in my case Ubuntu 16.04 LTS x64).
That appears to be a different issue, as it also reports the problem in 1.8.
This particular case is a regression from 1.8, meaning the feature worked fine in 1.8, but it broke as of 1.9.
I have modified the brackets.min.css.
Can this please be fixed with a maintenance update?
It happens too often that we need to wait 3 months or longer, for an issue to get solved in the next release. And any new release may introduce other breaking bugs, causing us to have to wait even longer.
Can this please be fixed with a maintenance update?
I +1 this comment. Its such annoying bug..
+1
@thany We are working on this issue and hopefully have a fix by next week. Once the fix is in, we will trigger a pre-release which would be available under Brackets GitHub releases.
I have this bug too.
Closing this as fixed by #13279 and available as part of 1.10 pre-release 1 build https://github.com/adobe/brackets/releases/tag/release-1.10-prerelease-1
I just noticed 1.9 is still up as the latest release, even though this bug is still in there, supposedly. Why aren't you releasing this 1.10-prerelease as a maintenance update to 1.9? You can call it 1.9.1 or something.
| gharchive/issue | 2017-03-24T17:12:27 | 2025-04-01T06:37:44.855145 | {
"authors": [
"clementi",
"guillecro",
"gxnie",
"jpdupont",
"kobezhu",
"prasadmudedla",
"swmitra",
"thany",
"vahid-sanati"
],
"repo": "adobe/brackets",
"url": "https://github.com/adobe/brackets/issues/13221",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1137154215 | Add option to push down content
After we switch to not pushing down content by default anymore, users should have the option to turn it back on.
:tada: This issue has been resolved in version 5.1.0 :tada:
The release is available on:
GitHub release
v5.1.0
Your semantic-release bot :package::rocket:
| gharchive/issue | 2022-02-14T11:19:53 | 2025-04-01T06:37:44.873630 | {
"authors": [
"rofe",
"trieloff"
],
"repo": "adobe/helix-sidekick-extension",
"url": "https://github.com/adobe/helix-sidekick-extension/issues/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2343241385 | Pagespeed Link with Pure CSS icon
This is a riff on #555 and #554 with the following changes:
the pagespeed icon is pure CSS, similar in minimalism to the exisiting icons
the display of the link is controlled through the pagespeed attribute
feat(link-facet): enable pagespeed attribute to link to pagespeed insights
feat(rum-explorer): add minimalistic, pure-css pagespeed icon
Max will give you the same comment he gave me - wrap the URL in the querystring in encodeURIComponent() - otherwise looks good to me.
| gharchive/pull-request | 2024-06-10T08:30:20 | 2025-04-01T06:37:44.875953 | {
"authors": [
"langswei",
"trieloff"
],
"repo": "adobe/helix-website",
"url": "https://github.com/adobe/helix-website/pull/557",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2206545070 | Update docs
Description
There were some broken examples in the "Leonardo JS API" page.
Motivation
Previously, this was the example code on the site, which doesn't work:
const red = new Color({...})
// Change the colors ratios
theme.updateColor = {name: 'red', ratios: [3, 4.5, 7]};
// Change the colors colorKeys
theme.updateColor = {name: 'red', colorKeys: ['#ff0000']};
// Change the color's name
theme.updateColor = {name: 'red', name: 'Crimson'};
I changed it to this:
// Change the colors ratios
theme.updateColor = {color: 'red', ratios: [3, 4.5, 7]};
// Change the colors colorKeys
theme.updateColor = {color: 'red', colorKeys: ['#ff0000']};
// Change the color's name
theme.updateColor = {color: 'red', name: 'Crimson'};
I also added this (which helps #229).
// It's also possible to change the color name and colorKeys in the same function
theme.updateColor = {color: 'red', ratios: [3, 4.5, 7], colorKeys: ['#ff0000'], name: 'Crimson'};
To-do list
[x] I have read the CONTRIBUTING document.
[x] This pull request is ready to merge.
Run report for 823d723a
Total time: 19.8s | Comparison time: 18.4s | Estimated loss: 1.4s (6.9% slower)
Action | Time | Status
🟩 SyncWorkspace | 0ms | Passed
⬛️ SetupNodeTool(~20.11) | 1.1s | Skipped
🟩 InstallNodeDeps(~20.11) | 9.1s | Passed
🟩 SyncNodeProject(contrast-colors) | 0.2ms | Passed
🟩 SyncNodeProject(ui) | 0.2ms | Passed
🟩 RunTask(ui:makeDistDir) | 33ms | Passed
🟩 RunTask(ui:copyCNAME) | 49.5ms | Passed
🟩 RunTask(ui:copyUIIcons) | 46.7ms | Passed
🟩 RunTask(ui:copyWorkflowIcons) | 46.7ms | Passed
🟩 RunTask(ui:buildSite) | 9.5s | Passed
Touched files
docs/ui/src/views/home_gettingStarted.html
| gharchive/pull-request | 2024-03-25T19:37:14 | 2025-04-01T06:37:44.886763 | {
"authors": [
"GarthDB"
],
"repo": "adobe/leonardo",
"url": "https://github.com/adobe/leonardo/pull/239",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2176288804 | Escaping HTML tags in JSDoc
Storybook uses JSDoc to generate description for a component properties.
When using HTML tags in the JSDoc comment without escaping it, it can cause issues.
/**
* The <form> element to associate the button with.
* The value of this attribute must be the id of a <form> in the same document.
*/
form?: string,
This throws an error because it can't render a <form> inside a <p> element but we don't want to actually render an html <form> in the description.
The effect can be seen here:
https://react-spectrum.adobe.com/react-aria-starter/index.html?path=/docs/button--docs
and here:
https://react-spectrum.adobe.com/react-aria-tailwind-starter/index.html?path=/docs/button--docs
Originally posted by @RBilly in https://github.com/adobe/react-spectrum/discussions/6026
We can surround these with backticks. This should also fix them being rendered in IDEs.
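i.e. something along these lines:
/**
 * The `<form>` element to associate the button with.
 * The value of this attribute must be the id of a `<form>` in the same document.
 */
form?: string,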
Haha it's funny to me that I fixed this exact issue for RAC docs in the past with https://github.com/adobe/react-spectrum/pull/5614, but didn't realize there was a whole different comment-parsing and processing pipeline going on in the starter story docs.
Interestingly both RAC docs and storybook docs use the same underlying markdown parser markdown-to-jsx. What's different, however, is that storybook docs doesn't allow the user to pass options to said markdown-to-jsx. This point is important because this issue can easily be solved by passing options.disableParsingRawHTML to markdown-to-jsx.
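For reference, that option looks like this when you do control the call site (a sketch of plain markdown-to-jsx usage, not Storybook's internal code):
import Markdown from 'markdown-to-jsx'

// raw HTML such as <form> is kept as literal text instead of being parsed into elements
const Description = ({ children }) => (
  <Markdown options={{ disableParsingRawHTML: true }}>{children}</Markdown>
)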
This is how storybook renders the description rows in <ArgTypes> doc block component:
https://github.com/storybookjs/storybook/blob/bfa05701390eea3954e853de4208f18b6862cd68/code/ui/blocks/src/components/ArgsTable/ArgRow.tsx#L103-L107
So there's no way to pass said options.disableParsingRawHTML to <Markdown> in storybook docs.
A similar issue seems to have been reported in storybook repo, but got closed with a comment effectively saying 'use ` to avoid the issue'.
So I guess that's the way to fix it as @reidbarber mentioned above!
| gharchive/issue | 2024-03-08T15:45:03 | 2025-04-01T06:37:44.893955 | {
"authors": [
"RBilly",
"reidbarber",
"sookmax"
],
"repo": "adobe/react-spectrum",
"url": "https://github.com/adobe/react-spectrum/issues/6027",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1826732642 | Fix InlineAlert docs package data
It was pointing to the wrong package
Dupe of https://github.com/adobe/react-spectrum/pull/4830 :)
| gharchive/pull-request | 2023-07-28T16:38:23 | 2025-04-01T06:37:44.895302 | {
"authors": [
"dannify",
"reidbarber"
],
"repo": "adobe/react-spectrum",
"url": "https://github.com/adobe/react-spectrum/pull/4839",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2138585847 | fix: experimentation message format
Please ensure your pull request adheres to the following guidelines:
[ ] make sure to link the related issues in this description
[ ] when merging / squashing, make sure the fixed issue references are visible in the commits, for easy compilation of release notes
Related Issues
Thanks for contributing!
:tada: This PR is included in version 1.6.3 :tada:
The release is available on:
v1.6.3
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2024-02-16T13:03:21 | 2025-04-01T06:37:44.898340 | {
"authors": [
"ekremney",
"solaris007"
],
"repo": "adobe/spacecat-audit-post-processor",
"url": "https://github.com/adobe/spacecat-audit-post-processor/pull/100",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2729905995 | MWPW-163234 | Parallel load stack for Unity enabled page performance
Parallel load stack for Unity enabled page performance
Call stack with the changes
Resolves: MWPW-163234
Test URLs:
Before: https://main--cc--adobecom.hlx.live/products/photoshop/edit-photos
After: https://unityload--cc--adobecom.hlx.live/products/photoshop/edit-photos
Validation on the PR can be done after it goes to stage. Currently we see the below performance:
https://pagespeed.web.dev/analysis/https-stage--cc--adobecom-hlx-live-products-photoshop-edit-photos/0nfk5o8jkl?form_factor=desktop - stage
https://pagespeed.web.dev/analysis/https-unityload--cc--adobecom-hlx-live-products-photoshop-edit-photos/iyfh0211dz?form_factor=desktop - milolibs
| gharchive/pull-request | 2024-12-10T11:52:11 | 2025-04-01T06:37:44.919347 | {
"authors": [
"aishwaryamathuria",
"spadmasa"
],
"repo": "adobecom/cc",
"url": "https://github.com/adobecom/cc/pull/480",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
269967960 | Context.onReady is not a function
With the newest release, there seems to be a problem on startup. When I try to run the server it throws me the following error TypeError: Context.onReady is not a function. This even occurs when I don't define a Middleware on any routes.
at AuthProvider.boot (M:\Dev\Server\node_modules@adonisjs\auth\providers\AuthProvider.js:140:13)
Make sure you are on latest version of @adonisjs/framework
| gharchive/issue | 2017-10-31T13:58:10 | 2025-04-01T06:37:44.941933 | {
"authors": [
"TeachMeAnything",
"thetutlage"
],
"repo": "adonisjs/adonis-auth",
"url": "https://github.com/adonisjs/adonis-auth/issues/69",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2372840007 | Switch to pnpm
yarn can't and shouldn't be trusted.
The failures on >= v4 are expected right now and consistent with before this PR
| gharchive/pull-request | 2024-06-25T14:22:45 | 2025-04-01T06:37:44.949077 | {
"authors": [
"NullVoxPopuli"
],
"repo": "adopted-ember-addons/ember-moment",
"url": "https://github.com/adopted-ember-addons/ember-moment/pull/405",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1904408830 | [adoptium.net-2239] sometimes the contributor component is empty
Description of change
Reported issue is https://github.com/adoptium/adoptium.net/issues/2239
To fix the problem of the empty contributor component, we have to store (in window.localStorage) information about the current repository (repoToCheck). Otherwise we end up trying to retrieve a random page of a smaller repository.
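Roughly, the idea is the following (sketch only; the real component keeps more state than this):
// placeholder default; the component's actual fallback repository may differ
const defaultRepo = 'adoptium/adoptium.net'

// reuse the repository the previous page of contributors came from, instead of a random one
let repoToCheck = window.localStorage.getItem('repoToCheck') || defaultRepo

// after fetching a page, persist the repository for the next call
window.localStorage.setItem('repoToCheck', repoToCheck)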
Checklist
[x] npm test passes
Happy to see this fix - thanks @xavierfacq
| gharchive/pull-request | 2023-09-20T08:01:40 | 2025-04-01T06:37:44.951250 | {
"authors": [
"tellison",
"xavierfacq"
],
"repo": "adoptium/adoptium.net",
"url": "https://github.com/adoptium/adoptium.net/pull/2240",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
961864174 | Tweetest: Fetching tweets related to external tests
For the Tweetest project, we need to fetch the tweets which have certain keywords like external_custom, etc. We will list the criteria that a tweet should match in order to be fetched via the Twitter v2 API in this issue.
@smlambert @sophia-guo @llxia
@smlambert @llxia @sophia-guo I have written the code for fetching the tweets related to the external_custom hashtags. I have currently stored the code in a repository: https://github.com/SAY-droid427/testTweet/blob/main/index.js. Where should I put the code in the aqa-tests repo?
The outputs now relate to all tests, however we can change it to external_test (currently there are no tweets related to external_custom).
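For reference, the core of such a query against the Twitter v2 recent-search endpoint looks roughly like this (the bearer token is a placeholder and the keyword/operator choices are illustrative):
// Node 18+ has fetch built in
async function searchTweets() {
  const token = process.env.TWITTER_BEARER_TOKEN // set in the environment
  const query = encodeURIComponent('external_custom OR external_test -is:retweet')
  const res = await fetch(
    `https://api.twitter.com/2/tweets/search/recent?query=${query}&max_results=10`,
    { headers: { Authorization: `Bearer ${token}` } }
  )
  const { data } = await res.json()
  return data || []
}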
Nice @SAY-droid427 !
I was thinking this can go into a subdir of buildenv called tweetest:
https://github.com/adoptium/aqa-tests/tree/master/buildenv/tweetest
But will ask @sophia-guo and @llxia to weigh in with their thoughts on location.
Should I go ahead with this? @llxia @sophia-guo
| gharchive/issue | 2021-08-05T14:05:06 | 2025-04-01T06:37:44.955080 | {
"authors": [
"SAY-droid427",
"smlambert"
],
"repo": "adoptium/aqa-tests",
"url": "https://github.com/adoptium/aqa-tests/issues/2785",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2030944109 | pre-stage test libs on the machine for parallel case
In PR https://github.com/adoptium/aqa-tests/pull/4902, it missed the parallel case.
Add the parallel case support.
This PR needs to be merged with https://github.com/adoptium/TKG/pull/479
related: https://github.com/adoptium/aqa-tests/issues/4500
resolves: https://github.com/adoptium/TKG/issues/478
Windows:
parallel: https://openj9-jenkins.osuosl.org/job/Grinder/3105/ (win2012-x86-3)
serial: https://openj9-jenkins.osuosl.org/job/Grinder/3106/
xlinux:
parallel: https://openj9-jenkins.osuosl.org/job/Grinder/3108
serial: https://openj9-jenkins.osuosl.org/job/Grinder/3107
| gharchive/pull-request | 2023-12-07T15:00:16 | 2025-04-01T06:37:44.959748 | {
"authors": [
"llxia"
],
"repo": "adoptium/aqa-tests",
"url": "https://github.com/adoptium/aqa-tests/pull/4908",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
807155732 | NODE_LABEL should be more configurable
🆕🐥🐶 First Timers Only
This issue is reserved for people who never contributed to Open Source before. We know that the process of creating a pull request is the biggest barrier for new contributors. This issue is for you 💝
👾 Description of the issue
For the AdoptOpenJDK build we need some better configuration options for variables. Internally Groovy is used to executer parts of the CI build of AdoptOpenJDK. These parts of the build should be configurable. Today some variables are still hard coded. This issue should refactor two of these parts:
https://github.com/AdoptOpenJDK/openjdk-build/blob/7a85811d607845586de20a70df10878490617be9/pipelines/build/common/build_base_file.groovy#L119
https://github.com/AdoptOpenJDK/openjdk-build/blob/7a85811d607845586de20a70df10878490617be9/pipelines/build/common/config_regeneration.groovy#L279
This Groovy code contains the following problem:
[ ] "${additionalNodeLabels}&&${platformConfig.os}&&${platformConfig.arch}" label is hardcoded and could be parameterized via our config files (see https://github.com/AdoptOpenJDK/openjdk-build/pull/2100 for an example of this being done for Docker node labels). Specifically, the && parts of the string are preventing users from making their own node strings without &&. We should have an additional configuration value (CUSTOM_NODE_LABEL?) that specifies a user provided node string.
📋 Step by Step
To solve this issue and contribute a fix you should check the following step-by-step list. A more detailed documentation of the workflow can be found here.
[x] Claim this issue: Comment below.
[ ] Fork the repository in github by simply clicking the 'fork' button.
[ ] Check out the forked repository
[ ] Create a feature branch for the issue. We do not have any naming definition for branches.
[ ] Commit your changes.
[ ] Start a Pull Request.
[ ] Done 👍 Ask in comments for a review :)
[ ] If the reviewer finds some missing pieces or a problem, they will start a discussion with you and describe the next steps for how the problem can be solved.
[ ] You did it 🎉 We will merge the fix in the master branch.
[ ] Thanks, thanks, thanks for being part of this project as an open source contributor ❤️
🎉 Contribute to Hacktoberfest
Solve this issue as part of the Hacktoberfest event and get a chance to receive cool goodies like a T-Shirt. 🎽
🤔❓ Questions
If you have any questions just ask us directly in this issue by adding a comment. You can join our community chat at Slack. Next to this you can find a general manual about open source contributions here.
Hi, can I be assigned this issue?
You're assigned! Let me know if you need any help 🙂
Hi Davis, I would like to be assigned, thanks.
Hi Wen Zhou, no worries, please unassign me. I have recently been working on another project, so I haven't had much time for this one. Sorry about that, and thanks. Regards, Liz
(Replying by email to the original question: "@Pip7747 do you still work on this, or should I unassign you?")
| gharchive/issue | 2020-11-06T15:29:49 | 2025-04-01T06:37:44.976757 | {
"authors": [
"Antonynans",
"M-Davies",
"Pip7747"
],
"repo": "adoptium/ci-jenkins-pipelines",
"url": "https://github.com/adoptium/ci-jenkins-pipelines/issues/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1963935129 | Remove s390x, arm32 for jdk21 debian from jenkins job if arch=all
Signed-off-by: Sophia Guo sophia.gwf@gmail.com
/merge
/approve
I'm a little reluctant to approve this given s390x will be good for the next release, and hopefully we can look at pushing arm32 forward.
Since we're able to complete this for the 21.0.1 release without using all, I would suggest we do that for this release and avoid having to remember to back these changes out and accidentally failing to ship the excluded platforms next time round.
Yes, we can complete the release without using all. However, this change actually should be part of https://github.com/adoptium/installer/pull/757/files.
@sophia-guo Do you want to close this PR then?
| gharchive/pull-request | 2023-10-26T16:33:16 | 2025-04-01T06:37:44.980142 | {
"authors": [
"gdams",
"karianna",
"sophia-guo",
"sxa"
],
"repo": "adoptium/installer",
"url": "https://github.com/adoptium/installer/pull/763",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1282772055 | Do not match jdk build '0' new version branching tags
When a new version is branched, e.g. jdk-18.0.2 from jdk-18.0.1, a "0" build tag is created at the commit prior to the new branch commit, and thus prior to any changes or proper builds for that new version. Version jdk-18.0.1 may still be producing builds (.1 builds) after that. Not until make/conf/version-numbers.conf is changed and a jdk-18.0.2+1 build is tagged should the new version be considered the latest.
Fixes: https://github.com/adoptium/temurin-build/issues/2998
Signed-off-by: Andrew Leonard anleonar@redhat.com
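As a hedged illustration of the selection logic described above (the actual implementation lives in the repository's build scripts; the function name, tag pattern, and example tags below are assumptions for this sketch), ignoring the "+0" branching tags when choosing the latest build tag could look like this:
// Illustration only: ignore "+0" version-branching tags when picking the latest build tag.
function latestBuildTag(tags: string[]): string | undefined {
  const candidates = tags
    .map((tag) => {
      const match = tag.match(/^jdk-(\d+(?:\.\d+)*)\+(\d+)$/);
      return match ? { tag, version: match[1], build: Number(match[2]) } : null;
    })
    .filter((t): t is { tag: string; version: string; build: number } => t !== null)
    // A "+0" tag marks the branching point only; it is not a real build.
    .filter((t) => t.build > 0);
  candidates.sort((a, b) =>
    a.version === b.version
      ? a.build - b.build
      : a.version.localeCompare(b.version, undefined, { numeric: true }),
  );
  return candidates.at(-1)?.tag;
}

// Example: latestBuildTag(["jdk-18.0.1+10", "jdk-18.0.2+0"]) returns "jdk-18.0.1+10";
// only once a tag like "jdk-18.0.2+1" exists does 18.0.2 become the latest.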
Good test build: https://ci.adoptopenjdk.net/job/build-scripts/job/jobs/job/jdk18u/job/jdk18u-windows-x64-temurin/62/
| gharchive/pull-request | 2022-06-23T18:14:40 | 2025-04-01T06:37:44.983136 | {
"authors": [
"andrew-m-leonard"
],
"repo": "adoptium/temurin-build",
"url": "https://github.com/adoptium/temurin-build/pull/3000",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
58107684 | Feature/timeliness validation
Add Timeliness validation for :validates_timeliness
Fix operate: eval, when used this way, can be highly unpredictable and breaks date comparison.
This would be better suited as an application specific addition.
| gharchive/pull-request | 2015-02-18T18:30:07 | 2025-04-01T06:37:44.984665 | {
"authors": [
"aeberlin",
"itsthatguy"
],
"repo": "adorableio/judge",
"url": "https://github.com/adorableio/judge/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
fix: add ability to configure authenticationFlowBindingOverrides on client
This PR fixes: https://github.com/adorsys/keycloak-config-cli/issues/170
Here is an overview of what got changed by this pull request:
Issues
======
- Added 2
Clones added
============
- src/test/java/de/adorsys/keycloak/config/service/ImportClientsIT.java 22
See the complete overview on Codacy
Here is an overview of what got changed by this pull request:
Issues
======
- Added 1
Clones added
============
- src/test/java/de/adorsys/keycloak/config/service/ImportClientsIT.java 22
See the complete overview on Codacy
Here is an overview of what got changed by this pull request:
Issues
======
- Added 1
Clones added
============
- src/test/java/de/adorsys/keycloak/config/service/ImportClientsIT.java 22
See the complete overview on Codacy
Here is an overview of what got changed by this pull request:
Issues
======
- Added 1
Clones added
============
- src/test/java/de/adorsys/keycloak/config/service/ImportClientsIT.java 27
See the complete overview on Codacy
Native build checks seem to be broken since 1b134d3
Please do a rebase
At a minimum, I'm missing a test with an id instead of a name.
Could you add a test that verifies that the import still works with a flow id?
If you're not interested in some code optimization, just let me know and ignore my last comment. :)
Then I will merge it, if the test is there.
As it is, it's not possible to import a realm with authFlowBindingOverrides using an id instead of a name. Are you really sure that we need this?
As it is, it's not possible to import a realm with authFlowBindingOverrides using an id instead of a name
Are you 100% sure?
If I define the id of the flow inside my import like here
https://github.com/adorsys/keycloak-config-cli/blob/897155d2ed6d3d1ad233e08989ed0280550b6e38/src/test/resources/import-files/auth-flows/26_update_realm__update-non-top-level-flow-with-pseudo-id.json#L12
The id should be known and predictable, and could be used by authFlowBindingOverrides.
Or does Keycloak override the id on creation?
Looking at Keycloak's source code, the id of the import model will be used if defined.
https://github.com/keycloak/keycloak/blob/5d5e56dde3bc841c7d1d0261cfcaf372cdf119bc/model/jpa/src/main/java/org/keycloak/models/jpa/RealmAdapter.java#L1724
As it is, it's not possible to import a realm with authFlowBindingOverrides using an id instead of a name
Yes, it's possible. The example below partially works with the master branch now, but 2 runs are required: first the flow, then the client. After that I could verify the override setting at a client. Since the branch handles the import order of the settings, this error should be gone after the merge.
Defining overrides with an id works and is required if you want to import an exported realm with such a configuration.
Demo realm config:
{
"enabled": true,
"realm": "test",
"authenticationFlows": [
{
"id": "fbee9bfe-430a-48ac-8ef7-00dd17a1ab43",
"alias": "custom flow",
"description": "Custom flow for testing",
"providerId": "basic-flow",
"topLevel": true,
"builtIn": false,
"authenticationExecutions": [
{
"authenticator": "docker-http-basic-authenticator",
"requirement": "DISABLED",
"priority": 0,
"userSetupAllowed": true,
"autheticatorFlow": false
}
]
}
],
"clients": [
{
"clientId": "another-client",
"name": "another-client",
"description": "Another-Client",
"enabled": true,
"clientAuthenticatorType": "client-secret",
"secret": "my-other-client-secret",
"redirectUris": [
"*"
],
"webOrigins": [
"*"
],
"authenticationFlowBindingOverrides": {
"browser": "fbee9bfe-430a-48ac-8ef7-00dd17a1ab43"
}
}
]
}
Currently, the branch will throw an error that flow alias fbee9bfe-430a-48ac-8ef7-00dd17a1ab43 could not be found. At review, I saw this potential issue; that was the reason for the requested test.
This should be fixed by looking up flows by id first and, if none is found, falling back to the flow alias lookup.
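For clarity, here is a small sketch of that lookup order (illustration only; the real code is Java inside keycloak-config-cli, and the type and function names here are assumptions):
// Illustration only: resolve an authentication flow reference by id first, then by alias.
interface AuthenticationFlow {
  id: string;
  alias: string;
}

function resolveFlow(flows: AuthenticationFlow[], reference: string): AuthenticationFlow {
  const byId = flows.find((flow) => flow.id === reference);
  if (byId) return byId;
  const byAlias = flows.find((flow) => flow.alias === reference);
  if (byAlias) return byAlias;
  throw new Error(`No authentication flow found for '${reference}' (tried id, then alias)`);
}

// With the demo realm above, resolveFlow(flows, "fbee9bfe-430a-48ac-8ef7-00dd17a1ab43")
// matches by id, while resolveFlow(flows, "custom flow") matches by alias.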
Hi, I'll try to add that no problem.
When creating a new realm with a new flow and a new client: no problem.
When updating a flow (in order to set an id, for example), since keycloak-config-cli recreates this flow and ignores the id property ( https://github.com/adorsys/keycloak-config-cli/blob/2c0d021881b7641fa60c3c8667c7c22eef274c62/src/main/java/de/adorsys/keycloak/config/service/AuthenticationFlowsImportService.java#L266), it is not possible to handle the case properly.
I have no idea why this property is ignored here, but the import of authenticationFlow seems to be a little tricky.
I'm not sure about the impact of modifying this in AuthenticationFlowsImportService.
it is not possible to handle the case properly.
I'm fine that this case just works on the initial realm creation.
A resource id feels immutable. If someone needs this, they should create a new issue.
As an alternative, they could define an override by flow alias, as this PR provides that.
Shouldn't we handle only the alias (on authFlow) to keep things simpler?
I'm with you.
But keycloak-config-cli should support a default realm export at a minimum, and could enrich the experience in some points.
An export generated by Keycloak itself would put an id here.
Looks good to me
@jkroepke, is it possible to release this?
I would like to take a look at #183 first.
If you are using the docker image, the tag master should include this fix.
This is a critical piece of our system, I would rather not have our config set to master 😄
Do you have a rough idea how long it will take to address #183?
@ybonnefond 2.3.0 is out.
Awesome, Thanks!
| gharchive/pull-request | 2020-09-17T08:24:30 | 2025-04-01T06:37:45.001432 | {
"authors": [
"jBouyoud",
"jkroepke",
"ybonnefond"
],
"repo": "adorsys/keycloak-config-cli",
"url": "https://github.com/adorsys/keycloak-config-cli/pull/178",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2245208681 | Switch to ADTypes.jl v1.0
Define TracerSparsityDetector following the AbstractSparsityDetector framework of ADTypes v1.0
Remove SparseDiffTools extension
Add tests
Add docs
Commit manifests and tweak docs CI because ADTypes v1.0 is not released
Put test extras in the main project file to avoid https://github.com/JuliaLang/Pkg.jl/issues/1585
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 78.31%. Comparing base (e450dd7) to head (c139d41).
Additional details and impacted files
@@ Coverage Diff @@
## main #16 +/- ##
==========================================
- Coverage 79.31% 78.31% -1.00%
==========================================
Files 6 6
Lines 87 83 -4
==========================================
- Hits 69 65 -4
Misses 18 18
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
As you wish, no rush
| gharchive/pull-request | 2024-04-16T06:31:18 | 2025-04-01T06:37:45.008700 | {
"authors": [
"codecov-commenter",
"gdalle"
],
"repo": "adrhill/SparseConnectivityTracer.jl",
"url": "https://github.com/adrhill/SparseConnectivityTracer.jl/pull/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1651035329 | adriangb.com/scikeras is DOWN
https://adriangb.com/scikeras/
Should be back up. Can you confirm?
Works now. 🙂 May I ask, is this project still actively maintained? (and perhaps planned to be, for the near future)
The page being down had nothing to do with the project's maintenance; I was just doing some rearranging of my GitHub pages.
I'm not planning on adding extensive new features any time soon, but I have been keeping the project up to date with TensorFlow releases and such. Is there anything missing that makes you ask that?
No worries! The question was not necessarily related. I just like to know whether projects are maintained before I decide to further study the documentation and perhaps use it in my models.
| gharchive/issue | 2023-04-02T17:49:28 | 2025-04-01T06:37:45.011624 | {
"authors": [
"adriangb",
"smith558"
],
"repo": "adriangb/scikeras",
"url": "https://github.com/adriangb/scikeras/issues/294",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1750871559 | Teslamate does not start anymore: canceling statement due to user request
Is there an existing issue for this?
[X] I have searched the existing issues
What happened?
I rebooted the machine with sudo reboot after normal system upgrades. I suspect the database may not have finished writing cleanly.
Now teslamate wont start due to below error log. Grafana still works on the database.
Expected Behavior
I would have hoped it would survive a planned reboot - and it did so many times, even power outages, without corrupting data. Maybe this is just a very unlucky coincidence, although there was another issue quite similar to mine: https://github.com/adriankumpf/teslamate/issues/2442
I would also be okay to just delete the data for that day if that would help.
Steps To Reproduce
No response
Relevant log output
docker-compose logs
Attaching to teslamate_teslamate_1, teslamate_grafana_1, teslamate_database_1
database_1 |
database_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
database_1 |
database_1 | 2023-06-09 08:53:35.721 UTC [1] LOG: starting PostgreSQL 12.15 (Debian 12.15-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
database_1 | 2023-06-09 08:53:35.721 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
database_1 | 2023-06-09 08:53:35.721 UTC [1] LOG: listening on IPv6 address "::", port 5432
database_1 | 2023-06-09 08:53:35.725 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
database_1 | 2023-06-09 08:53:35.738 UTC [27] LOG: database system was shut down at 2023-06-06 20:04:52 UTC
database_1 | 2023-06-09 08:53:35.744 UTC [1] LOG: database system is ready to accept connections
database_1 | 2023-06-09 08:53:57.486 UTC [36] ERROR: canceling statement due to user request
database_1 | 2023-06-09 08:53:57.486 UTC [36] STATEMENT: WITH last_position AS (
database_1 | SELECT date, convert_celsius(outside_temp, 'C') AS "Outside Temperature [°C]"
database_1 | FROM positions
database_1 | WHERE car_id = '1' AND outside_temp IS NOT NULL AND date AT TIME ZONE 'Etc/UTC' >= (NOW() - interval '60m')
database_1 | ORDER BY date DESC
database_1 | LIMIT 1
database_1 | ),
database_1 | last_charge AS (
database_1 | SELECT date, convert_celsius(outside_temp, 'C') AS "Outside Temperature [°C]"
database_1 | FROM charges
database_1 | JOIN charging_processes ON charges.charging_process_id = charging_processes.id
database_1 | WHERE car_id = '1' AND outside_temp IS NOT NULL AND date AT TIME ZONE 'Etc/UTC' >= (NOW() - interval '60m')
database_1 | ORDER BY date DESC
database_1 | LIMIT 1
database_1 | )
database_1 | SELECT * FROM last_position
database_1 | UNION ALL
database_1 | SELECT * FROM last_charge
database_1 | ORDER BY date DESC
database_1 | LIMIT 1;
database_1 | 2023-06-09 08:53:57.486 UTC [39] ERROR: canceling statement due to user request
database_1 | 2023-06-09 08:53:57.486 UTC [39] STATEMENT: SELECT
database_1 | date,
database_1 | convert_celsius(inside_temp, 'C') AS "Inside Temperature [°C]"
database_1 | FROM positions
database_1 | WHERE
database_1 | car_id = '1'
database_1 | and inside_temp is not null AND date AT TIME ZONE 'Etc/UTC' >= (NOW() - interval '60m')
database_1 | order by date desc
database_1 | limit 1
database_1 | 2023-06-09 08:53:57.486 UTC [35] ERROR: canceling statement due to user request
database_1 | 2023-06-09 08:53:57.486 UTC [35] STATEMENT: SELECT
database_1 | date AS "time",
database_1 | convert_celsius(driver_temp_setting, 'C') as "Driver Temperature [°C]"
database_1 | FROM positions
database_1 | WHERE driver_temp_setting IS NOT NULL AND car_id = '1' AND date AT TIME ZONE 'Etc/UTC' >= (NOW() - interval '60m')
database_1 | ORDER BY date DESC
database_1 | LIMIT 1;
Screenshots
No response
Additional data
No response
Type of installation
Docker
Version
latest
I've had the same problem. I also had a problem with my MQTT connection in teslamate. When I fixed that, this error message disappeared. Maybe that helps.
And it just works again without any changes whatsoever
| gharchive/issue | 2023-06-10T08:29:23 | 2025-04-01T06:37:45.020994 | {
"authors": [
"arnoutvw",
"jmtatsch"
],
"repo": "adriankumpf/teslamate",
"url": "https://github.com/adriankumpf/teslamate/issues/3237",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |